Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, we'll try to
submit new IO. If IORING_ENTER_GETEVENTS is set, the kernel
will wait for 'min_complete' events, if they aren't already
available. It's valid to set IORING_ENTER_GETEVENTS with
'min_complete' == 0; this lets the kernel return already
completed events without waiting for them. That is only useful
for polling, as for IRQ driven IO the application can just
check the CQ ring without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We only punt to an async context if the command would need to
wait for IO on the device side. Any data that can be accessed directly
in the page cache is served inline. This avoids the latency penalty of
the usual thread pools, since cached data is accessed as quickly as
through a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 17:46:33 +00:00
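
To make the calling sequence concrete, here is a minimal userspace sketch of the two syscalls (not part of this patch; the helper name is made up, error handling is abbreviated, and it assumes a libc/uapi that provides __NR_io_uring_setup, __NR_io_uring_enter and <linux/io_uring.h>):

#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <sys/mman.h>
#include <unistd.h>
#include <string.h>

int setup_ring_sketch(unsigned entries)
{
	struct io_uring_params p;
	int fd;

	memset(&p, 0, sizeof(p));
	fd = syscall(__NR_io_uring_setup, entries, &p);
	if (fd < 0)
		return -1;

	/* SQ ring, SQE array and CQ ring are mmap'ed separately, at the
	 * fixed offsets, using the sizes published in io_uring_params. */
	void *sq = mmap(NULL, p.sq_off.array + p.sq_entries * sizeof(__u32),
			PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
			fd, IORING_OFF_SQ_RING);
	void *sqes = mmap(NULL, p.sq_entries * sizeof(struct io_uring_sqe),
			  PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
			  fd, IORING_OFF_SQES);
	void *cq = mmap(NULL, p.cq_off.cqes + p.cq_entries * sizeof(struct io_uring_cqe),
			PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
			fd, IORING_OFF_CQ_RING);
	if (sq == MAP_FAILED || sqes == MAP_FAILED || cq == MAP_FAILED)
		return -1;

	/* once an SQE has been filled in and the SQ tail published:
	 * submit it and wait for one completion in a single call */
	syscall(__NR_io_uring_enter, fd, 1, 1, IORING_ENTER_GETEVENTS, NULL, 0);
	return fd;
}
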
// SPDX-License-Identifier: GPL-2.0
/*
 * Shared application/kernel submission and completion ring pairs, for
 * supporting fast/efficient IO.
 *
 * A note on the read/write ordering memory barriers that are matched between
 * the application and kernel side.
 *
 * After the application reads the CQ ring tail, it must use an
 * appropriate smp_rmb() to pair with the smp_wmb() the kernel uses
 * before writing the tail (using smp_load_acquire to read the tail will
 * do). It also needs a smp_mb() before updating CQ head (ordering the
 * entry load(s) with the head store), pairing with an implicit barrier
 * through a control-dependency in io_get_cqe (smp_store_release to
 * store head will do). Failure to do so could lead to reading invalid
 * CQ entries.
 *
 * Likewise, the application must use an appropriate smp_wmb() before
 * writing the SQ tail (ordering SQ entry stores with the tail store),
 * which pairs with smp_load_acquire in io_get_sqring (smp_store_release
 * to store the tail will do). And it needs a barrier ordering the SQ
 * head load before writing new SQ entries (smp_load_acquire to read
 * head will do).
 *
 * When using the SQ poll thread (IORING_SETUP_SQPOLL), the application
 * needs to check the SQ flags for IORING_SQ_NEED_WAKEUP *after*
 * updating the SQ tail; a full memory barrier smp_mb() is needed
 * between.
 *
 * Also see the examples in the liburing library:
 *
 *	git://git.kernel.dk/liburing
 *
 * io_uring also uses READ/WRITE_ONCE() for _any_ store or load that happens
 * from data shared between the kernel and application. This is done both
 * for ordering purposes, but also to ensure that once a value is loaded from
 * data that the application could potentially modify, it remains stable.
 *
 * Copyright (C) 2018-2019 Jens Axboe
 * Copyright (c) 2018-2019 Christoph Hellwig
 */

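The rules above are easiest to see from the application's side of the CQ ring. The snippet below is an illustrative userspace sketch, not part of this file; 'struct app_cq' and 'app_reap_cqe' are hypothetical names, with the pointers assumed to have been derived from the mmap offsets returned by io_uring_setup and struct io_uring_cqe coming from the uapi header.

struct app_cq {
	unsigned *khead;		/* shared CQ head, written by the application */
	unsigned *ktail;		/* shared CQ tail, written by the kernel */
	unsigned ring_mask;
	struct io_uring_cqe *cqes;
};

static int app_reap_cqe(struct app_cq *cq, struct io_uring_cqe *out)
{
	unsigned head = *cq->khead;
	/* acquire pairs with the kernel's release store of the tail */
	unsigned tail = __atomic_load_n(cq->ktail, __ATOMIC_ACQUIRE);

	if (head == tail)
		return 0;			/* ring empty */
	*out = cq->cqes[head & cq->ring_mask];
	/* release orders the entry read before publishing the new head */
	__atomic_store_n(cq->khead, head + 1, __ATOMIC_RELEASE);
	return 1;
}
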
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/syscalls.h>
#include <linux/compat.h>
#include <net/compat.h>
#include <linux/refcount.h>
#include <linux/uio.h>
#include <linux/bits.h>

#include <linux/sched/signal.h>
#include <linux/fs.h>
#include <linux/file.h>
#include <linux/fdtable.h>
#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/blkdev.h>
#include <linux/bvec.h>
#include <linux/net.h>
#include <net/sock.h>
#include <net/af_unix.h>
#include <net/scm.h>
#include <linux/anon_inodes.h>
#include <linux/sched/mm.h>
#include <linux/uaccess.h>
#include <linux/nospec.h>
#include <linux/sizes.h>
#include <linux/hugetlb.h>
#include <linux/highmem.h>
#include <linux/namei.h>
#include <linux/fsnotify.h>
#include <linux/fadvise.h>
#include <linux/eventpoll.h>
#include <linux/splice.h>
#include <linux/task_work.h>
#include <linux/pagemap.h>
#include <linux/io_uring.h>
#include <linux/tracehook.h>

#define CREATE_TRACE_POINTS
#include <trace/events/io_uring.h>

#include <uapi/linux/io_uring.h>

#include "internal.h"
#include "io-wq.h"

#define IORING_MAX_ENTRIES	32768
#define IORING_MAX_CQ_ENTRIES	(2 * IORING_MAX_ENTRIES)
#define IORING_SQPOLL_CAP_ENTRIES_VALUE 8

/* 512 entries per page on 64-bit archs, 64 pages max */
#define IORING_MAX_FIXED_FILES	(1U << 15)
#define IORING_MAX_RESTRICTIONS	(IORING_RESTRICTION_LAST + \
				 IORING_REGISTER_LAST + IORING_OP_LAST)

#define IO_RSRC_TAG_TABLE_SHIFT	9
#define IO_RSRC_TAG_TABLE_MAX	(1U << IO_RSRC_TAG_TABLE_SHIFT)
#define IO_RSRC_TAG_TABLE_MASK	(IO_RSRC_TAG_TABLE_MAX - 1)

#define IORING_MAX_REG_BUFFERS	(1U << 14)

#define SQE_VALID_FLAGS	(IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK| \
				IOSQE_IO_HARDLINK | IOSQE_ASYNC | \
				IOSQE_BUFFER_SELECT)
#define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
				REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS)

#define IO_TCTX_REFS_CACHE_NR	(1U << 10)

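The IO_RSRC_TAG_TABLE_* constants above suggest a two-level tag table; a sketch of the indexing they imply, under that assumption (the helper name is illustrative, not the in-tree one):

static inline u64 *io_rsrc_tag_slot_sketch(u64 **tags, unsigned int idx)
{
	/* upper bits pick the page, low IO_RSRC_TAG_TABLE_SHIFT bits the slot */
	return &tags[idx >> IO_RSRC_TAG_TABLE_SHIFT][idx & IO_RSRC_TAG_TABLE_MASK];
}
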
struct io_uring {
	u32 head ____cacheline_aligned_in_smp;
	u32 tail ____cacheline_aligned_in_smp;
};

/*
 * This data is shared with the application through the mmap at offsets
 * IORING_OFF_SQ_RING and IORING_OFF_CQ_RING.
 *
 * The offsets to the member fields are published through struct
 * io_sqring_offsets when calling io_uring_setup.
 */
struct io_rings {
	/*
	 * Head and tail offsets into the ring; the offsets need to be
	 * masked to get valid indices.
	 *
	 * The kernel controls head of the sq ring and the tail of the cq ring,
	 * and the application controls tail of the sq ring and the head of the
	 * cq ring.
	 */
	struct io_uring		sq, cq;
	/*
	 * Bitmasks to apply to head and tail offsets (constant, equals
	 * ring_entries - 1)
	 */
	u32			sq_ring_mask, cq_ring_mask;
	/* Ring sizes (constant, power of 2) */
	u32			sq_ring_entries, cq_ring_entries;
	/*
	 * Number of invalid entries dropped by the kernel due to
	 * invalid index stored in array
	 *
	 * Written by the kernel, shouldn't be modified by the
	 * application (i.e. get number of "new events" by comparing to
	 * cached value).
	 *
	 * After a new SQ head value was read by the application this
	 * counter includes all submissions that were dropped reaching
	 * the new SQ head (and possibly more).
	 */
	u32			sq_dropped;
	/*
	 * Runtime SQ flags
	 *
	 * Written by the kernel, shouldn't be modified by the
	 * application.
	 *
	 * The application needs a full memory barrier before checking
	 * for IORING_SQ_NEED_WAKEUP after updating the sq tail.
	 */
	u32			sq_flags;
	/*
	 * Runtime CQ flags
	 *
	 * Written by the application, shouldn't be modified by the
	 * kernel.
	 */
	u32			cq_flags;
	/*
	 * Number of completion events lost because the queue was full;
	 * this should be avoided by the application by making sure
	 * there are not more requests pending than there is space in
	 * the completion queue.
	 *
	 * Written by the kernel, shouldn't be modified by the
	 * application (i.e. get number of "new events" by comparing to
	 * cached value).
	 *
	 * As completion events come in out of order this counter is not
	 * ordered with any other data.
	 */
	u32			cq_overflow;
	/*
	 * Ring buffer of completion events.
	 *
	 * The kernel writes completion events fresh every time they are
	 * produced, so the application is allowed to modify pending
	 * entries.
	 */
	struct io_uring_cqe	cqes[] ____cacheline_aligned_in_smp;
};

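A sketch (not the in-tree helper) of how the mask and entry counts in struct io_rings are meant to be used when the kernel produces a completion: the CQ ring is full once the tail has advanced a full cq_ring_entries ahead of the head last published by the application.

static struct io_uring_cqe *io_cqe_slot_sketch(struct io_rings *rings, u32 tail)
{
	if (tail - READ_ONCE(rings->cq.head) == rings->cq_ring_entries)
		return NULL;	/* CQ ring full; this event cannot be posted inline */
	return &rings->cqes[tail & rings->cq_ring_mask];
}
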
enum io_uring_cmd_flags {
	IO_URING_F_NONBLOCK		= 1,
	IO_URING_F_COMPLETE_DEFER	= 2,
};

struct io_mapped_ubuf {
	u64		ubuf;
	u64		ubuf_end;
	unsigned int	nr_bvecs;
	unsigned long	acct_pages;
	struct bio_vec	bvec[];
};

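struct io_mapped_ubuf above is the kernel-side record of one registered buffer. For context, a hedged userspace sketch of the registration and use that creates such entries (the helper name is made up, error handling is abbreviated, and it assumes the same userspace headers as the earlier sketch plus <sys/uio.h>):

static int register_one_buffer(int ring_fd, void *buf, size_t len,
			       struct io_uring_sqe *sqe, int file_fd)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };

	/* pin the buffer once, instead of get_user_pages() per IO */
	if (syscall(__NR_io_uring_register, ring_fd, IORING_REGISTER_BUFFERS,
		    &iov, 1) < 0)
		return -1;

	/* later: reference the fixed buffer by index from an SQE */
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode	= IORING_OP_READ_FIXED;
	sqe->fd		= file_fd;
	sqe->addr	= (unsigned long) buf;	/* must lie within the registered buffer */
	sqe->len	= len;
	sqe->buf_index	= 0;			/* index into the registered iovec array */
	return 0;
}
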
struct io_ring_ctx;

struct io_overflow_cqe {
	struct io_uring_cqe cqe;
	struct list_head list;
};

struct io_fixed_file {
	/* file * with additional FFS_* flags */
	unsigned long file_ptr;
};
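
The file_ptr field above relies on struct file pointers being at least word aligned, so the low bits are free to carry the FFS_* flag bits. A sketch of that pointer-tagging idea (the mask name and width are illustrative, not the in-tree definitions):

#define EXAMPLE_FFS_MASK	7UL	/* low bits assumed free for flags */

static inline struct file *io_fixed_file_ptr_sketch(struct io_fixed_file *f)
{
	return (struct file *)(f->file_ptr & ~EXAMPLE_FFS_MASK);
}

static inline unsigned long io_fixed_file_flags_sketch(struct io_fixed_file *f)
{
	return f->file_ptr & EXAMPLE_FFS_MASK;
}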

struct io_rsrc_put {
	struct list_head list;
	u64 tag;
	union {
		void *rsrc;
		struct file *file;
		struct io_mapped_ubuf *buf;
	};
};

struct io_file_table {
	struct io_fixed_file *files;
};

struct io_rsrc_node {
	struct percpu_ref		refs;
	struct list_head		node;
	struct list_head		rsrc_list;
	struct io_rsrc_data		*rsrc_data;
	struct llist_node		llist;
	bool				done;
|
|
|
};
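A kernel-context sketch (not this file's actual helpers) of the percpu_ref switching described in the "refactor file register/unregister/update handling" message above: each register/update installs a fresh node with its own percpu_ref, the old ref is killed, and the old node's resources are dropped only once its release callback fires.

/* Illustrative sketch; the real switching and put-work queueing live elsewhere. */
static void example_rsrc_node_release(struct percpu_ref *ref)
{
	struct io_rsrc_node *node = container_of(ref, struct io_rsrc_node, refs);

	/* Last in-flight reference is gone; the deferred rsrc_list can be put. */
	node->done = true;
}

static int example_switch_rsrc_node(struct io_ring_ctx *ctx,
				    struct io_rsrc_node *new_node,
				    struct io_rsrc_node *old_node)
{
	int ret = percpu_ref_init(&new_node->refs, example_rsrc_node_release,
				  0, GFP_KERNEL);
	if (ret)
		return ret;

	/* New requests take references against the new node from now on. */
	ctx->rsrc_node = new_node;
	/* No new refs on the old node; release fires once in-flight users drop theirs. */
	if (old_node)
		percpu_ref_kill(&old_node->refs);
	return 0;
}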
|
|
|
|
|
2021-04-01 14:43:44 +00:00
|
|
|
typedef void (rsrc_put_fn)(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc);
|
|
|
|
|
2021-04-01 14:43:40 +00:00
|
|
|
struct io_rsrc_data {
|
2019-12-09 18:22:50 +00:00
|
|
|
struct io_ring_ctx *ctx;
|
|
|
|
|
2021-06-14 01:36:21 +00:00
|
|
|
u64 **tags;
|
|
|
|
unsigned int nr;
|
2021-04-01 14:43:44 +00:00
|
|
|
rsrc_put_fn *do_put;
|
2021-04-11 00:46:34 +00:00
|
|
|
atomic_t refs;
|
2019-12-09 18:22:50 +00:00
|
|
|
struct completion done;
|
2021-02-19 09:19:36 +00:00
|
|
|
bool quiesce;
|
2019-12-09 18:22:50 +00:00
|
|
|
};
|
|
|
|
|
2020-02-23 23:23:11 +00:00
|
|
|
struct io_buffer {
|
|
|
|
struct list_head list;
|
|
|
|
__u64 addr;
|
2021-05-05 12:47:06 +00:00
|
|
|
__u32 len;
|
2020-02-23 23:23:11 +00:00
|
|
|
__u16 bid;
|
|
|
|
};
|
|
|
|
|
2020-08-27 14:58:30 +00:00
|
|
|
struct io_restriction {
|
|
|
|
DECLARE_BITMAP(register_op, IORING_REGISTER_LAST);
|
|
|
|
DECLARE_BITMAP(sqe_op, IORING_OP_LAST);
|
|
|
|
u8 sqe_flags_allowed;
|
|
|
|
u8 sqe_flags_required;
|
2020-08-27 14:58:31 +00:00
|
|
|
bool registered;
|
2020-08-27 14:58:30 +00:00
|
|
|
};
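A hedged sketch of how a registered restriction set could gate an SQE: the opcode must be allowed and the SQE flags must satisfy the allowed/required masks. The function name is illustrative, not this file's actual helper.

/* Illustrative sketch of checking a registered restriction set. */
static bool example_check_restrictions(struct io_restriction *r,
				       u8 opcode, u8 sqe_flags)
{
	if (!r->registered)
		return true;		/* no restrictions in place */
	if (!test_bit(opcode, r->sqe_op))
		return false;		/* opcode not allowed */
	if (sqe_flags & ~r->sqe_flags_allowed)
		return false;		/* carries a forbidden flag */
	if ((sqe_flags & r->sqe_flags_required) != r->sqe_flags_required)
		return false;		/* missing a required flag */
	return true;
}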
|
|
|
|
|
2021-02-18 04:03:43 +00:00
|
|
|
enum {
|
|
|
|
IO_SQ_THREAD_SHOULD_STOP = 0,
|
|
|
|
IO_SQ_THREAD_SHOULD_PARK,
|
|
|
|
};
|
|
|
|
|
2020-09-02 19:52:19 +00:00
|
|
|
struct io_sq_data {
|
|
|
|
refcount_t refs;
|
2021-03-14 20:57:12 +00:00
|
|
|
atomic_t park_pending;
|
2021-03-14 20:57:10 +00:00
|
|
|
struct mutex lock;
|
2020-09-14 17:16:23 +00:00
|
|
|
|
|
|
|
/* ctx's that are using this sqd */
|
|
|
|
struct list_head ctx_list;
|
|
|
|
|
2020-09-02 19:52:19 +00:00
|
|
|
struct task_struct *thread;
|
|
|
|
struct wait_queue_head wait;
|
io_uring: refactor io_sq_thread() handling
There are some issues with the current io_sq_thread() implementation:
1. The prepare_to_wait() usage in __io_sq_thread() is odd. If multiple
ctxs share the same poll thread, one ctx will put the poll thread in
TASK_INTERRUPTIBLE, but if other ctxs still have work to do, we don't
need to change the task's state at all. Only when none of the ctxs has
work to do should we do that.
2. We use a round-robin strategy to let multiple ctxs share one poll
thread, but there are various conditions in __io_sq_thread(), which
seems complicated and may affect the round-robin behaviour.
To improve on the above, take the following actions:
1. If multiple ctxs share the same poll thread, only call
prepare_to_wait() and schedule() to put the poll thread to sleep once
none of the ctxs has work to do.
2. To make the round-robin strategy more straightforward, simplify
__io_sq_thread() a bit: it just does IO polling and SQE submission once,
without checking various conditions.
3. When multiple ctxs share one poll thread, choose the biggest
sq_thread_idle among these ctxs as the timeout, and update it when a ctx
is attached or detached.
4. There is no need to check for EBUSY specifically: if io_submit_sqes()
returns EBUSY, IORING_SQ_CQ_OVERFLOW should be set, and the liburing
helpers should notice the CQ overflow and enter the kernel to flush it.
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-11-03 06:15:59 +00:00
|
|
|
|
|
|
|
unsigned sq_thread_idle;
|
2021-02-18 04:03:43 +00:00
|
|
|
int sq_cpu;
|
|
|
|
pid_t task_pid;
|
2021-03-11 17:17:56 +00:00
|
|
|
pid_t task_tgid;
|
2021-02-18 04:03:43 +00:00
|
|
|
|
|
|
|
unsigned long state;
|
|
|
|
struct completion exited;
|
2020-09-02 19:52:19 +00:00
|
|
|
};
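Per the io_sq_thread() refactor message above, an SQPOLL thread shared by several rings uses the largest sq_thread_idle among the attached ctxs as its sleep timeout. A hedged sketch of that selection; the helper name is illustrative.

/* Illustrative sketch: pick the idle timeout for a shared SQPOLL thread. */
static unsigned example_sqd_max_idle(struct io_sq_data *sqd)
{
	struct io_ring_ctx *ctx;
	unsigned idle = 0;

	/* Only sleep once every attached ctx has been idle past this timeout. */
	list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
		idle = max(idle, ctx->sq_thread_idle);
	return idle;
}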
|
|
|
|
|
2021-02-10 00:03:13 +00:00
|
|
|
#define IO_COMPL_BATCH 32
|
2021-02-10 00:03:18 +00:00
|
|
|
#define IO_REQ_CACHE_SIZE 32
|
2021-02-10 00:03:17 +00:00
|
|
|
#define IO_REQ_ALLOC_BATCH 8
|
2021-02-10 00:03:10 +00:00
|
|
|
|
2021-02-18 18:29:42 +00:00
|
|
|
struct io_submit_link {
|
|
|
|
struct io_kiocb *head;
|
|
|
|
struct io_kiocb *last;
|
|
|
|
};
|
|
|
|
|
2021-02-10 00:03:10 +00:00
|
|
|
struct io_submit_state {
|
|
|
|
struct blk_plug plug;
|
2021-02-18 18:29:42 +00:00
|
|
|
struct io_submit_link link;
|
2021-02-10 00:03:10 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* io_kiocb alloc cache
|
|
|
|
*/
|
2021-02-10 00:03:17 +00:00
|
|
|
void *reqs[IO_REQ_CACHE_SIZE];
|
2021-02-10 00:03:10 +00:00
|
|
|
unsigned int free_reqs;
|
|
|
|
|
|
|
|
bool plug_started;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Batch completion logic
|
|
|
|
*/
|
2021-08-09 19:18:11 +00:00
|
|
|
struct io_kiocb *compl_reqs[IO_COMPL_BATCH];
|
|
|
|
unsigned int compl_nr;
|
|
|
|
/* inline/task_work completion list, under ->uring_lock */
|
|
|
|
struct list_head free_list;
|
2021-02-10 00:03:10 +00:00
|
|
|
|
|
|
|
unsigned int ios_left;
|
|
|
|
};
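compl_reqs[] and compl_nr implement the batched completion path: requests are queued up and flushed as a group once IO_COMPL_BATCH entries have accumulated (or at the end of submission). A minimal hedged sketch; the flush callback stands in for the real flushing done under the completion lock.

/* Illustrative sketch of the completion batching driven by IO_COMPL_BATCH. */
static void example_queue_completion(struct io_submit_state *state,
				     struct io_kiocb *req,
				     void (*flush)(struct io_submit_state *))
{
	state->compl_reqs[state->compl_nr++] = req;
	if (state->compl_nr == IO_COMPL_BATCH)
		flush(state);	/* post all batched CQEs in one go */
}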
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
struct io_ring_ctx {
|
2021-06-14 22:37:21 +00:00
|
|
|
/* const or read-mostly hot data */
|
2019-01-07 17:46:33 +00:00
|
|
|
struct {
|
|
|
|
struct percpu_ref refs;
|
|
|
|
|
2021-06-14 22:37:21 +00:00
|
|
|
struct io_rings *rings;
|
2019-01-07 17:46:33 +00:00
|
|
|
unsigned int flags;
|
2020-02-06 04:57:10 +00:00
|
|
|
unsigned int compat: 1;
|
|
|
|
unsigned int drain_next: 1;
|
|
|
|
unsigned int eventfd_async: 1;
|
2020-08-27 14:58:30 +00:00
|
|
|
unsigned int restricted: 1;
|
2021-06-14 22:37:25 +00:00
|
|
|
unsigned int off_timeout_used: 1;
|
2021-06-15 15:47:56 +00:00
|
|
|
unsigned int drain_active: 1;
|
2021-06-14 22:37:21 +00:00
|
|
|
} ____cacheline_aligned_in_smp;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2021-06-14 22:37:22 +00:00
|
|
|
/* submission data */
|
2021-06-14 22:37:21 +00:00
|
|
|
struct {
|
2021-06-14 22:37:29 +00:00
|
|
|
struct mutex uring_lock;
|
|
|
|
|
2019-08-26 17:23:46 +00:00
|
|
|
/*
|
|
|
|
* Ring buffer of indices into array of io_uring_sqe, which is
|
|
|
|
* mmapped by the application using the IORING_OFF_SQES offset.
|
|
|
|
*
|
|
|
|
* This indirection could e.g. be used to assign fixed
|
|
|
|
* io_uring_sqe entries to operations and only submit them to
|
|
|
|
* the queue when needed.
|
|
|
|
*
|
|
|
|
* The kernel modifies neither the indices array nor the entries
|
|
|
|
* array.
|
|
|
|
*/
|
|
|
|
u32 *sq_array;
|
2021-06-14 22:37:20 +00:00
|
|
|
struct io_uring_sqe *sq_sqes;
|
2019-01-07 17:46:33 +00:00
|
|
|
unsigned cached_sq_head;
|
|
|
|
unsigned sq_entries;
|
2019-04-07 03:51:27 +00:00
|
|
|
struct list_head defer_list;
|
2021-06-14 22:37:22 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Fixed resources fast path, should be accessed only under
|
|
|
|
* uring_lock, and updated through io_uring_register(2)
|
|
|
|
*/
|
|
|
|
struct io_rsrc_node *rsrc_node;
|
|
|
|
struct io_file_table file_table;
|
|
|
|
unsigned nr_user_files;
|
|
|
|
unsigned nr_user_bufs;
|
|
|
|
struct io_mapped_ubuf **user_bufs;
|
|
|
|
|
|
|
|
struct io_submit_state submit_state;
|
2019-09-17 18:26:57 +00:00
|
|
|
struct list_head timeout_list;
|
io_uring: add support for backlogged CQ ring
Currently we drop completion events, if the CQ ring is full. That's fine
for requests with bounded completion times, but it may make it harder or
impossible to use io_uring with networked IO where request completion
times are generally unbounded. Or with POLL, for example, which is also
unbounded.
After this patch, we never overflow the ring, we simply store requests
in a backlog for later flushing. This flushing is done automatically by
the kernel. To prevent the backlog from growing indefinitely, if the
backlog is non-empty, we apply back pressure on IO submissions. Any
attempt to submit new IO with a non-empty backlog will get an -EBUSY
return from the kernel. This is a signal to the application that it has
backlogged CQ events, and that it must reap those before being allowed
to submit more IO.
Note that if we do return -EBUSY, we will have filled whatever
backlogged events into the CQ ring first, if there's room. This means
the application can safely reap events WITHOUT entering the kernel and
waiting for them, they are already available in the CQ ring.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-06 18:31:17 +00:00
|
|
|
struct list_head cq_overflow_list;
|
2021-06-14 22:37:22 +00:00
|
|
|
struct xarray io_buffers;
|
|
|
|
struct xarray personalities;
|
|
|
|
u32 pers_next;
|
|
|
|
unsigned sq_thread_idle;
|
2019-01-07 17:46:33 +00:00
|
|
|
} ____cacheline_aligned_in_smp;
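The comment above sq_array describes the extra indirection between the SQ ring and the SQE array. A hedged sketch of resolving the next SQE through it, assuming the usual power-of-two ring mask is passed in; the helper name is illustrative.

/* Illustrative: resolve the next SQE via the sq_array indirection. */
static struct io_uring_sqe *example_get_sqe(struct io_ring_ctx *ctx,
					    unsigned ring_mask)
{
	unsigned head = ctx->cached_sq_head++ & ring_mask;	/* ring slot */
	unsigned idx  = READ_ONCE(ctx->sq_array[head]);		/* app-written index */

	if (idx < ctx->sq_entries)
		return &ctx->sq_sqes[idx];			/* actual SQE */
	return NULL;						/* bogus index */
}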
|
|
|
|
|
2021-05-16 21:58:12 +00:00
|
|
|
/* IRQ completion list, under ->completion_lock */
|
|
|
|
struct list_head locked_free_list;
|
|
|
|
unsigned int locked_free_nr;
|
2021-02-11 17:48:03 +00:00
|
|
|
|
2021-03-07 10:54:28 +00:00
|
|
|
const struct cred *sq_creds; /* cred used for __io_sq_thread() */
|
2020-09-02 19:52:19 +00:00
|
|
|
struct io_sq_data *sq_data; /* if using sq thread polling */
|
|
|
|
|
2020-09-03 18:12:41 +00:00
|
|
|
struct wait_queue_head sqo_sq_wait;
|
2020-09-14 17:16:23 +00:00
|
|
|
struct list_head sqd_list;
|
2019-08-26 17:23:46 +00:00
|
|
|
|
2021-06-14 22:37:27 +00:00
|
|
|
unsigned long check_cq_overflow;
|
|
|
|
|
2019-11-08 01:27:42 +00:00
|
|
|
struct {
|
|
|
|
unsigned cached_cq_tail;
|
|
|
|
unsigned cq_entries;
|
2021-06-14 22:37:29 +00:00
|
|
|
struct eventfd_ctx *cq_ev_fd;
|
2021-06-14 22:37:28 +00:00
|
|
|
struct wait_queue_head poll_wait;
|
2021-06-14 22:37:29 +00:00
|
|
|
struct wait_queue_head cq_wait;
|
|
|
|
unsigned cq_extra;
|
|
|
|
atomic_t cq_timeouts;
|
2019-11-08 01:27:42 +00:00
|
|
|
struct fasync_struct *cq_fasync;
|
2021-06-14 22:37:29 +00:00
|
|
|
unsigned cq_last_tm_flush;
|
2019-11-08 01:27:42 +00:00
|
|
|
} ____cacheline_aligned_in_smp;
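Tying back to the backlogged CQ ring message earlier: when the CQ ring is full, the completion is stashed on cq_overflow_list as an io_overflow_cqe rather than dropped, and new submissions see -EBUSY until the backlog is flushed. A hedged sketch with locking elided; the helper name is illustrative.

/* Illustrative sketch of the CQ overflow backlog (locking elided). */
static bool example_cqring_overflow(struct io_ring_ctx *ctx,
				    u64 user_data, s32 res, u32 cflags)
{
	struct io_overflow_cqe *ocqe;

	ocqe = kmalloc(sizeof(*ocqe), GFP_ATOMIC);
	if (!ocqe)
		return false;		/* even the overflow entry got dropped */

	ocqe->cqe.user_data = user_data;
	ocqe->cqe.res = res;
	ocqe->cqe.flags = cflags;
	list_add_tail(&ocqe->list, &ctx->cq_overflow_list);
	return true;			/* flushed back into the CQ ring later */
}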
|
2019-01-07 17:46:33 +00:00
|
|
|
|
|
|
|
struct {
|
|
|
|
spinlock_t completion_lock;
|
2019-12-19 19:06:02 +00:00
|
|
|
|
2021-08-10 21:11:51 +00:00
|
|
|
spinlock_t timeout_lock;
|
|
|
|
|
2019-01-09 15:59:42 +00:00
|
|
|
/*
|
2020-07-13 20:37:09 +00:00
|
|
|
* ->iopoll_list is protected by the ctx->uring_lock for
|
2019-01-09 15:59:42 +00:00
|
|
|
* io_uring instances that don't use IORING_SETUP_SQPOLL.
|
|
|
|
* For SQPOLL, only the single threaded io_sq_thread() will
|
|
|
|
* manipulate the list, hence no extra locking is needed there.
|
|
|
|
*/
|
2020-07-13 20:37:09 +00:00
|
|
|
struct list_head iopoll_list;
|
2019-12-05 02:56:40 +00:00
|
|
|
struct hlist_head *cancel_hash;
|
|
|
|
unsigned cancel_hash_bits;
|
2021-06-27 21:37:30 +00:00
|
|
|
bool poll_multi_queue;
|
2019-01-07 17:46:33 +00:00
|
|
|
} ____cacheline_aligned_in_smp;
|
2020-04-10 00:14:00 +00:00
|
|
|
|
2020-08-27 14:58:30 +00:00
|
|
|
struct io_restriction restrictions;
|
2021-02-11 17:48:03 +00:00
|
|
|
|
2021-05-16 21:58:07 +00:00
|
|
|
/* slow path rsrc auxiliary data, used by update/register */
|
|
|
|
struct {
|
|
|
|
struct io_rsrc_node *rsrc_backup_node;
|
|
|
|
struct io_mapped_ubuf *dummy_ubuf;
|
|
|
|
struct io_rsrc_data *file_data;
|
|
|
|
struct io_rsrc_data *buf_data;
|
|
|
|
|
|
|
|
struct delayed_work rsrc_put_work;
|
|
|
|
struct llist_head rsrc_put_llist;
|
|
|
|
struct list_head rsrc_ref_list;
|
|
|
|
spinlock_t rsrc_ref_lock;
|
|
|
|
};
|
|
|
|
|
2021-02-11 17:48:03 +00:00
|
|
|
/* Keep this last, we don't need it for the fast path */
|
2021-05-16 21:58:06 +00:00
|
|
|
struct {
|
|
|
|
#if defined(CONFIG_UNIX)
|
|
|
|
struct socket *ring_sock;
|
|
|
|
#endif
|
|
|
|
/* hashed buffered write serialization */
|
|
|
|
struct io_wq_hash *hash_map;
|
|
|
|
|
|
|
|
/* Only used for accounting purposes */
|
|
|
|
struct user_struct *user;
|
|
|
|
struct mm_struct *mm_account;
|
|
|
|
|
|
|
|
/* ctx exit and cancelation */
|
2021-06-30 20:54:03 +00:00
|
|
|
struct llist_head fallback_llist;
|
|
|
|
struct delayed_work fallback_work;
|
2021-05-16 21:58:06 +00:00
|
|
|
struct work_struct exit_work;
|
|
|
|
struct list_head tctx_list;
|
|
|
|
struct completion ref_comp;
|
|
|
|
};
|
2019-01-07 17:46:33 +00:00
|
|
|
};
|
|
|
|
|
2021-03-15 11:56:56 +00:00
|
|
|
struct io_uring_task {
|
|
|
|
/* submission side */
|
2021-06-14 01:36:22 +00:00
|
|
|
int cached_refs;
|
2021-03-15 11:56:56 +00:00
|
|
|
struct xarray xa;
|
|
|
|
struct wait_queue_head wait;
|
2021-03-15 11:56:57 +00:00
|
|
|
const struct io_ring_ctx *last;
|
|
|
|
struct io_wq *io_wq;
|
2021-03-15 11:56:56 +00:00
|
|
|
struct percpu_counter inflight;
|
2021-04-11 00:46:26 +00:00
|
|
|
atomic_t inflight_tracked;
|
2021-03-15 11:56:56 +00:00
|
|
|
atomic_t in_idle;
|
|
|
|
|
|
|
|
spinlock_t task_lock;
|
|
|
|
struct io_wq_work_list task_list;
|
|
|
|
struct callback_head task_work;
|
2021-08-10 16:53:55 +00:00
|
|
|
bool task_running;
|
2021-03-15 11:56:56 +00:00
|
|
|
};
|
|
|
|
|
2019-03-13 18:39:28 +00:00
|
|
|
/*
|
|
|
|
* First field must be the file pointer in all the
|
|
|
|
* iocb unions! See also 'struct kiocb' in <linux/fs.h>
|
|
|
|
*/
|
2019-01-17 16:41:58 +00:00
|
|
|
struct io_poll_iocb {
|
|
|
|
struct file *file;
|
2020-10-27 23:17:18 +00:00
|
|
|
struct wait_queue_head *head;
|
2019-01-17 16:41:58 +00:00
|
|
|
__poll_t events;
|
io_uring: fix poll races
This is a straight port of Al's fix for the aio poll implementation,
since the io_uring version is heavily based on that. The below
description is almost straight from that patch, just modified to
fit the io_uring situation.
io_poll() has to cope with several unpleasant problems:
* requests that might stay around indefinitely need to
be made visible for io_cancel(2); that must not be done to
a request already completed, though.
* in cases when ->poll() has placed us on a waitqueue,
wakeup might have happened (and request completed) before ->poll()
returns.
* worse, in some early wakeup cases request might end
up re-added into the queue later - we can't treat "woken up and
currently not in the queue" as "it's not going to stick around
indefinitely"
* ... moreover, ->poll() might have decided not to
put it on any queues to start with, and that needs to be distinguished
from the previous case
* ->poll() might have tried to put us on more than one queue.
Only the first will succeed for io poll, so we might end up missing
wakeups. OTOH, we might very well notice that only after the
wakeup hits and request gets completed (all before ->poll() gets
around to the second poll_wait()). In that case it's too late to
decide that we have an error.
req->woken was an attempt to deal with that. Unfortunately, it was
broken. What we need to keep track of is not that wakeup has happened -
the thing might come back after that. It's that async reference is
already gone and won't come back, so we can't (and needn't) put the
request on the list of cancellables.
The easiest case is "request hadn't been put on any waitqueues"; we
can tell by seeing NULL apt.head, and in that case there won't be
anything async. We should either complete the request ourselves
(if vfs_poll() reports anything of interest) or return an error.
In all other cases we get exclusion with wakeups by grabbing the
queue lock.
If request is currently on queue and we have something interesting
from vfs_poll(), we can steal it and complete the request ourselves.
If it's on queue and vfs_poll() has not reported anything interesting,
we either put it on the cancellable list, or, if we know that it
hadn't been put on all queues ->poll() wanted it on, we steal it and
return an error.
If it's _not_ on queue, it's either been already dealt with (in which
case we do nothing), or there's io_poll_complete_work() about to be
executed. In that case we either put it on the cancellable list,
or, if we know it hadn't been put on all queues ->poll() wanted it on,
simulate what cancel would've done.
Fixes: 221c5eb23382 ("io_uring: add support for IORING_OP_POLL")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-03-12 21:48:16 +00:00
|
|
|
bool done;
|
2019-01-17 16:41:58 +00:00
|
|
|
bool canceled;
|
2019-12-10 00:52:20 +00:00
|
|
|
struct wait_queue_entry wait;
|
2019-01-17 16:41:58 +00:00
|
|
|
};
|
|
|
|
|
2021-04-13 01:58:40 +00:00
|
|
|
struct io_poll_update {
|
2020-10-27 23:17:18 +00:00
|
|
|
struct file *file;
|
2021-04-13 01:58:40 +00:00
|
|
|
u64 old_user_data;
|
|
|
|
u64 new_user_data;
|
|
|
|
__poll_t events;
|
io_uring: allow events and user_data update of running poll requests
This adds two new POLL_ADD flags, IORING_POLL_UPDATE_EVENTS and
IORING_POLL_UPDATE_USER_DATA. As with the other POLL_ADD flag, these are
masked into sqe->len. If set, the POLL_ADD will have the following
behavior:
- sqe->addr must contain the user_data of the poll request that
needs to be modified. This field is otherwise invalid for a POLL_ADD
command.
- If IORING_POLL_UPDATE_EVENTS is set, sqe->poll_events must contain the
new mask for the existing poll request. There are no checks for whether
these are identical or not; if a matching poll request is found, it is
re-armed with the new mask.
- If IORING_POLL_UPDATE_USER_DATA is set, sqe->off must contain the new
user_data for the existing poll request.
A POLL_ADD with any of these flags set may complete with any of the
following results:
1) 0, which means that we successfully found the existing poll request
specified, and performed the re-arm procedure. Any error from that
re-arm will be exposed as a completion event for that original poll
request, not for the update request.
2) -ENOENT, if no existing poll request was found with the given
user_data.
3) -EALREADY, if the existing poll request was already in the process of
being removed/canceled/completing.
4) -EACCES, if an attempt was made to modify an internal poll request
(e.g. not one originally issued as IORING_OP_POLL_ADD).
The usual -EINVAL cases apply as well, if any invalid fields are set
in the sqe for this command type.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-03-17 14:37:41 +00:00
|
|
|
bool update_events;
|
|
|
|
bool update_user_data;
|
2020-10-27 23:17:18 +00:00
|
|
|
};
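From userspace, the poll update described in the message above rides on a regular IORING_OP_POLL_ADD SQE, with the update flags masked into sqe->len as the commit text explains. A hedged sketch of filling such an SQE; it assumes the IORING_POLL_UPDATE_* flags from the uapi header and omits ring setup and submission.

#include <linux/io_uring.h>
#include <string.h>

/* Illustrative: re-arm an existing poll request with a new mask and user_data. */
static void example_prep_poll_update(struct io_uring_sqe *sqe,
				     __u64 old_user_data, __u64 new_user_data,
				     unsigned poll_mask)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_POLL_ADD;
	sqe->addr = old_user_data;		/* which request to update */
	sqe->off = new_user_data;		/* its new user_data */
	sqe->len = IORING_POLL_UPDATE_EVENTS | IORING_POLL_UPDATE_USER_DATA;
	sqe->poll32_events = poll_mask;		/* new event mask */
}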
|
|
|
|
|
2019-12-11 21:02:38 +00:00
|
|
|
struct io_close {
|
|
|
|
struct file *file;
|
|
|
|
int fd;
|
|
|
|
};
|
|
|
|
|
2019-11-15 15:49:11 +00:00
|
|
|
struct io_timeout_data {
|
|
|
|
struct io_kiocb *req;
|
|
|
|
struct hrtimer timer;
|
|
|
|
struct timespec64 ts;
|
|
|
|
enum hrtimer_mode mode;
|
|
|
|
};
|
|
|
|
|
2019-12-16 18:55:28 +00:00
|
|
|
struct io_accept {
|
|
|
|
struct file *file;
|
|
|
|
struct sockaddr __user *addr;
|
|
|
|
int __user *addr_len;
|
|
|
|
int flags;
|
2020-03-20 02:16:56 +00:00
|
|
|
unsigned long nofile;
|
2019-12-16 18:55:28 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
struct io_sync {
|
|
|
|
struct file *file;
|
|
|
|
loff_t len;
|
|
|
|
loff_t off;
|
|
|
|
int flags;
|
2019-12-10 17:38:56 +00:00
|
|
|
int mode;
|
2019-12-16 18:55:28 +00:00
|
|
|
};
|
|
|
|
|
2019-12-18 01:45:56 +00:00
|
|
|
struct io_cancel {
|
|
|
|
struct file *file;
|
|
|
|
u64 addr;
|
|
|
|
};
|
|
|
|
|
2019-12-18 01:50:29 +00:00
|
|
|
struct io_timeout {
|
|
|
|
struct file *file;
|
2020-05-30 11:54:18 +00:00
|
|
|
u32 off;
|
|
|
|
u32 target_seq;
|
2020-07-13 20:37:12 +00:00
|
|
|
struct list_head list;
|
2020-10-27 23:25:36 +00:00
|
|
|
/* head of the link, used by linked timeouts only */
|
|
|
|
struct io_kiocb *head;
|
2021-08-10 21:14:18 +00:00
|
|
|
/* for linked completions */
|
|
|
|
struct io_kiocb *prev;
|
2019-12-18 01:50:29 +00:00
|
|
|
};
|
|
|
|
|
2020-10-10 17:34:10 +00:00
|
|
|
struct io_timeout_rem {
|
|
|
|
struct file *file;
|
|
|
|
u64 addr;
|
2020-11-30 19:11:16 +00:00
|
|
|
|
|
|
|
/* timeout update */
|
|
|
|
struct timespec64 ts;
|
|
|
|
u32 flags;
|
2020-10-10 17:34:10 +00:00
|
|
|
};
|
|
|
|
|
2019-12-20 15:45:55 +00:00
|
|
|
struct io_rw {
|
|
|
|
/* NOTE: kiocb has the file as the first member, so don't do it here */
|
|
|
|
struct kiocb kiocb;
|
|
|
|
u64 addr;
|
|
|
|
u64 len;
|
|
|
|
};
|
|
|
|
|
2019-12-20 15:51:52 +00:00
|
|
|
struct io_connect {
|
|
|
|
struct file *file;
|
|
|
|
struct sockaddr __user *addr;
|
|
|
|
int addr_len;
|
|
|
|
};
|
|
|
|
|
2019-12-20 15:58:21 +00:00
|
|
|
struct io_sr_msg {
|
|
|
|
struct file *file;
|
2020-01-05 03:19:44 +00:00
|
|
|
union {
|
2021-04-11 00:46:30 +00:00
|
|
|
struct compat_msghdr __user *umsg_compat;
|
|
|
|
struct user_msghdr __user *umsg;
|
|
|
|
void __user *buf;
|
2020-01-05 03:19:44 +00:00
|
|
|
};
|
2019-12-20 15:58:21 +00:00
|
|
|
int msg_flags;
|
io_uring: support buffer selection for OP_READ and OP_RECV
If a server process has tons of pending socket connections, generally
it uses epoll to wait for activity. When the socket is ready for reading
(or writing), the task can select a buffer and issue a recv/send on the
given fd.
Now that we have fast (non-async thread) support, a task can have tons
of pending reads or writes. But that means they need buffers to back
that data, and if the number of connections is high enough, preallocating
buffers for all possible connections is infeasible.
With IORING_OP_PROVIDE_BUFFERS, an application can register buffers to
use for any request. The request then sets IOSQE_BUFFER_SELECT in the
sqe, and a given group ID in sqe->buf_group. When the fd becomes ready,
a free buffer from the specified group is selected. If none are
available, the request is terminated with -ENOBUFS. If successful, the
CQE on completion will contain the buffer ID chosen in the cqe->flags
member, encoded as:
(buffer_id << IORING_CQE_BUFFER_SHIFT) | IORING_CQE_F_BUFFER;
Once a buffer has been consumed by a request, it is no longer available
and must be registered again with IORING_OP_PROVIDE_BUFFERS.
Requests need to support this feature. For now, IORING_OP_READ and
IORING_OP_RECV support it. This is checked on SQE submission; a CQE with
res == -EOPNOTSUPP will be posted if it is attempted on an unsupported request.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-23 23:42:51 +00:00
|
|
|
int bgid;
|
2020-01-05 03:19:44 +00:00
|
|
|
size_t len;
|
2020-02-23 23:42:51 +00:00
|
|
|
struct io_buffer *kbuf;
|
2019-12-20 15:58:21 +00:00
|
|
|
};
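The buffer selection message above gives the encoding of the chosen buffer ID in cqe->flags. A short hedged userspace sketch of decoding it.

#include <linux/io_uring.h>

/* Illustrative: recover the provided-buffer ID chosen for this completion. */
static int example_cqe_buffer_id(const struct io_uring_cqe *cqe)
{
	if (!(cqe->flags & IORING_CQE_F_BUFFER))
		return -1;				/* no buffer was selected */
	return cqe->flags >> IORING_CQE_BUFFER_SHIFT;	/* selected buffer ID */
}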
|
|
|
|
|
2019-12-11 18:20:36 +00:00
|
|
|
struct io_open {
|
|
|
|
struct file *file;
|
|
|
|
int dfd;
|
|
|
|
struct filename *filename;
|
2020-01-09 00:41:21 +00:00
|
|
|
struct open_how how;
|
2020-03-20 01:23:18 +00:00
|
|
|
unsigned long nofile;
|
2019-12-11 18:20:36 +00:00
|
|
|
};
|
|
|
|
|
2021-01-15 17:37:44 +00:00
|
|
|
struct io_rsrc_update {
|
2019-12-09 18:22:50 +00:00
|
|
|
struct file *file;
|
|
|
|
u64 arg;
|
|
|
|
u32 nr_args;
|
|
|
|
u32 offset;
|
|
|
|
};
|
|
|
|
|
2019-12-26 05:03:45 +00:00
|
|
|
struct io_fadvise {
|
|
|
|
struct file *file;
|
|
|
|
u64 offset;
|
|
|
|
u32 len;
|
|
|
|
u32 advice;
|
|
|
|
};
|
|
|
|
|
2019-12-26 05:18:28 +00:00
|
|
|
struct io_madvise {
|
|
|
|
struct file *file;
|
|
|
|
u64 addr;
|
|
|
|
u32 len;
|
|
|
|
u32 advice;
|
|
|
|
};
|
|
|
|
|
2020-01-08 22:18:09 +00:00
|
|
|
struct io_epoll {
|
|
|
|
struct file *file;
|
|
|
|
int epfd;
|
|
|
|
int op;
|
|
|
|
int fd;
|
|
|
|
struct epoll_event event;
|
2019-12-20 15:58:21 +00:00
|
|
|
};
|
|
|
|
|
2020-02-24 08:32:45 +00:00
|
|
|
struct io_splice {
|
|
|
|
struct file *file_out;
|
|
|
|
struct file *file_in;
|
|
|
|
loff_t off_out;
|
|
|
|
loff_t off_in;
|
|
|
|
u64 len;
|
|
|
|
unsigned int flags;
|
|
|
|
};
|
|
|
|
|
2020-02-23 23:41:33 +00:00
|
|
|
struct io_provide_buf {
|
|
|
|
struct file *file;
|
|
|
|
__u64 addr;
|
2021-04-15 12:07:39 +00:00
|
|
|
__u32 len;
|
2020-02-23 23:41:33 +00:00
|
|
|
__u32 bgid;
|
|
|
|
__u16 nbufs;
|
|
|
|
__u16 bid;
|
|
|
|
};
|
|
|
|
|
2020-05-23 04:31:16 +00:00
|
|
|
struct io_statx {
|
|
|
|
struct file *file;
|
|
|
|
int dfd;
|
|
|
|
unsigned int mask;
|
|
|
|
unsigned int flags;
|
2020-05-23 04:31:18 +00:00
|
|
|
const char __user *filename;
|
2020-05-23 04:31:16 +00:00
|
|
|
struct statx __user *buffer;
|
|
|
|
};
|
|
|
|
|
2020-09-05 17:14:22 +00:00
|
|
|
struct io_shutdown {
|
|
|
|
struct file *file;
|
|
|
|
int how;
|
|
|
|
};
|
|
|
|
|
2020-09-28 20:23:58 +00:00
|
|
|
struct io_rename {
|
|
|
|
struct file *file;
|
|
|
|
int old_dfd;
|
|
|
|
int new_dfd;
|
|
|
|
struct filename *oldpath;
|
|
|
|
struct filename *newpath;
|
|
|
|
int flags;
|
|
|
|
};
|
|
|
|
|
2020-09-28 20:27:37 +00:00
|
|
|
struct io_unlink {
|
|
|
|
struct file *file;
|
|
|
|
int dfd;
|
|
|
|
int flags;
|
|
|
|
struct filename *filename;
|
|
|
|
};
|
|
|
|
|
2020-07-13 20:37:08 +00:00
|
|
|
struct io_completion {
|
|
|
|
struct file *file;
|
2021-02-28 22:35:15 +00:00
|
|
|
u32 cflags;
|
2020-07-13 20:37:08 +00:00
|
|
|
};
|
|
|
|
|
2019-12-02 23:28:46 +00:00
|
|
|
struct io_async_connect {
|
|
|
|
struct sockaddr_storage address;
|
|
|
|
};
|
|
|
|
|
2019-12-03 01:50:25 +00:00
|
|
|
struct io_async_msghdr {
|
|
|
|
struct iovec fast_iov[UIO_FASTIOV];
|
2021-02-05 00:58:00 +00:00
|
|
|
/* points to an allocated iov, if NULL we use fast_iov instead */
|
|
|
|
struct iovec *free_iov;
|
2019-12-03 01:50:25 +00:00
|
|
|
struct sockaddr __user *uaddr;
|
|
|
|
struct msghdr msg;
|
2020-02-09 18:29:15 +00:00
|
|
|
struct sockaddr_storage addr;
|
2019-12-03 01:50:25 +00:00
|
|
|
};
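The free_iov comment above reflects a common pattern: use the inline fast_iov[] array when the vector fits, otherwise fall back to a heap allocation that is recorded so it can be freed later. A hedged kernel-context sketch; the helper name is illustrative.

/* Illustrative: pick the inline iovec array unless the request needs more. */
static struct iovec *example_pick_iov(struct io_async_msghdr *kmsg,
				      unsigned long nr_segs)
{
	if (nr_segs <= UIO_FASTIOV) {
		kmsg->free_iov = NULL;		/* nothing to kfree later */
		return kmsg->fast_iov;
	}
	kmsg->free_iov = kmalloc_array(nr_segs, sizeof(struct iovec), GFP_KERNEL);
	return kmsg->free_iov;			/* may be NULL on allocation failure */
}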
|
|
|
|
|
2019-12-02 18:03:47 +00:00
|
|
|
struct io_async_rw {
|
|
|
|
struct iovec fast_iov[UIO_FASTIOV];
|
2020-08-13 15:47:43 +00:00
|
|
|
const struct iovec *free_iovec;
|
|
|
|
struct iov_iter iter;
|
2020-08-13 17:51:40 +00:00
|
|
|
size_t bytes_done;
|
2020-05-22 15:24:42 +00:00
|
|
|
struct wait_page_queue wpq;
|
2019-12-02 18:03:47 +00:00
|
|
|
};
|
|
|
|
|
2020-01-18 17:22:41 +00:00
|
|
|
enum {
|
|
|
|
REQ_F_FIXED_FILE_BIT = IOSQE_FIXED_FILE_BIT,
|
|
|
|
REQ_F_IO_DRAIN_BIT = IOSQE_IO_DRAIN_BIT,
|
|
|
|
REQ_F_LINK_BIT = IOSQE_IO_LINK_BIT,
|
|
|
|
REQ_F_HARDLINK_BIT = IOSQE_IO_HARDLINK_BIT,
|
|
|
|
REQ_F_FORCE_ASYNC_BIT = IOSQE_ASYNC_BIT,
|
2020-02-23 23:42:51 +00:00
|
|
|
REQ_F_BUFFER_SELECT_BIT = IOSQE_BUFFER_SELECT_BIT,
|
2020-01-18 17:22:41 +00:00
|
|
|
|
2021-04-27 15:13:52 +00:00
|
|
|
/* first byte is taken by user flags, shift it to not overlap */
|
2021-05-16 21:58:05 +00:00
|
|
|
REQ_F_FAIL_BIT = 8,
|
2020-01-18 17:22:41 +00:00
|
|
|
REQ_F_INFLIGHT_BIT,
|
|
|
|
REQ_F_CUR_POS_BIT,
|
|
|
|
REQ_F_NOWAIT_BIT,
|
|
|
|
REQ_F_LINK_TIMEOUT_BIT,
|
2020-02-07 19:04:45 +00:00
|
|
|
REQ_F_NEED_CLEANUP_BIT,
|
io_uring: use poll driven retry for files that support it
Currently io_uring tries any request in a non-blocking manner, if it can,
and then retries from a worker thread if we get -EAGAIN. Now that we have
a new and fancy poll based retry backend, use that to retry requests if
the file supports it.
This means that, for example, an IORING_OP_RECVMSG on a socket no longer
requires an async thread to complete the IO. If we get -EAGAIN reading
from the socket in a non-blocking manner, we arm a poll handler for
notification on when the socket becomes readable. When it does, the
pending read is executed directly by the task again, through the io_uring
task work handlers. Not only is this faster and more efficient, it also
means we're not generating potentially tons of async threads that just
sit and block, waiting for the IO to complete.
The feature is marked with IORING_FEAT_FAST_POLL, meaning that async
pollable IO is fast, and that poll<link>other_op is fast as well.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-15 05:23:12 +00:00
|
|
|
REQ_F_POLLED_BIT,
|
2020-02-23 23:42:51 +00:00
|
|
|
REQ_F_BUFFER_SELECTED_BIT,
|
2021-01-19 13:32:47 +00:00
|
|
|
REQ_F_COMPLETE_INLINE_BIT,
|
2021-04-02 02:41:15 +00:00
|
|
|
REQ_F_REISSUE_BIT,
|
2021-03-22 01:58:32 +00:00
|
|
|
REQ_F_DONT_REISSUE_BIT,
|
2021-06-17 17:14:02 +00:00
|
|
|
REQ_F_CREDS_BIT,
|
io_uring: skip request refcounting
As submission references are gone, there is only one initial reference
left. Instead of actually doing atomic refcounting, add a flag
indicating whether we're going to take more refs or do any other sync
magic. The flag should be set before the request may get used in
parallel.
Together with the previous patch it saves 2 refcount atomics per request
for IOPOLL and IRQ completions, and 1 atomic per req for inline
completions, with some exceptions. In particular, there are currently
three cases when refcounting has to be enabled:
- Polling, including apoll, because double poll entries take a ref.
Might get relaxed in the near future.
- Link timeouts, enabled for both the timeout and the request it's
bound to, because they run in parallel and we need to synchronise
to cancel one of them on completion.
- When a request goes to io-wq, because it doesn't hold uring_lock and
we need guarantees of submission references.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/8b204b6c5f6643062270a1913d6d3a7f8f795fd9.1628705069.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-08-11 18:28:30 +00:00
|
|
|
REQ_F_REFCOUNT_BIT,
|
2021-08-15 09:40:24 +00:00
|
|
|
REQ_F_ARM_LTIMEOUT_BIT,
|
2021-03-12 15:30:14 +00:00
|
|
|
/* keep async read/write and isreg together and in order */
|
2021-08-09 12:04:03 +00:00
|
|
|
REQ_F_NOWAIT_READ_BIT,
|
|
|
|
REQ_F_NOWAIT_WRITE_BIT,
|
2021-03-12 15:30:14 +00:00
|
|
|
REQ_F_ISREG_BIT,
|
2020-03-03 22:28:17 +00:00
|
|
|
|
|
|
|
/* not a real bit, just to check we're not overflowing the space */
|
|
|
|
__REQ_F_LAST_BIT,
|
2020-01-18 17:22:41 +00:00
|
|
|
};
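REQ_F_REFCOUNT_BIT backs the "skip request refcounting" scheme described in the message above: the atomic refcount only becomes live once the flag is set, otherwise the single initial reference is implicit. A hedged sketch of the guarded get/put pattern, assuming the request struct carries a flags word and an atomic_t refs field as elsewhere in this file; helper names are illustrative.

/* Illustrative sketch: refs only become real atomics once opted in. */
static inline void example_req_enable_refs(struct io_kiocb *req, int nr)
{
	if (!(req->flags & BIT(REQ_F_REFCOUNT_BIT))) {
		req->flags |= BIT(REQ_F_REFCOUNT_BIT);
		atomic_set(&req->refs, nr);	/* start real counting here */
	}
}

static inline bool example_req_put(struct io_kiocb *req)
{
	if (!(req->flags & BIT(REQ_F_REFCOUNT_BIT)))
		return true;			/* implicit single ref: caller frees */
	return atomic_dec_and_test(&req->refs);	/* true when the last ref drops */
}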
|
|
|
|
|
|
|
|
enum {
|
|
|
|
/* ctx owns file */
|
|
|
|
REQ_F_FIXED_FILE = BIT(REQ_F_FIXED_FILE_BIT),
|
|
|
|
/* drain existing IO first */
|
|
|
|
REQ_F_IO_DRAIN = BIT(REQ_F_IO_DRAIN_BIT),
|
|
|
|
/* linked sqes */
|
|
|
|
REQ_F_LINK = BIT(REQ_F_LINK_BIT),
|
|
|
|
/* doesn't sever on completion < 0 */
|
|
|
|
REQ_F_HARDLINK = BIT(REQ_F_HARDLINK_BIT),
|
|
|
|
/* IOSQE_ASYNC */
|
|
|
|
REQ_F_FORCE_ASYNC = BIT(REQ_F_FORCE_ASYNC_BIT),
|
2020-02-23 23:42:51 +00:00
|
|
|
/* IOSQE_BUFFER_SELECT */
|
|
|
|
REQ_F_BUFFER_SELECT = BIT(REQ_F_BUFFER_SELECT_BIT),
|
2020-01-18 17:22:41 +00:00
|
|
|
|
|
|
|
/* fail rest of links */
|
2021-05-16 21:58:05 +00:00
|
|
|
REQ_F_FAIL = BIT(REQ_F_FAIL_BIT),
|
2021-03-04 13:59:24 +00:00
|
|
|
/* on inflight list, should be cancelled and waited on exit reliably */
|
2020-01-18 17:22:41 +00:00
|
|
|
REQ_F_INFLIGHT = BIT(REQ_F_INFLIGHT_BIT),
|
|
|
|
/* read/write uses file position */
|
|
|
|
REQ_F_CUR_POS = BIT(REQ_F_CUR_POS_BIT),
|
|
|
|
/* must not punt to workers */
|
|
|
|
REQ_F_NOWAIT = BIT(REQ_F_NOWAIT_BIT),
|
2020-10-19 15:39:16 +00:00
|
|
|
/* has or had linked timeout */
|
2020-01-18 17:22:41 +00:00
|
|
|
REQ_F_LINK_TIMEOUT = BIT(REQ_F_LINK_TIMEOUT_BIT),
|
2020-02-07 19:04:45 +00:00
|
|
|
/* needs cleanup */
|
|
|
|
REQ_F_NEED_CLEANUP = BIT(REQ_F_NEED_CLEANUP_BIT),
|
io_uring: use poll driven retry for files that support it
Currently io_uring tries any request in a non-blocking manner, if it can,
and then retries from a worker thread if we get -EAGAIN. Now that we have
a new and fancy poll based retry backend, use that to retry requests if
the file supports it.
This means that, for example, an IORING_OP_RECVMSG on a socket no longer
requires an async thread to complete the IO. If we get -EAGAIN reading
from the socket in a non-blocking manner, we arm a poll handler for
notification on when the socket becomes readable. When it does, the
pending read is executed directly by the task again, through the io_uring
task work handlers. Not only is this faster and more efficient, it also
means we're not generating potentially tons of async threads that just
sit and block, waiting for the IO to complete.
The feature is marked with IORING_FEAT_FAST_POLL, meaning that async
pollable IO is fast, and that poll<link>other_op is fast as well.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-15 05:23:12 +00:00
|
|
|
/* already went through poll handler */
|
|
|
|
REQ_F_POLLED = BIT(REQ_F_POLLED_BIT),
|
2020-02-23 23:42:51 +00:00
|
|
|
/* buffer already selected */
|
|
|
|
REQ_F_BUFFER_SELECTED = BIT(REQ_F_BUFFER_SELECTED_BIT),
|
2021-01-19 13:32:47 +00:00
|
|
|
/* completion is deferred through io_comp_state */
|
|
|
|
REQ_F_COMPLETE_INLINE = BIT(REQ_F_COMPLETE_INLINE_BIT),
|
2021-04-02 02:41:15 +00:00
|
|
|
/* caller should reissue async */
|
|
|
|
REQ_F_REISSUE = BIT(REQ_F_REISSUE_BIT),
|
2021-03-22 01:58:32 +00:00
|
|
|
/* don't attempt request reissue, see io_rw_reissue() */
|
|
|
|
REQ_F_DONT_REISSUE = BIT(REQ_F_DONT_REISSUE_BIT),
|
2021-03-12 15:30:14 +00:00
|
|
|
/* supports async reads */
|
2021-08-09 12:04:03 +00:00
|
|
|
REQ_F_NOWAIT_READ = BIT(REQ_F_NOWAIT_READ_BIT),
|
2021-03-12 15:30:14 +00:00
|
|
|
/* supports async writes */
|
2021-08-09 12:04:03 +00:00
|
|
|
REQ_F_NOWAIT_WRITE = BIT(REQ_F_NOWAIT_WRITE_BIT),
|
2021-03-12 15:30:14 +00:00
|
|
|
/* regular file */
|
|
|
|
REQ_F_ISREG = BIT(REQ_F_ISREG_BIT),
|
2021-06-17 17:14:02 +00:00
|
|
|
/* has creds assigned */
|
|
|
|
REQ_F_CREDS = BIT(REQ_F_CREDS_BIT),
|
2021-08-11 18:28:30 +00:00
|
|
|
/* skip refcounting if not set */
|
|
|
|
REQ_F_REFCOUNT = BIT(REQ_F_REFCOUNT_BIT),
|
2021-08-15 09:40:24 +00:00
|
|
|
/* there is a linked timeout that has to be armed */
|
|
|
|
REQ_F_ARM_LTIMEOUT = BIT(REQ_F_ARM_LTIMEOUT_BIT),
|
2020-02-15 05:23:12 +00:00
|
|
|
};
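For orientation, these values are plain bits in req->flags and are manipulated with ordinary bit operations; an illustrative fragment (not taken from this file):

	/* illustrative only: fail the rest of the link chain on error */
	if (unlikely(error))
		req->flags |= REQ_F_FAIL;

	/* illustrative only: was a provided buffer consumed by this request? */
	buffer_used = (req->flags & REQ_F_BUFFER_SELECTED) != 0;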
|
|
|
|
|
|
|
|
struct async_poll {
|
|
|
|
struct io_poll_iocb poll;
|
2020-07-17 23:09:27 +00:00
|
|
|
struct io_poll_iocb *double_poll;
|
2020-01-18 17:22:41 +00:00
|
|
|
};
|
|
|
|
|
2021-06-30 20:54:04 +00:00
|
|
|
typedef void (*io_req_tw_func_t)(struct io_kiocb *req);
|
|
|
|
|
2021-02-10 00:03:20 +00:00
|
|
|
struct io_task_work {
|
2021-06-30 20:54:04 +00:00
|
|
|
union {
|
|
|
|
struct io_wq_work_node node;
|
|
|
|
struct llist_node fallback_node;
|
|
|
|
};
|
|
|
|
io_req_tw_func_t func;
|
2021-02-10 00:03:20 +00:00
|
|
|
};
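A request hands work back to its owning task by filling in this structure and queueing it; roughly (a sketch: io_req_task_work_add() and the io_req_task_submit callback are assumed here, they are not shown in this excerpt):

	/* hedged sketch: run io_req_task_submit() from req->task's context */
	req->io_task_work.func = io_req_task_submit;
	io_req_task_work_add(req);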
|
|
|
|
|
io_uring: change registration/upd/rsrc tagging ABI
There are aspects of the recently added rsrc registration/update and
tagging ABI that might become a nuisance in the future. First,
IORING_REGISTER_RSRC[_UPD] hides different types of resources under one
opcode, which breaks fine-grained control over them via restrictions. It
works for now, but once those need to be covered by restrictions it
would require a rework.
It was also inconvenient to fit a new resource that doesn't support all
the features (e.g. dynamic update) into the interface, so it's better to
return to IORING_REGISTER_* top-level dispatching.
Second, register/update were meant to accept a resource type, but that's
not a good idea because there might be several ways of registering a
single resource type, e.g. we may want to add non-contiguous buffers or
something more exotic such as DMA-mapped memory.
So, remove IORING_RSRC_[FILE,BUFFER] from the ABI, and keep them
internal for now to limit changes.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/9b554897a7c17ad6e3becc48dfed2f7af9f423d5.1623339162.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-06-10 15:37:37 +00:00
|
|
|
enum {
|
|
|
|
IORING_RSRC_FILE = 0,
|
|
|
|
IORING_RSRC_BUFFER = 1,
|
|
|
|
};
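These internal types are then chosen by the top-level register opcodes rather than being part of the ABI; conceptually the dispatch looks like this (a sketch only: io_register_rsrc() and its exact signature are assumed here):

	case IORING_REGISTER_FILES2:
		ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_FILE);
		break;
	case IORING_REGISTER_BUFFERS2:
		ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_BUFFER);
		break;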
|
|
|
|
|
2019-03-13 18:39:28 +00:00
|
|
|
/*
|
|
|
|
* NOTE! Each of the iocb union members has the file pointer
|
|
|
|
* as the first entry in their struct definition. So you can
|
|
|
|
* access the file pointer through any of the sub-structs,
|
|
|
|
* or directly as just 'ki_filp' in this struct.
|
|
|
|
*/
|
2019-01-07 17:46:33 +00:00
|
|
|
struct io_kiocb {
|
2019-01-17 16:41:58 +00:00
|
|
|
union {
|
2019-03-13 18:39:28 +00:00
|
|
|
struct file *file;
|
2019-12-20 15:45:55 +00:00
|
|
|
struct io_rw rw;
|
2019-01-17 16:41:58 +00:00
|
|
|
struct io_poll_iocb poll;
|
2021-04-13 01:58:40 +00:00
|
|
|
struct io_poll_update poll_update;
|
2019-12-16 18:55:28 +00:00
|
|
|
struct io_accept accept;
|
|
|
|
struct io_sync sync;
|
2019-12-18 01:45:56 +00:00
|
|
|
struct io_cancel cancel;
|
2019-12-18 01:50:29 +00:00
|
|
|
struct io_timeout timeout;
|
2020-10-10 17:34:10 +00:00
|
|
|
struct io_timeout_rem timeout_rem;
|
2019-12-20 15:51:52 +00:00
|
|
|
struct io_connect connect;
|
2019-12-20 15:58:21 +00:00
|
|
|
struct io_sr_msg sr_msg;
|
2019-12-11 18:20:36 +00:00
|
|
|
struct io_open open;
|
2019-12-11 21:02:38 +00:00
|
|
|
struct io_close close;
|
2021-01-15 17:37:44 +00:00
|
|
|
struct io_rsrc_update rsrc_update;
|
2019-12-26 05:03:45 +00:00
|
|
|
struct io_fadvise fadvise;
|
2019-12-26 05:18:28 +00:00
|
|
|
struct io_madvise madvise;
|
2020-01-08 22:18:09 +00:00
|
|
|
struct io_epoll epoll;
|
2020-02-24 08:32:45 +00:00
|
|
|
struct io_splice splice;
|
2020-02-23 23:41:33 +00:00
|
|
|
struct io_provide_buf pbuf;
|
2020-05-23 04:31:16 +00:00
|
|
|
struct io_statx statx;
|
2020-09-05 17:14:22 +00:00
|
|
|
struct io_shutdown shutdown;
|
2020-09-28 20:23:58 +00:00
|
|
|
struct io_rename rename;
|
2020-09-28 20:27:37 +00:00
|
|
|
struct io_unlink unlink;
|
2020-07-13 20:37:08 +00:00
|
|
|
/* use only after cleaning per-op data, see io_clean_op() */
|
|
|
|
struct io_completion compl;
|
2019-01-17 16:41:58 +00:00
|
|
|
};
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2020-08-16 01:44:09 +00:00
|
|
|
/* opcode allocated if it needs to store data for async defer */
|
|
|
|
void *async_data;
|
2019-12-18 02:53:05 +00:00
|
|
|
u8 opcode;
|
io_uring: fix io_kiocb.flags modification race in IOPOLL mode
While testing io_uring on arm, we found that io_sq_thread() sometimes
keeps polling io requests even though there are no inflight io requests
in the block layer. After some investigation, we found a possible race
on io_kiocb.flags; see the racing code below:
1) at the end of io_write() or io_read():
req->flags &= ~REQ_F_NEED_CLEANUP;
kfree(iovec);
return ret;
2) in io_complete_rw_iopoll():
if (res != -EAGAIN)
req->flags |= REQ_F_IOPOLL_COMPLETED;
In IOPOLL mode, io requests may still be completed by interrupt, so the
code above is not safe: these are concurrent modifications of req->flags
that are neither protected by a lock nor atomic. I also disassembled
io_complete_rw_iopoll() on arm:
req->flags |= REQ_F_IOPOLL_COMPLETED;
0xffff000008387b18 <+76>: ldr w0, [x19,#104]
0xffff000008387b1c <+80>: orr w0, w0, #0x1000
0xffff000008387b20 <+84>: str w0, [x19,#104]
The "req->flags |= REQ_F_IOPOLL_COMPLETED;" statement is a separate
load, modify and store, which obviously is not atomic.
To fix this issue, add a new iopoll_completed field to io_kiocb to
indicate whether the io request has completed.
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-06-11 15:39:36 +00:00
|
|
|
/* polled IO has completed */
|
|
|
|
u8 iopoll_completed;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2020-05-19 21:52:49 +00:00
|
|
|
u16 buf_index;
|
2020-07-13 20:37:15 +00:00
|
|
|
u32 result;
|
2020-05-19 21:52:49 +00:00
|
|
|
|
2020-07-30 15:43:45 +00:00
|
|
|
struct io_ring_ctx *ctx;
|
|
|
|
unsigned int flags;
|
io_uring: switch to atomic_t for io_kiocb reference count
io_uring manipulates references twice for each request, and hence is very
sensitive to the performance of the reference count. This commit borrows a
trick from:
commit f958d7b528b1b40c44cfda5eabe2d82760d868c3
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date: Thu Apr 11 10:06:20 2019 -0700
mm: make page ref count overflow check tighter and more explicit
and switches to atomic_t for references, while still retaining overflow
and underflow checks.
This is good for a 2-3% increase in peak IOPS on a single core. Before:
IOPS=2970879, IOS/call=31/31, inflight=128 (128)
IOPS=2952597, IOS/call=31/31, inflight=128 (128)
IOPS=2943904, IOS/call=31/31, inflight=128 (128)
IOPS=2930006, IOS/call=31/31, inflight=96 (96)
and after:
IOPS=3054354, IOS/call=31/31, inflight=128 (128)
IOPS=3059038, IOS/call=31/31, inflight=128 (128)
IOPS=3060320, IOS/call=31/31, inflight=128 (128)
IOPS=3068256, IOS/call=31/31, inflight=96 (96)
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-24 20:32:30 +00:00
|
|
|
atomic_t refs;
|
2020-07-30 15:43:45 +00:00
|
|
|
struct task_struct *task;
|
|
|
|
u64 user_data;
|
2020-02-15 05:23:12 +00:00
|
|
|
|
2020-10-27 23:25:37 +00:00
|
|
|
struct io_kiocb *link;
|
2021-01-15 17:37:44 +00:00
|
|
|
struct percpu_ref *fixed_rsrc_refs;
|
2019-10-24 18:39:47 +00:00
|
|
|
|
2021-04-11 00:46:26 +00:00
|
|
|
/* used with ctx->iopoll_list with reads/writes */
|
2020-07-30 15:43:45 +00:00
|
|
|
struct list_head inflight_entry;
|
2021-06-30 20:54:04 +00:00
|
|
|
struct io_task_work io_task_work;
|
2020-07-30 15:43:45 +00:00
|
|
|
/* for polled requests, i.e. IORING_OP_POLL_ADD and async armed poll */
|
|
|
|
struct hlist_node hash_node;
|
|
|
|
struct async_poll *apoll;
|
|
|
|
struct io_wq_work work;
|
2021-06-24 14:09:57 +00:00
|
|
|
const struct cred *creds;
|
2021-06-17 17:14:01 +00:00
|
|
|
|
2021-04-25 13:32:24 +00:00
|
|
|
/* store used ubuf, so we can prevent reloading */
|
|
|
|
struct io_mapped_ubuf *imu;
|
2019-01-07 17:46:33 +00:00
|
|
|
};
|
io_uring: refactor file register/unregister/update handling
While diving into the io_uring fileset register/unregister/update code,
we found one bug in the fileset update handling. io_uring fileset update
uses a percpu_ref variable to check whether we can put the previously
registered file: only when the refcount of the percpu_ref variable
reaches zero can we safely put these files. But this doesn't work so
well. If applications keep issuing requests continually, this percpu_ref
will never get a chance to reach zero, and it'll always be in atomic
mode. That also defeats the gains introduced by the fileset
register/unregister/update feature, which is meant to reduce the atomic
operation overhead of fput/fget.
To fix this issue, when applications do IORING_REGISTER_FILES or
IORING_REGISTER_FILES_UPDATE operations, we allocate a new percpu_ref
and kill the old percpu_ref; new requests will use the new percpu_ref.
Once all previous old requests complete, the old percpu_refs will be
dropped and the registered files will be put safely.
Link: https://lore.kernel.org/io-uring/5a8dac33-4ca2-4847-b091-f7dcd3ad0ff3@linux.alibaba.com/T/#t
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-03-31 06:05:18 +00:00
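The fix described above is the usual percpu_ref quiesce-and-replace pattern; in outline (a sketch under assumptions: the node type, the allocation helper and the ctx fields are invented for illustration):

	/* hedged sketch: retire the old ref node, point new requests at a fresh one */
	struct fixed_rsrc_ref_node *node = alloc_ref_node(ctx);	/* hypothetical helper */

	if (!node)
		return -ENOMEM;
	percpu_ref_kill(&old_node->refs);	/* hits zero once old requests drop their refs */
	ctx->cur_node = node;			/* hypothetical field: new requests pin this node */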
|
|
|
|
2021-03-06 11:02:12 +00:00
|
|
|
struct io_tctx_node {
|
|
|
|
struct list_head ctx_node;
|
|
|
|
struct task_struct *task;
|
|
|
|
struct io_ring_ctx *ctx;
|
|
|
|
};
|
|
|
|
|
2020-07-13 20:37:14 +00:00
|
|
|
struct io_defer_entry {
|
|
|
|
struct list_head list;
|
|
|
|
struct io_kiocb *req;
|
2020-07-13 20:37:15 +00:00
|
|
|
u32 seq;
|
2019-01-07 17:46:33 +00:00
|
|
|
};
|
|
|
|
|
2019-12-18 16:50:26 +00:00
|
|
|
struct io_op_def {
|
|
|
|
/* needs req->file assigned */
|
|
|
|
unsigned needs_file : 1;
|
|
|
|
/* hash wq insertion if file is a regular file */
|
|
|
|
unsigned hash_reg_file : 1;
|
|
|
|
/* unbound wq insertion if file is a non-regular file */
|
|
|
|
unsigned unbound_nonreg_file : 1;
|
2020-01-16 22:36:52 +00:00
|
|
|
/* opcode is not supported by this kernel */
|
|
|
|
unsigned not_supported : 1;
|
2020-02-20 16:59:44 +00:00
|
|
|
/* set if opcode supports polled "wait" */
|
|
|
|
unsigned pollin : 1;
|
|
|
|
unsigned pollout : 1;
|
2020-02-23 23:42:51 +00:00
|
|
|
/* op supports buffer selection */
|
|
|
|
unsigned buffer_select : 1;
|
2021-02-28 22:35:18 +00:00
|
|
|
/* do prep async if is going to be punted */
|
|
|
|
unsigned needs_async_setup : 1;
|
2020-10-28 15:33:23 +00:00
|
|
|
/* should block plug */
|
|
|
|
unsigned plug : 1;
|
2020-08-16 01:44:09 +00:00
|
|
|
/* size of async data needed, if any */
|
|
|
|
unsigned short async_size;
|
2019-12-18 16:50:26 +00:00
|
|
|
};
|
|
|
|
|
2020-10-13 21:01:40 +00:00
|
|
|
static const struct io_op_def io_op_defs[] = {
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_NOP] = {},
|
|
|
|
[IORING_OP_READV] = {
|
2019-12-18 16:50:26 +00:00
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
2020-02-20 16:59:44 +00:00
|
|
|
.pollin = 1,
|
2020-02-27 14:31:19 +00:00
|
|
|
.buffer_select = 1,
|
2021-02-28 22:35:18 +00:00
|
|
|
.needs_async_setup = 1,
|
2020-10-28 15:33:23 +00:00
|
|
|
.plug = 1,
|
2020-08-16 01:44:09 +00:00
|
|
|
.async_size = sizeof(struct io_async_rw),
|
2019-12-18 16:50:26 +00:00
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_WRITEV] = {
|
2019-12-18 16:50:26 +00:00
|
|
|
.needs_file = 1,
|
|
|
|
.hash_reg_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
2020-02-20 16:59:44 +00:00
|
|
|
.pollout = 1,
|
2021-02-28 22:35:18 +00:00
|
|
|
.needs_async_setup = 1,
|
2020-10-28 15:33:23 +00:00
|
|
|
.plug = 1,
|
2020-08-16 01:44:09 +00:00
|
|
|
.async_size = sizeof(struct io_async_rw),
|
2019-12-18 16:50:26 +00:00
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_FSYNC] = {
|
2019-12-18 16:50:26 +00:00
|
|
|
.needs_file = 1,
|
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_READ_FIXED] = {
|
2019-12-18 16:50:26 +00:00
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
2020-02-20 16:59:44 +00:00
|
|
|
.pollin = 1,
|
2020-10-28 15:33:23 +00:00
|
|
|
.plug = 1,
|
2020-08-16 01:44:09 +00:00
|
|
|
.async_size = sizeof(struct io_async_rw),
|
2019-12-18 16:50:26 +00:00
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_WRITE_FIXED] = {
|
2019-12-18 16:50:26 +00:00
|
|
|
.needs_file = 1,
|
|
|
|
.hash_reg_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
2020-02-20 16:59:44 +00:00
|
|
|
.pollout = 1,
|
2020-10-28 15:33:23 +00:00
|
|
|
.plug = 1,
|
2020-08-16 01:44:09 +00:00
|
|
|
.async_size = sizeof(struct io_async_rw),
|
2019-12-18 16:50:26 +00:00
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_POLL_ADD] = {
|
2019-12-18 16:50:26 +00:00
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_POLL_REMOVE] = {},
|
|
|
|
[IORING_OP_SYNC_FILE_RANGE] = {
|
2019-12-18 16:50:26 +00:00
|
|
|
.needs_file = 1,
|
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_SENDMSG] = {
|
2019-12-18 16:50:26 +00:00
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
2020-02-20 16:59:44 +00:00
|
|
|
.pollout = 1,
|
2021-02-28 22:35:18 +00:00
|
|
|
.needs_async_setup = 1,
|
2020-08-16 01:44:09 +00:00
|
|
|
.async_size = sizeof(struct io_async_msghdr),
|
2019-12-18 16:50:26 +00:00
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_RECVMSG] = {
|
2019-12-18 16:50:26 +00:00
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
2020-02-20 16:59:44 +00:00
|
|
|
.pollin = 1,
|
2020-02-27 17:15:42 +00:00
|
|
|
.buffer_select = 1,
|
2021-02-28 22:35:18 +00:00
|
|
|
.needs_async_setup = 1,
|
2020-08-16 01:44:09 +00:00
|
|
|
.async_size = sizeof(struct io_async_msghdr),
|
2019-12-18 16:50:26 +00:00
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_TIMEOUT] = {
|
2020-08-16 01:44:09 +00:00
|
|
|
.async_size = sizeof(struct io_timeout_data),
|
2019-12-18 16:50:26 +00:00
|
|
|
},
|
2020-11-30 19:11:16 +00:00
|
|
|
[IORING_OP_TIMEOUT_REMOVE] = {
|
|
|
|
/* used by timeout updates' prep() */
|
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_ACCEPT] = {
|
2019-12-18 16:50:26 +00:00
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
2020-02-20 16:59:44 +00:00
|
|
|
.pollin = 1,
|
2019-12-18 16:50:26 +00:00
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_ASYNC_CANCEL] = {},
|
|
|
|
[IORING_OP_LINK_TIMEOUT] = {
|
2020-08-16 01:44:09 +00:00
|
|
|
.async_size = sizeof(struct io_timeout_data),
|
2019-12-18 16:50:26 +00:00
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_CONNECT] = {
|
2019-12-18 16:50:26 +00:00
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
2020-02-20 16:59:44 +00:00
|
|
|
.pollout = 1,
|
2021-02-28 22:35:18 +00:00
|
|
|
.needs_async_setup = 1,
|
2020-08-16 01:44:09 +00:00
|
|
|
.async_size = sizeof(struct io_async_connect),
|
2019-12-18 16:50:26 +00:00
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_FALLOCATE] = {
|
2019-12-18 16:50:26 +00:00
|
|
|
.needs_file = 1,
|
|
|
|
},
|
2021-02-15 20:32:18 +00:00
|
|
|
[IORING_OP_OPENAT] = {},
|
|
|
|
[IORING_OP_CLOSE] = {},
|
|
|
|
[IORING_OP_FILES_UPDATE] = {},
|
|
|
|
[IORING_OP_STATX] = {},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_READ] = {
|
2019-12-22 22:19:35 +00:00
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
2020-02-20 16:59:44 +00:00
|
|
|
.pollin = 1,
|
2020-02-23 23:42:51 +00:00
|
|
|
.buffer_select = 1,
|
2020-10-28 15:33:23 +00:00
|
|
|
.plug = 1,
|
2020-08-16 01:44:09 +00:00
|
|
|
.async_size = sizeof(struct io_async_rw),
|
2019-12-22 22:19:35 +00:00
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_WRITE] = {
|
2019-12-22 22:19:35 +00:00
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
2020-02-20 16:59:44 +00:00
|
|
|
.pollout = 1,
|
2020-10-28 15:33:23 +00:00
|
|
|
.plug = 1,
|
2020-08-16 01:44:09 +00:00
|
|
|
.async_size = sizeof(struct io_async_rw),
|
2019-12-22 22:19:35 +00:00
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_FADVISE] = {
|
2019-12-26 05:03:45 +00:00
|
|
|
.needs_file = 1,
|
2019-12-26 05:18:28 +00:00
|
|
|
},
|
2021-02-15 20:32:18 +00:00
|
|
|
[IORING_OP_MADVISE] = {},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_SEND] = {
|
2020-01-05 03:19:44 +00:00
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
2020-02-20 16:59:44 +00:00
|
|
|
.pollout = 1,
|
2020-01-05 03:19:44 +00:00
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_RECV] = {
|
2020-01-05 03:19:44 +00:00
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
2020-02-20 16:59:44 +00:00
|
|
|
.pollin = 1,
|
2020-02-23 23:42:51 +00:00
|
|
|
.buffer_select = 1,
|
2020-01-05 03:19:44 +00:00
|
|
|
},
|
2020-01-18 18:35:38 +00:00
|
|
|
[IORING_OP_OPENAT2] = {
|
2020-01-09 00:59:24 +00:00
|
|
|
},
|
2020-01-08 22:18:09 +00:00
|
|
|
[IORING_OP_EPOLL_CTL] = {
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
},
|
2020-02-24 08:32:45 +00:00
|
|
|
[IORING_OP_SPLICE] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.hash_reg_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
2020-02-23 23:41:33 +00:00
|
|
|
},
|
|
|
|
[IORING_OP_PROVIDE_BUFFERS] = {},
|
2020-03-02 23:32:28 +00:00
|
|
|
[IORING_OP_REMOVE_BUFFERS] = {},
|
2020-05-17 11:18:06 +00:00
|
|
|
[IORING_OP_TEE] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.hash_reg_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
},
|
2020-09-05 17:14:22 +00:00
|
|
|
[IORING_OP_SHUTDOWN] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
},
|
2021-02-15 20:32:18 +00:00
|
|
|
[IORING_OP_RENAMEAT] = {},
|
|
|
|
[IORING_OP_UNLINKAT] = {},
|
2019-12-18 16:50:26 +00:00
|
|
|
};
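The table is consulted per request at submission time; a minimal illustration of how a definition drives request setup (illustrative fragment, not copied from this file):

	const struct io_op_def *def = &io_op_defs[req->opcode];

	if (def->needs_file && !req->file)
		return -EBADF;			/* opcode requires a file, none supplied */
	if (def->needs_async_setup)
		ret = io_req_prep_async(req);	/* forward-declared further down */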
|
|
|
|
|
2021-08-15 09:40:25 +00:00
|
|
|
/* requests with any of those set should undergo io_disarm_next() */
|
|
|
|
#define IO_DISARM_MASK (REQ_F_ARM_LTIMEOUT | REQ_F_LINK_TIMEOUT | REQ_F_FAIL)
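A completing request only takes the slower disarm path when one of those flags is set; roughly (a sketch, the enclosing completion path is not shown here):

	if (unlikely(req->flags & IO_DISARM_MASK))
		io_disarm_next(req);	/* forward-declared just below */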
|
|
|
|
|
2021-03-09 00:37:59 +00:00
|
|
|
static bool io_disarm_next(struct io_kiocb *req);
|
2021-06-14 01:36:15 +00:00
|
|
|
static void io_uring_del_tctx_node(unsigned long index);
|
2021-02-04 13:51:56 +00:00
|
|
|
static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
|
|
|
|
struct task_struct *task,
|
2021-05-16 21:58:04 +00:00
|
|
|
bool cancel_all);
|
2021-06-14 01:36:23 +00:00
|
|
|
static void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd);
|
2020-12-30 21:34:15 +00:00
|
|
|
|
2021-04-25 13:32:17 +00:00
|
|
|
static bool io_cqring_fill_event(struct io_ring_ctx *ctx, u64 user_data,
|
|
|
|
long res, unsigned int cflags);
|
2019-11-08 15:50:36 +00:00
|
|
|
static void io_put_req(struct io_kiocb *req);
|
2021-08-11 18:28:28 +00:00
|
|
|
static void io_put_req_deferred(struct io_kiocb *req);
|
2021-02-10 02:53:37 +00:00
|
|
|
static void io_dismantle_req(struct io_kiocb *req);
|
2019-11-15 02:39:52 +00:00
|
|
|
static void io_queue_linked_timeout(struct io_kiocb *req);
|
2021-04-25 13:32:20 +00:00
|
|
|
static int __io_register_rsrc_update(struct io_ring_ctx *ctx, unsigned type,
|
2021-04-25 13:32:22 +00:00
|
|
|
struct io_uring_rsrc_update2 *up,
|
2021-04-25 13:32:19 +00:00
|
|
|
unsigned nr_args);
|
2021-03-19 17:22:41 +00:00
|
|
|
static void io_clean_op(struct io_kiocb *req);
|
2021-08-09 12:04:02 +00:00
|
|
|
static struct file *io_file_get(struct io_ring_ctx *ctx,
|
2020-10-10 17:34:08 +00:00
|
|
|
struct io_kiocb *req, int fd, bool fixed);
|
2021-02-10 00:03:22 +00:00
|
|
|
static void __io_queue_sqe(struct io_kiocb *req);
|
2021-01-15 17:37:44 +00:00
|
|
|
static void io_rsrc_put_work(struct work_struct *work);
|
2019-04-07 03:51:27 +00:00
|
|
|
|
2021-01-26 23:35:10 +00:00
|
|
|
static void io_req_task_queue(struct io_kiocb *req);
|
2021-06-17 17:14:00 +00:00
|
|
|
static void io_submit_flush_completions(struct io_ring_ctx *ctx);
|
2021-02-28 22:35:20 +00:00
|
|
|
static int io_req_prep_async(struct io_kiocb *req);
|
2019-04-07 03:51:27 +00:00
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
static struct kmem_cache *req_cachep;
|
|
|
|
|
2020-10-13 21:01:40 +00:00
|
|
|
static const struct file_operations io_uring_fops;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
|
|
|
struct sock *io_uring_get_socket(struct file *file)
|
|
|
|
{
|
|
|
|
#if defined(CONFIG_UNIX)
|
|
|
|
if (file->f_op == &io_uring_fops) {
|
|
|
|
struct io_ring_ctx *ctx = file->private_data;
|
|
|
|
|
|
|
|
return ctx->ring_sock->sk;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(io_uring_get_socket);
|
|
|
|
|
2020-10-27 23:25:37 +00:00
|
|
|
#define io_for_each_link(pos, head) \
|
|
|
|
for (pos = (head); pos; pos = pos->link)
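For example, walking a submission chain with this helper (illustrative only):

	struct io_kiocb *pos;
	unsigned int nr_linked = 0;

	io_for_each_link(pos, req)
		nr_linked++;	/* counts req itself plus every request linked behind it */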
|
|
|
|
|
2021-08-11 18:28:27 +00:00
|
|
|
/*
|
|
|
|
* Shamelessly stolen from the mm implementation of page reference checking,
|
|
|
|
* see commit f958d7b528b1 for details.
|
|
|
|
*/
|
|
|
|
#define req_ref_zero_or_close_to_overflow(req) \
|
|
|
|
((unsigned int) atomic_read(&(req->refs)) + 127u <= 127u)
|
|
|
|
|
|
|
|
static inline bool req_ref_inc_not_zero(struct io_kiocb *req)
|
|
|
|
{
|
2021-08-11 18:28:30 +00:00
|
|
|
WARN_ON_ONCE(!(req->flags & REQ_F_REFCOUNT));
|
2021-08-11 18:28:27 +00:00
|
|
|
return atomic_inc_not_zero(&req->refs);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline bool req_ref_put_and_test(struct io_kiocb *req)
|
|
|
|
{
|
2021-08-11 18:28:30 +00:00
|
|
|
if (likely(!(req->flags & REQ_F_REFCOUNT)))
|
|
|
|
return true;
|
|
|
|
|
2021-08-11 18:28:27 +00:00
|
|
|
WARN_ON_ONCE(req_ref_zero_or_close_to_overflow(req));
|
|
|
|
return atomic_dec_and_test(&req->refs);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void req_ref_put(struct io_kiocb *req)
|
|
|
|
{
|
2021-08-11 18:28:30 +00:00
|
|
|
WARN_ON_ONCE(!(req->flags & REQ_F_REFCOUNT));
|
2021-08-11 18:28:27 +00:00
|
|
|
WARN_ON_ONCE(req_ref_put_and_test(req));
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void req_ref_get(struct io_kiocb *req)
|
|
|
|
{
|
2021-08-11 18:28:30 +00:00
|
|
|
WARN_ON_ONCE(!(req->flags & REQ_F_REFCOUNT));
|
2021-08-11 18:28:27 +00:00
|
|
|
WARN_ON_ONCE(req_ref_zero_or_close_to_overflow(req));
|
|
|
|
atomic_inc(&req->refs);
|
|
|
|
}
|
|
|
|
|
2021-08-15 09:40:18 +00:00
|
|
|
static inline void __io_req_set_refcount(struct io_kiocb *req, int nr)
|
2021-08-11 18:28:30 +00:00
|
|
|
{
|
|
|
|
if (!(req->flags & REQ_F_REFCOUNT)) {
|
|
|
|
req->flags |= REQ_F_REFCOUNT;
|
2021-08-15 09:40:18 +00:00
|
|
|
atomic_set(&req->refs, nr);
|
2021-08-11 18:28:30 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-08-15 09:40:18 +00:00
|
|
|
static inline void io_req_set_refcount(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
__io_req_set_refcount(req, 1);
|
|
|
|
}
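To make the flag-based scheme above concrete, here is a hedged illustration of the intended calling pattern, not code from the patch itself; io_free_req() stands in for whatever the eventual free path is.

static void example_start_parallel_work(struct io_kiocb *req)
{
        io_req_set_refcount(req);       /* sets REQ_F_REFCOUNT, refs = 1 (the initial ref) */
        req_ref_get(req);               /* extra ref for the other context, refs = 2 */
        /* ... hand the request to the other context; each owner later does ... */
        if (req_ref_put_and_test(req))  /* true only for whoever drops the last ref */
                io_free_req(req);       /* assumed name for the final free path */
}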
|
|
|
|
|
2021-04-01 14:43:40 +00:00
|
|
|
static inline void io_req_set_rsrc_node(struct io_kiocb *req)
|
2020-11-18 19:57:26 +00:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
2021-01-15 17:37:44 +00:00
|
|
|
if (!req->fixed_rsrc_refs) {
|
2021-04-01 14:43:46 +00:00
|
|
|
req->fixed_rsrc_refs = &ctx->rsrc_node->refs;
|
2021-01-15 17:37:44 +00:00
|
|
|
percpu_ref_get(req->fixed_rsrc_refs);
|
2020-11-18 19:57:26 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-04-11 00:46:40 +00:00
|
|
|
static void io_refs_resurrect(struct percpu_ref *ref, struct completion *compl)
|
|
|
|
{
|
|
|
|
bool got = percpu_ref_tryget(ref);
|
|
|
|
|
|
|
|
/* already at zero, wait for ->release() */
|
|
|
|
if (!got)
|
|
|
|
wait_for_completion(compl);
|
|
|
|
percpu_ref_resurrect(ref);
|
|
|
|
if (got)
|
|
|
|
percpu_ref_put(ref);
|
|
|
|
}
|
|
|
|
|
2021-05-16 21:58:04 +00:00
|
|
|
static bool io_match_task(struct io_kiocb *head, struct task_struct *task,
|
|
|
|
bool cancel_all)
|
2020-11-06 13:00:22 +00:00
|
|
|
{
|
|
|
|
struct io_kiocb *req;
|
|
|
|
|
2021-03-22 01:58:25 +00:00
|
|
|
if (task && head->task != task)
|
2020-11-06 13:00:22 +00:00
|
|
|
return false;
|
2021-05-16 21:58:04 +00:00
|
|
|
if (cancel_all)
|
2020-11-06 13:00:22 +00:00
|
|
|
return true;
|
|
|
|
|
|
|
|
io_for_each_link(req, head) {
|
2021-03-04 13:59:24 +00:00
|
|
|
if (req->flags & REQ_F_INFLIGHT)
|
2021-01-23 22:49:31 +00:00
|
|
|
return true;
|
2020-11-06 13:00:22 +00:00
|
|
|
}
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
2021-05-16 21:58:05 +00:00
|
|
|
static inline void req_set_fail(struct io_kiocb *req)
|
io_uring: use task_work for links if possible
Currently links are always done in an async fashion, unless we catch them
inline after we successfully complete a request without having to resort
to blocking. This isn't necessarily the most efficient approach; it'd be
better if we could just use the task_work handling for this.
Outside of saving an async jump, we can also do less prep work for these
kinds of requests.
Running dependent links from the task_work handler yields some nice
performance benefits. As an example, examples/link-cp from the liburing
repository uses read+write links to implement a copy operation. Without
this patch, a cache-cold 4G file read from a VM runs in about 3
seconds:
$ time examples/link-cp /data/file /dev/null
real 0m2.986s
user 0m0.051s
sys 0m2.843s
and a subsequent cache hot run looks like this:
$ time examples/link-cp /data/file /dev/null
real 0m0.898s
user 0m0.069s
sys 0m0.797s
With this patch in place, the cold case takes about 2.4 seconds:
$ time examples/link-cp /data/file /dev/null
real 0m2.400s
user 0m0.020s
sys 0m2.366s
and the cache hot case looks like this:
$ time examples/link-cp /data/file /dev/null
real 0m0.676s
user 0m0.010s
sys 0m0.665s
As expected, the (mostly) cache hot case yields the biggest improvement,
running about 25% faster with this change, while the cache cold case
yields about a 20% increase in performance. Outside of the performance
increase, we're using less CPU as well, as we're not using the async
offload threads at all for this anymore.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-06-25 21:39:59 +00:00
|
|
|
{
|
2021-05-16 21:58:05 +00:00
|
|
|
req->flags |= REQ_F_FAIL;
|
2020-06-25 21:39:59 +00:00
|
|
|
}
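A hedged userspace sketch of the read->write link pattern that examples/link-cp (referenced in the task_work commit message above) is built on, assuming liburing (<liburing.h>); ring setup, buffer allocation and the BS/off values are left to the caller.

static int queue_linked_copy(struct io_uring *ring, int infd, int outfd,
                             void *buf, unsigned int BS, off_t off)
{
        struct io_uring_sqe *sqe;

        sqe = io_uring_get_sqe(ring);
        if (!sqe)
                return -EAGAIN;
        io_uring_prep_read(sqe, infd, buf, BS, off);
        sqe->flags |= IOSQE_IO_LINK;    /* the write below only runs after the read succeeds */

        sqe = io_uring_get_sqe(ring);
        if (!sqe)
                return -EAGAIN;
        io_uring_prep_write(sqe, outfd, buf, BS, off);

        return io_uring_submit(ring);   /* both SQEs submitted with one syscall */
}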
|
2020-05-14 23:21:15 +00:00
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
static void io_ring_ctx_ref_free(struct percpu_ref *ref)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = container_of(ref, struct io_ring_ctx, refs);
|
|
|
|
|
2020-05-14 23:18:39 +00:00
|
|
|
complete(&ctx->ref_comp);
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
|
|
|
|
2020-06-29 10:13:02 +00:00
|
|
|
static inline bool io_is_timeout_noseq(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
return !req->timeout.off;
|
|
|
|
}
|
|
|
|
|
2021-08-09 19:18:07 +00:00
|
|
|
static void io_fallback_req_func(struct work_struct *work)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx,
|
|
|
|
fallback_work.work);
|
|
|
|
struct llist_node *node = llist_del_all(&ctx->fallback_llist);
|
|
|
|
struct io_kiocb *req, *tmp;
|
|
|
|
|
|
|
|
percpu_ref_get(&ctx->refs);
|
|
|
|
llist_for_each_entry_safe(req, tmp, node, io_task_work.fallback_node)
|
|
|
|
req->io_task_work.func(req);
|
|
|
|
percpu_ref_put(&ctx->refs);
|
|
|
|
}
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx;
|
2019-12-05 02:56:40 +00:00
|
|
|
int hash_bits;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
|
|
|
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
|
|
|
|
if (!ctx)
|
|
|
|
return NULL;
|
|
|
|
|
2019-12-05 02:56:40 +00:00
|
|
|
/*
|
|
|
|
* Use 5 bits less than the max cq entries, that should give us around
|
|
|
|
* 32 entries per hash list if totally full and uniformly spread.
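	 * (e.g. p->cq_entries = 4096: ilog2() = 12, hash_bits = 7, i.e. 128 buckets of ~32 entries when full)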
|
|
|
|
*/
|
|
|
|
hash_bits = ilog2(p->cq_entries);
|
|
|
|
hash_bits -= 5;
|
|
|
|
if (hash_bits <= 0)
|
|
|
|
hash_bits = 1;
|
|
|
|
ctx->cancel_hash_bits = hash_bits;
|
|
|
|
ctx->cancel_hash = kmalloc((1U << hash_bits) * sizeof(struct hlist_head),
|
|
|
|
GFP_KERNEL);
|
|
|
|
if (!ctx->cancel_hash)
|
|
|
|
goto err;
|
|
|
|
__hash_init(ctx->cancel_hash, 1U << hash_bits);
|
|
|
|
|
2021-04-28 12:11:29 +00:00
|
|
|
ctx->dummy_ubuf = kzalloc(sizeof(*ctx->dummy_ubuf), GFP_KERNEL);
|
|
|
|
if (!ctx->dummy_ubuf)
|
|
|
|
goto err;
|
|
|
|
/* set invalid range, so io_import_fixed() fails meeting it */
|
|
|
|
ctx->dummy_ubuf->ubuf = -1UL;
|
|
|
|
|
2019-05-07 17:01:48 +00:00
|
|
|
if (percpu_ref_init(&ctx->refs, io_ring_ctx_ref_free,
|
2019-11-08 01:27:42 +00:00
|
|
|
PERCPU_REF_ALLOW_REINIT, GFP_KERNEL))
|
|
|
|
goto err;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
|
|
|
ctx->flags = p->flags;
|
2020-09-03 18:12:41 +00:00
|
|
|
init_waitqueue_head(&ctx->sqo_sq_wait);
|
2020-09-14 17:16:23 +00:00
|
|
|
INIT_LIST_HEAD(&ctx->sqd_list);
|
2021-06-14 22:37:28 +00:00
|
|
|
init_waitqueue_head(&ctx->poll_wait);
|
io_uring: add support for backlogged CQ ring
Currently we drop completion events, if the CQ ring is full. That's fine
for requests with bounded completion times, but it may make it harder or
impossible to use io_uring with networked IO where request completion
times are generally unbounded. Or with POLL, for example, which is also
unbounded.
After this patch, we never overflow the ring, we simply store requests
in a backlog for later flushing. This flushing is done automatically by
the kernel. To prevent the backlog from growing indefinitely, if the
backlog is non-empty, we apply back pressure on IO submissions. Any
attempt to submit new IO with a non-empty backlog will get an -EBUSY
return from the kernel. This is a signal to the application that it has
backlogged CQ events, and that it must reap those before being allowed
to submit more IO.
Note that if we do return -EBUSY, we will have filled whatever
backlogged events into the CQ ring first, if there's room. This means
the application can safely reap events WITHOUT entering the kernel and
waiting for them; they are already available in the CQ ring.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-06 18:31:17 +00:00
|
|
|
INIT_LIST_HEAD(&ctx->cq_overflow_list);
|
2020-05-14 23:18:39 +00:00
|
|
|
init_completion(&ctx->ref_comp);
|
2021-03-13 19:29:43 +00:00
|
|
|
xa_init_flags(&ctx->io_buffers, XA_FLAGS_ALLOC1);
|
2021-03-08 14:16:16 +00:00
|
|
|
xa_init_flags(&ctx->personalities, XA_FLAGS_ALLOC1);
|
2019-01-07 17:46:33 +00:00
|
|
|
mutex_init(&ctx->uring_lock);
|
2021-06-14 22:37:28 +00:00
|
|
|
init_waitqueue_head(&ctx->cq_wait);
|
2019-01-07 17:46:33 +00:00
|
|
|
spin_lock_init(&ctx->completion_lock);
|
2021-08-10 21:11:51 +00:00
|
|
|
spin_lock_init(&ctx->timeout_lock);
|
2020-07-13 20:37:09 +00:00
|
|
|
INIT_LIST_HEAD(&ctx->iopoll_list);
|
2019-04-07 03:51:27 +00:00
|
|
|
INIT_LIST_HEAD(&ctx->defer_list);
|
2019-09-17 18:26:57 +00:00
|
|
|
INIT_LIST_HEAD(&ctx->timeout_list);
|
2021-01-15 17:37:46 +00:00
|
|
|
spin_lock_init(&ctx->rsrc_ref_lock);
|
|
|
|
INIT_LIST_HEAD(&ctx->rsrc_ref_list);
|
2021-01-15 17:37:44 +00:00
|
|
|
INIT_DELAYED_WORK(&ctx->rsrc_put_work, io_rsrc_put_work);
|
|
|
|
init_llist_head(&ctx->rsrc_put_llist);
|
2021-03-06 11:02:12 +00:00
|
|
|
INIT_LIST_HEAD(&ctx->tctx_list);
|
2021-08-09 19:18:11 +00:00
|
|
|
INIT_LIST_HEAD(&ctx->submit_state.free_list);
|
2021-05-16 21:58:12 +00:00
|
|
|
INIT_LIST_HEAD(&ctx->locked_free_list);
|
2021-06-30 20:54:03 +00:00
|
|
|
INIT_DELAYED_WORK(&ctx->fallback_work, io_fallback_req_func);
|
2019-01-07 17:46:33 +00:00
|
|
|
return ctx;
|
2019-11-08 01:27:42 +00:00
|
|
|
err:
|
2021-04-28 12:11:29 +00:00
|
|
|
kfree(ctx->dummy_ubuf);
|
2019-12-05 02:56:40 +00:00
|
|
|
kfree(ctx->cancel_hash);
|
2019-11-08 01:27:42 +00:00
|
|
|
kfree(ctx);
|
|
|
|
return NULL;
|
2019-01-07 17:46:33 +00:00
|
|
|
}
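A hedged userspace sketch (liburing, <liburing.h>) of the backpressure scheme described in the "backlogged CQ ring" commit message above: on -EBUSY, reap the completions the kernel already flushed into the CQ ring, then retry the submit; handle_cqe() is an assumed application callback.

static int submit_with_backpressure(struct io_uring *ring)
{
        struct io_uring_cqe *cqe;
        int ret;

        while ((ret = io_uring_submit(ring)) == -EBUSY) {
                /* reap what is already in the CQ ring, no syscall needed */
                while (io_uring_peek_cqe(ring, &cqe) == 0) {
                        handle_cqe(cqe);                /* application-defined */
                        io_uring_cqe_seen(ring, cqe);
                }
        }
        return ret;
}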
|
|
|
|
|
2021-05-16 21:58:10 +00:00
|
|
|
static void io_account_cq_overflow(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
struct io_rings *r = ctx->rings;
|
|
|
|
|
|
|
|
WRITE_ONCE(r->cq_overflow, READ_ONCE(r->cq_overflow) + 1);
|
|
|
|
ctx->cq_extra--;
|
|
|
|
}
|
|
|
|
|
2020-07-13 20:37:15 +00:00
|
|
|
static bool req_need_defer(struct io_kiocb *req, u32 seq)
|
2019-10-11 03:42:58 +00:00
|
|
|
{
|
2020-07-09 15:43:27 +00:00
|
|
|
if (unlikely(req->flags & REQ_F_IO_DRAIN)) {
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2019-11-08 15:09:12 +00:00
|
|
|
|
2021-05-16 21:58:10 +00:00
|
|
|
return seq + READ_ONCE(ctx->cq_extra) != ctx->cached_cq_tail;
|
2020-07-09 15:43:27 +00:00
|
|
|
}
|
2019-04-07 03:51:27 +00:00
|
|
|
|
2019-11-13 10:06:25 +00:00
|
|
|
return false;
|
2019-04-07 03:51:27 +00:00
|
|
|
}
|
|
|
|
|
2021-08-09 12:04:04 +00:00
|
|
|
#define FFS_ASYNC_READ 0x1UL
|
|
|
|
#define FFS_ASYNC_WRITE 0x2UL
|
|
|
|
#ifdef CONFIG_64BIT
|
|
|
|
#define FFS_ISREG 0x4UL
|
|
|
|
#else
|
|
|
|
#define FFS_ISREG 0x0UL
|
|
|
|
#endif
|
|
|
|
#define FFS_MASK ~(FFS_ASYNC_READ|FFS_ASYNC_WRITE|FFS_ISREG)
|
|
|
|
|
|
|
|
static inline bool io_req_ffs_set(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
return IS_ENABLED(CONFIG_64BIT) && (req->flags & REQ_F_FIXED_FILE);
|
|
|
|
}
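The FFS_* bits above ride in the low, always-zero bits of a struct file pointer for fixed files; a hedged illustration of the packing follows (these helpers are illustrative, not the actual io_uring code).

static unsigned long example_pack_file(struct file *file, unsigned long ffs_flags)
{
        return (unsigned long)file | (ffs_flags & ~FFS_MASK);   /* keep only the flag bits */
}

static struct file *example_unpack_file(unsigned long file_ptr)
{
        return (struct file *)(file_ptr & FFS_MASK);            /* strip the flag bits */
}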
|
|
|
|
|
2021-02-01 18:59:55 +00:00
|
|
|
static void io_req_track_inflight(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
if (!(req->flags & REQ_F_INFLIGHT)) {
|
|
|
|
req->flags |= REQ_F_INFLIGHT;
|
2021-04-11 00:46:26 +00:00
|
|
|
atomic_inc(&current->io_uring->inflight_tracked);
|
2021-02-01 18:59:55 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-08-15 09:40:26 +00:00
|
|
|
static inline void io_unprep_linked_timeout(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
req->flags &= ~REQ_F_LINK_TIMEOUT;
|
|
|
|
}
|
|
|
|
|
2021-08-11 18:28:31 +00:00
|
|
|
static struct io_kiocb *__io_prep_linked_timeout(struct io_kiocb *req)
|
|
|
|
{
|
2021-08-15 09:40:26 +00:00
|
|
|
if (WARN_ON_ONCE(!req->link))
|
|
|
|
return NULL;
|
|
|
|
|
2021-08-15 09:40:24 +00:00
|
|
|
req->flags &= ~REQ_F_ARM_LTIMEOUT;
|
|
|
|
req->flags |= REQ_F_LINK_TIMEOUT;
|
2021-08-11 18:28:31 +00:00
|
|
|
|
|
|
|
/* linked timeouts should have two refs once prep'ed */
|
2021-08-15 09:40:18 +00:00
|
|
|
io_req_set_refcount(req);
|
2021-08-15 09:40:24 +00:00
|
|
|
__io_req_set_refcount(req->link, 2);
|
|
|
|
return req->link;
|
2021-08-11 18:28:31 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req)
|
|
|
|
{
|
2021-08-15 09:40:24 +00:00
|
|
|
if (likely(!(req->flags & REQ_F_ARM_LTIMEOUT)))
|
2021-08-11 18:28:31 +00:00
|
|
|
return NULL;
|
|
|
|
return __io_prep_linked_timeout(req);
|
|
|
|
}
|
|
|
|
|
2020-10-15 14:46:24 +00:00
|
|
|
static void io_prep_async_work(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
const struct io_op_def *def = &io_op_defs[req->opcode];
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
2021-06-17 17:14:02 +00:00
|
|
|
if (!(req->flags & REQ_F_CREDS)) {
|
|
|
|
req->flags |= REQ_F_CREDS;
|
2021-06-17 17:14:01 +00:00
|
|
|
req->creds = get_current_cred();
|
2021-06-17 17:14:02 +00:00
|
|
|
}
|
2021-03-06 16:22:27 +00:00
|
|
|
|
2021-03-22 01:58:29 +00:00
|
|
|
req->work.list.next = NULL;
|
|
|
|
req->work.flags = 0;
|
2020-10-22 15:47:16 +00:00
|
|
|
if (req->flags & REQ_F_FORCE_ASYNC)
|
|
|
|
req->work.flags |= IO_WQ_WORK_CONCURRENT;
|
|
|
|
|
2020-10-15 14:46:24 +00:00
|
|
|
if (req->flags & REQ_F_ISREG) {
|
|
|
|
if (def->hash_reg_file || (ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
io_wq_hash_work(&req->work, file_inode(req->file));
|
2021-04-01 14:38:34 +00:00
|
|
|
} else if (!req->file || !S_ISBLK(file_inode(req->file)->i_mode)) {
|
2020-10-15 14:46:24 +00:00
|
|
|
if (def->unbound_nonreg_file)
|
|
|
|
req->work.flags |= IO_WQ_WORK_UNBOUND;
|
|
|
|
}
|
2021-03-22 01:58:29 +00:00
|
|
|
|
|
|
|
switch (req->opcode) {
|
|
|
|
case IORING_OP_SPLICE:
|
|
|
|
case IORING_OP_TEE:
|
|
|
|
if (!S_ISREG(file_inode(req->splice.file_in)->i_mode))
|
|
|
|
req->work.flags |= IO_WQ_WORK_UNBOUND;
|
|
|
|
break;
|
|
|
|
}
|
2019-10-24 13:25:42 +00:00
|
|
|
}
|
2020-01-27 23:34:48 +00:00
|
|
|
|
2020-06-29 16:18:43 +00:00
|
|
|
static void io_prep_async_link(struct io_kiocb *req)
|
2019-10-24 13:25:42 +00:00
|
|
|
{
|
2020-06-29 16:18:43 +00:00
|
|
|
struct io_kiocb *cur;
|
2019-09-10 15:15:04 +00:00
|
|
|
|
2021-07-26 13:14:31 +00:00
|
|
|
if (req->flags & REQ_F_LINK_TIMEOUT) {
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2021-07-26 13:14:31 +00:00
|
|
|
io_for_each_link(cur, req)
|
|
|
|
io_prep_async_work(cur);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-07-26 13:14:31 +00:00
|
|
|
} else {
|
|
|
|
io_for_each_link(cur, req)
|
|
|
|
io_prep_async_work(cur);
|
|
|
|
}
|
2019-10-24 13:25:42 +00:00
|
|
|
}
|
|
|
|
|
2021-03-01 18:20:47 +00:00
|
|
|
static void io_queue_async_work(struct io_kiocb *req)
|
2019-10-24 13:25:42 +00:00
|
|
|
{
|
2019-11-08 15:09:12 +00:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2020-06-29 16:18:43 +00:00
|
|
|
struct io_kiocb *link = io_prep_linked_timeout(req);
|
2021-02-16 19:56:50 +00:00
|
|
|
struct io_uring_task *tctx = req->task->io_uring;
|
2019-10-24 13:25:42 +00:00
|
|
|
|
2021-02-16 21:15:30 +00:00
|
|
|
BUG_ON(!tctx);
|
|
|
|
BUG_ON(!tctx->io_wq);
|
2019-10-24 13:25:42 +00:00
|
|
|
|
2020-06-29 16:18:43 +00:00
|
|
|
/* init ->work of the whole link before punting */
|
|
|
|
io_prep_async_link(req);
|
2021-07-23 17:53:54 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Not expected to happen, but if we do have a bug where this _can_
|
|
|
|
* happen, catch it here and ensure the request is marked as
|
|
|
|
* canceled. That will make io-wq go through the usual work cancel
|
|
|
|
* procedure rather than attempt to run this request (or create a new
|
|
|
|
* worker for it).
|
|
|
|
*/
|
|
|
|
if (WARN_ON_ONCE(!same_thread_group(req->task, current)))
|
|
|
|
req->work.flags |= IO_WQ_WORK_CANCEL;
|
|
|
|
|
2021-03-22 01:45:58 +00:00
|
|
|
trace_io_uring_queue_async_work(ctx, io_wq_is_hashed(&req->work), req,
|
|
|
|
&req->work, req->flags);
|
2021-03-01 18:20:47 +00:00
|
|
|
io_wq_enqueue(tctx->io_wq, &req->work);
|
2020-08-10 15:55:22 +00:00
|
|
|
if (link)
|
|
|
|
io_queue_linked_timeout(link);
|
2020-06-29 16:18:43 +00:00
|
|
|
}
|
|
|
|
|
2021-03-25 18:32:42 +00:00
|
|
|
static void io_kill_timeout(struct io_kiocb *req, int status)
|
2021-04-13 01:58:41 +00:00
|
|
|
__must_hold(&req->ctx->completion_lock)
|
2021-08-10 21:11:51 +00:00
|
|
|
__must_hold(&req->ctx->timeout_lock)
|
2019-09-17 18:26:57 +00:00
|
|
|
{
|
2020-08-16 01:44:09 +00:00
|
|
|
struct io_timeout_data *io = req->async_data;
|
2019-09-17 18:26:57 +00:00
|
|
|
|
2021-04-13 01:58:42 +00:00
|
|
|
if (hrtimer_try_to_cancel(&io->timer) != -1) {
|
2020-07-30 15:43:50 +00:00
|
|
|
atomic_set(&req->ctx->cq_timeouts,
|
|
|
|
atomic_read(&req->ctx->cq_timeouts) + 1);
|
2020-07-13 20:37:12 +00:00
|
|
|
list_del_init(&req->timeout.list);
|
2021-04-25 13:32:17 +00:00
|
|
|
io_cqring_fill_event(req->ctx, req->user_data, status, 0);
|
2021-08-11 18:28:28 +00:00
|
|
|
io_put_req_deferred(req);
|
2019-09-17 18:26:57 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-06-14 22:37:31 +00:00
|
|
|
static void io_queue_deferred(struct io_ring_ctx *ctx)
|
2019-04-07 03:51:27 +00:00
|
|
|
{
|
2021-06-14 22:37:31 +00:00
|
|
|
while (!list_empty(&ctx->defer_list)) {
|
2020-07-13 20:37:14 +00:00
|
|
|
struct io_defer_entry *de = list_first_entry(&ctx->defer_list,
|
|
|
|
struct io_defer_entry, list);
|
2019-04-07 03:51:27 +00:00
|
|
|
|
2020-07-13 20:37:15 +00:00
|
|
|
if (req_need_defer(de->req, de->seq))
|
2020-05-26 17:34:05 +00:00
|
|
|
break;
|
2020-07-13 20:37:14 +00:00
|
|
|
list_del_init(&de->list);
|
2021-01-26 23:35:10 +00:00
|
|
|
io_req_task_queue(de->req);
|
2020-07-13 20:37:14 +00:00
|
|
|
kfree(de);
|
2021-06-14 22:37:31 +00:00
|
|
|
}
|
2020-05-26 17:34:05 +00:00
|
|
|
}
|
|
|
|
|
2020-05-30 11:54:17 +00:00
|
|
|
static void io_flush_timeouts(struct io_ring_ctx *ctx)
|
2021-08-10 21:11:51 +00:00
|
|
|
__must_hold(&ctx->completion_lock)
|
2019-04-07 03:51:27 +00:00
|
|
|
{
|
2021-06-14 22:37:31 +00:00
|
|
|
u32 seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
|
2021-01-15 16:54:40 +00:00
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock_irq(&ctx->timeout_lock);
|
2021-06-14 22:37:25 +00:00
|
|
|
while (!list_empty(&ctx->timeout_list)) {
|
2021-01-15 16:54:40 +00:00
|
|
|
u32 events_needed, events_got;
|
2020-05-30 11:54:17 +00:00
|
|
|
struct io_kiocb *req = list_first_entry(&ctx->timeout_list,
|
2020-07-13 20:37:12 +00:00
|
|
|
struct io_kiocb, timeout.list);
|
2019-04-07 03:51:27 +00:00
|
|
|
|
2020-06-29 10:13:02 +00:00
|
|
|
if (io_is_timeout_noseq(req))
|
2020-05-30 11:54:17 +00:00
|
|
|
break;
|
2021-01-15 16:54:40 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Since seq can easily wrap around over time, subtract
|
|
|
|
* the last seq at which timeouts were flushed before comparing.
|
|
|
|
* Assuming not more than 2^31-1 events have happened since,
|
|
|
|
* these subtractions won't have wrapped, so we can check if
|
|
|
|
* target is in [last_seq, current_seq] by comparing the two.
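		 * (e.g. last flush at 0xfffffff0, seq = 0x10, target_seq = 0x5:
		 * events_needed = 0x15, events_got = 0x20, so the timeout is due)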
|
|
|
|
*/
|
|
|
|
events_needed = req->timeout.target_seq - ctx->cq_last_tm_flush;
|
|
|
|
events_got = seq - ctx->cq_last_tm_flush;
|
|
|
|
if (events_got < events_needed)
|
2020-05-30 11:54:17 +00:00
|
|
|
break;
|
2020-05-30 11:54:18 +00:00
|
|
|
|
2020-07-13 20:37:12 +00:00
|
|
|
list_del_init(&req->timeout.list);
|
2021-03-25 18:32:42 +00:00
|
|
|
io_kill_timeout(req, 0);
|
2021-06-14 22:37:25 +00:00
|
|
|
}
|
2021-01-15 16:54:40 +00:00
|
|
|
ctx->cq_last_tm_flush = seq;
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock_irq(&ctx->timeout_lock);
|
2020-05-30 11:54:17 +00:00
|
|
|
}
|
2019-09-17 18:26:57 +00:00
|
|
|
|
2021-06-15 15:47:58 +00:00
|
|
|
static void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
|
2020-05-30 11:54:17 +00:00
|
|
|
{
|
2021-06-15 15:47:58 +00:00
|
|
|
if (ctx->off_timeout_used)
|
|
|
|
io_flush_timeouts(ctx);
|
|
|
|
if (ctx->drain_active)
|
|
|
|
io_queue_deferred(ctx);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void io_commit_cqring(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
if (unlikely(ctx->off_timeout_used || ctx->drain_active))
|
|
|
|
__io_commit_cqring_flush(ctx);
|
2021-01-19 13:32:38 +00:00
|
|
|
/* order cqe stores with ring update */
|
|
|
|
smp_store_release(&ctx->rings->cq.tail, ctx->cached_cq_tail);
|
2019-04-07 03:51:27 +00:00
|
|
|
}
|
|
|
|
|
2020-09-03 18:12:41 +00:00
|
|
|
static inline bool io_sqring_full(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
struct io_rings *r = ctx->rings;
|
|
|
|
|
2021-05-16 21:58:08 +00:00
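	/* unsigned subtraction keeps this correct across index wrap-around */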
|
|
|
return READ_ONCE(r->sq.tail) - ctx->cached_sq_head == ctx->sq_entries;
|
2020-09-03 18:12:41 +00:00
|
|
|
}
|
|
|
|
|
2021-01-19 13:32:39 +00:00
|
|
|
static inline unsigned int __io_cqring_events(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
return ctx->cached_cq_tail - READ_ONCE(ctx->rings->cq.head);
|
|
|
|
}
|
|
|
|
|
2021-05-16 21:58:11 +00:00
|
|
|
static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
2019-08-26 17:23:46 +00:00
|
|
|
struct io_rings *rings = ctx->rings;
|
2021-05-16 21:58:09 +00:00
|
|
|
unsigned tail, mask = ctx->cq_entries - 1;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2019-04-24 21:54:18 +00:00
|
|
|
/*
|
|
|
|
* writes to the cq entry need to come after reading head; the
|
|
|
|
* control dependency is enough as we're using WRITE_ONCE to
|
|
|
|
* fill the cq entry
|
|
|
|
*/
|
2021-05-16 21:58:08 +00:00
|
|
|
if (__io_cqring_events(ctx) == ctx->cq_entries)
|
2019-01-07 17:46:33 +00:00
|
|
|
return NULL;
|
|
|
|
|
2021-01-19 13:32:39 +00:00
|
|
|
tail = ctx->cached_cq_tail++;
|
2021-05-16 21:58:09 +00:00
|
|
|
return &rings->cqes[tail & mask];
|
2019-01-07 17:46:33 +00:00
|
|
|
}

static inline bool io_should_trigger_evfd(struct io_ring_ctx *ctx)
{
        if (likely(!ctx->cq_ev_fd))
                return false;
        if (READ_ONCE(ctx->rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)
                return false;
        return !ctx->eventfd_async || io_wq_current_is_worker();
}

io_uring: add per-task callback handler
For poll requests, it's not uncommon to link a read (or write) after
the poll, to be executed immediately after the file is marked as ready.
Since the poll completion is called inside the waitqueue wakeup handler,
we have to punt that linked request to async context. This slows down
the processing, and actually means it's faster not to use a link for
this use case.
We also run into problems if the completion_lock is contended, as we're
using a different lock ordering than the issue side does. Hence we have
to use a trylock for completion and, if that fails, go async. Poll
removal needs to go async as well, for the same reason.
eventfd notification needs special casing as well, to avoid stack-blowing
recursion or deadlocks.
These are all deficiencies inherited from the aio poll implementation,
but I think we can do better. When a poll completes, simply queue it up
in the task's poll list. When the task completes the list, we can run
dependent links inline as well. This means we never have to go async,
and we can remove a bunch of code associated with that, along with the
optimizations that tried to make that path faster. The diffstat speaks
for itself.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
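
The poll->read link mentioned above is set up from userspace. The sketch
below (not part of this file) shows roughly what that looks like when
written against liburing; the helper name poll_then_read, the queue depth
and the use of offset 0 are illustrative assumptions, and error handling
is trimmed to keep it short.

#include <liburing.h>
#include <poll.h>

static int poll_then_read(int fd, char *buf, unsigned int len)
{
        struct io_uring ring;
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        int i, ret, res = -1;

        if (io_uring_queue_init(8, &ring, 0) < 0)
                return -1;

        /* SQE 1: wait for the fd to become readable */
        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_poll_add(sqe, fd, POLLIN);
        sqe->flags |= IOSQE_IO_LINK;    /* chain the next SQE after this one */

        /* SQE 2: issued only once the poll has completed */
        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, len, 0);
        sqe->user_data = 2;

        if (io_uring_submit(&ring) < 0)
                goto out;

        /* Two completions come back: one for the poll, one for the read */
        for (i = 0; i < 2; i++) {
                ret = io_uring_wait_cqe(&ring, &cqe);
                if (ret < 0)
                        goto out;
                if (cqe->user_data == 2)
                        res = cqe->res; /* bytes read, or -errno */
                io_uring_cqe_seen(&ring, cqe);
        }
out:
        io_uring_queue_exit(&ring);
        return res;
}

With the per-task callback handler described above, the linked read in
this pattern is expected to run inline from the task's poll list rather
than being punted to an async worker.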

io_uring: add support for backlogged CQ ring
Currently we drop completion events if the CQ ring is full. That's fine
for requests with bounded completion times, but it may make it harder or
impossible to use io_uring with networked IO, where request completion
times are generally unbounded. Or with POLL, for example, which is also
unbounded.
After this patch, we never overflow the ring; we simply store requests
in a backlog for later flushing. This flushing is done automatically by
the kernel. To prevent the backlog from growing indefinitely, if the
backlog is non-empty, we apply back pressure on IO submissions. Any
attempt to submit new IO with a non-empty backlog will get an -EBUSY
return from the kernel. This is a signal to the application that it has
backlogged CQ events, and that it must reap those before being allowed
to submit more IO.
Note that if we do return -EBUSY, we will first have filled whatever
backlogged events fit into the CQ ring. This means the application can
safely reap events WITHOUT entering the kernel and waiting for them;
they are already available in the CQ ring.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
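
From the application side, one possible way to react to that -EBUSY back
pressure is to drain the CQ ring and retry the submission. A minimal
userspace sketch, assuming liburing; the helper name
submit_with_backpressure is illustrative and CQE processing is elided.

#include <errno.h>
#include <liburing.h>

static int submit_with_backpressure(struct io_uring *ring)
{
        struct io_uring_cqe *cqe;
        int ret;

        for (;;) {
                ret = io_uring_submit(ring);
                if (ret != -EBUSY)
                        return ret;     /* submitted count, or another error */

                /* Reap completions the kernel already flushed into the ring */
                while (io_uring_peek_cqe(ring, &cqe) == 0) {
                        /* ... process cqe->user_data / cqe->res here ... */
                        io_uring_cqe_seen(ring, cqe);
                }
        }
}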

static void io_cqring_ev_posted(struct io_ring_ctx *ctx)
{
        /*
         * wake_up_all() may seem excessive, but io_wake_function() and
         * io_should_wake() handle the termination of the loop and only
         * wake as many waiters as we need to.
         */
        if (wq_has_sleeper(&ctx->cq_wait))
                wake_up_all(&ctx->cq_wait);
        if (ctx->sq_data && waitqueue_active(&ctx->sq_data->wait))
                wake_up(&ctx->sq_data->wait);
        if (io_should_trigger_evfd(ctx))
                eventfd_signal(ctx->cq_ev_fd, 1);
        if (waitqueue_active(&ctx->poll_wait)) {
                wake_up_interruptible(&ctx->poll_wait);
                kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN);
        }
}

static void io_cqring_ev_posted_iopoll(struct io_ring_ctx *ctx)
{
        if (ctx->flags & IORING_SETUP_SQPOLL) {
                if (wq_has_sleeper(&ctx->cq_wait))
                        wake_up_all(&ctx->cq_wait);
        }
        if (io_should_trigger_evfd(ctx))
                eventfd_signal(ctx->cq_ev_fd, 1);
        if (waitqueue_active(&ctx->poll_wait)) {
                wake_up_interruptible(&ctx->poll_wait);
                kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN);
        }
}

/* Returns true if there are no backlogged entries after the flush */
static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
{
        bool all_flushed, posted;

        if (!force && __io_cqring_events(ctx) == ctx->cq_entries)
                return false;

        posted = false;
        spin_lock(&ctx->completion_lock);
        while (!list_empty(&ctx->cq_overflow_list)) {
                struct io_uring_cqe *cqe = io_get_cqe(ctx);
                struct io_overflow_cqe *ocqe;

                if (!cqe && !force)
                        break;
                ocqe = list_first_entry(&ctx->cq_overflow_list,
                                        struct io_overflow_cqe, list);
                if (cqe)
                        memcpy(cqe, &ocqe->cqe, sizeof(*cqe));
                else
                        io_account_cq_overflow(ctx);

                posted = true;
                list_del(&ocqe->list);
                kfree(ocqe);
        }

        all_flushed = list_empty(&ctx->cq_overflow_list);
        if (all_flushed) {
                clear_bit(0, &ctx->check_cq_overflow);
                WRITE_ONCE(ctx->rings->sq_flags,
                           ctx->rings->sq_flags & ~IORING_SQ_CQ_OVERFLOW);
        }

        if (posted)
                io_commit_cqring(ctx);
        spin_unlock(&ctx->completion_lock);
        if (posted)
                io_cqring_ev_posted(ctx);
        return all_flushed;
}

static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx)
{
        bool ret = true;

        if (test_bit(0, &ctx->check_cq_overflow)) {
                /* iopoll syncs against uring_lock, not completion_lock */
                if (ctx->flags & IORING_SETUP_IOPOLL)
                        mutex_lock(&ctx->uring_lock);
                ret = __io_cqring_overflow_flush(ctx, false);
                if (ctx->flags & IORING_SETUP_IOPOLL)
                        mutex_unlock(&ctx->uring_lock);
        }

        return ret;
}

/* must be called fairly shortly after putting a request */
static inline void io_put_task(struct task_struct *task, int nr)
{
        struct io_uring_task *tctx = task->io_uring;

        percpu_counter_sub(&tctx->inflight, nr);
        if (unlikely(atomic_read(&tctx->in_idle)))
                wake_up(&tctx->wait);
        put_task_struct_many(task, nr);
}

static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
                                     long res, unsigned int cflags)
{
        struct io_overflow_cqe *ocqe;

        ocqe = kmalloc(sizeof(*ocqe), GFP_ATOMIC | __GFP_ACCOUNT);
        if (!ocqe) {
                /*
                 * If we're in ring overflow flush mode, or in task cancel mode,
                 * or cannot allocate an overflow entry, then we need to drop it
                 * on the floor.
                 */
                io_account_cq_overflow(ctx);
                return false;
        }
        if (list_empty(&ctx->cq_overflow_list)) {
                set_bit(0, &ctx->check_cq_overflow);
                WRITE_ONCE(ctx->rings->sq_flags,
                           ctx->rings->sq_flags | IORING_SQ_CQ_OVERFLOW);
        }
        ocqe->cqe.user_data = user_data;
        ocqe->cqe.res = res;
        ocqe->cqe.flags = cflags;
        list_add_tail(&ocqe->list, &ctx->cq_overflow_list);
        return true;
}

io_uring: support buffer selection for OP_READ and OP_RECV
If a server process has tons of pending socket connections, it generally
uses epoll to wait for activity. When the socket is ready for reading
(or writing), the task can select a buffer and issue a recv/send on the
given fd.
Now that we have fast (non-async thread) support, a task can have tons
of pending reads or writes. But that means it needs buffers to back that
data, and if the number of connections is high enough, having them
preallocated for all possible connections is infeasible.
With IORING_OP_PROVIDE_BUFFERS, an application can register buffers to
use for any request. The request then sets IOSQE_BUFFER_SELECT in the
sqe, and a given group ID in sqe->buf_group. When the fd becomes ready,
a free buffer from the specified group is selected. If none are
available, the request is terminated with -ENOBUFS. If successful, the
CQE on completion will contain the buffer ID chosen in the cqe->flags
member, encoded as:
(buffer_id << IORING_CQE_BUFFER_SHIFT) | IORING_CQE_F_BUFFER;
Once a buffer has been consumed by a request, it is no longer available
and must be registered again with IORING_OP_PROVIDE_BUFFERS.
Requests need to support this feature. For now, IORING_OP_READ and
IORING_OP_RECV support it. This is checked at SQE submission time; a CQE
with res == -EOPNOTSUPP will be posted if attempted on unsupported
requests.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
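
For reference, the buffer-selection flow described above looks roughly
like this from userspace, written against liburing. BGID, NR_BUFS,
BUF_LEN and the helper name are illustrative assumptions, and the pool
is assumed to be at least NR_BUFS * BUF_LEN bytes.

#include <liburing.h>

#define BGID    1       /* arbitrary buffer group ID for this sketch */
#define NR_BUFS 64
#define BUF_LEN 4096

static int read_with_selected_buffer(struct io_uring *ring, int fd, char *pool)
{
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        int i, ret, bid = -1;

        /* Hand NR_BUFS buffers of BUF_LEN bytes to the kernel, IDs 0..NR_BUFS-1 */
        sqe = io_uring_get_sqe(ring);
        io_uring_prep_provide_buffers(sqe, pool, BUF_LEN, NR_BUFS, BGID, 0);
        sqe->flags |= IOSQE_IO_LINK;    /* don't issue the read before this */
        sqe->user_data = 1;

        /* The read names no buffer; one is selected from group BGID */
        sqe = io_uring_get_sqe(ring);
        io_uring_prep_read(sqe, fd, NULL, BUF_LEN, 0);
        sqe->flags |= IOSQE_BUFFER_SELECT;
        sqe->buf_group = BGID;
        sqe->user_data = 2;

        io_uring_submit(ring);

        for (i = 0; i < 2; i++) {
                ret = io_uring_wait_cqe(ring, &cqe);
                if (ret < 0)
                        return ret;
                /* For the read CQE, decode which buffer the kernel picked */
                if (cqe->user_data == 2 && cqe->res >= 0 &&
                    (cqe->flags & IORING_CQE_F_BUFFER))
                        bid = cqe->flags >> IORING_CQE_BUFFER_SHIFT;
                io_uring_cqe_seen(ring, cqe);
        }
        return bid;
}

The link flag on the PROVIDE_BUFFERS SQE keeps the read from being issued
before the group has any buffers; once consumed, buffer 'bid' must be
handed back with another IORING_OP_PROVIDE_BUFFERS before it can be
selected again.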

static inline bool __io_cqring_fill_event(struct io_ring_ctx *ctx, u64 user_data,
                                          long res, unsigned int cflags)
{
        struct io_uring_cqe *cqe;

        trace_io_uring_complete(ctx, user_data, res, cflags);

        /*
         * If we can't get a cq entry, userspace overflowed the
         * submission (by quite a lot). Increment the overflow count in
         * the ring.
         */
        cqe = io_get_cqe(ctx);
        if (likely(cqe)) {
                WRITE_ONCE(cqe->user_data, user_data);
                WRITE_ONCE(cqe->res, res);
                WRITE_ONCE(cqe->flags, cflags);
                return true;
        }
        return io_cqring_event_overflow(ctx, user_data, res, cflags);
}

/* not as hot to bloat with inlining */
static noinline bool io_cqring_fill_event(struct io_ring_ctx *ctx, u64 user_data,
                                          long res, unsigned int cflags)
{
        return __io_cqring_fill_event(ctx, user_data, res, cflags);
}

static void io_req_complete_post(struct io_kiocb *req, long res,
                                 unsigned int cflags)
{
        struct io_ring_ctx *ctx = req->ctx;

        spin_lock(&ctx->completion_lock);
        __io_cqring_fill_event(ctx, req->user_data, res, cflags);
        /*
         * If we're the last reference to this request, add to our locked
         * free_list cache.
         */
        if (req_ref_put_and_test(req)) {
                if (req->flags & (REQ_F_LINK | REQ_F_HARDLINK)) {
                        if (req->flags & IO_DISARM_MASK)
                                io_disarm_next(req);
                        if (req->link) {
                                io_req_task_queue(req->link);
                                req->link = NULL;
                        }
                }
                io_dismantle_req(req);
                io_put_task(req->task, 1);
                list_add(&req->inflight_entry, &ctx->locked_free_list);
                ctx->locked_free_nr++;
        } else {
                if (!percpu_ref_tryget(&ctx->refs))
                        req = NULL;
        }
        io_commit_cqring(ctx);
        spin_unlock(&ctx->completion_lock);

        if (req) {
                io_cqring_ev_posted(ctx);
                percpu_ref_put(&ctx->refs);
        }
}

static inline bool io_req_needs_clean(struct io_kiocb *req)
{
        return req->flags & IO_REQ_CLEAN_FLAGS;
}

static void io_req_complete_state(struct io_kiocb *req, long res,
                                  unsigned int cflags)
{
        if (io_req_needs_clean(req))
                io_clean_op(req);
        req->result = res;
        req->compl.cflags = cflags;
        req->flags |= REQ_F_COMPLETE_INLINE;
}
|
|
|
|
|
2021-02-10 00:03:09 +00:00
|
|
|
static inline void __io_req_complete(struct io_kiocb *req, unsigned issue_flags,
|
|
|
|
long res, unsigned cflags)
|
io_uring: support buffer selection for OP_READ and OP_RECV
If a server process has tons of pending socket connections, generally
it uses epoll to wait for activity. When the socket is ready for reading
(or writing), the task can select a buffer and issue a recv/send on the
given fd.
Now that we have fast (non-async thread) support, a task can have tons
of pending reads or writes. But that means they need buffers to
back that data, and if the number of connections is high enough, having
them preallocated for all possible connections is infeasible.
With IORING_OP_PROVIDE_BUFFERS, an application can register buffers to
use for any request. The request then sets IOSQE_BUFFER_SELECT in the
sqe, and a given group ID in sqe->buf_group. When the fd becomes ready,
a free buffer from the specified group is selected. If none are
available, the request is terminated with -ENOBUFS. If successful, the
CQE on completion will contain the buffer ID chosen in the cqe->flags
member, encoded as:
(buffer_id << IORING_CQE_BUFFER_SHIFT) | IORING_CQE_F_BUFFER;
Once a buffer has been consumed by a request, it is no longer available
and must be registered again with IORING_OP_PROVIDE_BUFFERS.
Requests need to support this feature. For now, IORING_OP_READ and
IORING_OP_RECV support it. This is checked on SQE submission; a CQE with
res == -EOPNOTSUPP will be posted if attempted on unsupported requests.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-23 23:42:51 +00:00
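For illustration only, here is a minimal userspace sketch of the buffer-selection flow described above, assuming liburing is available (io_uring_prep_provide_buffers(), io_uring_prep_recv()); the buffer group ID, buffer count, and sizes are arbitrary and error handling is omitted.

#include <liburing.h>
#include <stdlib.h>

#define BGID      1        /* arbitrary buffer group ID for this sketch */
#define BUF_SIZE  4096
#define NR_BUFS   64

static void recv_with_buffer_select(struct io_uring *ring, int sockfd)
{
	void *bufs = malloc((size_t)NR_BUFS * BUF_SIZE);
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;

	/* Register NR_BUFS buffers under group BGID (IORING_OP_PROVIDE_BUFFERS) */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_provide_buffers(sqe, bufs, BUF_SIZE, NR_BUFS, BGID, 0);
	io_uring_submit_and_wait(ring, 1);
	io_uring_wait_cqe(ring, &cqe);
	io_uring_cqe_seen(ring, cqe);

	/* Issue a recv with no buffer; the kernel picks one from group BGID */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_recv(sqe, sockfd, NULL, BUF_SIZE, 0);
	sqe->flags |= IOSQE_BUFFER_SELECT;
	sqe->buf_group = BGID;
	io_uring_submit(ring);

	io_uring_wait_cqe(ring, &cqe);
	if (cqe->res >= 0 && (cqe->flags & IORING_CQE_F_BUFFER)) {
		/* Buffer ID chosen by the kernel, encoded in cqe->flags */
		unsigned buf_id = cqe->flags >> IORING_CQE_BUFFER_SHIFT;
		void *data = (char *)bufs + (size_t)buf_id * BUF_SIZE;
		/* ... consume cqe->res bytes at 'data', then re-provide the buffer ... */
		(void)data;
	}
	io_uring_cqe_seen(ring, cqe);
}

Once consumed, the buffer must be handed back with another IORING_OP_PROVIDE_BUFFERS before it can be selected again, per the commit message above.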
|
|
|
{
|
2021-02-10 00:03:09 +00:00
|
|
|
if (issue_flags & IO_URING_F_COMPLETE_DEFER)
|
|
|
|
io_req_complete_state(req, res, cflags);
|
2021-01-19 13:32:45 +00:00
|
|
|
else
|
2021-02-10 02:53:37 +00:00
|
|
|
io_req_complete_post(req, res, cflags);
|
2020-02-23 23:42:51 +00:00
|
|
|
}
|
|
|
|
|
2021-01-19 13:32:45 +00:00
|
|
|
static inline void io_req_complete(struct io_kiocb *req, long res)
|
2019-11-08 15:52:53 +00:00
|
|
|
{
|
2021-02-10 00:03:09 +00:00
|
|
|
__io_req_complete(req, 0, res, 0);
|
2019-11-08 15:52:53 +00:00
|
|
|
}
|
|
|
|
|
2021-02-28 22:35:12 +00:00
|
|
|
static void io_req_complete_failed(struct io_kiocb *req, long res)
|
|
|
|
{
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-02-28 22:35:12 +00:00
|
|
|
io_req_complete_post(req, res, 0);
|
|
|
|
}
|
|
|
|
|
2021-08-09 12:04:08 +00:00
|
|
|
/*
|
|
|
|
* Don't initialise the fields below on every allocation, but do that in
|
|
|
|
* advance and keep them valid across allocations.
|
|
|
|
*/
|
|
|
|
static void io_preinit_req(struct io_kiocb *req, struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
req->ctx = ctx;
|
|
|
|
req->link = NULL;
|
|
|
|
req->async_data = NULL;
|
|
|
|
/* not necessary, but safer to zero */
|
|
|
|
req->result = 0;
|
|
|
|
}
|
|
|
|
|
2021-03-19 17:22:39 +00:00
|
|
|
static void io_flush_cached_locked_reqs(struct io_ring_ctx *ctx,
|
2021-08-09 19:18:11 +00:00
|
|
|
struct io_submit_state *state)
|
2021-03-19 17:22:39 +00:00
|
|
|
{
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2021-08-09 19:18:11 +00:00
|
|
|
list_splice_init(&ctx->locked_free_list, &state->free_list);
|
2021-05-16 21:58:12 +00:00
|
|
|
ctx->locked_free_nr = 0;
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-03-19 17:22:39 +00:00
|
|
|
}
|
|
|
|
|
2021-03-19 17:22:35 +00:00
|
|
|
/* Returns true IFF there are requests in the cache */
|
2021-02-10 02:53:37 +00:00
|
|
|
static bool io_flush_cached_reqs(struct io_ring_ctx *ctx)
|
2019-11-08 15:52:53 +00:00
|
|
|
{
|
2021-02-10 02:53:37 +00:00
|
|
|
struct io_submit_state *state = &ctx->submit_state;
|
2021-03-19 17:22:35 +00:00
|
|
|
int nr;
|
2019-11-08 15:52:53 +00:00
|
|
|
|
2021-02-10 02:53:37 +00:00
|
|
|
/*
|
|
|
|
* If we have more than a batch's worth of requests in our IRQ side
|
|
|
|
* locked cache, grab the lock and move them over to our submission
|
|
|
|
* side cache.
|
|
|
|
*/
|
2021-05-16 21:58:12 +00:00
|
|
|
if (READ_ONCE(ctx->locked_free_nr) > IO_COMPL_BATCH)
|
2021-08-09 19:18:11 +00:00
|
|
|
io_flush_cached_locked_reqs(ctx, state);
|
2019-11-08 15:52:53 +00:00
|
|
|
|
2021-03-19 17:22:35 +00:00
|
|
|
nr = state->free_reqs;
|
2021-08-09 19:18:11 +00:00
|
|
|
while (!list_empty(&state->free_list)) {
|
|
|
|
struct io_kiocb *req = list_first_entry(&state->free_list,
|
2021-08-09 19:18:10 +00:00
|
|
|
struct io_kiocb, inflight_entry);
|
2021-03-19 17:22:35 +00:00
|
|
|
|
2021-08-09 19:18:10 +00:00
|
|
|
list_del(&req->inflight_entry);
|
2021-03-19 17:22:35 +00:00
|
|
|
state->reqs[nr++] = req;
|
|
|
|
if (nr == ARRAY_SIZE(state->reqs))
|
2021-02-10 00:03:23 +00:00
|
|
|
break;
|
2021-02-10 00:03:19 +00:00
|
|
|
}
|
|
|
|
|
2021-03-19 17:22:35 +00:00
|
|
|
state->free_reqs = nr;
|
|
|
|
return nr != 0;
|
2019-11-08 15:52:53 +00:00
|
|
|
}
|
|
|
|
|
io_uring: remove submission references
Requests are by default given with two references, submission and
completion. Completion references are straightforward, they represent
request ownership and are put when a request is completed or so.
Submission references are a bit trickier. They're needed when
io_issue_sqe() has followed deep into the submission stack (e.g. into fs,
block, drivers, etc.); the request may have been given away for concurrent
execution or may already have completed, and the code unwinding back to
io_issue_sqe() may still be accessing pieces of our request, e.g. the
file or iov.
Now, we prevent such async/in-depth completions by pushing requests
through task_work. Punting to io-wq is also done through task_work,
apart from a couple of cases with a pretty well known context. So,
there are two cases:
1) io_issue_sqe() from the task context and protected by ->uring_lock.
Either requests return to io_uring or are handed to task_work, which
won't be executed because we're currently controlling that task. So,
we can be sure that requests are staying alive all the time and we don't
need submission references to pin them.
2) io_issue_sqe() from io-wq, which doesn't hold the mutex. The role of
the submission reference is played by the io-wq reference, which is put by
io_wq_submit_work(). Hence, it should be fine.
Considering that, we can carefully kill the submission reference.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/6b68f1c763229a590f2a27148aee77767a8d7750.1628705069.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-08-11 18:28:29 +00:00
|
|
|
/*
|
|
|
|
* A request might get retired back into the request caches even before opcode
|
|
|
|
* handlers and io_issue_sqe() are done with it, e.g. inline completion path.
|
|
|
|
* Because of that, io_alloc_req() should be called only under ->uring_lock
|
|
|
|
* and with extra caution to not get a request that is still worked on.
|
|
|
|
*/
|
2021-02-10 00:03:23 +00:00
|
|
|
static struct io_kiocb *io_alloc_req(struct io_ring_ctx *ctx)
|
2021-08-11 18:28:29 +00:00
|
|
|
__must_hold(&ctx->uring_lock)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
2021-02-10 00:03:23 +00:00
|
|
|
struct io_submit_state *state = &ctx->submit_state;
|
2021-08-09 12:04:08 +00:00
|
|
|
gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
|
|
|
|
int ret, i;
|
2021-02-10 00:03:23 +00:00
|
|
|
|
2021-06-24 14:09:57 +00:00
|
|
|
BUILD_BUG_ON(ARRAY_SIZE(state->reqs) < IO_REQ_ALLOC_BATCH);
|
2021-02-10 00:03:23 +00:00
|
|
|
|
2021-08-09 12:04:08 +00:00
|
|
|
if (likely(state->free_reqs || io_flush_cached_reqs(ctx)))
|
|
|
|
goto got_req;
|
2021-02-10 00:03:23 +00:00
|
|
|
|
2021-08-09 12:04:08 +00:00
|
|
|
ret = kmem_cache_alloc_bulk(req_cachep, gfp, IO_REQ_ALLOC_BATCH,
|
|
|
|
state->reqs);
|
2019-03-14 22:30:06 +00:00
|
|
|
|
2021-08-09 12:04:08 +00:00
|
|
|
/*
|
|
|
|
* Bulk alloc is all-or-nothing. If we fail to get a batch,
|
|
|
|
* retry single alloc to be on the safe side.
|
|
|
|
*/
|
|
|
|
if (unlikely(ret <= 0)) {
|
|
|
|
state->reqs[0] = kmem_cache_alloc(req_cachep, gfp);
|
|
|
|
if (!state->reqs[0])
|
|
|
|
return NULL;
|
|
|
|
ret = 1;
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
2021-08-09 12:04:08 +00:00
|
|
|
|
|
|
|
for (i = 0; i < ret; i++)
|
|
|
|
io_preinit_req(state->reqs[i], ctx);
|
|
|
|
state->free_reqs = ret;
|
2021-02-10 00:03:23 +00:00
|
|
|
got_req:
|
2020-09-30 19:57:01 +00:00
|
|
|
state->free_reqs--;
|
|
|
|
return state->reqs[state->free_reqs];
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
|
|
|
|
2021-03-19 17:22:43 +00:00
|
|
|
static inline void io_put_file(struct file *file)
|
2020-02-24 08:32:44 +00:00
|
|
|
{
|
2021-03-19 17:22:43 +00:00
|
|
|
if (file)
|
2020-02-24 08:32:44 +00:00
|
|
|
fput(file);
|
|
|
|
}
|
|
|
|
|
2020-10-13 08:43:59 +00:00
|
|
|
static void io_dismantle_req(struct io_kiocb *req)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
2021-03-19 17:22:42 +00:00
|
|
|
unsigned int flags = req->flags;
|
2020-02-18 21:19:09 +00:00
|
|
|
|
2021-04-20 11:03:31 +00:00
|
|
|
if (io_req_needs_clean(req))
|
|
|
|
io_clean_op(req);
|
2021-03-19 17:22:43 +00:00
|
|
|
if (!(flags & REQ_F_FIXED_FILE))
|
|
|
|
io_put_file(req->file);
|
2021-01-15 17:37:44 +00:00
|
|
|
if (req->fixed_rsrc_refs)
|
|
|
|
percpu_ref_put(req->fixed_rsrc_refs);
|
2021-06-26 20:40:49 +00:00
|
|
|
if (req->async_data) {
|
2021-03-19 17:22:42 +00:00
|
|
|
kfree(req->async_data);
|
2021-06-26 20:40:49 +00:00
|
|
|
req->async_data = NULL;
|
|
|
|
}
|
2019-03-12 16:16:44 +00:00
|
|
|
}
|
|
|
|
|
2020-10-13 08:44:00 +00:00
|
|
|
static void __io_free_req(struct io_kiocb *req)
|
2019-12-28 19:11:08 +00:00
|
|
|
{
|
2020-08-10 16:55:56 +00:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2019-12-28 19:11:08 +00:00
|
|
|
|
2020-10-13 08:44:00 +00:00
|
|
|
io_dismantle_req(req);
|
2021-01-25 11:42:21 +00:00
|
|
|
io_put_task(req->task, 1);
|
2019-12-28 19:11:08 +00:00
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2021-08-09 19:18:10 +00:00
|
|
|
list_add(&req->inflight_entry, &ctx->locked_free_list);
|
2021-08-09 19:18:08 +00:00
|
|
|
ctx->locked_free_nr++;
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-08-09 19:18:08 +00:00
|
|
|
|
2020-06-29 10:13:03 +00:00
|
|
|
percpu_ref_put(&ctx->refs);
|
2019-03-12 16:16:44 +00:00
|
|
|
}
|
|
|
|
|
2020-10-27 23:25:37 +00:00
|
|
|
static inline void io_remove_next_linked(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
struct io_kiocb *nxt = req->link;
|
|
|
|
|
|
|
|
req->link = nxt->link;
|
|
|
|
nxt->link = NULL;
|
|
|
|
}
|
|
|
|
|
2021-03-09 00:37:58 +00:00
|
|
|
static bool io_kill_linked_timeout(struct io_kiocb *req)
|
|
|
|
__must_hold(&req->ctx->completion_lock)
|
2021-08-10 21:14:18 +00:00
|
|
|
__must_hold(&req->ctx->timeout_lock)
|
2019-11-05 19:40:47 +00:00
|
|
|
{
|
2021-03-09 00:37:58 +00:00
|
|
|
struct io_kiocb *link = req->link;
|
2020-10-27 23:25:37 +00:00
|
|
|
|
2021-08-15 09:40:23 +00:00
|
|
|
if (link && link->opcode == IORING_OP_LINK_TIMEOUT) {
|
2020-10-22 15:43:11 +00:00
|
|
|
struct io_timeout_data *io = link->async_data;
|
2020-06-29 10:12:59 +00:00
|
|
|
|
2020-10-27 23:25:37 +00:00
|
|
|
io_remove_next_linked(req);
|
2020-10-27 23:25:36 +00:00
|
|
|
link->timeout.head = NULL;
|
2021-04-13 01:58:42 +00:00
|
|
|
if (hrtimer_try_to_cancel(&io->timer) != -1) {
|
2021-04-25 13:32:17 +00:00
|
|
|
io_cqring_fill_event(link->ctx, link->user_data,
|
|
|
|
-ECANCELED, 0);
|
2021-08-11 18:28:28 +00:00
|
|
|
io_put_req_deferred(link);
|
2021-03-22 01:58:24 +00:00
|
|
|
return true;
|
2020-10-22 15:43:11 +00:00
|
|
|
}
|
|
|
|
}
|
2021-03-22 01:58:24 +00:00
|
|
|
return false;
|
2020-06-29 10:12:59 +00:00
|
|
|
}
|
|
|
|
|
2020-10-18 09:17:39 +00:00
|
|
|
static void io_fail_links(struct io_kiocb *req)
|
2021-03-09 00:37:58 +00:00
|
|
|
__must_hold(&req->ctx->completion_lock)
|
2019-05-10 22:07:28 +00:00
|
|
|
{
|
2021-03-09 00:37:58 +00:00
|
|
|
struct io_kiocb *nxt, *link = req->link;
|
2019-05-10 22:07:28 +00:00
|
|
|
|
2020-10-27 23:25:37 +00:00
|
|
|
req->link = NULL;
|
|
|
|
while (link) {
|
|
|
|
nxt = link->link;
|
|
|
|
link->link = NULL;
|
2019-11-05 19:40:47 +00:00
|
|
|
|
2020-10-27 23:25:37 +00:00
|
|
|
trace_io_uring_fail_link(req, link);
|
2021-04-25 13:32:17 +00:00
|
|
|
io_cqring_fill_event(link->ctx, link->user_data, -ECANCELED, 0);
|
2021-08-11 18:28:28 +00:00
|
|
|
io_put_req_deferred(link);
|
2020-10-27 23:25:37 +00:00
|
|
|
link = nxt;
|
2019-05-10 22:07:28 +00:00
|
|
|
}
|
2021-03-09 00:37:58 +00:00
|
|
|
}
|
2019-05-10 22:07:28 +00:00
|
|
|
|
2021-03-09 00:37:58 +00:00
|
|
|
static bool io_disarm_next(struct io_kiocb *req)
|
|
|
|
__must_hold(&req->ctx->completion_lock)
|
|
|
|
{
|
|
|
|
bool posted = false;
|
|
|
|
|
2021-08-15 09:40:25 +00:00
|
|
|
if (req->flags & REQ_F_ARM_LTIMEOUT) {
|
|
|
|
struct io_kiocb *link = req->link;
|
|
|
|
|
2021-08-15 09:40:26 +00:00
|
|
|
req->flags &= ~REQ_F_ARM_LTIMEOUT;
|
2021-08-15 09:40:25 +00:00
|
|
|
if (link && link->opcode == IORING_OP_LINK_TIMEOUT) {
|
|
|
|
io_remove_next_linked(req);
|
|
|
|
io_cqring_fill_event(link->ctx, link->user_data,
|
|
|
|
-ECANCELED, 0);
|
|
|
|
io_put_req_deferred(link);
|
|
|
|
posted = true;
|
|
|
|
}
|
|
|
|
} else if (req->flags & REQ_F_LINK_TIMEOUT) {
|
2021-08-10 21:14:18 +00:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
|
|
|
spin_lock_irq(&ctx->timeout_lock);
|
2021-03-09 00:37:58 +00:00
|
|
|
posted = io_kill_linked_timeout(req);
|
2021-08-10 21:14:18 +00:00
|
|
|
spin_unlock_irq(&ctx->timeout_lock);
|
|
|
|
}
|
2021-05-16 21:58:05 +00:00
|
|
|
if (unlikely((req->flags & REQ_F_FAIL) &&
|
2021-04-11 00:46:39 +00:00
|
|
|
!(req->flags & REQ_F_HARDLINK))) {
|
2021-03-09 00:37:58 +00:00
|
|
|
posted |= (req->link != NULL);
|
|
|
|
io_fail_links(req);
|
|
|
|
}
|
|
|
|
return posted;
|
2019-05-10 22:07:28 +00:00
|
|
|
}
|
|
|
|
|
2020-06-30 12:20:43 +00:00
|
|
|
static struct io_kiocb *__io_req_find_next(struct io_kiocb *req)
|
2019-11-09 03:00:08 +00:00
|
|
|
{
|
2021-03-09 00:37:58 +00:00
|
|
|
struct io_kiocb *nxt;
|
2019-11-21 20:21:01 +00:00
|
|
|
|
2019-05-10 22:07:28 +00:00
|
|
|
/*
|
|
|
|
* If LINK is set, we have dependent requests in this chain. If we
|
|
|
|
* didn't fail this request, queue the first one up, moving any other
|
|
|
|
* dependencies to the next request. In case of failure, fail the rest
|
|
|
|
* of the chain.
|
|
|
|
*/
|
2021-08-15 09:40:25 +00:00
|
|
|
if (req->flags & IO_DISARM_MASK) {
|
2021-03-09 00:37:58 +00:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
bool posted;
|
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2021-03-09 00:37:58 +00:00
|
|
|
posted = io_disarm_next(req);
|
|
|
|
if (posted)
|
|
|
|
io_commit_cqring(req->ctx);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-03-09 00:37:58 +00:00
|
|
|
if (posted)
|
|
|
|
io_cqring_ev_posted(ctx);
|
2020-10-27 23:25:37 +00:00
|
|
|
}
|
2021-03-09 00:37:58 +00:00
|
|
|
nxt = req->link;
|
|
|
|
req->link = NULL;
|
|
|
|
return nxt;
|
2019-11-20 20:03:52 +00:00
|
|
|
}
|
2019-05-10 22:07:28 +00:00
|
|
|
|
2020-10-27 23:25:37 +00:00
|
|
|
static inline struct io_kiocb *io_req_find_next(struct io_kiocb *req)
|
2020-06-30 12:20:43 +00:00
|
|
|
{
|
2021-02-12 18:41:16 +00:00
|
|
|
if (likely(!(req->flags & (REQ_F_LINK|REQ_F_HARDLINK))))
|
2020-06-30 12:20:43 +00:00
|
|
|
return NULL;
|
|
|
|
return __io_req_find_next(req);
|
|
|
|
}
|
|
|
|
|
io_uring: fix __tctx_task_work() ctx race
There is an unlikely but possible race using a freed context. That's
because req->task_work.func() can free a request, but we won't
necessarily find a completion in submit_state.comp and so all ctx refs
may be put by the time we do mutex_lock(&ctx->uring_lock);
There are several reasons why it can miss going through
submit_state.comp: 1) req->task_work.func() didn't complete it itself,
but punted to iowq (e.g. reissue) and it got freed later, or a similar
situation with it overflowing and getting flushed by someone else, or
being submitted to IRQ completion, 2) As we don't hold the uring_lock,
someone else can do io_submit_flush_completions() and put our ref.
3) Bugs and code obscurities, e.g. failing to propagate issue_flags
properly.
One example is as follows
CPU1 | CPU2
=======================================================================
@req->task_work.func() |
-> @req overflowed, |
so submit_state.comp.nr == 0 |
| flush overflows, and free @req
| ctx refs == 0, free it
ctx is dead, but we do |
lock + flush + unlock |
So take a ctx reference for each new ctx we see in __tctx_task_work(),
and don't release it until we've done all our flushing.
Fixes: 65453d1efbd2 ("io_uring: enable req cache for task_work items")
Reported-by: syzbot+a157ac7c03a56397f553@syzkaller.appspotmail.com
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
[axboe: fold in my one-liner and fix ref mismatch]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-28 22:04:53 +00:00
|
|
|
static void ctx_flush_and_put(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
if (!ctx)
|
|
|
|
return;
|
2021-08-09 19:18:11 +00:00
|
|
|
if (ctx->submit_state.compl_nr) {
|
2021-02-28 22:04:53 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
2021-06-17 17:14:00 +00:00
|
|
|
io_submit_flush_completions(ctx);
|
2021-02-28 22:04:53 +00:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
}
|
|
|
|
percpu_ref_put(&ctx->refs);
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:20 +00:00
|
|
|
static void tctx_task_work(struct callback_head *cb)
|
io_uring: use task_work for links if possible
Currently links are always done in an async fashion, unless we catch them
inline after we successfully complete a request without having to resort
to blocking. This isn't necessarily the most efficient approach; it'd be
better if we could just use the task_work handling for this.
Outside of saving an async jump, we can also do less prep work for these
kinds of requests.
Running dependent links from the task_work handler yields some nice
performance benefits. As an example, examples/link-cp from the liburing
repository uses read+write links to implement a copy operation. Without
this patch, a cache cold 4G file read from a VM runs in about 3
seconds:
$ time examples/link-cp /data/file /dev/null
real 0m2.986s
user 0m0.051s
sys 0m2.843s
and a subsequent cache hot run looks like this:
$ time examples/link-cp /data/file /dev/null
real 0m0.898s
user 0m0.069s
sys 0m0.797s
With this patch in place, the cold case takes about 2.4 seconds:
$ time examples/link-cp /data/file /dev/null
real 0m2.400s
user 0m0.020s
sys 0m2.366s
and the cache hot case looks like this:
$ time examples/link-cp /data/file /dev/null
real 0m0.676s
user 0m0.010s
sys 0m0.665s
As expected, the (mostly) cache hot case yields the biggest improvement,
running about 25% faster with this change, while the cache cold case
yields about a 20% increase in performance. Outside of the performance
increase, we're using less CPU as well, as we're not using the async
offload threads at all for this anymore.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-06-25 21:39:59 +00:00
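As a rough illustration of the link-cp pattern referenced above, the sketch below chains one read and one write with IOSQE_IO_LINK, assuming liburing; it copies a single block, ignores short reads, and is not the actual examples/link-cp tool.

#include <liburing.h>

/* Copy one block from infd to outfd using a linked read+write pair. */
static int copy_block(struct io_uring *ring, int infd, int outfd,
		      void *buf, unsigned len, off_t off)
{
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int i, ret = 0;

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_read(sqe, infd, buf, len, off);
	/* Link: the write below is only started once this read completes */
	sqe->flags |= IOSQE_IO_LINK;

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_write(sqe, outfd, buf, len, off);

	io_uring_submit(ring);

	/* Reap both completions; a failed read cancels the linked write */
	for (i = 0; i < 2; i++) {
		io_uring_wait_cqe(ring, &cqe);
		if (cqe->res < 0)
			ret = cqe->res;
		io_uring_cqe_seen(ring, cqe);
	}
	return ret;
}

With this patch, the dependent write is queued from the task_work handler rather than an async offload thread, which is where the cited speedups come from.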
|
|
|
{
|
2021-06-17 17:14:07 +00:00
|
|
|
struct io_ring_ctx *ctx = NULL;
|
2021-06-17 17:14:06 +00:00
|
|
|
struct io_uring_task *tctx = container_of(cb, struct io_uring_task,
|
|
|
|
task_work);
|
2020-06-25 21:39:59 +00:00
|
|
|
|
2021-06-17 17:14:09 +00:00
|
|
|
while (1) {
|
2021-06-17 17:14:06 +00:00
|
|
|
struct io_wq_work_node *node;
|
|
|
|
|
|
|
|
spin_lock_irq(&tctx->task_lock);
|
2021-06-17 17:14:08 +00:00
|
|
|
node = tctx->task_list.first;
|
2021-06-17 17:14:06 +00:00
|
|
|
INIT_WQ_LIST(&tctx->task_list);
|
2021-08-10 16:53:55 +00:00
|
|
|
if (!node)
|
|
|
|
tctx->task_running = false;
|
2021-06-17 17:14:06 +00:00
|
|
|
spin_unlock_irq(&tctx->task_lock);
|
2021-08-10 16:53:55 +00:00
|
|
|
if (!node)
|
|
|
|
break;
|
2021-06-17 17:14:06 +00:00
|
|
|
|
2021-08-10 16:53:55 +00:00
|
|
|
do {
|
2021-06-17 17:14:06 +00:00
|
|
|
struct io_wq_work_node *next = node->next;
|
|
|
|
struct io_kiocb *req = container_of(node, struct io_kiocb,
|
|
|
|
io_task_work.node);
|
|
|
|
|
|
|
|
if (req->ctx != ctx) {
|
|
|
|
ctx_flush_and_put(ctx);
|
|
|
|
ctx = req->ctx;
|
|
|
|
percpu_ref_get(&ctx->refs);
|
|
|
|
}
|
2021-06-30 20:54:04 +00:00
|
|
|
req->io_task_work.func(req);
|
2021-06-17 17:14:06 +00:00
|
|
|
node = next;
|
2021-08-10 16:53:55 +00:00
|
|
|
} while (node);
|
|
|
|
|
2021-02-10 00:03:20 +00:00
|
|
|
cond_resched();
|
2021-06-17 17:14:06 +00:00
|
|
|
}
|
2021-06-17 17:14:07 +00:00
|
|
|
|
|
|
|
ctx_flush_and_put(ctx);
|
2021-02-10 00:03:20 +00:00
|
|
|
}
|
|
|
|
|
2021-07-01 12:26:05 +00:00
|
|
|
static void io_req_task_work_add(struct io_kiocb *req)
|
2021-02-10 00:03:20 +00:00
|
|
|
{
|
2021-03-19 17:22:44 +00:00
|
|
|
struct task_struct *tsk = req->task;
|
2021-02-10 00:03:20 +00:00
|
|
|
struct io_uring_task *tctx = tsk->io_uring;
|
2021-03-19 17:22:44 +00:00
|
|
|
enum task_work_notify_mode notify;
|
2021-07-01 12:26:05 +00:00
|
|
|
struct io_wq_work_node *node;
|
2021-02-16 17:33:53 +00:00
|
|
|
unsigned long flags;
|
2021-08-10 16:53:55 +00:00
|
|
|
bool running;
|
2021-02-10 00:03:20 +00:00
|
|
|
|
|
|
|
WARN_ON_ONCE(!tctx);
|
|
|
|
|
2021-02-16 17:33:53 +00:00
|
|
|
spin_lock_irqsave(&tctx->task_lock, flags);
|
2021-02-10 00:03:20 +00:00
|
|
|
wq_list_add_tail(&req->io_task_work.node, &tctx->task_list);
|
2021-08-10 16:53:55 +00:00
|
|
|
running = tctx->task_running;
|
|
|
|
if (!running)
|
|
|
|
tctx->task_running = true;
|
2021-02-16 17:33:53 +00:00
|
|
|
spin_unlock_irqrestore(&tctx->task_lock, flags);
|
2021-02-10 00:03:20 +00:00
|
|
|
|
|
|
|
/* task_work already pending, we're done */
|
2021-08-10 16:53:55 +00:00
|
|
|
if (running)
|
2021-07-01 12:26:05 +00:00
|
|
|
return;
|
2021-02-10 00:03:20 +00:00
|
|
|
|
2021-03-19 17:22:44 +00:00
|
|
|
/*
|
|
|
|
* SQPOLL kernel thread doesn't need notification, just a wakeup. For
|
|
|
|
* all other cases, use TWA_SIGNAL unconditionally to ensure we're
|
|
|
|
* processing task_work. There's no reliable way to tell if TWA_RESUME
|
|
|
|
* will do the job.
|
|
|
|
*/
|
|
|
|
notify = (req->ctx->flags & IORING_SETUP_SQPOLL) ? TWA_NONE : TWA_SIGNAL;
|
|
|
|
if (!task_work_add(tsk, &tctx->task_work, notify)) {
|
|
|
|
wake_up_process(tsk);
|
2021-07-01 12:26:05 +00:00
|
|
|
return;
|
2021-03-19 17:22:44 +00:00
|
|
|
}
|
2021-08-09 12:04:06 +00:00
|
|
|
|
2021-02-16 17:33:53 +00:00
|
|
|
spin_lock_irqsave(&tctx->task_lock, flags);
|
2021-08-10 16:53:55 +00:00
|
|
|
tctx->task_running = false;
|
2021-07-01 12:26:05 +00:00
|
|
|
node = tctx->task_list.first;
|
|
|
|
INIT_WQ_LIST(&tctx->task_list);
|
2021-02-16 17:33:53 +00:00
|
|
|
spin_unlock_irqrestore(&tctx->task_lock, flags);
|
2021-02-10 00:03:20 +00:00
|
|
|
|
2021-07-01 12:26:05 +00:00
|
|
|
while (node) {
|
|
|
|
req = container_of(node, struct io_kiocb, io_task_work.node);
|
|
|
|
node = node->next;
|
|
|
|
if (llist_add(&req->io_task_work.fallback_node,
|
|
|
|
&req->ctx->fallback_llist))
|
|
|
|
schedule_delayed_work(&req->ctx->fallback_work, 1);
|
|
|
|
}
|
2021-01-19 13:32:42 +00:00
|
|
|
}
|
|
|
|
|
2021-06-30 20:54:04 +00:00
|
|
|
static void io_req_task_cancel(struct io_kiocb *req)
|
2020-06-25 21:39:59 +00:00
|
|
|
{
|
2020-09-14 14:20:12 +00:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2020-06-25 21:39:59 +00:00
|
|
|
|
2021-02-28 22:35:09 +00:00
|
|
|
/* ctx is guaranteed to stay alive while we hold uring_lock */
|
2021-02-18 22:32:51 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
2021-03-19 17:22:40 +00:00
|
|
|
io_req_complete_failed(req, req->result);
|
2021-02-18 22:32:51 +00:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2020-06-25 21:39:59 +00:00
|
|
|
}
|
|
|
|
|
2021-06-30 20:54:04 +00:00
|
|
|
static void io_req_task_submit(struct io_kiocb *req)
|
2020-06-25 21:39:59 +00:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
2021-02-12 03:23:54 +00:00
|
|
|
/* ctx stays valid until unlock, even if we drop all ours ctx->refs */
|
2021-01-04 20:36:35 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
2021-08-09 12:04:19 +00:00
|
|
|
if (likely(!(req->task->flags & PF_EXITING)))
|
2021-02-10 00:03:22 +00:00
|
|
|
__io_queue_sqe(req);
|
2021-01-04 20:36:35 +00:00
|
|
|
else
|
2021-03-19 17:22:40 +00:00
|
|
|
io_req_complete_failed(req, -EFAULT);
|
2021-01-04 20:36:35 +00:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2020-06-25 21:39:59 +00:00
|
|
|
}
|
|
|
|
|
2021-02-28 22:35:10 +00:00
|
|
|
static void io_req_task_queue_fail(struct io_kiocb *req, int ret)
|
2020-06-25 21:39:59 +00:00
|
|
|
{
|
2021-02-28 22:35:10 +00:00
|
|
|
req->result = ret;
|
2021-06-30 20:54:04 +00:00
|
|
|
req->io_task_work.func = io_req_task_cancel;
|
2021-07-01 12:26:05 +00:00
|
|
|
io_req_task_work_add(req);
|
2020-06-25 21:39:59 +00:00
|
|
|
}
|
|
|
|
|
2021-02-28 22:35:10 +00:00
|
|
|
static void io_req_task_queue(struct io_kiocb *req)
|
2021-02-18 22:32:52 +00:00
|
|
|
{
|
2021-06-30 20:54:04 +00:00
|
|
|
req->io_task_work.func = io_req_task_submit;
|
2021-07-01 12:26:05 +00:00
|
|
|
io_req_task_work_add(req);
|
2021-02-18 22:32:52 +00:00
|
|
|
}
|
|
|
|
|
2021-07-27 16:25:55 +00:00
|
|
|
static void io_req_task_queue_reissue(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
req->io_task_work.func = io_queue_async_work;
|
|
|
|
io_req_task_work_add(req);
|
|
|
|
}
|
|
|
|
|
2020-10-27 23:25:37 +00:00
|
|
|
static inline void io_queue_next(struct io_kiocb *req)
|
2019-11-09 03:00:08 +00:00
|
|
|
{
|
2020-06-29 10:13:00 +00:00
|
|
|
struct io_kiocb *nxt = io_req_find_next(req);
|
2019-11-21 20:21:01 +00:00
|
|
|
|
|
|
|
if (nxt)
|
2020-06-27 11:04:55 +00:00
|
|
|
io_req_task_queue(nxt);
|
2019-11-09 03:00:08 +00:00
|
|
|
}
|
|
|
|
|
2020-06-28 09:52:32 +00:00
|
|
|
static void io_free_req(struct io_kiocb *req)
|
2020-03-03 18:33:13 +00:00
|
|
|
{
|
2020-06-28 09:52:32 +00:00
|
|
|
io_queue_next(req);
|
|
|
|
__io_free_req(req);
|
|
|
|
}
|
2020-03-13 21:31:04 +00:00
|
|
|
|
2020-06-28 09:52:33 +00:00
|
|
|
struct req_batch {
|
2020-07-18 08:32:52 +00:00
|
|
|
struct task_struct *task;
|
|
|
|
int task_refs;
|
2021-02-10 00:03:19 +00:00
|
|
|
int ctx_refs;
|
2020-06-28 09:52:33 +00:00
|
|
|
};
|
|
|
|
|
2020-07-18 08:32:52 +00:00
|
|
|
static inline void io_init_req_batch(struct req_batch *rb)
|
|
|
|
{
|
|
|
|
rb->task_refs = 0;
|
2021-02-10 00:03:16 +00:00
|
|
|
rb->ctx_refs = 0;
|
2020-07-18 08:32:52 +00:00
|
|
|
rb->task = NULL;
|
|
|
|
}
|
|
|
|
|
2020-06-28 09:52:33 +00:00
|
|
|
static void io_req_free_batch_finish(struct io_ring_ctx *ctx,
|
|
|
|
struct req_batch *rb)
|
|
|
|
{
|
2021-02-10 00:03:16 +00:00
|
|
|
if (rb->ctx_refs)
|
|
|
|
percpu_ref_put_many(&ctx->refs, rb->ctx_refs);
|
2021-08-09 12:04:20 +00:00
|
|
|
if (rb->task == current)
|
|
|
|
current->io_uring->cached_refs += rb->task_refs;
|
|
|
|
else if (rb->task)
|
|
|
|
io_put_task(rb->task, rb->task_refs);
|
2020-06-28 09:52:33 +00:00
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:18 +00:00
|
|
|
static void io_req_free_batch(struct req_batch *rb, struct io_kiocb *req,
|
|
|
|
struct io_submit_state *state)
|
2020-06-28 09:52:33 +00:00
|
|
|
{
|
2020-10-27 23:25:37 +00:00
|
|
|
io_queue_next(req);
|
2021-03-19 17:22:32 +00:00
|
|
|
io_dismantle_req(req);
|
2020-06-28 09:52:33 +00:00
|
|
|
|
2020-09-24 14:45:57 +00:00
|
|
|
if (req->task != rb->task) {
|
2021-01-25 11:42:21 +00:00
|
|
|
if (rb->task)
|
|
|
|
io_put_task(rb->task, rb->task_refs);
|
2020-09-24 14:45:57 +00:00
|
|
|
rb->task = req->task;
|
|
|
|
rb->task_refs = 0;
|
2020-07-18 08:32:52 +00:00
|
|
|
}
|
2020-09-24 14:45:57 +00:00
|
|
|
rb->task_refs++;
|
2021-02-10 00:03:16 +00:00
|
|
|
rb->ctx_refs++;
|
2020-07-18 08:32:52 +00:00
|
|
|
|
2021-02-12 03:23:50 +00:00
|
|
|
if (state->free_reqs != ARRAY_SIZE(state->reqs))
|
2021-02-10 00:03:18 +00:00
|
|
|
state->reqs[state->free_reqs++] = req;
|
2021-02-12 03:23:50 +00:00
|
|
|
else
|
2021-08-09 19:18:11 +00:00
|
|
|
list_add(&req->inflight_entry, &state->free_list);
|
2020-03-03 18:33:13 +00:00
|
|
|
}
static void io_submit_flush_completions(struct io_ring_ctx *ctx)
	__must_hold(&ctx->uring_lock)
{
	struct io_submit_state *state = &ctx->submit_state;
	int i, nr = state->compl_nr;
	struct req_batch rb;

	spin_lock(&ctx->completion_lock);
	for (i = 0; i < nr; i++) {
		struct io_kiocb *req = state->compl_reqs[i];

		__io_cqring_fill_event(ctx, req->user_data, req->result,
					req->compl.cflags);
	}
	io_commit_cqring(ctx);
	spin_unlock(&ctx->completion_lock);
	io_cqring_ev_posted(ctx);

	io_init_req_batch(&rb);
	for (i = 0; i < nr; i++) {
		struct io_kiocb *req = state->compl_reqs[i];

		if (req_ref_put_and_test(req))
			io_req_free_batch(&rb, req, &ctx->submit_state);
	}

	io_req_free_batch_finish(ctx, &rb);
	state->compl_nr = 0;
}

/*
 * Drop reference to request, return next in chain (if there is one) if this
 * was the last reference to this request.
 */
static inline struct io_kiocb *io_put_req_find_next(struct io_kiocb *req)
{
	struct io_kiocb *nxt = NULL;

	if (req_ref_put_and_test(req)) {
		nxt = io_req_find_next(req);
		__io_free_req(req);
	}
	return nxt;
}

static inline void io_put_req(struct io_kiocb *req)
{
	if (req_ref_put_and_test(req))
		io_free_req(req);
}
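
/* Like io_put_req(), but the final free is punted to task_work. */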
static inline void io_put_req_deferred(struct io_kiocb *req)
{
	if (req_ref_put_and_test(req)) {
		req->io_task_work.func = io_free_req;
		io_req_task_work_add(req);
	}
}

static unsigned io_cqring_events(struct io_ring_ctx *ctx)
{
	/* See comment at the top of this file */
	smp_rmb();
	return __io_cqring_events(ctx);
}

static inline unsigned int io_sqring_entries(struct io_ring_ctx *ctx)
{
	struct io_rings *rings = ctx->rings;

	/* make sure SQ entry isn't read before tail */
	return smp_load_acquire(&rings->sq.tail) - ctx->cached_sq_head;
}
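
/*
 * Release a selected buffer and encode its ID into the CQE flags as
 * (bid << IORING_CQE_BUFFER_SHIFT) | IORING_CQE_F_BUFFER.
 */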
static unsigned int io_put_kbuf(struct io_kiocb *req, struct io_buffer *kbuf)
{
	unsigned int cflags;

	cflags = kbuf->bid << IORING_CQE_BUFFER_SHIFT;
	cflags |= IORING_CQE_F_BUFFER;
	req->flags &= ~REQ_F_BUFFER_SELECTED;
	kfree(kbuf);
	return cflags;
}

static inline unsigned int io_put_rw_kbuf(struct io_kiocb *req)
{
	struct io_buffer *kbuf;

	if (likely(!(req->flags & REQ_F_BUFFER_SELECTED)))
		return 0;
	kbuf = (struct io_buffer *) (unsigned long) req->rw.addr;
	return io_put_kbuf(req, kbuf);
}
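
/*
 * Run pending task_work/signal notifications for the current task, if any.
 * Returns true if there was work to run.
 */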
static inline bool io_run_task_work(void)
{
	if (test_thread_flag(TIF_NOTIFY_SIGNAL) || current->task_works) {
		__set_current_state(TASK_RUNNING);
		tracehook_notify_signal();
		return true;
	}

	return false;
}

/*
 * Find and free completed poll iocbs
 */
static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
			       struct list_head *done)
{
	struct req_batch rb;
	struct io_kiocb *req;

	/* order with ->result store in io_complete_rw_iopoll() */
	smp_rmb();

	io_init_req_batch(&rb);
	while (!list_empty(done)) {
		req = list_first_entry(done, struct io_kiocb, inflight_entry);
		list_del(&req->inflight_entry);

		if (READ_ONCE(req->result) == -EAGAIN &&
		    !(req->flags & REQ_F_DONT_REISSUE)) {
			req->iopoll_completed = 0;
			io_req_task_queue_reissue(req);
			continue;
		}

		__io_cqring_fill_event(ctx, req->user_data, req->result,
					io_put_rw_kbuf(req));
		(*nr_events)++;

		if (req_ref_put_and_test(req))
			io_req_free_batch(&rb, req, &ctx->submit_state);
	}

	io_commit_cqring(ctx);
	io_cqring_ev_posted_iopoll(ctx);
	io_req_free_batch_finish(ctx, &rb);
}
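
/*
 * Poll the requests on ctx->iopoll_list: completed entries are moved to a
 * local list and reaped via io_iopoll_complete(), which bumps *nr_events
 * for each CQE it posts.
 */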
static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
			long min)
{
	struct io_kiocb *req, *tmp;
	LIST_HEAD(done);
	bool spin;

	/*
	 * Only spin for completions if we don't have multiple devices hanging
	 * off our complete list, and we're under the requested amount.
	 */
	spin = !ctx->poll_multi_queue && *nr_events < min;

	list_for_each_entry_safe(req, tmp, &ctx->iopoll_list, inflight_entry) {
		struct kiocb *kiocb = &req->rw.kiocb;
		int ret;

		/*
		 * Move completed and retryable entries to our local lists.
		 * If we find a request that requires polling, break out
		 * and complete those lists first, if we have entries there.
		 */
		if (READ_ONCE(req->iopoll_completed)) {
			list_move_tail(&req->inflight_entry, &done);
			continue;
		}
		if (!list_empty(&done))
			break;

		ret = kiocb->ki_filp->f_op->iopoll(kiocb, spin);
		if (unlikely(ret < 0))
			return ret;
		else if (ret)
			spin = false;

		/* iopoll may have completed current req */
		if (READ_ONCE(req->iopoll_completed))
			list_move_tail(&req->inflight_entry, &done);
	}

	if (!list_empty(&done))
		io_iopoll_complete(ctx, nr_events, &done);

	return 0;
}

/*
 * We can't just wait for polled events to come to us, we have to actively
 * find and complete them.
 */
static void io_iopoll_try_reap_events(struct io_ring_ctx *ctx)
{
	if (!(ctx->flags & IORING_SETUP_IOPOLL))
		return;

	mutex_lock(&ctx->uring_lock);
	while (!list_empty(&ctx->iopoll_list)) {
		unsigned int nr_events = 0;

		io_do_iopoll(ctx, &nr_events, 0);

		/* let it sleep and repeat later if can't complete a request */
		if (nr_events == 0)
			break;
		/*
		 * Ensure we allow local-to-the-cpu processing to take place,
		 * in this case we need to ensure that we reap all events.
		 * Also let task_work, etc. to progress by releasing the mutex
		 */
		if (need_resched()) {
			mutex_unlock(&ctx->uring_lock);
			cond_resched();
			mutex_lock(&ctx->uring_lock);
		}
	}
	mutex_unlock(&ctx->uring_lock);
}
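
/*
 * Loop on io_do_iopoll() until at least 'min' events have been reaped, an
 * error occurs, or a reschedule is needed. Already-pending CQEs
 * short-circuit the loop.
 */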
static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
{
	unsigned int nr_events = 0;
	int ret = 0;

	/*
	 * We disallow the app entering submit/complete with polling, but we
	 * still need to lock the ring to prevent racing with polled issue
	 * that got punted to a workqueue.
	 */
	mutex_lock(&ctx->uring_lock);
	/*
	 * Don't enter poll loop if we already have events pending.
	 * If we do, we can potentially be spinning for commands that
	 * already triggered a CQE (eg in error).
	 */
	if (test_bit(0, &ctx->check_cq_overflow))
		__io_cqring_overflow_flush(ctx, false);
	if (io_cqring_events(ctx))
		goto out;
	do {
		/*
		 * If a submit got punted to a workqueue, we can have the
		 * application entering polling for a command before it gets
		 * issued. That app will hold the uring_lock for the duration
		 * of the poll right here, so we need to take a breather every
		 * now and then to ensure that the issue has a chance to add
		 * the poll to the issued list. Otherwise we can spin here
		 * forever, while the workqueue is stuck trying to acquire the
		 * very same mutex.
		 */
		if (list_empty(&ctx->iopoll_list)) {
			u32 tail = ctx->cached_cq_tail;

			mutex_unlock(&ctx->uring_lock);
			io_run_task_work();
			mutex_lock(&ctx->uring_lock);

			/* some requests don't go through iopoll_list */
			if (tail != ctx->cached_cq_tail ||
			    list_empty(&ctx->iopoll_list))
				break;
		}
		ret = io_do_iopoll(ctx, &nr_events, min);
	} while (!ret && nr_events < min && !need_resched());
out:
	mutex_unlock(&ctx->uring_lock);
	return ret;
}
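
/*
 * Release the superblock write/freeze protection that was taken when a
 * REQ_F_ISREG write was issued.
 */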
static void kiocb_end_write(struct io_kiocb *req)
{
	/*
	 * Tell lockdep we inherited freeze protection from submission
	 * thread.
	 */
	if (req->flags & REQ_F_ISREG) {
		struct super_block *sb = file_inode(req->file)->i_sb;

		__sb_writers_acquired(sb, SB_FREEZE_WRITE);
		sb_end_write(sb);
	}
}

#ifdef CONFIG_BLOCK
static bool io_resubmit_prep(struct io_kiocb *req)
{
	struct io_async_rw *rw = req->async_data;

	if (!rw)
		return !io_req_prep_async(req);
	/* may have left rw->iter inconsistent on -EIOCBQUEUED */
	iov_iter_revert(&rw->iter, req->result - iov_iter_count(&rw->iter));
	return true;
}
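
/*
 * Decide whether an -EAGAIN'd read/write may be reissued from the current
 * context: only regular files and block devices qualify, and only while
 * the ring is alive and we are still in the issuing thread group.
 */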
static bool io_rw_should_reissue(struct io_kiocb *req)
{
	umode_t mode = file_inode(req->file)->i_mode;
	struct io_ring_ctx *ctx = req->ctx;

	if (!S_ISBLK(mode) && !S_ISREG(mode))
		return false;
	if ((req->flags & REQ_F_NOWAIT) || (io_wq_current_is_worker() &&
	    !(ctx->flags & IORING_SETUP_IOPOLL)))
		return false;
	/*
	 * If ref is dying, we might be running poll reap from the exit work.
	 * Don't attempt to reissue from that path, just let it fail with
	 * -EAGAIN.
	 */
	if (percpu_ref_is_dying(&ctx->refs))
		return false;
	/*
	 * Play it safe and assume not safe to re-import and reissue if we're
	 * not in the original thread group (or in task context).
	 */
	if (!same_thread_group(req->task, current) || !in_task())
		return false;
	return true;
}
#else
static bool io_resubmit_prep(struct io_kiocb *req)
{
	return false;
}
static bool io_rw_should_reissue(struct io_kiocb *req)
{
	return false;
}
#endif
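
/*
 * Common read/write completion handling: drop any write freeze protection
 * and either mark the request for reissue (returning true) or record the
 * result, flagging it as failed if it didn't complete as expected.
 */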
2021-08-10 21:15:25 +00:00
|
|
|
static bool __io_complete_rw_common(struct io_kiocb *req, long res)
|
2020-06-22 17:09:46 +00:00
|
|
|
{
|
2021-03-22 01:45:59 +00:00
|
|
|
if (req->rw.kiocb.ki_flags & IOCB_WRITE)
|
|
|
|
kiocb_end_write(req);
|
2021-03-22 01:58:34 +00:00
|
|
|
if (res != req->result) {
|
|
|
|
if ((res == -EAGAIN || res == -EOPNOTSUPP) &&
|
|
|
|
io_rw_should_reissue(req)) {
|
|
|
|
req->flags |= REQ_F_REISSUE;
|
2021-08-10 21:15:25 +00:00
|
|
|
return true;
|
2021-03-22 01:58:34 +00:00
|
|
|
}
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-08-10 21:15:25 +00:00
|
|
|
req->result = res;
|
2021-03-22 01:58:34 +00:00
|
|
|
}
|
2021-08-10 21:15:25 +00:00
|
|
|
return false;
|
|
|
|
}

static void io_req_task_complete(struct io_kiocb *req)
{
	__io_req_complete(req, 0, req->result, io_put_rw_kbuf(req));
}

static void __io_complete_rw(struct io_kiocb *req, long res, long res2,
			     unsigned int issue_flags)
{
	if (__io_complete_rw_common(req, res))
		return;
	io_req_task_complete(req);
}

static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
{
	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);

	if (__io_complete_rw_common(req, res))
		return;
	req->result = res;
	req->io_task_work.func = io_req_task_complete;
	io_req_task_work_add(req);
}
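
/*
 * IOPOLL completion callback: store the result and publish
 * ->iopoll_completed with a write barrier so that io_iopoll_complete()
 * observes a consistent ->result.
 */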
static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
{
	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);

	if (kiocb->ki_flags & IOCB_WRITE)
		kiocb_end_write(req);
	if (unlikely(res != req->result)) {
		if (!(res == -EAGAIN && io_rw_should_reissue(req) &&
		    io_resubmit_prep(req))) {
			req_set_fail(req);
			req->flags |= REQ_F_DONT_REISSUE;
		}
	}

	WRITE_ONCE(req->result, res);
	/* order with io_iopoll_complete() checking ->result */
	smp_wmb();
	WRITE_ONCE(req->iopoll_completed, 1);
}

/*
 * After the iocb has been issued, it's safe to be found on the poll list.
 * Adding the kiocb to the list AFTER submission ensures that we don't
 * find it from a io_do_iopoll() thread before the issuer is done
 * accessing the kiocb cookie.
 */
static void io_iopoll_req_issued(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;
	const bool in_async = io_wq_current_is_worker();

	/* workqueue context doesn't hold uring_lock, grab it now */
	if (unlikely(in_async))
		mutex_lock(&ctx->uring_lock);

	/*
	 * Track whether we have multiple files in our lists. This will impact
	 * how we do polling eventually, not spinning if we're on potentially
	 * different devices.
	 */
	if (list_empty(&ctx->iopoll_list)) {
		ctx->poll_multi_queue = false;
	} else if (!ctx->poll_multi_queue) {
		struct io_kiocb *list_req;
		unsigned int queue_num0, queue_num1;

		list_req = list_first_entry(&ctx->iopoll_list, struct io_kiocb,
						inflight_entry);

		if (list_req->file != req->file) {
			ctx->poll_multi_queue = true;
		} else {
			queue_num0 = blk_qc_t_to_queue_num(list_req->rw.kiocb.ki_cookie);
			queue_num1 = blk_qc_t_to_queue_num(req->rw.kiocb.ki_cookie);
			if (queue_num0 != queue_num1)
				ctx->poll_multi_queue = true;
		}
	}

	/*
	 * For fast devices, IO may have already completed. If it has, add
	 * it to the front so we find it first.
	 */
	if (READ_ONCE(req->iopoll_completed))
		list_add(&req->inflight_entry, &ctx->iopoll_list);
	else
		list_add_tail(&req->inflight_entry, &ctx->iopoll_list);

	if (unlikely(in_async)) {
		/*
		 * If IORING_SETUP_SQPOLL is enabled, sqes are either handle
		 * in sq thread task context or in io worker task context. If
		 * current task context is sq thread, we don't need to check
		 * whether should wake up sq thread.
		 */
		if ((ctx->flags & IORING_SETUP_SQPOLL) &&
		    wq_has_sleeper(&ctx->sq_data->wait))
			wake_up(&ctx->sq_data->wait);

		mutex_unlock(&ctx->uring_lock);
	}
}

static bool io_bdev_nowait(struct block_device *bdev)
{
	return !bdev || blk_queue_nowait(bdev_get_queue(bdev));
}

/*
 * If we tracked the file through the SCM inflight mechanism, we could support
 * any file. For now, just ensure that anything potentially problematic is done
 * inline.
 */
static bool __io_file_supports_nowait(struct file *file, int rw)
{
	umode_t mode = file_inode(file)->i_mode;

	if (S_ISBLK(mode)) {
		if (IS_ENABLED(CONFIG_BLOCK) &&
		    io_bdev_nowait(I_BDEV(file->f_mapping->host)))
			return true;
		return false;
	}
	if (S_ISSOCK(mode))
		return true;
	if (S_ISREG(mode)) {
		if (IS_ENABLED(CONFIG_BLOCK) &&
		    io_bdev_nowait(file->f_inode->i_sb->s_bdev) &&
		    file->f_op != &io_uring_fops)
			return true;
		return false;
	}

	/* any ->read/write should understand O_NONBLOCK */
	if (file->f_flags & O_NONBLOCK)
		return true;

	if (!(file->f_mode & FMODE_NOWAIT))
		return false;

	if (rw == READ)
		return file->f_op->read_iter != NULL;

	return file->f_op->write_iter != NULL;
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 17:46:33 +00:00
|
|
|
}

static bool io_file_supports_nowait(struct io_kiocb *req, int rw)
{
        if (rw == READ && (req->flags & REQ_F_NOWAIT_READ))
                return true;
        else if (rw == WRITE && (req->flags & REQ_F_NOWAIT_WRITE))
                return true;

        return __io_file_supports_nowait(req->file, rw);
}

static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
        struct io_ring_ctx *ctx = req->ctx;
        struct kiocb *kiocb = &req->rw.kiocb;
        struct file *file = req->file;
        unsigned ioprio;
        int ret;

        if (!io_req_ffs_set(req) && S_ISREG(file_inode(file)->i_mode))
                req->flags |= REQ_F_ISREG;

        kiocb->ki_pos = READ_ONCE(sqe->off);
        if (kiocb->ki_pos == -1 && !(file->f_mode & FMODE_STREAM)) {
                req->flags |= REQ_F_CUR_POS;
                kiocb->ki_pos = file->f_pos;
        }
        kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));
        kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
        ret = kiocb_set_rw_flags(kiocb, READ_ONCE(sqe->rw_flags));
        if (unlikely(ret))
                return ret;

        /* don't allow async punt for O_NONBLOCK or RWF_NOWAIT */
        if ((kiocb->ki_flags & IOCB_NOWAIT) || (file->f_flags & O_NONBLOCK))
                req->flags |= REQ_F_NOWAIT;

        ioprio = READ_ONCE(sqe->ioprio);
        if (ioprio) {
                ret = ioprio_check_cap(ioprio);
                if (ret)
                        return ret;

                kiocb->ki_ioprio = ioprio;
        } else
                kiocb->ki_ioprio = get_current_ioprio();

        if (ctx->flags & IORING_SETUP_IOPOLL) {
                if (!(kiocb->ki_flags & IOCB_DIRECT) ||
                    !kiocb->ki_filp->f_op->iopoll)
                        return -EOPNOTSUPP;

                kiocb->ki_flags |= IOCB_HIPRI;
                kiocb->ki_complete = io_complete_rw_iopoll;
io_uring: fix io_kiocb.flags modification race in IOPOLL mode
While testing io_uring on arm, we found that io_sq_thread() sometimes
keeps polling io requests even though there are no inflight io requests
in the block layer. After some investigation, we found a possible race
on io_kiocb.flags; see the racing code below:
1) at the end of io_write() or io_read()
req->flags &= ~REQ_F_NEED_CLEANUP;
kfree(iovec);
return ret;
2) in io_complete_rw_iopoll()
if (res != -EAGAIN)
req->flags |= REQ_F_IOPOLL_COMPLETED;
In IOPOLL mode, io requests may still be completed by interrupt, so the
code above is not safe: the concurrent modifications to req->flags are
neither protected by a lock nor atomic. I also disassembled
io_complete_rw_iopoll() on arm:
req->flags |= REQ_F_IOPOLL_COMPLETED;
0xffff000008387b18 <+76>: ldr w0, [x19,#104]
0xffff000008387b1c <+80>: orr w0, w0, #0x1000
0xffff000008387b20 <+84>: str w0, [x19,#104]
The "req->flags |= REQ_F_IOPOLL_COMPLETED;" is a separate load, modify
and store, which is clearly not atomic.
To fix this issue, add a new iopoll_completed field in io_kiocb to
indicate whether the io request has completed.
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-06-11 15:39:36 +00:00
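As a stand-alone illustration of the hazard described above (a hypothetical struct and flag values, not kernel code): both contexts perform a plain read-modify-write on the same flags word, so one of the two updates can be lost; writing a dedicated field, as the fix does with iopoll_completed, removes the shared read-modify-write entirely.

#include <stdio.h>

/* hypothetical sketch, not kernel code */
struct fake_req {
        unsigned int flags;             /* shared, updated from both contexts */
        int iopoll_completed;           /* dedicated flag, single writer */
};

#define FAKE_F_NEED_CLEANUP     0x100
#define FAKE_F_IOPOLL_COMPLETED 0x1000

static void task_side(struct fake_req *req)
{
        req->flags &= ~FAKE_F_NEED_CLEANUP;     /* load, clear bit, store */
}

static void irq_side(struct fake_req *req)
{
        /*
         * Also load, set bit, store. If this runs between task_side()'s
         * load and its store, the completion bit is overwritten and lost.
         * Writing req->iopoll_completed = 1 instead has no such window.
         */
        req->flags |= FAKE_F_IOPOLL_COMPLETED;
}

int main(void)
{
        struct fake_req req = { .flags = FAKE_F_NEED_CLEANUP };

        task_side(&req);
        irq_side(&req);
        printf("flags after both updates: 0x%x\n", req.flags);
        return 0;
}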
                req->iopoll_completed = 0;
        } else {
                if (kiocb->ki_flags & IOCB_HIPRI)
                        return -EINVAL;
                kiocb->ki_complete = io_complete_rw;
        }

        if (req->opcode == IORING_OP_READ_FIXED ||
            req->opcode == IORING_OP_WRITE_FIXED) {
                req->imu = NULL;
                io_req_set_rsrc_node(req);
        }

        req->rw.addr = READ_ONCE(sqe->addr);
        req->rw.len = READ_ONCE(sqe->len);
        req->buf_index = READ_ONCE(sqe->buf_index);
        return 0;
}

static inline void io_rw_done(struct kiocb *kiocb, ssize_t ret)
{
        switch (ret) {
        case -EIOCBQUEUED:
                break;
        case -ERESTARTSYS:
        case -ERESTARTNOINTR:
        case -ERESTARTNOHAND:
        case -ERESTART_RESTARTBLOCK:
                /*
                 * We can't just restart the syscall, since previously
                 * submitted sqes may already be in progress. Just fail this
                 * IO with EINTR.
                 */
                ret = -EINTR;
                fallthrough;
        default:
                kiocb->ki_complete(kiocb, ret, 0);
        }
}

static void kiocb_done(struct kiocb *kiocb, ssize_t ret,
                       unsigned int issue_flags)
{
        struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
        struct io_async_rw *io = req->async_data;
        bool check_reissue = kiocb->ki_complete == io_complete_rw;

        /* add previously done IO, if any */
        if (io && io->bytes_done > 0) {
                if (ret < 0)
                        ret = io->bytes_done;
                else
                        ret += io->bytes_done;
        }

        if (req->flags & REQ_F_CUR_POS)
                req->file->f_pos = kiocb->ki_pos;
        if (ret >= 0 && check_reissue)
                __io_complete_rw(req, ret, 0, issue_flags);
        else
                io_rw_done(kiocb, ret);

        if (check_reissue && (req->flags & REQ_F_REISSUE)) {
                req->flags &= ~REQ_F_REISSUE;
                if (io_resubmit_prep(req)) {
                        io_req_task_queue_reissue(req);
                } else {
                        req_set_fail(req);
                        __io_req_complete(req, issue_flags, ret,
                                          io_put_rw_kbuf(req));
                }
        }
}

io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
set up the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having set up an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to set up a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per-buffer size limit is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-09 16:16:05 +00:00
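For reference, a minimal userspace sketch of the register-then-read flow described above. It assumes the liburing helpers (io_uring_queue_init, io_uring_register_buffers, io_uring_prep_read_fixed) rather than the raw syscalls, and the queue depth, buffer size, read length and fallback file name are arbitrary illustrative values:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <liburing.h>

static int read_with_fixed_buffer(int fd)
{
        struct io_uring ring;
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        struct iovec iov;
        int ret;

        if (io_uring_queue_init(8, &ring, 0) < 0)
                return -1;

        /* register one 64KB buffer; it becomes fixed buffer index 0 */
        iov.iov_len = 65536;
        iov.iov_base = malloc(iov.iov_len);
        if (!iov.iov_base || io_uring_register_buffers(&ring, &iov, 1) < 0) {
                io_uring_queue_exit(&ring);
                return -1;
        }

        /* READ_FIXED: addr and len must fall inside the registered buffer */
        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read_fixed(sqe, fd, iov.iov_base, 4096, 0, 0);

        io_uring_submit(&ring);
        ret = io_uring_wait_cqe(&ring, &cqe);
        if (!ret) {
                ret = cqe->res;                 /* bytes read, or -errno */
                io_uring_cqe_seen(&ring, cqe);
        }

        io_uring_unregister_buffers(&ring);
        io_uring_queue_exit(&ring);
        free(iov.iov_base);
        return ret;
}

int main(int argc, char **argv)
{
        int fd = open(argc > 1 ? argv[1] : "/etc/hostname", O_RDONLY);

        printf("read returned %d\n", read_with_fixed_buffer(fd));
        return 0;
}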

static int __io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter,
                             struct io_mapped_ubuf *imu)
{
        size_t len = req->rw.len;
        u64 buf_end, buf_addr = req->rw.addr;
        size_t offset;

        if (unlikely(check_add_overflow(buf_addr, (u64)len, &buf_end)))
                return -EFAULT;
        /* not inside the mapped region */
        if (unlikely(buf_addr < imu->ubuf || buf_end > imu->ubuf_end))
                return -EFAULT;

        /*
         * May not be a start of buffer, set size appropriately
         * and advance us to the beginning.
         */
        offset = buf_addr - imu->ubuf;
        iov_iter_bvec(iter, rw, imu->bvec, imu->nr_bvecs, offset + len);
io_uring: don't use iov_iter_advance() for fixed buffers
Hrvoje reports that when a large fixed buffer is registered and IO is
being done to the latter pages of said buffer, the IO submission time
is much worse:
reading to the start of the buffer: 11238 ns
reading to the end of the buffer: 1039879 ns
In fact, it's worse by two orders of magnitude. The reason for that is
how io_uring figures out how to setup the iov_iter. We point the iter
at the first bvec, and then use iov_iter_advance() to fast-forward to
the offset within that buffer we need.
However, that is abysmally slow, as it entails iterating the bvecs
that we setup as part of buffer registration. There's really no need
to use this generic helper, as we know it's a BVEC type iterator, and
we also know that each bvec is PAGE_SIZE in size, apart from possibly
the first and last. Hence we can just use a shift on the offset to
find the right index, and then adjust the iov_iter appropriately.
After this fix, the timings are:
reading to the start of the buffer: 10135 ns
reading to the end of the buffer: 1377 ns
Or about a 755x improvement for the tail page.
Reported-by: Hrvoje Zeba <zeba.hrvoje@gmail.com>
Tested-by: Hrvoje Zeba <zeba.hrvoje@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-07-20 14:37:31 +00:00
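A stand-alone illustration of the constant-time index math used in the block below, with assumed values (4KB pages, a 2048-byte first bvec, an IO starting roughly 1MB into the registered buffer); the EX_ macros mirror PAGE_SHIFT/PAGE_MASK for the example only:

#include <stdio.h>

#define EX_PAGE_SHIFT   12                      /* assume 4KB pages */
#define EX_PAGE_MASK    (~((1UL << EX_PAGE_SHIFT) - 1))

int main(void)
{
        unsigned long first_bv_len = 2048;        /* first bvec may be short */
        unsigned long offset = (1UL << 20) + 100; /* IO begins ~1MB into the buffer */
        unsigned long seg_skip;

        /* same shape as the kernel code: skip vec 0, then whole pages by shift */
        offset -= first_bv_len;
        seg_skip = 1 + (offset >> EX_PAGE_SHIFT);

        printf("start at bvec index %lu, offset %lu within it\n",
               seg_skip, offset & ~EX_PAGE_MASK);
        return 0;
}

The target index falls out of one subtraction and one shift, independent of how many bvecs precede it, which is where the two-orders-of-magnitude difference above comes from.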

        if (offset) {
                /*
                 * Don't use iov_iter_advance() here, as it's really slow for
                 * using the latter parts of a big fixed buffer - it iterates
                 * over each segment manually. We can cheat a bit here, because
                 * we know that:
                 *
                 * 1) it's a BVEC iter, we set it up
                 * 2) all bvecs are PAGE_SIZE in size, except potentially the
                 *    first and last bvec
                 *
                 * So just find our index, and adjust the iterator afterwards.
                 * If the offset is within the first bvec (or the whole first
                 * bvec), just use iov_iter_advance(). This makes it easier
                 * since we can just skip the first segment, which may not
                 * be PAGE_SIZE aligned.
                 */
                const struct bio_vec *bvec = imu->bvec;

                if (offset <= bvec->bv_len) {
                        iov_iter_advance(iter, offset);
                } else {
                        unsigned long seg_skip;

                        /* skip first vec */
                        offset -= bvec->bv_len;
                        seg_skip = 1 + (offset >> PAGE_SHIFT);

                        iter->bvec = bvec + seg_skip;
                        iter->nr_segs -= seg_skip;
                        iter->count -= bvec->bv_len + offset;
                        iter->iov_offset = offset & ~PAGE_MASK;
                }
        }

        return 0;
}

static int io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter)
{
        struct io_ring_ctx *ctx = req->ctx;
        struct io_mapped_ubuf *imu = req->imu;
        u16 index, buf_index = req->buf_index;

        if (likely(!imu)) {
                if (unlikely(buf_index >= ctx->nr_user_bufs))
                        return -EFAULT;
                index = array_index_nospec(buf_index, ctx->nr_user_bufs);
                imu = READ_ONCE(ctx->user_bufs[index]);
                req->imu = imu;
        }
        return __io_import_fixed(req, rw, iter, imu);
}

io_uring: support buffer selection for OP_READ and OP_RECV
If a server process has tons of pending socket connections, generally
it uses epoll to wait for activity. When the socket is ready for reading
(or writing), the task can select a buffer and issue a recv/send on the
given fd.
Now that we have fast (non-async thread) support, a task can have tons
of reads or writes pending. But that means it needs buffers to back that
data, and if the number of connections is high enough, preallocating
them for all possible connections is infeasible.
With IORING_OP_PROVIDE_BUFFERS, an application can register buffers to
use for any request. The request then sets IOSQE_BUFFER_SELECT in the
sqe, and a given group ID in sqe->buf_group. When the fd becomes ready,
a free buffer from the specified group is selected. If none are
available, the request is terminated with -ENOBUFS. If successful, the
CQE on completion will contain the buffer ID chosen in the cqe->flags
member, encoded as:
(buffer_id << IORING_CQE_BUFFER_SHIFT) | IORING_CQE_F_BUFFER;
Once a buffer has been consumed by a request, it is no longer available
and must be registered again with IORING_OP_PROVIDE_BUFFERS.
Requests need to support this feature. For now, IORING_OP_READ and
IORING_OP_RECV support it. This is checked on SQE submission, a CQE with
res == -EOPNOTSUPP will be posted if attempted on unsupported requests.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-23 23:42:51 +00:00
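A minimal userspace sketch of the provide/select flow described above, again assuming the liburing helpers (io_uring_prep_provide_buffers, io_uring_prep_read); the group id, buffer count and buffer size are illustrative only, and error handling is trimmed:

#include <stdlib.h>
#include <liburing.h>

#define BGID    7                       /* arbitrary buffer group id */
#define NR_BUFS 16
#define BUF_LEN 4096

/* returns the chosen buffer id on success, a negative error otherwise */
int read_with_selected_buffer(struct io_uring *ring, int fd)
{
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        char *pool = malloc((size_t)NR_BUFS * BUF_LEN);
        int ret, bid = -1;

        if (!pool)
                return -1;

        /* hand NR_BUFS buffers (ids 0..NR_BUFS-1) to the kernel for group BGID */
        sqe = io_uring_get_sqe(ring);
        io_uring_prep_provide_buffers(sqe, pool, BUF_LEN, NR_BUFS, BGID, 0);
        io_uring_submit(ring);
        io_uring_wait_cqe(ring, &cqe);
        io_uring_cqe_seen(ring, cqe);

        /* read without naming a buffer; the kernel picks one from group BGID */
        sqe = io_uring_get_sqe(ring);
        io_uring_prep_read(sqe, fd, NULL, BUF_LEN, 0);
        sqe->flags |= IOSQE_BUFFER_SELECT;
        sqe->buf_group = BGID;
        io_uring_submit(ring);

        io_uring_wait_cqe(ring, &cqe);
        ret = cqe->res;                         /* bytes read, or e.g. -ENOBUFS */
        if (ret >= 0 && (cqe->flags & IORING_CQE_F_BUFFER))
                bid = cqe->flags >> IORING_CQE_BUFFER_SHIFT;
        io_uring_cqe_seen(ring, cqe);

        /* on success the data sits at pool + (size_t)bid * BUF_LEN */
        return ret >= 0 ? bid : ret;
}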

static void io_ring_submit_unlock(struct io_ring_ctx *ctx, bool needs_lock)
{
        if (needs_lock)
                mutex_unlock(&ctx->uring_lock);
}

static void io_ring_submit_lock(struct io_ring_ctx *ctx, bool needs_lock)
{
        /*
         * "Normal" inline submissions always hold the uring_lock, since we
         * grab it from the system call. Same is true for the SQPOLL offload.
         * The only exception is when we've detached the request and issue it
         * from an async worker thread, grab the lock for that case.
         */
        if (needs_lock)
                mutex_lock(&ctx->uring_lock);
}

static struct io_buffer *io_buffer_select(struct io_kiocb *req, size_t *len,
                                          int bgid, struct io_buffer *kbuf,
                                          bool needs_lock)
{
        struct io_buffer *head;

        if (req->flags & REQ_F_BUFFER_SELECTED)
                return kbuf;

        io_ring_submit_lock(req->ctx, needs_lock);

        lockdep_assert_held(&req->ctx->uring_lock);

        head = xa_load(&req->ctx->io_buffers, bgid);
        if (head) {
                if (!list_empty(&head->list)) {
                        kbuf = list_last_entry(&head->list, struct io_buffer,
                                               list);
                        list_del(&kbuf->list);
                } else {
                        kbuf = head;
                        xa_erase(&req->ctx->io_buffers, bgid);
                }
                if (*len > kbuf->len)
                        *len = kbuf->len;
        } else {
                kbuf = ERR_PTR(-ENOBUFS);
        }

        io_ring_submit_unlock(req->ctx, needs_lock);

        return kbuf;
}
|
|
|
|
|
2020-02-27 14:31:19 +00:00
|
|
|
static void __user *io_rw_buffer_select(struct io_kiocb *req, size_t *len,
					bool needs_lock)
{
	struct io_buffer *kbuf;
	u16 bgid;

	kbuf = (struct io_buffer *) (unsigned long) req->rw.addr;
	bgid = req->buf_index;
	kbuf = io_buffer_select(req, len, bgid, kbuf, needs_lock);
	if (IS_ERR(kbuf))
		return kbuf;
	req->rw.addr = (u64) (unsigned long) kbuf;
	req->flags |= REQ_F_BUFFER_SELECTED;
	return u64_to_user_ptr(kbuf->addr);
}

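/*
 * Userspace usage sketch (illustrative, not part of this file): a request
 * that sets IOSQE_BUFFER_SELECT and a group ID in sqe->buf_group has one of
 * the buffers previously handed over with IORING_OP_PROVIDE_BUFFERS picked
 * for it by io_buffer_select() above; if the group is empty the request
 * fails with -ENOBUFS, otherwise the chosen buffer ID is reported in
 * cqe->flags. A minimal sketch, assuming liburing is available; BGID and
 * the buffer pool below are made-up names:
 *
 *	#include <liburing.h>
 *
 *	#define BGID	1
 *	#define NR_BUFS	64
 *	#define BUF_LEN	4096
 *
 *	static int recv_with_selected_buffer(struct io_uring *ring, int sockfd)
 *	{
 *		static char bufs[NR_BUFS][BUF_LEN];
 *		struct io_uring_sqe *sqe;
 *		struct io_uring_cqe *cqe;
 *		int ret, bid = -1;
 *
 *		// hand NR_BUFS buffers of BUF_LEN bytes each to group BGID
 *		sqe = io_uring_get_sqe(ring);
 *		io_uring_prep_provide_buffers(sqe, bufs, BUF_LEN, NR_BUFS, BGID, 0);
 *		io_uring_submit(ring);
 *		io_uring_wait_cqe(ring, &cqe);
 *		io_uring_cqe_seen(ring, cqe);
 *
 *		// recv without naming a buffer; the kernel selects one from BGID
 *		sqe = io_uring_get_sqe(ring);
 *		io_uring_prep_recv(sqe, sockfd, NULL, BUF_LEN, 0);
 *		sqe->flags |= IOSQE_BUFFER_SELECT;
 *		sqe->buf_group = BGID;
 *		io_uring_submit(ring);
 *
 *		ret = io_uring_wait_cqe(ring, &cqe);
 *		if (ret < 0)
 *			return ret;
 *		if (cqe->flags & IORING_CQE_F_BUFFER)
 *			bid = cqe->flags >> IORING_CQE_BUFFER_SHIFT;
 *		ret = cqe->res;			// bytes received, or -errno
 *		io_uring_cqe_seen(ring, cqe);
 *		return ret < 0 ? ret : bid;	// data is in bufs[bid]
 *	}
 */
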
#ifdef CONFIG_COMPAT
static ssize_t io_compat_import(struct io_kiocb *req, struct iovec *iov,
				bool needs_lock)
{
	struct compat_iovec __user *uiov;
	compat_ssize_t clen;
	void __user *buf;
	ssize_t len;

	uiov = u64_to_user_ptr(req->rw.addr);
	if (!access_ok(uiov, sizeof(*uiov)))
		return -EFAULT;
	if (__get_user(clen, &uiov->iov_len))
		return -EFAULT;
	if (clen < 0)
		return -EINVAL;

	len = clen;
	buf = io_rw_buffer_select(req, &len, needs_lock);
	if (IS_ERR(buf))
		return PTR_ERR(buf);
	iov[0].iov_base = buf;
	iov[0].iov_len = (compat_size_t) len;
	return 0;
}
#endif

static ssize_t __io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
				      bool needs_lock)
{
	struct iovec __user *uiov = u64_to_user_ptr(req->rw.addr);
	void __user *buf;
	ssize_t len;

	if (copy_from_user(iov, uiov, sizeof(*uiov)))
		return -EFAULT;

	len = iov[0].iov_len;
	if (len < 0)
		return -EINVAL;
	buf = io_rw_buffer_select(req, &len, needs_lock);
	if (IS_ERR(buf))
		return PTR_ERR(buf);
	iov[0].iov_base = buf;
	iov[0].iov_len = len;
	return 0;
}

static ssize_t io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
				    bool needs_lock)
{
	if (req->flags & REQ_F_BUFFER_SELECTED) {
		struct io_buffer *kbuf;

		kbuf = (struct io_buffer *) (unsigned long) req->rw.addr;
		iov[0].iov_base = u64_to_user_ptr(kbuf->addr);
		iov[0].iov_len = kbuf->len;
		return 0;
	}
	if (req->rw.len != 1)
		return -EINVAL;

#ifdef CONFIG_COMPAT
	if (req->ctx->compat)
		return io_compat_import(req, iov, needs_lock);
#endif

	return __io_iov_buffer_select(req, iov, needs_lock);
}

static int io_import_iovec(int rw, struct io_kiocb *req, struct iovec **iovec,
			   struct iov_iter *iter, bool needs_lock)
{
	void __user *buf = u64_to_user_ptr(req->rw.addr);
	size_t sqe_len = req->rw.len;
	u8 opcode = req->opcode;
	ssize_t ret;

	if (opcode == IORING_OP_READ_FIXED || opcode == IORING_OP_WRITE_FIXED) {
		*iovec = NULL;
		return io_import_fixed(req, rw, iter);
	}

	/* buffer index only valid with fixed read/write, or buffer select */
	if (req->buf_index && !(req->flags & REQ_F_BUFFER_SELECT))
		return -EINVAL;

	if (opcode == IORING_OP_READ || opcode == IORING_OP_WRITE) {
		if (req->flags & REQ_F_BUFFER_SELECT) {
			buf = io_rw_buffer_select(req, &sqe_len, needs_lock);
			if (IS_ERR(buf))
				return PTR_ERR(buf);
			req->rw.len = sqe_len;
		}

		ret = import_single_range(rw, buf, sqe_len, *iovec, iter);
		*iovec = NULL;
		return ret;
	}

	if (req->flags & REQ_F_BUFFER_SELECT) {
		ret = io_iov_buffer_select(req, *iovec, needs_lock);
		if (!ret)
			iov_iter_init(iter, rw, *iovec, 1, (*iovec)->iov_len);
		*iovec = NULL;
		return ret;
	}

	return __import_iovec(rw, buf, sqe_len, UIO_FASTIOV, iovec, iter,
			      req->ctx->compat);
}

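/*
 * Userspace usage sketch (illustrative, not part of this file): the
 * IORING_OP_READ_FIXED / IORING_OP_WRITE_FIXED branch above serves buffers
 * that were pre-mapped via io_uring_register() with IORING_REGISTER_BUFFERS,
 * so no per-IO get_user_pages() is needed; the sqe names the buffer by index
 * and its address must point inside that buffer. A minimal sketch, assuming
 * liburing and an already set-up ring; names are illustrative:
 *
 *	#include <liburing.h>
 *	#include <errno.h>
 *	#include <stdlib.h>
 *
 *	static int read_fixed_example(struct io_uring *ring, int fd)
 *	{
 *		struct iovec iov = { .iov_len = 4096 };
 *		struct io_uring_sqe *sqe;
 *		struct io_uring_cqe *cqe;
 *		int ret;
 *
 *		iov.iov_base = malloc(iov.iov_len);
 *		if (!iov.iov_base)
 *			return -ENOMEM;
 *		// map the buffer into the kernel once, as buffer index 0
 *		ret = io_uring_register_buffers(ring, &iov, 1);
 *		if (ret)
 *			return ret;
 *
 *		// read 4096 bytes from offset 0 into registered buffer 0
 *		sqe = io_uring_get_sqe(ring);
 *		io_uring_prep_read_fixed(sqe, fd, iov.iov_base, iov.iov_len, 0, 0);
 *		io_uring_submit(ring);
 *
 *		ret = io_uring_wait_cqe(ring, &cqe);
 *		if (ret < 0)
 *			return ret;
 *		ret = cqe->res;		// bytes read, or -errno
 *		io_uring_cqe_seen(ring, cqe);
 *		return ret;
 *	}
 */
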
static inline loff_t *io_kiocb_ppos(struct kiocb *kiocb)
{
	return (kiocb->ki_filp->f_mode & FMODE_STREAM) ? NULL : &kiocb->ki_pos;
}

/*
 * For files that don't have ->read_iter() and ->write_iter(), handle them
 * by looping over ->read() or ->write() manually.
 */
static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
{
	struct kiocb *kiocb = &req->rw.kiocb;
	struct file *file = req->file;
	ssize_t ret = 0;

	/*
	 * Don't support polled IO through this interface, and we can't
	 * support non-blocking either. For the latter, this just causes
	 * the kiocb to be handled from an async context.
	 */
	if (kiocb->ki_flags & IOCB_HIPRI)
		return -EOPNOTSUPP;
	if (kiocb->ki_flags & IOCB_NOWAIT)
		return -EAGAIN;

	while (iov_iter_count(iter)) {
		struct iovec iovec;
		ssize_t nr;

		if (!iov_iter_is_bvec(iter)) {
			iovec = iov_iter_iovec(iter);
		} else {
			iovec.iov_base = u64_to_user_ptr(req->rw.addr);
			iovec.iov_len = req->rw.len;
		}

		if (rw == READ) {
			nr = file->f_op->read(file, iovec.iov_base,
					      iovec.iov_len, io_kiocb_ppos(kiocb));
		} else {
			nr = file->f_op->write(file, iovec.iov_base,
					       iovec.iov_len, io_kiocb_ppos(kiocb));
		}

		if (nr < 0) {
			if (!ret)
				ret = nr;
			break;
		}
		ret += nr;
		if (nr != iovec.iov_len)
			break;
		req->rw.len -= nr;
		req->rw.addr += nr;
		iov_iter_advance(iter, nr);
	}

	return ret;
}

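/*
 * For illustration (not part of this file): loop_rw_iter() above is what
 * lets io_uring read and write files whose driver only implements the
 * classic ->read()/->write() hooks. A hypothetical character-device driver
 * like the sketch below (names are made up) would take this looping path,
 * while any file that provides ->read_iter() goes through call_read_iter()
 * instead.
 *
 *	#include <linux/fs.h>
 *	#include <linux/module.h>
 *
 *	static ssize_t demo_read(struct file *file, char __user *buf,
 *				 size_t len, loff_t *ppos)
 *	{
 *		static const char msg[] = "hello\n";
 *
 *		return simple_read_from_buffer(buf, len, ppos, msg, sizeof(msg) - 1);
 *	}
 *
 *	static const struct file_operations demo_fops = {
 *		.owner	= THIS_MODULE,
 *		.read	= demo_read,	// no .read_iter: io_uring uses loop_rw_iter()
 *	};
 */
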
static void io_req_map_rw(struct io_kiocb *req, const struct iovec *iovec,
			  const struct iovec *fast_iov, struct iov_iter *iter)
{
	struct io_async_rw *rw = req->async_data;

	memcpy(&rw->iter, iter, sizeof(*iter));
	rw->free_iovec = iovec;
	rw->bytes_done = 0;
	/* can only be fixed buffers, no need to do anything */
	if (iov_iter_is_bvec(iter))
		return;
	if (!iovec) {
		unsigned iov_off = 0;

		rw->iter.iov = rw->fast_iov;
		if (iter->iov != fast_iov) {
			iov_off = iter->iov - fast_iov;
			rw->iter.iov += iov_off;
		}
		if (rw->fast_iov != fast_iov)
			memcpy(rw->fast_iov + iov_off, fast_iov + iov_off,
			       sizeof(struct iovec) * iter->nr_segs);
	} else {
		req->flags |= REQ_F_NEED_CLEANUP;
	}
}

static inline int io_alloc_async_data(struct io_kiocb *req)
{
	WARN_ON_ONCE(!io_op_defs[req->opcode].async_size);
	req->async_data = kmalloc(io_op_defs[req->opcode].async_size, GFP_KERNEL);
	return req->async_data == NULL;
}

static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
			     const struct iovec *fast_iov,
			     struct iov_iter *iter, bool force)
{
	if (!force && !io_op_defs[req->opcode].needs_async_setup)
		return 0;
	if (!req->async_data) {
		if (io_alloc_async_data(req)) {
			kfree(iovec);
			return -ENOMEM;
		}

		io_req_map_rw(req, iovec, fast_iov, iter);
	}
	return 0;
}

static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
{
	struct io_async_rw *iorw = req->async_data;
	struct iovec *iov = iorw->fast_iov;
	int ret;

	ret = io_import_iovec(rw, req, &iov, &iorw->iter, false);
	if (unlikely(ret < 0))
		return ret;

	iorw->bytes_done = 0;
	iorw->free_iovec = iov;
	if (iov)
		req->flags |= REQ_F_NEED_CLEANUP;
	return 0;
}

static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
	if (unlikely(!(req->file->f_mode & FMODE_READ)))
		return -EBADF;
	return io_prep_rw(req, sqe);
}

/*
 * This is our waitqueue callback handler, registered through lock_page_async()
 * when we initially tried to do the IO with the iocb armed with our waitqueue.
 * This gets called when the page is unlocked, and we generally expect that to
 * happen when the page IO is completed and the page is now uptodate. This will
 * queue a task_work based retry of the operation, attempting to copy the data
 * again. If the latter fails because the page was NOT uptodate, then we will
 * do a thread based blocking retry of the operation. That's the unexpected
 * slow path.
 */
static int io_async_buf_func(struct wait_queue_entry *wait, unsigned mode,
			     int sync, void *arg)
{
	struct wait_page_queue *wpq;
	struct io_kiocb *req = wait->private;
	struct wait_page_key *key = arg;

	wpq = container_of(wait, struct wait_page_queue, wait);

	if (!wake_page_match(wpq, key))
		return 0;

	req->rw.kiocb.ki_flags &= ~IOCB_WAITQ;
	list_del_init(&wait->entry);
	io_req_task_queue(req);
	return 1;
}

/*
 * This controls whether a given IO request should be armed for async page
 * based retry. If we return false here, the request is handed to the async
 * worker threads for retry. If we're doing buffered reads on a regular file,
 * we prepare a private wait_page_queue entry and retry the operation. This
 * will either succeed because the page is now uptodate and unlocked, or it
 * will register a callback when the page is unlocked at IO completion. Through
 * that callback, io_uring uses task_work to setup a retry of the operation.
 * That retry will attempt the buffered read again. The retry will generally
 * succeed, or in rare cases where it fails, we then fall back to using the
 * async worker threads for a blocking retry.
 */
static bool io_rw_should_retry(struct io_kiocb *req)
{
	struct io_async_rw *rw = req->async_data;
	struct wait_page_queue *wait = &rw->wpq;
	struct kiocb *kiocb = &req->rw.kiocb;

	/* never retry for NOWAIT, we just complete with -EAGAIN */
	if (req->flags & REQ_F_NOWAIT)
		return false;

	/* Only for buffered IO */
	if (kiocb->ki_flags & (IOCB_DIRECT | IOCB_HIPRI))
		return false;

	/*
	 * just use poll if we can, and don't attempt if the fs doesn't
	 * support callback based unlocks
	 */
	if (file_can_poll(req->file) || !(req->file->f_mode & FMODE_BUF_RASYNC))
		return false;

	wait->wait.func = io_async_buf_func;
	wait->wait.private = req;
	wait->wait.flags = 0;
	INIT_LIST_HEAD(&wait->wait.entry);
	kiocb->ki_flags |= IOCB_WAITQ;
	kiocb->ki_flags &= ~IOCB_NOWAIT;
	kiocb->ki_waitq = wait;
	return true;
}

static inline int io_iter_do_read(struct io_kiocb *req, struct iov_iter *iter)
{
	if (req->file->f_op->read_iter)
		return call_read_iter(req->file, &req->rw.kiocb, iter);
	else if (req->file->f_op->read)
		return loop_rw_iter(READ, req, iter);
	else
		return -EINVAL;
}

static int io_read(struct io_kiocb *req, unsigned int issue_flags)
{
	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
	struct kiocb *kiocb = &req->rw.kiocb;
	struct iov_iter __iter, *iter = &__iter;
	struct io_async_rw *rw = req->async_data;
	ssize_t io_size, ret, ret2;
	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;

	if (rw) {
		iter = &rw->iter;
		iovec = NULL;
	} else {
		ret = io_import_iovec(READ, req, &iovec, iter, !force_nonblock);
		if (ret < 0)
			return ret;
	}
	io_size = iov_iter_count(iter);
	req->result = io_size;

	/* Ensure we clear previously set non-block flag */
	if (!force_nonblock)
		kiocb->ki_flags &= ~IOCB_NOWAIT;
	else
		kiocb->ki_flags |= IOCB_NOWAIT;

	/* If the file doesn't support async, just async punt */
	if (force_nonblock && !io_file_supports_nowait(req, READ)) {
		ret = io_setup_async_rw(req, iovec, inline_vecs, iter, true);
		return ret ?: -EAGAIN;
	}

	ret = rw_verify_area(READ, req->file, io_kiocb_ppos(kiocb), io_size);
	if (unlikely(ret)) {
		kfree(iovec);
		return ret;
	}

	ret = io_iter_do_read(req, iter);

	if (ret == -EAGAIN || (req->flags & REQ_F_REISSUE)) {
		req->flags &= ~REQ_F_REISSUE;
		/* IOPOLL retry should happen for io-wq threads */
		if (!force_nonblock && !(req->ctx->flags & IORING_SETUP_IOPOLL))
			goto done;
		/* no retry on NONBLOCK nor RWF_NOWAIT */
		if (req->flags & REQ_F_NOWAIT)
			goto done;
		/* some cases will consume bytes even on error returns */
		iov_iter_revert(iter, io_size - iov_iter_count(iter));
		ret = 0;
	} else if (ret == -EIOCBQUEUED) {
		goto out_free;
	} else if (ret <= 0 || ret == io_size || !force_nonblock ||
		   (req->flags & REQ_F_NOWAIT) || !(req->flags & REQ_F_ISREG)) {
		/* read all, failed, already did sync or don't want to retry */
		goto done;
	}

	ret2 = io_setup_async_rw(req, iovec, inline_vecs, iter, true);
	if (ret2)
		return ret2;

	iovec = NULL;
	rw = req->async_data;
	/* now use our persistent iterator, if we aren't already */
	iter = &rw->iter;

	do {
		io_size -= ret;
		rw->bytes_done += ret;
		/* if we can retry, do so with the callbacks armed */
		if (!io_rw_should_retry(req)) {
			kiocb->ki_flags &= ~IOCB_WAITQ;
			return -EAGAIN;
		}

		/*
		 * Now retry read with the IOCB_WAITQ parts set in the iocb. If
		 * we get -EIOCBQUEUED, then we'll get a notification when the
		 * desired page gets unlocked. We can also get a partial read
		 * here, and if we do, then just retry at the new offset.
		 */
		ret = io_iter_do_read(req, iter);
		if (ret == -EIOCBQUEUED)
			return 0;
		/* we got some bytes, but not all. retry. */
		kiocb->ki_flags &= ~IOCB_WAITQ;
	} while (ret > 0 && ret < io_size);
done:
	kiocb_done(kiocb, ret, issue_flags);
out_free:
	/* it's faster to check here than delegate to kfree */
	if (iovec)
		kfree(iovec);
	return 0;
}

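/*
 * Userspace usage sketch (illustrative, not part of this file): a plain
 * IORING_OP_READ ends up in io_read() above. Data already in the page cache
 * completes inline on the nonblocking attempt; otherwise the request is
 * retried through the IOCB_WAITQ machinery or punted to an async worker.
 * A minimal sketch, assuming a reasonably recent liburing; names are
 * illustrative and error handling is abbreviated:
 *
 *	#include <liburing.h>
 *	#include <fcntl.h>
 *	#include <unistd.h>
 *
 *	static int read_example(const char *path)
 *	{
 *		struct io_uring ring;
 *		struct io_uring_sqe *sqe;
 *		struct io_uring_cqe *cqe;
 *		char buf[4096];
 *		int fd, ret;
 *
 *		fd = open(path, O_RDONLY);
 *		if (fd < 0)
 *			return -1;
 *		io_uring_queue_init(8, &ring, 0);
 *
 *		sqe = io_uring_get_sqe(&ring);
 *		io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
 *		io_uring_submit(&ring);
 *
 *		ret = io_uring_wait_cqe(&ring, &cqe);
 *		if (!ret) {
 *			ret = cqe->res;		// bytes read, or -errno
 *			io_uring_cqe_seen(&ring, cqe);
 *		}
 *
 *		io_uring_queue_exit(&ring);
 *		close(fd);
 *		return ret;
 *	}
 */
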
static int io_write_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
	if (unlikely(!(req->file->f_mode & FMODE_WRITE)))
		return -EBADF;
	return io_prep_rw(req, sqe);
}

static int io_write(struct io_kiocb *req, unsigned int issue_flags)
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 17:46:33 +00:00
|
|
|
{
|
|
|
|
struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
|
2019-12-20 15:45:55 +00:00
|
|
|
struct kiocb *kiocb = &req->rw.kiocb;
|
2020-08-13 15:47:43 +00:00
|
|
|
struct iov_iter __iter, *iter = &__iter;
|
2020-08-16 01:44:09 +00:00
|
|
|
struct io_async_rw *rw = req->async_data;
|
2020-08-01 10:50:02 +00:00
|
|
|
ssize_t ret, ret2, io_size;
|
2021-02-10 00:03:07 +00:00
|
|
|
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 17:46:33 +00:00
|
|
|
|
2020-11-07 13:16:27 +00:00
|
|
|
if (rw) {
|
2020-08-16 01:44:09 +00:00
|
|
|
iter = &rw->iter;
|
2020-11-07 13:16:27 +00:00
|
|
|
iovec = NULL;
|
|
|
|
} else {
|
|
|
|
ret = io_import_iovec(WRITE, req, &iovec, iter, !force_nonblock);
|
|
|
|
if (ret < 0)
|
|
|
|
return ret;
|
|
|
|
}
|
2020-11-07 13:16:26 +00:00
|
|
|
io_size = iov_iter_count(iter);
|
2020-08-01 10:50:02 +00:00
|
|
|
req->result = io_size;
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 17:46:33 +00:00
|
|
|
|
2019-12-18 19:19:41 +00:00
|
|
|
/* Ensure we clear previously set non-block flag */
|
|
|
|
if (!force_nonblock)
|
2020-09-30 19:57:53 +00:00
|
|
|
kiocb->ki_flags &= ~IOCB_NOWAIT;
|
|
|
|
else
|
|
|
|
kiocb->ki_flags |= IOCB_NOWAIT;
|
2019-12-18 19:19:41 +00:00
|
|
|
|
2020-06-21 10:09:51 +00:00
|
|
|
/* If the file doesn't support async, just async punt */
|
2021-08-09 12:04:03 +00:00
|
|
|
if (force_nonblock && !io_file_supports_nowait(req, WRITE))
|
2019-12-02 18:03:47 +00:00
|
|
|
goto copy_iov;
|
2019-01-19 05:56:34 +00:00
|
|
|
|
2019-12-10 03:16:22 +00:00
|
|
|
/* file path doesn't support NOWAIT for non-direct_IO */
|
|
|
|
if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
|
|
|
|
(req->flags & REQ_F_ISREG))
|
2019-12-02 18:03:47 +00:00
|
|
|
goto copy_iov;
|
2019-01-19 05:56:34 +00:00
|
|
|
|
2020-11-07 13:16:26 +00:00
|
|
|
ret = rw_verify_area(WRITE, req->file, io_kiocb_ppos(kiocb), io_size);
|
2020-08-01 10:50:02 +00:00
|
|
|
if (unlikely(ret))
|
|
|
|
goto out_free;
|
2020-03-20 17:23:41 +00:00
|
|
|
|
2020-08-01 10:50:02 +00:00
|
|
|
/*
|
|
|
|
* Open-code file_start_write here to grab freeze protection,
|
|
|
|
* which will be released by another thread in
|
|
|
|
* io_complete_rw(). Fool lockdep by telling it the lock got
|
|
|
|
* released so that it doesn't complain about the held lock when
|
|
|
|
* we return to userspace.
|
|
|
|
*/
|
|
|
|
if (req->flags & REQ_F_ISREG) {
|
2020-11-11 00:50:21 +00:00
|
|
|
sb_start_write(file_inode(req->file)->i_sb);
|
2020-08-01 10:50:02 +00:00
|
|
|
__sb_writers_release(file_inode(req->file)->i_sb,
|
|
|
|
SB_FREEZE_WRITE);
|
|
|
|
}
|
|
|
|
kiocb->ki_flags |= IOCB_WRITE;
|
2020-03-20 17:23:41 +00:00
|
|
|
|
2020-08-01 10:50:02 +00:00
|
|
|
if (req->file->f_op->write_iter)
|
2020-08-13 15:47:43 +00:00
|
|
|
ret2 = call_write_iter(req->file, kiocb, iter);
|
2020-08-05 10:53:50 +00:00
|
|
|
else if (req->file->f_op->write)
|
2020-10-22 20:14:12 +00:00
|
|
|
ret2 = loop_rw_iter(WRITE, req, iter);
|
2020-08-05 10:53:50 +00:00
|
|
|
else
|
|
|
|
ret2 = -EINVAL;
|
2020-03-20 17:23:41 +00:00
|
|
|
|
2021-04-08 00:54:39 +00:00
|
|
|
if (req->flags & REQ_F_REISSUE) {
|
|
|
|
req->flags &= ~REQ_F_REISSUE;
|
2021-04-02 02:41:15 +00:00
|
|
|
ret2 = -EAGAIN;
|
2021-04-08 00:54:39 +00:00
|
|
|
}
|
2021-04-02 02:41:15 +00:00
|
|
|
|
2020-08-01 10:50:02 +00:00
|
|
|
/*
|
|
|
|
* Raw bdev writes will return -EOPNOTSUPP for IOCB_NOWAIT. Just
|
|
|
|
* retry them without IOCB_NOWAIT.
|
|
|
|
*/
|
|
|
|
if (ret2 == -EOPNOTSUPP && (kiocb->ki_flags & IOCB_NOWAIT))
|
|
|
|
ret2 = -EAGAIN;
|
2021-02-04 13:52:05 +00:00
|
|
|
/* no retry on NONBLOCK nor RWF_NOWAIT */
|
|
|
|
if (ret2 == -EAGAIN && (req->flags & REQ_F_NOWAIT))
|
2020-09-02 15:30:31 +00:00
|
|
|
goto done;
|
2020-08-01 10:50:02 +00:00
|
|
|
if (!force_nonblock || ret2 != -EAGAIN) {
|
2020-08-27 22:40:19 +00:00
|
|
|
/* IOPOLL retry should happen for io-wq threads */
|
|
|
|
if ((req->ctx->flags & IORING_SETUP_IOPOLL) && ret2 == -EAGAIN)
|
|
|
|
goto copy_iov;
|
2020-09-02 15:30:31 +00:00
|
|
|
done:
|
2021-02-10 00:03:09 +00:00
|
|
|
kiocb_done(kiocb, ret2, issue_flags);
|
2020-08-01 10:50:02 +00:00
|
|
|
} else {
|
2019-12-02 18:03:47 +00:00
|
|
|
copy_iov:
|
2020-08-24 17:45:26 +00:00
|
|
|
/* some cases will consume bytes even on error returns */
|
2020-11-07 13:16:26 +00:00
|
|
|
iov_iter_revert(iter, io_size - iov_iter_count(iter));
|
2020-08-13 17:51:40 +00:00
|
|
|
ret = io_setup_async_rw(req, iovec, inline_vecs, iter, false);
|
2021-02-04 13:52:01 +00:00
|
|
|
return ret ?: -EAGAIN;
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
2019-01-19 05:56:34 +00:00
|
|
|
out_free:
|
2020-08-20 08:34:10 +00:00
|
|
|
/* it's reportedly faster than delegating the null check to kfree() */
|
2020-07-13 19:59:20 +00:00
|
|
|
if (iovec)
|
2020-06-18 07:01:56 +00:00
|
|
|
kfree(iovec);
|
2019-01-07 17:46:33 +00:00
|
|
|
return ret;
|
|
|
|
}
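The write path above tries the IO inline with IOCB_NOWAIT when force_nonblock is set, reverts the iterator and saves async state when the attempt would block, and only then hands the request to the async context. From userspace all of that is a single SQE. Below is a minimal, illustrative sketch (not part of this kernel file) driving it through liburing; it assumes the io_uring_prep_write() helper for IORING_OP_WRITE is available, and submit_one_write is a hypothetical helper name.
#include <errno.h>
#include <liburing.h>

/* Submit one IORING_OP_WRITE and reap its completion. */
static int submit_one_write(struct io_uring *ring, int fd, const void *buf,
			    unsigned int len, __u64 off)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct io_uring_cqe *cqe;
	int ret;

	if (!sqe)
		return -EBUSY;
	io_uring_prep_write(sqe, fd, buf, len, off);

	ret = io_uring_submit(ring);
	if (ret < 0)
		return ret;
	ret = io_uring_wait_cqe(ring, &cqe);
	if (ret < 0)
		return ret;
	/* cqe->res mirrors a write(2) return: bytes written or -errno */
	ret = cqe->res;
	io_uring_cqe_seen(ring, cqe);
	return ret;
}
Whether the kernel completed the write inline or punted it to a worker is invisible here; only the CQE result matters.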
|
|
|
|
|
2020-09-28 20:23:58 +00:00
|
|
|
static int io_renameat_prep(struct io_kiocb *req,
|
|
|
|
const struct io_uring_sqe *sqe)
|
|
|
|
{
|
|
|
|
struct io_rename *ren = &req->rename;
|
|
|
|
const char __user *oldf, *newf;
|
|
|
|
|
2021-06-23 15:04:13 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return -EINVAL;
|
|
|
|
if (sqe->ioprio || sqe->buf_index)
|
|
|
|
return -EINVAL;
|
2020-09-28 20:23:58 +00:00
|
|
|
if (unlikely(req->flags & REQ_F_FIXED_FILE))
|
|
|
|
return -EBADF;
|
|
|
|
|
|
|
|
ren->old_dfd = READ_ONCE(sqe->fd);
|
|
|
|
oldf = u64_to_user_ptr(READ_ONCE(sqe->addr));
|
|
|
|
newf = u64_to_user_ptr(READ_ONCE(sqe->addr2));
|
|
|
|
ren->new_dfd = READ_ONCE(sqe->len);
|
|
|
|
ren->flags = READ_ONCE(sqe->rename_flags);
|
|
|
|
|
|
|
|
ren->oldpath = getname(oldf);
|
|
|
|
if (IS_ERR(ren->oldpath))
|
|
|
|
return PTR_ERR(ren->oldpath);
|
|
|
|
|
|
|
|
ren->newpath = getname(newf);
|
|
|
|
if (IS_ERR(ren->newpath)) {
|
|
|
|
putname(ren->oldpath);
|
|
|
|
return PTR_ERR(ren->newpath);
|
|
|
|
}
|
|
|
|
|
|
|
|
req->flags |= REQ_F_NEED_CLEANUP;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
static int io_renameat(struct io_kiocb *req, unsigned int issue_flags)
|
2020-09-28 20:23:58 +00:00
|
|
|
{
|
|
|
|
struct io_rename *ren = &req->rename;
|
|
|
|
int ret;
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
if (issue_flags & IO_URING_F_NONBLOCK)
|
2020-09-28 20:23:58 +00:00
|
|
|
return -EAGAIN;
|
|
|
|
|
|
|
|
ret = do_renameat2(ren->old_dfd, ren->oldpath, ren->new_dfd,
|
|
|
|
ren->newpath, ren->flags);
|
|
|
|
|
|
|
|
req->flags &= ~REQ_F_NEED_CLEANUP;
|
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2020-09-28 20:23:58 +00:00
|
|
|
io_req_complete(req, ret);
|
|
|
|
return 0;
|
|
|
|
}
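io_renameat never runs under IO_URING_F_NONBLOCK, since do_renameat2() can block on path lookup, so the request is always executed from the async worker. Userspace only sees renameat2(2)-style arguments packed into the SQE fields that io_renameat_prep() reads above (fd, addr, addr2, len, rename_flags). An illustrative sketch, not part of this file, assuming a liburing recent enough to provide io_uring_prep_renameat(); queue_rename is a hypothetical helper and the ring is assumed to be initialized.
#include <errno.h>
#include <fcntl.h>	/* AT_FDCWD */
#include <liburing.h>

static int queue_rename(struct io_uring *ring, const char *oldpath,
			const char *newpath)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -EBUSY;
	/* last argument takes renameat2(2) flags, e.g. RENAME_NOREPLACE */
	io_uring_prep_renameat(sqe, AT_FDCWD, oldpath, AT_FDCWD, newpath, 0);
	return io_uring_submit(ring);
}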
|
|
|
|
|
2020-09-28 20:27:37 +00:00
|
|
|
static int io_unlinkat_prep(struct io_kiocb *req,
|
|
|
|
const struct io_uring_sqe *sqe)
|
|
|
|
{
|
|
|
|
struct io_unlink *un = &req->unlink;
|
|
|
|
const char __user *fname;
|
|
|
|
|
2021-06-23 15:07:45 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return -EINVAL;
|
|
|
|
if (sqe->ioprio || sqe->off || sqe->len || sqe->buf_index)
|
|
|
|
return -EINVAL;
|
2020-09-28 20:27:37 +00:00
|
|
|
if (unlikely(req->flags & REQ_F_FIXED_FILE))
|
|
|
|
return -EBADF;
|
|
|
|
|
|
|
|
un->dfd = READ_ONCE(sqe->fd);
|
|
|
|
|
|
|
|
un->flags = READ_ONCE(sqe->unlink_flags);
|
|
|
|
if (un->flags & ~AT_REMOVEDIR)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
fname = u64_to_user_ptr(READ_ONCE(sqe->addr));
|
|
|
|
un->filename = getname(fname);
|
|
|
|
if (IS_ERR(un->filename))
|
|
|
|
return PTR_ERR(un->filename);
|
|
|
|
|
|
|
|
req->flags |= REQ_F_NEED_CLEANUP;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
static int io_unlinkat(struct io_kiocb *req, unsigned int issue_flags)
|
2020-09-28 20:27:37 +00:00
|
|
|
{
|
|
|
|
struct io_unlink *un = &req->unlink;
|
|
|
|
int ret;
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
if (issue_flags & IO_URING_F_NONBLOCK)
|
2020-09-28 20:27:37 +00:00
|
|
|
return -EAGAIN;
|
|
|
|
|
|
|
|
if (un->flags & AT_REMOVEDIR)
|
|
|
|
ret = do_rmdir(un->dfd, un->filename);
|
|
|
|
else
|
|
|
|
ret = do_unlinkat(un->dfd, un->filename);
|
|
|
|
|
|
|
|
req->flags &= ~REQ_F_NEED_CLEANUP;
|
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2020-09-28 20:27:37 +00:00
|
|
|
io_req_complete(req, ret);
|
|
|
|
return 0;
|
|
|
|
}
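The unlink side mirrors rename: always a blocking context, with AT_REMOVEDIR selecting do_rmdir() over do_unlinkat(). A hedged userspace sketch (not part of this file), assuming liburing's io_uring_prep_unlinkat(); queue_unlink is a hypothetical helper.
#include <errno.h>
#include <fcntl.h>	/* AT_FDCWD, AT_REMOVEDIR */
#include <stdbool.h>
#include <liburing.h>

static int queue_unlink(struct io_uring *ring, const char *path, bool rmdir)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -EBUSY;
	io_uring_prep_unlinkat(sqe, AT_FDCWD, path, rmdir ? AT_REMOVEDIR : 0);
	return io_uring_submit(ring);
}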
|
|
|
|
|
2020-09-05 17:14:22 +00:00
|
|
|
static int io_shutdown_prep(struct io_kiocb *req,
|
|
|
|
const struct io_uring_sqe *sqe)
|
|
|
|
{
|
|
|
|
#if defined(CONFIG_NET)
|
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return -EINVAL;
|
|
|
|
if (sqe->ioprio || sqe->off || sqe->addr || sqe->rw_flags ||
|
|
|
|
sqe->buf_index)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
req->shutdown.how = READ_ONCE(sqe->len);
|
|
|
|
return 0;
|
|
|
|
#else
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
static int io_shutdown(struct io_kiocb *req, unsigned int issue_flags)
|
2020-09-05 17:14:22 +00:00
|
|
|
{
|
|
|
|
#if defined(CONFIG_NET)
|
|
|
|
struct socket *sock;
|
|
|
|
int ret;
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
if (issue_flags & IO_URING_F_NONBLOCK)
|
2020-09-05 17:14:22 +00:00
|
|
|
return -EAGAIN;
|
|
|
|
|
2020-12-16 20:44:05 +00:00
|
|
|
sock = sock_from_file(req->file);
|
2020-09-05 17:14:22 +00:00
|
|
|
if (unlikely(!sock))
|
2020-12-16 20:44:05 +00:00
|
|
|
return -ENOTSOCK;
|
2020-09-05 17:14:22 +00:00
|
|
|
|
|
|
|
ret = __sys_shutdown_sock(sock, req->shutdown.how);
|
2020-12-15 03:57:27 +00:00
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2020-09-05 17:14:22 +00:00
|
|
|
io_req_complete(req, ret);
|
|
|
|
return 0;
|
|
|
|
#else
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
#endif
|
|
|
|
}
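io_shutdown accepts only socket files (sock_from_file() or -ENOTSOCK), takes the 'how' argument from sqe->len, and defers to the async worker under NONBLOCK. Illustrative userspace sketch (not part of this file), assuming liburing's io_uring_prep_shutdown(); queue_shutdown_wr is a hypothetical helper.
#include <errno.h>
#include <sys/socket.h>
#include <liburing.h>

static int queue_shutdown_wr(struct io_uring *ring, int sockfd)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -EBUSY;
	/* SHUT_WR ends up in sqe->len, which io_shutdown_prep() reads */
	io_uring_prep_shutdown(sqe, sockfd, SHUT_WR);
	return io_uring_submit(ring);
}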
|
|
|
|
|
2020-05-17 11:18:06 +00:00
|
|
|
static int __io_splice_prep(struct io_kiocb *req,
|
|
|
|
const struct io_uring_sqe *sqe)
|
2020-02-24 08:32:45 +00:00
|
|
|
{
|
2021-06-24 14:09:57 +00:00
|
|
|
struct io_splice *sp = &req->splice;
|
2020-02-24 08:32:45 +00:00
|
|
|
unsigned int valid_flags = SPLICE_F_FD_IN_FIXED | SPLICE_F_ALL;
|
|
|
|
|
2020-06-03 15:03:22 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return -EINVAL;
|
2020-02-24 08:32:45 +00:00
|
|
|
|
|
|
|
sp->file_in = NULL;
|
|
|
|
sp->len = READ_ONCE(sqe->len);
|
|
|
|
sp->flags = READ_ONCE(sqe->splice_flags);
|
|
|
|
|
|
|
|
if (unlikely(sp->flags & ~valid_flags))
|
|
|
|
return -EINVAL;
|
|
|
|
|
io_uring: remove file batch-get optimisation
For requests with non-fixed files, instead of grabbing just one
reference, we grab as many references as there are remaining requests,
so the following requests using the same file can take one without
atomics.
However, it's not all win. If there is one request in the middle that
doesn't use a file, or uses a fixed file, we'll need to put back the
leftover references. Even worse, if an application submits requests
dealing with different files, it will do a put for each new request,
doubling the number of atomics needed. And even when it isn't used,
it still costs some cycles in the submission path.
If a file is used many times, it makes more sense to pre-register it;
if not, we may hit the pitfall described above. So this optimisation is a
matter of use case. Go with the simplest way code-wise and remove it.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-08-10 13:52:47 +00:00
|
|
|
sp->file_in = io_file_get(req->ctx, req, READ_ONCE(sqe->splice_fd_in),
|
2020-10-10 17:34:08 +00:00
|
|
|
(sp->flags & SPLICE_F_FD_IN_FIXED));
|
|
|
|
if (!sp->file_in)
|
|
|
|
return -EBADF;
|
2020-02-24 08:32:45 +00:00
|
|
|
req->flags |= REQ_F_NEED_CLEANUP;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2020-05-17 11:18:06 +00:00
|
|
|
static int io_tee_prep(struct io_kiocb *req,
|
|
|
|
const struct io_uring_sqe *sqe)
|
|
|
|
{
|
|
|
|
if (READ_ONCE(sqe->splice_off_in) || READ_ONCE(sqe->off))
|
|
|
|
return -EINVAL;
|
|
|
|
return __io_splice_prep(req, sqe);
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
static int io_tee(struct io_kiocb *req, unsigned int issue_flags)
|
2020-05-17 11:18:06 +00:00
|
|
|
{
|
|
|
|
struct io_splice *sp = &req->splice;
|
|
|
|
struct file *in = sp->file_in;
|
|
|
|
struct file *out = sp->file_out;
|
|
|
|
unsigned int flags = sp->flags & ~SPLICE_F_FD_IN_FIXED;
|
|
|
|
long ret = 0;
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
if (issue_flags & IO_URING_F_NONBLOCK)
|
2020-05-17 11:18:06 +00:00
|
|
|
return -EAGAIN;
|
|
|
|
if (sp->len)
|
|
|
|
ret = do_tee(in, out, sp->len, flags);
|
|
|
|
|
2021-03-19 17:22:43 +00:00
|
|
|
if (!(sp->flags & SPLICE_F_FD_IN_FIXED))
|
|
|
|
io_put_file(in);
|
2020-05-17 11:18:06 +00:00
|
|
|
req->flags &= ~REQ_F_NEED_CLEANUP;
|
|
|
|
|
|
|
|
if (ret != sp->len)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2020-06-22 15:17:17 +00:00
|
|
|
io_req_complete(req, ret);
|
2020-05-17 11:18:06 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int io_splice_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
|
|
|
|
{
|
2021-06-24 14:09:57 +00:00
|
|
|
struct io_splice *sp = &req->splice;
|
2020-05-17 11:18:06 +00:00
|
|
|
|
|
|
|
sp->off_in = READ_ONCE(sqe->splice_off_in);
|
|
|
|
sp->off_out = READ_ONCE(sqe->off);
|
|
|
|
return __io_splice_prep(req, sqe);
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
static int io_splice(struct io_kiocb *req, unsigned int issue_flags)
|
2020-02-24 08:32:45 +00:00
|
|
|
{
|
|
|
|
struct io_splice *sp = &req->splice;
|
|
|
|
struct file *in = sp->file_in;
|
|
|
|
struct file *out = sp->file_out;
|
|
|
|
unsigned int flags = sp->flags & ~SPLICE_F_FD_IN_FIXED;
|
|
|
|
loff_t *poff_in, *poff_out;
|
2020-05-04 20:00:54 +00:00
|
|
|
long ret = 0;
|
2020-02-24 08:32:45 +00:00
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
if (issue_flags & IO_URING_F_NONBLOCK)
|
2020-05-01 14:09:38 +00:00
|
|
|
return -EAGAIN;
|
2020-02-24 08:32:45 +00:00
|
|
|
|
|
|
|
poff_in = (sp->off_in == -1) ? NULL : &sp->off_in;
|
|
|
|
poff_out = (sp->off_out == -1) ? NULL : &sp->off_out;
|
2020-05-04 20:00:54 +00:00
|
|
|
|
2020-05-17 20:21:38 +00:00
|
|
|
if (sp->len)
|
2020-05-04 20:00:54 +00:00
|
|
|
ret = do_splice(in, poff_in, out, poff_out, sp->len, flags);
|
2020-02-24 08:32:45 +00:00
|
|
|
|
2021-03-19 17:22:43 +00:00
|
|
|
if (!(sp->flags & SPLICE_F_FD_IN_FIXED))
|
|
|
|
io_put_file(in);
|
2020-02-24 08:32:45 +00:00
|
|
|
req->flags &= ~REQ_F_NEED_CLEANUP;
|
|
|
|
|
|
|
|
if (ret != sp->len)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2020-06-22 15:17:17 +00:00
|
|
|
io_req_complete(req, ret);
|
2020-02-24 08:32:45 +00:00
|
|
|
return 0;
|
|
|
|
}
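For splice, an offset of -1 in the SQE means "no offset", which the code above translates into NULL offset pointers for do_splice(); SPLICE_F_FD_IN_FIXED lets the input fd refer to a registered file. A hedged userspace sketch (not part of this file), assuming liburing's io_uring_prep_splice(); queue_splice_to_pipe is a hypothetical helper.
#include <errno.h>
#include <liburing.h>

static int queue_splice_to_pipe(struct io_uring *ring, int file_fd,
				int pipe_wr_fd, unsigned int nbytes)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -EBUSY;
	/* read from offset 0 of the file, no offset (-1) on the pipe side */
	io_uring_prep_splice(sqe, file_fd, 0, pipe_wr_fd, -1, nbytes, 0);
	return io_uring_submit(ring);
}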
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
/*
|
|
|
|
* IORING_OP_NOP just posts a completion event, nothing else.
|
|
|
|
*/
|
2021-02-10 00:03:09 +00:00
|
|
|
static int io_nop(struct io_kiocb *req, unsigned int issue_flags)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
2019-01-09 15:59:42 +00:00
|
|
|
if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return -EINVAL;
|
|
|
|
|
2021-02-10 00:03:09 +00:00
|
|
|
__io_req_complete(req, issue_flags, 0, 0);
|
2019-01-07 17:46:33 +00:00
|
|
|
return 0;
|
|
|
|
}
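io_nop simply posts a completion (and is rejected on IOPOLL rings, where completions must come from ->iopoll), which makes it a handy smoke test for a freshly created ring. A complete, illustrative userspace program (not part of this file), assuming liburing.
#include <stdio.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;

	if (io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_nop(sqe);
	io_uring_submit(&ring);

	if (io_uring_wait_cqe(&ring, &cqe) == 0) {
		printf("nop completed, res=%d\n", cqe->res);
		io_uring_cqe_seen(&ring, cqe);
	}
	io_uring_queue_exit(&ring);
	return 0;
}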
|
|
|
|
|
2021-02-18 18:29:38 +00:00
|
|
|
static int io_fsync_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
|
2019-01-11 16:43:02 +00:00
|
|
|
{
|
2019-01-11 05:13:58 +00:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2019-01-11 16:43:02 +00:00
|
|
|
|
2019-03-13 18:39:28 +00:00
|
|
|
if (!req->file)
|
|
|
|
return -EBADF;
|
2019-01-11 16:43:02 +00:00
|
|
|
|
2019-01-11 05:13:58 +00:00
|
|
|
if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
|
2019-01-09 15:59:42 +00:00
|
|
|
return -EINVAL;
|
io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
setup the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having setup an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->buf_index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to set up a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per buffer size is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-09 16:16:05 +00:00
|
|
|
if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index))
|
2019-01-11 16:43:02 +00:00
|
|
|
return -EINVAL;
|
|
|
|
|
2019-12-16 18:55:28 +00:00
|
|
|
req->sync.flags = READ_ONCE(sqe->fsync_flags);
|
|
|
|
if (unlikely(req->sync.flags & ~IORING_FSYNC_DATASYNC))
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
req->sync.off = READ_ONCE(sqe->off);
|
|
|
|
req->sync.len = READ_ONCE(sqe->len);
|
2019-01-11 16:43:02 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
static int io_fsync(struct io_kiocb *req, unsigned int issue_flags)
|
2019-12-16 18:55:28 +00:00
|
|
|
{
|
|
|
|
loff_t end = req->sync.off + req->sync.len;
|
|
|
|
int ret;
|
|
|
|
|
2020-06-08 18:08:18 +00:00
|
|
|
/* fsync always requires a blocking context */
|
2021-02-10 00:03:07 +00:00
|
|
|
if (issue_flags & IO_URING_F_NONBLOCK)
|
2020-06-08 18:08:18 +00:00
|
|
|
return -EAGAIN;
|
|
|
|
|
2019-12-20 15:45:55 +00:00
|
|
|
ret = vfs_fsync_range(req->file, req->sync.off,
|
2019-12-16 18:55:28 +00:00
|
|
|
end > 0 ? end : LLONG_MAX,
|
|
|
|
req->sync.flags & IORING_FSYNC_DATASYNC);
|
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2020-06-22 15:17:17 +00:00
|
|
|
io_req_complete(req, ret);
|
2019-01-11 16:43:02 +00:00
|
|
|
return 0;
|
|
|
|
}
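fsync always needs a blocking context, so a nonblock issue immediately returns -EAGAIN and the request is retried from the worker; sqe->off/len can narrow the range handed to vfs_fsync_range(), and IORING_FSYNC_DATASYNC gives fdatasync() semantics. Illustrative userspace sketch (not part of this file), assuming liburing; queue_fsync is a hypothetical helper.
#include <errno.h>
#include <stdbool.h>
#include <liburing.h>

static int queue_fsync(struct io_uring *ring, int fd, bool datasync)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -EBUSY;
	io_uring_prep_fsync(sqe, fd, datasync ? IORING_FSYNC_DATASYNC : 0);
	return io_uring_submit(ring);
}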
|
|
|
|
|
2019-12-10 17:38:56 +00:00
|
|
|
static int io_fallocate_prep(struct io_kiocb *req,
|
|
|
|
const struct io_uring_sqe *sqe)
|
|
|
|
{
|
|
|
|
if (sqe->ioprio || sqe->buf_index || sqe->rw_flags)
|
|
|
|
return -EINVAL;
|
2020-06-03 15:03:22 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return -EINVAL;
|
2019-12-10 17:38:56 +00:00
|
|
|
|
|
|
|
req->sync.off = READ_ONCE(sqe->off);
|
|
|
|
req->sync.len = READ_ONCE(sqe->addr);
|
|
|
|
req->sync.mode = READ_ONCE(sqe->len);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
static int io_fallocate(struct io_kiocb *req, unsigned int issue_flags)
|
2019-04-09 20:56:44 +00:00
|
|
|
{
|
2020-06-08 18:08:18 +00:00
|
|
|
int ret;
|
|
|
|
|
2019-12-10 17:38:56 +00:00
|
|
|
/* fallocate always requires a blocking context */
|
2021-02-10 00:03:07 +00:00
|
|
|
if (issue_flags & IO_URING_F_NONBLOCK)
|
2019-04-09 20:56:44 +00:00
|
|
|
return -EAGAIN;
|
2020-06-08 18:08:18 +00:00
|
|
|
ret = vfs_fallocate(req->file, req->sync.mode, req->sync.off,
|
|
|
|
req->sync.len);
|
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2020-06-22 15:17:17 +00:00
|
|
|
io_req_complete(req, ret);
|
2019-04-09 20:56:44 +00:00
|
|
|
return 0;
|
|
|
|
}
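Note the unusual SQE packing that io_fallocate_prep() reads above: the fallocate mode travels in sqe->len and the length in sqe->addr. Liburing's prep helper hides that detail. Illustrative sketch (not part of this file), assuming io_uring_prep_fallocate(); queue_fallocate is a hypothetical helper.
#include <errno.h>
#include <sys/types.h>
#include <liburing.h>

static int queue_fallocate(struct io_uring *ring, int fd, off_t off, off_t len)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -EBUSY;
	io_uring_prep_fallocate(sqe, fd, 0 /* mode */, off, len);
	return io_uring_submit(ring);
}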
|
|
|
|
|
2020-06-03 15:03:24 +00:00
|
|
|
static int __io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
|
2019-12-16 05:13:43 +00:00
|
|
|
{
|
2020-01-09 00:47:02 +00:00
|
|
|
const char __user *fname;
|
2019-12-11 18:20:36 +00:00
|
|
|
int ret;
|
2019-12-16 05:13:43 +00:00
|
|
|
|
2021-08-09 12:04:16 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return -EINVAL;
|
2020-06-03 15:03:24 +00:00
|
|
|
if (unlikely(sqe->ioprio || sqe->buf_index))
|
2019-12-11 18:20:36 +00:00
|
|
|
return -EINVAL;
|
2020-06-03 15:03:24 +00:00
|
|
|
if (unlikely(req->flags & REQ_F_FIXED_FILE))
|
2020-02-07 04:31:40 +00:00
|
|
|
return -EBADF;
|
2019-12-03 01:50:25 +00:00
|
|
|
|
2020-06-03 15:03:24 +00:00
|
|
|
/* open.how should be already initialised */
|
|
|
|
if (!(req->open.how.flags & O_PATH) && force_o_largefile())
|
2020-04-08 15:20:54 +00:00
|
|
|
req->open.how.flags |= O_LARGEFILE;
|
2019-12-20 01:24:38 +00:00
|
|
|
|
2020-06-03 15:03:23 +00:00
|
|
|
req->open.dfd = READ_ONCE(sqe->fd);
|
|
|
|
fname = u64_to_user_ptr(READ_ONCE(sqe->addr));
|
2020-01-09 00:47:02 +00:00
|
|
|
req->open.filename = getname(fname);
|
2019-12-11 18:20:36 +00:00
|
|
|
if (IS_ERR(req->open.filename)) {
|
|
|
|
ret = PTR_ERR(req->open.filename);
|
|
|
|
req->open.filename = NULL;
|
|
|
|
return ret;
|
|
|
|
}
|
2020-03-20 01:23:18 +00:00
|
|
|
req->open.nofile = rlimit(RLIMIT_NOFILE);
|
2020-02-07 20:59:53 +00:00
|
|
|
req->flags |= REQ_F_NEED_CLEANUP;
|
2019-12-11 18:20:36 +00:00
|
|
|
return 0;
|
2019-12-03 01:50:25 +00:00
|
|
|
}
|
|
|
|
|
2020-06-03 15:03:24 +00:00
|
|
|
static int io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
|
|
|
|
{
|
2021-08-09 12:04:16 +00:00
|
|
|
u64 mode = READ_ONCE(sqe->len);
|
|
|
|
u64 flags = READ_ONCE(sqe->open_flags);
|
2020-06-03 15:03:24 +00:00
|
|
|
|
|
|
|
req->open.how = build_open_how(flags, mode);
|
|
|
|
return __io_openat_prep(req, sqe);
|
|
|
|
}
|
|
|
|
|
2020-01-09 00:59:24 +00:00
|
|
|
static int io_openat2_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
|
2019-04-19 19:38:09 +00:00
|
|
|
{
|
2020-01-09 00:59:24 +00:00
|
|
|
struct open_how __user *how;
|
|
|
|
size_t len;
|
2019-04-19 19:34:07 +00:00
|
|
|
int ret;
|
|
|
|
|
2020-01-09 00:59:24 +00:00
|
|
|
how = u64_to_user_ptr(READ_ONCE(sqe->addr2));
|
|
|
|
len = READ_ONCE(sqe->len);
|
|
|
|
if (len < OPEN_HOW_SIZE_VER0)
|
|
|
|
return -EINVAL;
|
2019-12-20 01:24:38 +00:00
|
|
|
|
2020-01-09 00:59:24 +00:00
|
|
|
ret = copy_struct_from_user(&req->open.how, sizeof(req->open.how), how,
|
|
|
|
len);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
2019-12-20 01:24:38 +00:00
|
|
|
|
2020-06-03 15:03:24 +00:00
|
|
|
return __io_openat_prep(req, sqe);
|
2020-01-09 00:59:24 +00:00
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
static int io_openat2(struct io_kiocb *req, unsigned int issue_flags)
|
2019-12-11 18:20:36 +00:00
|
|
|
{
|
|
|
|
struct open_flags op;
|
|
|
|
struct file *file;
|
io_uring: enable LOOKUP_CACHED path resolution for filename lookups
Instead of being pessimistic and assuming that path lookup will block, use
LOOKUP_CACHED to attempt just a cached lookup. This ensures that the
fast path is always done inline, and we only punt to async context if
IO is needed to satisfy the lookup.
For forced nonblock open attempts, mark the file O_NONBLOCK over the
actual ->open() call as well. We can safely clear this again before
doing fd_install(), so it'll never be user visible that we fiddled with
it.
This greatly improves the performance of file open where the dentry is
already cached:
Cached      5.10-git      5.10-git+LOOKUP_CACHED      Speedup
---------------------------------------------------------------
33% 1,014,975 900,474 1.1x
89% 545,466 292,937 1.9x
100% 435,636 151,475 2.9x
The more cache-hot we are, the more the inline LOOKUP_CACHED
optimization helps. This is unsurprising and expected, as a thread
offload becomes a more dominant part of the total overhead. If we look
at io_uring tracing, doing an IORING_OP_OPENAT on a file that isn't in
the dentry cache will yield:
275.550481: io_uring_create: ring 00000000ddda6278, fd 3 sq size 8, cq size 16, flags 0
275.550491: io_uring_submit_sqe: ring 00000000ddda6278, op 18, data 0x0, non block 1, sq_thread 0
275.550498: io_uring_queue_async_work: ring 00000000ddda6278, request 00000000c0267d17, flags 69760, normal queue, work 000000003d683991
275.550502: io_uring_cqring_wait: ring 00000000ddda6278, min_events 1
275.550556: io_uring_complete: ring 00000000ddda6278, user_data 0x0, result 4
which shows a failed nonblock lookup, then punt to worker, and then we
complete with fd == 4. This takes 65 usec in total. Re-running the same
test case again:
281.253956: io_uring_create: ring 0000000008207252, fd 3 sq size 8, cq size 16, flags 0
281.253967: io_uring_submit_sqe: ring 0000000008207252, op 18, data 0x0, non block 1, sq_thread 0
281.253973: io_uring_complete: ring 0000000008207252, user_data 0x0, result 4
shows the same request completing inline, also returning fd == 4. This
takes 6 usec.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-12-10 19:25:36 +00:00
|
|
|
bool nonblock_set;
|
|
|
|
bool resolve_nonblock;
|
2019-12-11 18:20:36 +00:00
|
|
|
int ret;
|
|
|
|
|
2020-01-09 00:59:24 +00:00
|
|
|
ret = build_open_flags(&req->open.how, &op);
|
2019-12-11 18:20:36 +00:00
|
|
|
if (ret)
|
|
|
|
goto err;
|
2020-12-10 19:25:36 +00:00
|
|
|
nonblock_set = op.open_flag & O_NONBLOCK;
|
|
|
|
resolve_nonblock = req->open.how.resolve & RESOLVE_CACHED;
|
2021-02-10 00:03:07 +00:00
|
|
|
if (issue_flags & IO_URING_F_NONBLOCK) {
|
2020-12-10 19:25:36 +00:00
|
|
|
/*
|
|
|
|
* Don't bother trying for O_TRUNC, O_CREAT, or O_TMPFILE open,
|
|
|
|
* it'll always return -EAGAIN.
|
|
|
|
*/
|
|
|
|
if (req->open.how.flags & (O_TRUNC | O_CREAT | O_TMPFILE))
|
|
|
|
return -EAGAIN;
|
|
|
|
op.lookup_flags |= LOOKUP_CACHED;
|
|
|
|
op.open_flag |= O_NONBLOCK;
|
|
|
|
}
|
2019-12-11 18:20:36 +00:00
|
|
|
|
2020-03-20 01:23:18 +00:00
|
|
|
ret = __get_unused_fd_flags(req->open.how.flags, req->open.nofile);
|
2019-12-11 18:20:36 +00:00
|
|
|
if (ret < 0)
|
|
|
|
goto err;
|
|
|
|
|
|
|
|
file = do_filp_open(req->open.dfd, req->open.filename, &op);
|
2021-06-24 14:10:00 +00:00
|
|
|
if (IS_ERR(file)) {
|
2020-11-13 23:48:44 +00:00
|
|
|
/*
|
2021-06-24 14:10:00 +00:00
|
|
|
* We could hang on to this 'fd' on retrying, but it seems like a
|
|
|
|
* marginal gain for something that is now known to be a slower
|
|
|
|
* path. So just put it, and we'll get a new one when we retry.
|
2020-11-13 23:48:44 +00:00
|
|
|
*/
|
2020-12-10 19:25:36 +00:00
|
|
|
put_unused_fd(ret);
|
|
|
|
|
2019-12-11 18:20:36 +00:00
|
|
|
ret = PTR_ERR(file);
|
2021-06-24 14:10:00 +00:00
|
|
|
/* only retry if RESOLVE_CACHED wasn't already set by application */
|
|
|
|
if (ret == -EAGAIN &&
|
|
|
|
(!resolve_nonblock && (issue_flags & IO_URING_F_NONBLOCK)))
|
|
|
|
return -EAGAIN;
|
|
|
|
goto err;
|
2019-12-11 18:20:36 +00:00
|
|
|
}
|
2021-06-24 14:10:00 +00:00
|
|
|
|
|
|
|
if ((issue_flags & IO_URING_F_NONBLOCK) && !nonblock_set)
|
|
|
|
file->f_flags &= ~O_NONBLOCK;
|
|
|
|
fsnotify_open(file);
|
|
|
|
fd_install(ret, file);
|
2019-12-11 18:20:36 +00:00
|
|
|
err:
|
|
|
|
putname(req->open.filename);
|
2020-02-07 20:59:53 +00:00
|
|
|
req->flags &= ~REQ_F_NEED_CLEANUP;
|
2019-12-11 18:20:36 +00:00
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-04-11 00:46:29 +00:00
|
|
|
__io_req_complete(req, issue_flags, ret, 0);
|
2019-12-11 18:20:36 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
static int io_openat(struct io_kiocb *req, unsigned int issue_flags)
|
2020-01-09 00:59:24 +00:00
|
|
|
{
|
2021-02-28 22:35:14 +00:00
|
|
|
return io_openat2(req, issue_flags);
|
2020-01-09 00:59:24 +00:00
|
|
|
}
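As the LOOKUP_CACHED commit message above explains, a nonblock open first attempts a purely cached lookup and only falls back to the worker when real IO is needed; an application can also set RESOLVE_CACHED itself in open_how to forbid the fallback entirely. An illustrative userspace sketch (not part of this file), assuming liburing's io_uring_prep_openat2() and <linux/openat2.h>; queue_open is a hypothetical helper.
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <linux/openat2.h>
#include <liburing.h>

static int queue_open(struct io_uring *ring, const char *path,
		      struct open_how *how)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -EBUSY;
	/* 'how' is copied from userspace at prep time during submission */
	memset(how, 0, sizeof(*how));
	how->flags = O_RDONLY | O_CLOEXEC;
	io_uring_prep_openat2(sqe, AT_FDCWD, path, how);
	/* the new fd (or -errno) comes back in cqe->res */
	return io_uring_submit(ring);
}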
|
|
|
|
|
2020-03-02 23:32:28 +00:00
|
|
|
static int io_remove_buffers_prep(struct io_kiocb *req,
|
|
|
|
const struct io_uring_sqe *sqe)
|
|
|
|
{
|
|
|
|
struct io_provide_buf *p = &req->pbuf;
|
|
|
|
u64 tmp;
|
|
|
|
|
|
|
|
if (sqe->ioprio || sqe->rw_flags || sqe->addr || sqe->len || sqe->off)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
tmp = READ_ONCE(sqe->fd);
|
|
|
|
if (!tmp || tmp > USHRT_MAX)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
memset(p, 0, sizeof(*p));
|
|
|
|
p->nbufs = tmp;
|
|
|
|
p->bgid = READ_ONCE(sqe->buf_group);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int __io_remove_buffers(struct io_ring_ctx *ctx, struct io_buffer *buf,
|
|
|
|
int bgid, unsigned nbufs)
|
|
|
|
{
|
|
|
|
unsigned i = 0;
|
|
|
|
|
|
|
|
/* shouldn't happen */
|
|
|
|
if (!nbufs)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
/* the head kbuf is the list itself */
|
|
|
|
while (!list_empty(&buf->list)) {
|
|
|
|
struct io_buffer *nxt;
|
|
|
|
|
|
|
|
nxt = list_first_entry(&buf->list, struct io_buffer, list);
|
|
|
|
list_del(&nxt->list);
|
|
|
|
kfree(nxt);
|
|
|
|
if (++i == nbufs)
|
|
|
|
return i;
|
|
|
|
}
|
|
|
|
i++;
|
|
|
|
kfree(buf);
|
2021-03-13 19:29:43 +00:00
|
|
|
xa_erase(&ctx->io_buffers, bgid);
|
2020-03-02 23:32:28 +00:00
|
|
|
|
|
|
|
return i;
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:09 +00:00
|
|
|
static int io_remove_buffers(struct io_kiocb *req, unsigned int issue_flags)
|
2020-03-02 23:32:28 +00:00
|
|
|
{
|
|
|
|
struct io_provide_buf *p = &req->pbuf;
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
struct io_buffer *head;
|
|
|
|
int ret = 0;
|
2021-02-10 00:03:07 +00:00
|
|
|
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
|
2020-03-02 23:32:28 +00:00
|
|
|
|
|
|
|
io_ring_submit_lock(ctx, !force_nonblock);
|
|
|
|
|
|
|
|
lockdep_assert_held(&ctx->uring_lock);
|
|
|
|
|
|
|
|
ret = -ENOENT;
|
2021-03-13 19:29:43 +00:00
|
|
|
head = xa_load(&ctx->io_buffers, p->bgid);
|
2020-03-02 23:32:28 +00:00
|
|
|
if (head)
|
|
|
|
ret = __io_remove_buffers(ctx, head, p->bgid, p->nbufs);
|
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2020-03-02 23:32:28 +00:00
|
|
|
|
2021-02-28 22:35:13 +00:00
|
|
|
/* complete before unlock, IOPOLL may need the lock */
|
|
|
|
__io_req_complete(req, issue_flags, ret, 0);
|
|
|
|
io_ring_submit_unlock(ctx, !force_nonblock);
|
2020-03-02 23:32:28 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2020-02-23 23:41:33 +00:00
|
|
|
static int io_provide_buffers_prep(struct io_kiocb *req,
|
|
|
|
const struct io_uring_sqe *sqe)
|
|
|
|
{
|
2021-04-15 12:07:39 +00:00
|
|
|
unsigned long size, tmp_check;
|
2020-02-23 23:41:33 +00:00
|
|
|
struct io_provide_buf *p = &req->pbuf;
|
|
|
|
u64 tmp;
|
|
|
|
|
|
|
|
if (sqe->ioprio || sqe->rw_flags)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
tmp = READ_ONCE(sqe->fd);
|
|
|
|
if (!tmp || tmp > USHRT_MAX)
|
|
|
|
return -E2BIG;
|
|
|
|
p->nbufs = tmp;
|
|
|
|
p->addr = READ_ONCE(sqe->addr);
|
|
|
|
p->len = READ_ONCE(sqe->len);
|
|
|
|
|
2021-04-15 12:07:39 +00:00
|
|
|
if (check_mul_overflow((unsigned long)p->len, (unsigned long)p->nbufs,
|
|
|
|
&size))
|
|
|
|
return -EOVERFLOW;
|
|
|
|
if (check_add_overflow((unsigned long)p->addr, size, &tmp_check))
|
|
|
|
return -EOVERFLOW;
|
|
|
|
|
2021-03-19 10:21:19 +00:00
|
|
|
size = (unsigned long)p->len * p->nbufs;
|
|
|
|
if (!access_ok(u64_to_user_ptr(p->addr), size))
|
2020-02-23 23:41:33 +00:00
|
|
|
return -EFAULT;
|
|
|
|
|
|
|
|
p->bgid = READ_ONCE(sqe->buf_group);
|
|
|
|
tmp = READ_ONCE(sqe->off);
|
|
|
|
if (tmp > USHRT_MAX)
|
|
|
|
return -E2BIG;
|
|
|
|
p->bid = tmp;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int io_add_buffers(struct io_provide_buf *pbuf, struct io_buffer **head)
|
|
|
|
{
|
|
|
|
struct io_buffer *buf;
|
|
|
|
u64 addr = pbuf->addr;
|
|
|
|
int i, bid = pbuf->bid;
|
|
|
|
|
|
|
|
for (i = 0; i < pbuf->nbufs; i++) {
|
|
|
|
buf = kmalloc(sizeof(*buf), GFP_KERNEL);
|
|
|
|
if (!buf)
|
|
|
|
break;
|
|
|
|
|
|
|
|
buf->addr = addr;
|
2021-05-05 12:47:06 +00:00
|
|
|
buf->len = min_t(__u32, pbuf->len, MAX_RW_COUNT);
|
2020-02-23 23:41:33 +00:00
|
|
|
buf->bid = bid;
|
|
|
|
addr += pbuf->len;
|
|
|
|
bid++;
|
|
|
|
if (!*head) {
|
|
|
|
INIT_LIST_HEAD(&buf->list);
|
|
|
|
*head = buf;
|
|
|
|
} else {
|
|
|
|
list_add_tail(&buf->list, &(*head)->list);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return i ? i : -ENOMEM;
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:09 +00:00
|
|
|
static int io_provide_buffers(struct io_kiocb *req, unsigned int issue_flags)
|
2020-02-23 23:41:33 +00:00
|
|
|
{
|
|
|
|
struct io_provide_buf *p = &req->pbuf;
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
struct io_buffer *head, *list;
|
|
|
|
int ret = 0;
|
2021-02-10 00:03:07 +00:00
|
|
|
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
|
2020-02-23 23:41:33 +00:00
|
|
|
|
|
|
|
io_ring_submit_lock(ctx, !force_nonblock);
|
|
|
|
|
|
|
|
lockdep_assert_held(&ctx->uring_lock);
|
|
|
|
|
2021-03-13 19:29:43 +00:00
|
|
|
list = head = xa_load(&ctx->io_buffers, p->bgid);
|
2020-02-23 23:41:33 +00:00
|
|
|
|
|
|
|
ret = io_add_buffers(p, &head);
|
2021-03-13 19:29:43 +00:00
|
|
|
if (ret >= 0 && !list) {
|
|
|
|
ret = xa_insert(&ctx->io_buffers, p->bgid, head, GFP_KERNEL);
|
|
|
|
if (ret < 0)
|
2020-03-02 23:32:28 +00:00
|
|
|
__io_remove_buffers(ctx, head, p->bgid, -1U);
|
2020-02-23 23:41:33 +00:00
|
|
|
}
|
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-02-28 22:35:13 +00:00
|
|
|
/* complete before unlock, IOPOLL may need the lock */
|
|
|
|
__io_req_complete(req, issue_flags, ret, 0);
|
|
|
|
io_ring_submit_unlock(ctx, !force_nonblock);
|
2020-02-23 23:41:33 +00:00
|
|
|
return 0;
|
2020-01-09 00:59:24 +00:00
|
|
|
}
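Provided buffers let the kernel pick a buffer at IO time: one SQE registers nbufs buffers under a group id, and any later request flagged IOSQE_BUFFER_SELECT with that buf_group gets one of them, with the chosen buffer id reported in cqe->flags. A hedged userspace sketch (not part of this file), assuming liburing; BGID, NR_BUFS, BUF_LEN and the helper name are made up for illustration, and the SQ ring is assumed to have room for two entries.
#include <errno.h>
#include <stdlib.h>
#include <liburing.h>

#define BGID	1
#define NR_BUFS	8
#define BUF_LEN	4096

static int read_with_provided_bufs(struct io_uring *ring, int fd)
{
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	void *pool = malloc((size_t)NR_BUFS * BUF_LEN);
	int ret, bid = -1;

	if (!pool)
		return -ENOMEM;

	/* one SQE hands all NR_BUFS buffers to the kernel, ids from 0 */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_provide_buffers(sqe, pool, BUF_LEN, NR_BUFS, BGID, 0);

	/* the read names only the group; the kernel selects a free buffer */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_read(sqe, fd, NULL, BUF_LEN, 0);
	sqe->flags |= IOSQE_BUFFER_SELECT;
	sqe->buf_group = BGID;

	ret = io_uring_submit_and_wait(ring, 2);
	if (ret < 0)
		return ret;
	while (io_uring_peek_cqe(ring, &cqe) == 0) {
		if (cqe->flags & IORING_CQE_F_BUFFER)
			bid = cqe->flags >> IORING_CQE_BUFFER_SHIFT;
		io_uring_cqe_seen(ring, cqe);
	}
	/* 'pool' intentionally stays allocated: the kernel still owns the
	 * remaining provided buffers until they are consumed or removed */
	return bid;	/* id of the buffer the read landed in, or -1 */
}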
|
|
|
|
|
2020-01-08 22:18:09 +00:00
|
|
|
static int io_epoll_ctl_prep(struct io_kiocb *req,
|
|
|
|
const struct io_uring_sqe *sqe)
|
|
|
|
{
|
|
|
|
#if defined(CONFIG_EPOLL)
|
|
|
|
if (sqe->ioprio || sqe->buf_index)
|
|
|
|
return -EINVAL;
|
2021-05-14 11:05:46 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
2020-06-03 15:03:22 +00:00
|
|
|
return -EINVAL;
|
2020-01-08 22:18:09 +00:00
|
|
|
|
|
|
|
req->epoll.epfd = READ_ONCE(sqe->fd);
|
|
|
|
req->epoll.op = READ_ONCE(sqe->len);
|
|
|
|
req->epoll.fd = READ_ONCE(sqe->off);
|
|
|
|
|
|
|
|
if (ep_op_has_event(req->epoll.op)) {
|
|
|
|
struct epoll_event __user *ev;
|
|
|
|
|
|
|
|
ev = u64_to_user_ptr(READ_ONCE(sqe->addr));
|
|
|
|
if (copy_from_user(&req->epoll.event, ev, sizeof(*ev)))
|
|
|
|
return -EFAULT;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
#else
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:09 +00:00
|
|
|
static int io_epoll_ctl(struct io_kiocb *req, unsigned int issue_flags)
|
2020-01-08 22:18:09 +00:00
|
|
|
{
|
|
|
|
#if defined(CONFIG_EPOLL)
|
|
|
|
struct io_epoll *ie = &req->epoll;
|
|
|
|
int ret;
|
2021-02-10 00:03:07 +00:00
|
|
|
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
|
2020-01-08 22:18:09 +00:00
|
|
|
|
|
|
|
ret = do_epoll_ctl(ie->epfd, ie->op, ie->fd, &ie->event, force_nonblock);
|
|
|
|
if (force_nonblock && ret == -EAGAIN)
|
|
|
|
return -EAGAIN;
|
|
|
|
|
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-02-10 00:03:09 +00:00
|
|
|
__io_req_complete(req, issue_flags, ret, 0);
|
2020-01-08 22:18:09 +00:00
|
|
|
return 0;
|
|
|
|
#else
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
#endif
|
|
|
|
}
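io_epoll_ctl forwards to do_epoll_ctl() with a nonblock flag, so uncontended table updates complete inline and only cases that would block bounce to the worker; the epoll_event itself is copied in at prep time. Illustrative userspace sketch (not part of this file), assuming liburing's io_uring_prep_epoll_ctl(); queue_epoll_add is a hypothetical helper.
#include <errno.h>
#include <sys/epoll.h>
#include <liburing.h>

static int queue_epoll_add(struct io_uring *ring, int epfd, int sockfd,
			   struct epoll_event *ev)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -EBUSY;
	ev->events = EPOLLIN;
	ev->data.fd = sockfd;
	/* io_epoll_ctl_prep() copies *ev when the SQE is submitted */
	io_uring_prep_epoll_ctl(sqe, epfd, sockfd, EPOLL_CTL_ADD, ev);
	return io_uring_submit(ring);
}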
|
|
|
|
|
2019-12-26 05:18:28 +00:00
|
|
|
static int io_madvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
|
|
|
|
{
|
|
|
|
#if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
|
|
|
|
if (sqe->ioprio || sqe->buf_index || sqe->off)
|
|
|
|
return -EINVAL;
|
2020-06-03 15:03:22 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return -EINVAL;
|
2019-12-26 05:18:28 +00:00
|
|
|
|
|
|
|
req->madvise.addr = READ_ONCE(sqe->addr);
|
|
|
|
req->madvise.len = READ_ONCE(sqe->len);
|
|
|
|
req->madvise.advice = READ_ONCE(sqe->fadvise_advice);
|
|
|
|
return 0;
|
|
|
|
#else
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
static int io_madvise(struct io_kiocb *req, unsigned int issue_flags)
|
2019-12-26 05:18:28 +00:00
|
|
|
{
|
|
|
|
#if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
|
|
|
|
struct io_madvise *ma = &req->madvise;
|
|
|
|
int ret;
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
if (issue_flags & IO_URING_F_NONBLOCK)
|
2019-12-26 05:18:28 +00:00
|
|
|
return -EAGAIN;
|
|
|
|
|
mm/madvise: pass mm to do_madvise
Patch series "introduce memory hinting API for external process", v9.
Now, we have MADV_PAGEOUT and MADV_COLD as madvise hinting API. With
that, an application could give the kernel hints about which memory ranges
are preferred to be reclaimed. However, on some platforms (e.g., Android),
the information required to make the hinting decision is not known to the app.
Instead, it is known to a centralized userspace daemon(e.g.,
ActivityManagerService), and that daemon must be able to initiate reclaim
on its own without any app involvement.
To solve the concern, this patch introduces new syscall -
process_madvise(2). Basically, it's the same as the madvise(2) syscall, but it
has some differences.
1. It needs pidfd of target process to provide the hint
2. It supports only MADV_{COLD|PAGEOUT|MERGEABLE|UNMERGEABLE} at this
moment. Other madvise hints will be opened up when there are explicit
requests from the community, to prevent unexpected bugs we couldn't support.
3. Only privileged processes can do something for other process's
address space.
For more detail of the new API, please see "mm: introduce external memory
hinting API" description in this patchset.
This patch (of 3):
In upcoming patches, do_madvise will be called from external process
context, so we shouldn't assume "current" is always the hinted process's
task_struct.
Furthermore, we must not access mm_struct via task->mm, but obtain it via
access_mm() once (in the following patch) and only use that pointer [1],
so pass it to do_madvise() as well. Note the vma->vm_mm pointers are
safe, so we can use them further down the call stack.
And let's pass current->mm as the argument to do_madvise, so it doesn't
change existing behavior but prepares for the next patch and keeps review easy.
[vbabka@suse.cz: changelog tweak]
[minchan@kernel.org: use current->mm for io_uring]
Link: http://lkml.kernel.org/r/20200423145215.72666-1-minchan@kernel.org
[akpm@linux-foundation.org: fix it for upstream changes]
[akpm@linux-foundation.org: whoops]
[rdunlap@infradead.org: add missing includes]
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Jann Horn <jannh@google.com>
Cc: Tim Murray <timmurray@google.com>
Cc: Daniel Colascione <dancol@google.com>
Cc: Sandeep Patil <sspatil@google.com>
Cc: Sonny Rao <sonnyrao@google.com>
Cc: Brian Geffon <bgeffon@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: John Dias <joaodias@google.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: SeongJae Park <sj38.park@gmail.com>
Cc: Christian Brauner <christian@brauner.io>
Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: Oleksandr Natalenko <oleksandr@redhat.com>
Cc: SeongJae Park <sjpark@amazon.de>
Cc: Christian Brauner <christian.brauner@ubuntu.com>
Cc: Florian Weimer <fw@deneb.enyo.de>
Cc: <linux-man@vger.kernel.org>
Link: https://lkml.kernel.org/r/20200901000633.1920247-1-minchan@kernel.org
Link: http://lkml.kernel.org/r/20200622192900.22757-1-minchan@kernel.org
Link: http://lkml.kernel.org/r/20200302193630.68771-2-minchan@kernel.org
Link: http://lkml.kernel.org/r/20200622192900.22757-2-minchan@kernel.org
Link: https://lkml.kernel.org/r/20200901000633.1920247-2-minchan@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-17 23:14:50 +00:00
|
|
|
ret = do_madvise(current->mm, ma->addr, ma->len, ma->advice);
|
2019-12-26 05:18:28 +00:00
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2020-06-22 15:17:17 +00:00
|
|
|
io_req_complete(req, ret);
|
2019-12-26 05:18:28 +00:00
|
|
|
return 0;
|
|
|
|
#else
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
#endif
|
|
|
|
}
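For reference, the madvise opcode handled above is driven from userspace by
filling the same SQE fields the prep handler reads (addr, len, and the shared
fadvise_advice field). A minimal sketch follows; prep_madvise_sqe() is a
hypothetical helper used only for illustration (liburing's
io_uring_prep_madvise() provides the equivalent), and ring setup, submission
and completion handling are omitted.

#include <string.h>
#include <linux/io_uring.h>

/* Illustrative only: fill a raw SQE for IORING_OP_MADVISE. */
static void prep_madvise_sqe(struct io_uring_sqe *sqe,
                             void *addr, unsigned int len, int advice)
{
        memset(sqe, 0, sizeof(*sqe));
        sqe->opcode = IORING_OP_MADVISE;
        sqe->fd = -1;                   /* madvise does not target an fd */
        sqe->addr = (unsigned long) addr;
        sqe->len = len;
        sqe->fadvise_advice = advice;   /* advice shares the fadvise field */
}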
|
|
|
|
|
2019-12-26 05:03:45 +00:00
|
|
|
static int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
|
|
|
|
{
|
|
|
|
if (sqe->ioprio || sqe->buf_index || sqe->addr)
|
|
|
|
return -EINVAL;
|
2020-06-03 15:03:22 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return -EINVAL;
|
2019-12-26 05:03:45 +00:00
|
|
|
|
|
|
|
req->fadvise.offset = READ_ONCE(sqe->off);
|
|
|
|
req->fadvise.len = READ_ONCE(sqe->len);
|
|
|
|
req->fadvise.advice = READ_ONCE(sqe->fadvise_advice);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
static int io_fadvise(struct io_kiocb *req, unsigned int issue_flags)
|
2019-12-26 05:03:45 +00:00
|
|
|
{
|
|
|
|
struct io_fadvise *fa = &req->fadvise;
|
|
|
|
int ret;
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
if (issue_flags & IO_URING_F_NONBLOCK) {
|
2020-02-01 16:22:49 +00:00
|
|
|
switch (fa->advice) {
|
|
|
|
case POSIX_FADV_NORMAL:
|
|
|
|
case POSIX_FADV_RANDOM:
|
|
|
|
case POSIX_FADV_SEQUENTIAL:
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
return -EAGAIN;
|
|
|
|
}
|
|
|
|
}
|
2019-12-26 05:03:45 +00:00
|
|
|
|
|
|
|
ret = vfs_fadvise(req->file, fa->offset, fa->len, fa->advice);
|
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-04-11 00:46:29 +00:00
|
|
|
__io_req_complete(req, issue_flags, ret, 0);
|
2019-12-26 05:03:45 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2019-12-14 04:18:10 +00:00
|
|
|
static int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
|
|
|
|
{
|
2021-05-14 11:05:46 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
2020-06-03 15:03:22 +00:00
|
|
|
return -EINVAL;
|
2019-12-14 04:18:10 +00:00
|
|
|
if (sqe->ioprio || sqe->buf_index)
|
|
|
|
return -EINVAL;
|
2020-04-08 05:58:46 +00:00
|
|
|
if (req->flags & REQ_F_FIXED_FILE)
|
2020-02-07 04:31:40 +00:00
|
|
|
return -EBADF;
|
2019-12-14 04:18:10 +00:00
|
|
|
|
2020-05-23 04:31:16 +00:00
|
|
|
req->statx.dfd = READ_ONCE(sqe->fd);
|
|
|
|
req->statx.mask = READ_ONCE(sqe->len);
|
2020-05-23 04:31:18 +00:00
|
|
|
req->statx.filename = u64_to_user_ptr(READ_ONCE(sqe->addr));
|
2020-05-23 04:31:16 +00:00
|
|
|
req->statx.buffer = u64_to_user_ptr(READ_ONCE(sqe->addr2));
|
|
|
|
req->statx.flags = READ_ONCE(sqe->statx_flags);
|
2019-12-14 04:18:10 +00:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
static int io_statx(struct io_kiocb *req, unsigned int issue_flags)
|
2019-12-14 04:18:10 +00:00
|
|
|
{
|
2020-05-23 04:31:16 +00:00
|
|
|
struct io_statx *ctx = &req->statx;
|
2019-12-14 04:18:10 +00:00
|
|
|
int ret;
|
|
|
|
|
2021-03-22 01:58:30 +00:00
|
|
|
if (issue_flags & IO_URING_F_NONBLOCK)
|
2019-12-14 04:18:10 +00:00
|
|
|
return -EAGAIN;
|
|
|
|
|
2020-05-23 04:31:18 +00:00
|
|
|
ret = do_statx(ctx->dfd, ctx->filename, ctx->flags, ctx->mask,
|
|
|
|
ctx->buffer);
|
2019-12-14 04:18:10 +00:00
|
|
|
|
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2020-06-22 15:17:17 +00:00
|
|
|
io_req_complete(req, ret);
|
2019-12-14 04:18:10 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2019-12-11 21:02:38 +00:00
|
|
|
static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
|
|
|
|
{
|
2020-09-05 17:36:08 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
2020-06-03 15:03:22 +00:00
|
|
|
return -EINVAL;
|
2019-12-11 21:02:38 +00:00
|
|
|
if (sqe->ioprio || sqe->off || sqe->addr || sqe->len ||
|
|
|
|
sqe->rw_flags || sqe->buf_index)
|
|
|
|
return -EINVAL;
|
2020-04-08 05:58:46 +00:00
|
|
|
if (req->flags & REQ_F_FIXED_FILE)
|
2020-02-07 04:31:40 +00:00
|
|
|
return -EBADF;
|
2019-12-11 21:02:38 +00:00
|
|
|
|
|
|
|
req->close.fd = READ_ONCE(sqe->fd);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:09 +00:00
|
|
|
static int io_close(struct io_kiocb *req, unsigned int issue_flags)
|
2019-12-11 21:02:38 +00:00
|
|
|
{
|
2021-01-19 22:50:37 +00:00
|
|
|
struct files_struct *files = current->files;
|
2020-06-08 18:08:17 +00:00
|
|
|
struct io_close *close = &req->close;
|
2021-01-19 22:50:37 +00:00
|
|
|
struct fdtable *fdt;
|
2021-04-11 00:46:28 +00:00
|
|
|
struct file *file = NULL;
|
|
|
|
int ret = -EBADF;
|
2019-12-11 21:02:38 +00:00
|
|
|
|
2021-01-19 22:50:37 +00:00
|
|
|
spin_lock(&files->file_lock);
|
|
|
|
fdt = files_fdtable(files);
|
|
|
|
if (close->fd >= fdt->max_fds) {
|
|
|
|
spin_unlock(&files->file_lock);
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
file = fdt->fd[close->fd];
|
2021-04-11 00:46:28 +00:00
|
|
|
if (!file || file->f_op == &io_uring_fops) {
|
2021-01-19 22:50:37 +00:00
|
|
|
spin_unlock(&files->file_lock);
|
|
|
|
file = NULL;
|
|
|
|
goto err;
|
2020-06-08 18:08:17 +00:00
|
|
|
}
|
2019-12-11 21:02:38 +00:00
|
|
|
|
|
|
|
/* if the file has a flush method, be safe and punt to async */
|
2021-02-10 00:03:07 +00:00
|
|
|
if (file->f_op->flush && (issue_flags & IO_URING_F_NONBLOCK)) {
|
2021-01-19 22:50:37 +00:00
|
|
|
spin_unlock(&files->file_lock);
|
2020-05-26 17:34:06 +00:00
|
|
|
return -EAGAIN;
|
2020-03-02 20:45:16 +00:00
|
|
|
}
|
2019-12-11 21:02:38 +00:00
|
|
|
|
2021-01-19 22:50:37 +00:00
|
|
|
ret = __close_fd_get_file(close->fd, &file);
|
|
|
|
spin_unlock(&files->file_lock);
|
|
|
|
if (ret < 0) {
|
|
|
|
if (ret == -ENOENT)
|
|
|
|
ret = -EBADF;
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
|
2020-06-08 18:08:17 +00:00
|
|
|
/* No ->flush() or already async, safely close from here */
|
2021-01-19 22:50:37 +00:00
|
|
|
ret = filp_close(file, current->files);
|
|
|
|
err:
|
2020-06-08 18:08:17 +00:00
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-01-19 22:50:37 +00:00
|
|
|
if (file)
|
|
|
|
fput(file);
|
2021-02-10 00:03:09 +00:00
|
|
|
__io_req_complete(req, issue_flags, ret, 0);
|
2020-02-01 00:16:48 +00:00
|
|
|
return 0;
|
2019-12-11 21:02:38 +00:00
|
|
|
}
|
|
|
|
|
2021-02-18 18:29:38 +00:00
|
|
|
static int io_sfr_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
|
2019-04-09 20:56:44 +00:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
|
|
|
if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return -EINVAL;
|
|
|
|
if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index))
|
|
|
|
return -EINVAL;
|
|
|
|
|
2019-12-16 18:55:28 +00:00
|
|
|
req->sync.off = READ_ONCE(sqe->off);
|
|
|
|
req->sync.len = READ_ONCE(sqe->len);
|
|
|
|
req->sync.flags = READ_ONCE(sqe->sync_range_flags);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
static int io_sync_file_range(struct io_kiocb *req, unsigned int issue_flags)
|
2019-12-16 18:55:28 +00:00
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
2020-06-08 18:08:18 +00:00
|
|
|
/* sync_file_range always requires a blocking context */
|
2021-02-10 00:03:07 +00:00
|
|
|
if (issue_flags & IO_URING_F_NONBLOCK)
|
2020-06-08 18:08:18 +00:00
|
|
|
return -EAGAIN;
|
|
|
|
|
2019-12-20 15:45:55 +00:00
|
|
|
ret = sync_file_range(req->file, req->sync.off, req->sync.len,
|
2019-12-16 18:55:28 +00:00
|
|
|
req->sync.flags);
|
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2020-06-22 15:17:17 +00:00
|
|
|
io_req_complete(req, ret);
|
2019-04-09 20:56:44 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2020-03-04 07:53:52 +00:00
|
|
|
#if defined(CONFIG_NET)
|
2020-02-28 07:36:36 +00:00
|
|
|
static int io_setup_async_msg(struct io_kiocb *req,
|
|
|
|
struct io_async_msghdr *kmsg)
|
|
|
|
{
|
2020-08-16 01:44:09 +00:00
|
|
|
struct io_async_msghdr *async_msg = req->async_data;
|
|
|
|
|
|
|
|
if (async_msg)
|
2020-02-28 07:36:36 +00:00
|
|
|
return -EAGAIN;
|
2020-08-16 01:44:09 +00:00
|
|
|
if (io_alloc_async_data(req)) {
|
2021-02-05 00:58:00 +00:00
|
|
|
kfree(kmsg->free_iov);
|
2020-02-28 07:36:36 +00:00
|
|
|
return -ENOMEM;
|
|
|
|
}
|
2020-08-16 01:44:09 +00:00
|
|
|
async_msg = req->async_data;
|
2020-02-28 07:36:36 +00:00
|
|
|
req->flags |= REQ_F_NEED_CLEANUP;
|
2020-08-16 01:44:09 +00:00
|
|
|
memcpy(async_msg, kmsg, sizeof(*kmsg));
|
2021-02-05 00:57:58 +00:00
|
|
|
async_msg->msg.msg_name = &async_msg->addr;
|
2021-02-05 00:58:00 +00:00
|
|
|
/* if we're using fast_iov, set it to the new one */
|
|
|
|
if (!async_msg->free_iov)
|
|
|
|
async_msg->msg.msg_iter.iov = async_msg->fast_iov;
|
|
|
|
|
2020-02-28 07:36:36 +00:00
|
|
|
return -EAGAIN;
|
|
|
|
}
|
|
|
|
|
2020-07-12 17:41:06 +00:00
|
|
|
static int io_sendmsg_copy_hdr(struct io_kiocb *req,
|
|
|
|
struct io_async_msghdr *iomsg)
|
|
|
|
{
|
|
|
|
iomsg->msg.msg_name = &iomsg->addr;
|
2021-02-05 00:58:00 +00:00
|
|
|
iomsg->free_iov = iomsg->fast_iov;
|
2020-07-12 17:41:06 +00:00
|
|
|
return sendmsg_copy_msghdr(&iomsg->msg, req->sr_msg.umsg,
|
2021-02-05 00:58:00 +00:00
|
|
|
req->sr_msg.msg_flags, &iomsg->free_iov);
|
2020-07-12 17:41:06 +00:00
|
|
|
}
|
|
|
|
|
2021-02-18 18:29:44 +00:00
|
|
|
static int io_sendmsg_prep_async(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = io_sendmsg_copy_hdr(req, req->async_data);
|
|
|
|
if (!ret)
|
|
|
|
req->flags |= REQ_F_NEED_CLEANUP;
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2019-12-20 01:24:38 +00:00
|
|
|
static int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
|
2019-12-03 01:50:25 +00:00
|
|
|
{
|
2019-12-20 15:58:21 +00:00
|
|
|
struct io_sr_msg *sr = &req->sr_msg;
|
2019-12-03 01:50:25 +00:00
|
|
|
|
2020-06-03 15:03:25 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return -EINVAL;
|
|
|
|
|
2020-07-12 17:41:04 +00:00
|
|
|
sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
|
2020-01-05 03:19:44 +00:00
|
|
|
sr->len = READ_ONCE(sqe->len);
|
2021-04-01 14:44:00 +00:00
|
|
|
sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
|
|
|
|
if (sr->msg_flags & MSG_DONTWAIT)
|
|
|
|
req->flags |= REQ_F_NOWAIT;
|
2019-12-20 01:24:38 +00:00
|
|
|
|
2020-02-27 21:17:49 +00:00
|
|
|
#ifdef CONFIG_COMPAT
|
|
|
|
if (req->ctx->compat)
|
|
|
|
sr->msg_flags |= MSG_CMSG_COMPAT;
|
|
|
|
#endif
|
2021-02-18 18:29:44 +00:00
|
|
|
return 0;
|
2019-12-03 01:50:25 +00:00
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:09 +00:00
|
|
|
static int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
|
2019-04-19 19:38:09 +00:00
|
|
|
{
|
2020-07-16 20:28:00 +00:00
|
|
|
struct io_async_msghdr iomsg, *kmsg;
|
2019-04-19 19:34:07 +00:00
|
|
|
struct socket *sock;
|
2020-07-16 20:27:59 +00:00
|
|
|
unsigned flags;
|
2021-03-20 19:33:36 +00:00
|
|
|
int min_ret = 0;
|
2019-04-19 19:34:07 +00:00
|
|
|
int ret;
|
|
|
|
|
2020-12-04 11:36:04 +00:00
|
|
|
sock = sock_from_file(req->file);
|
2020-07-16 20:27:59 +00:00
|
|
|
if (unlikely(!sock))
|
2020-12-04 11:36:04 +00:00
|
|
|
return -ENOTSOCK;
|
2019-12-20 01:24:38 +00:00
|
|
|
|
2021-02-05 00:58:00 +00:00
|
|
|
kmsg = req->async_data;
|
|
|
|
if (!kmsg) {
|
2020-07-16 20:27:59 +00:00
|
|
|
ret = io_sendmsg_copy_hdr(req, &iomsg);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
kmsg = &iomsg;
|
2019-04-19 19:34:07 +00:00
|
|
|
}
|
|
|
|
|
2021-04-01 14:44:00 +00:00
|
|
|
flags = req->sr_msg.msg_flags;
|
|
|
|
if (issue_flags & IO_URING_F_NONBLOCK)
|
2020-07-16 20:27:59 +00:00
|
|
|
flags |= MSG_DONTWAIT;
|
2021-03-20 19:33:36 +00:00
|
|
|
if (flags & MSG_WAITALL)
|
|
|
|
min_ret = iov_iter_count(&kmsg->msg.msg_iter);
|
|
|
|
|
2020-07-16 20:27:59 +00:00
|
|
|
ret = __sys_sendmsg_sock(sock, &kmsg->msg, flags);
|
2021-02-10 00:03:07 +00:00
|
|
|
if ((issue_flags & IO_URING_F_NONBLOCK) && ret == -EAGAIN)
|
2020-07-16 20:27:59 +00:00
|
|
|
return io_setup_async_msg(req, kmsg);
|
|
|
|
if (ret == -ERESTARTSYS)
|
|
|
|
ret = -EINTR;
|
2019-04-19 19:34:07 +00:00
|
|
|
|
2021-02-05 00:58:00 +00:00
|
|
|
/* fast path, check for non-NULL to avoid function call */
|
|
|
|
if (kmsg->free_iov)
|
|
|
|
kfree(kmsg->free_iov);
|
2020-02-07 19:04:45 +00:00
|
|
|
req->flags &= ~REQ_F_NEED_CLEANUP;
|
2021-03-20 19:33:36 +00:00
|
|
|
if (ret < min_ret)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-02-10 00:03:09 +00:00
|
|
|
__io_req_complete(req, issue_flags, ret, 0);
|
2019-04-09 20:56:44 +00:00
|
|
|
return 0;
|
2019-12-03 01:50:25 +00:00
|
|
|
}
|
2019-04-19 19:38:09 +00:00
|
|
|
|
2021-02-10 00:03:09 +00:00
|
|
|
static int io_send(struct io_kiocb *req, unsigned int issue_flags)
|
2020-01-05 03:19:44 +00:00
|
|
|
{
|
2020-07-16 20:27:59 +00:00
|
|
|
struct io_sr_msg *sr = &req->sr_msg;
|
|
|
|
struct msghdr msg;
|
|
|
|
struct iovec iov;
|
2020-01-05 03:19:44 +00:00
|
|
|
struct socket *sock;
|
2020-07-16 20:27:59 +00:00
|
|
|
unsigned flags;
|
2021-03-20 19:33:36 +00:00
|
|
|
int min_ret = 0;
|
2020-01-05 03:19:44 +00:00
|
|
|
int ret;
|
|
|
|
|
2020-12-04 11:36:04 +00:00
|
|
|
sock = sock_from_file(req->file);
|
2020-07-16 20:27:59 +00:00
|
|
|
if (unlikely(!sock))
|
2020-12-04 11:36:04 +00:00
|
|
|
return -ENOTSOCK;
|
2020-01-05 03:19:44 +00:00
|
|
|
|
2020-07-16 20:27:59 +00:00
|
|
|
ret = import_single_range(WRITE, sr->buf, sr->len, &iov, &msg.msg_iter);
|
|
|
|
if (unlikely(ret))
|
2020-09-09 12:12:37 +00:00
|
|
|
return ret;
|
2020-01-05 03:19:44 +00:00
|
|
|
|
2020-07-16 20:27:59 +00:00
|
|
|
msg.msg_name = NULL;
|
|
|
|
msg.msg_control = NULL;
|
|
|
|
msg.msg_controllen = 0;
|
|
|
|
msg.msg_namelen = 0;
|
2020-01-05 03:19:44 +00:00
|
|
|
|
2021-04-01 14:44:00 +00:00
|
|
|
flags = req->sr_msg.msg_flags;
|
|
|
|
if (issue_flags & IO_URING_F_NONBLOCK)
|
2020-07-16 20:27:59 +00:00
|
|
|
flags |= MSG_DONTWAIT;
|
2021-03-20 19:33:36 +00:00
|
|
|
if (flags & MSG_WAITALL)
|
|
|
|
min_ret = iov_iter_count(&msg.msg_iter);
|
|
|
|
|
2020-07-16 20:27:59 +00:00
|
|
|
msg.msg_flags = flags;
|
|
|
|
ret = sock_sendmsg(sock, &msg);
|
2021-02-10 00:03:07 +00:00
|
|
|
if ((issue_flags & IO_URING_F_NONBLOCK) && ret == -EAGAIN)
|
2020-07-16 20:27:59 +00:00
|
|
|
return -EAGAIN;
|
|
|
|
if (ret == -ERESTARTSYS)
|
|
|
|
ret = -EINTR;
|
2020-01-05 03:19:44 +00:00
|
|
|
|
2021-03-20 19:33:36 +00:00
|
|
|
if (ret < min_ret)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-02-10 00:03:09 +00:00
|
|
|
__io_req_complete(req, issue_flags, ret, 0);
|
2020-01-05 03:19:44 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2020-07-12 17:41:05 +00:00
|
|
|
static int __io_recvmsg_copy_hdr(struct io_kiocb *req,
|
|
|
|
struct io_async_msghdr *iomsg)
|
2020-02-27 17:15:42 +00:00
|
|
|
{
|
|
|
|
struct io_sr_msg *sr = &req->sr_msg;
|
|
|
|
struct iovec __user *uiov;
|
|
|
|
size_t iov_len;
|
|
|
|
int ret;
|
|
|
|
|
2020-07-12 17:41:05 +00:00
|
|
|
ret = __copy_msghdr_from_user(&iomsg->msg, sr->umsg,
|
|
|
|
&iomsg->uaddr, &uiov, &iov_len);
|
2020-02-27 17:15:42 +00:00
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
|
|
|
|
if (req->flags & REQ_F_BUFFER_SELECT) {
|
|
|
|
if (iov_len > 1)
|
|
|
|
return -EINVAL;
|
2021-02-05 00:57:59 +00:00
|
|
|
if (copy_from_user(iomsg->fast_iov, uiov, sizeof(*uiov)))
|
2020-02-27 17:15:42 +00:00
|
|
|
return -EFAULT;
|
2021-02-05 00:57:59 +00:00
|
|
|
sr->len = iomsg->fast_iov[0].iov_len;
|
2021-02-05 00:58:00 +00:00
|
|
|
iomsg->free_iov = NULL;
|
2020-02-27 17:15:42 +00:00
|
|
|
} else {
|
2021-02-05 00:58:00 +00:00
|
|
|
iomsg->free_iov = iomsg->fast_iov;
|
2020-09-25 04:51:41 +00:00
|
|
|
ret = __import_iovec(READ, uiov, iov_len, UIO_FASTIOV,
|
2021-02-05 00:58:00 +00:00
|
|
|
&iomsg->free_iov, &iomsg->msg.msg_iter,
|
2020-09-25 04:51:41 +00:00
|
|
|
false);
|
2020-02-27 17:15:42 +00:00
|
|
|
if (ret > 0)
|
|
|
|
ret = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
#ifdef CONFIG_COMPAT
|
|
|
|
static int __io_compat_recvmsg_copy_hdr(struct io_kiocb *req,
|
2020-07-12 17:41:05 +00:00
|
|
|
struct io_async_msghdr *iomsg)
|
2020-02-27 17:15:42 +00:00
|
|
|
{
|
|
|
|
struct io_sr_msg *sr = &req->sr_msg;
|
|
|
|
struct compat_iovec __user *uiov;
|
|
|
|
compat_uptr_t ptr;
|
|
|
|
compat_size_t len;
|
|
|
|
int ret;
|
|
|
|
|
2021-04-11 00:46:30 +00:00
|
|
|
ret = __get_compat_msghdr(&iomsg->msg, sr->umsg_compat, &iomsg->uaddr,
|
|
|
|
&ptr, &len);
|
2020-02-27 17:15:42 +00:00
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
|
|
|
|
uiov = compat_ptr(ptr);
|
|
|
|
if (req->flags & REQ_F_BUFFER_SELECT) {
|
|
|
|
compat_ssize_t clen;
|
|
|
|
|
|
|
|
if (len > 1)
|
|
|
|
return -EINVAL;
|
|
|
|
if (!access_ok(uiov, sizeof(*uiov)))
|
|
|
|
return -EFAULT;
|
|
|
|
if (__get_user(clen, &uiov->iov_len))
|
|
|
|
return -EFAULT;
|
|
|
|
if (clen < 0)
|
|
|
|
return -EINVAL;
|
2020-11-29 18:33:32 +00:00
|
|
|
sr->len = clen;
|
2021-02-05 00:58:00 +00:00
|
|
|
iomsg->free_iov = NULL;
|
2020-02-27 17:15:42 +00:00
|
|
|
} else {
|
2021-02-05 00:58:00 +00:00
|
|
|
iomsg->free_iov = iomsg->fast_iov;
|
2020-09-25 04:51:41 +00:00
|
|
|
ret = __import_iovec(READ, (struct iovec __user *)uiov, len,
|
2021-02-05 00:58:00 +00:00
|
|
|
UIO_FASTIOV, &iomsg->free_iov,
|
2020-09-25 04:51:41 +00:00
|
|
|
&iomsg->msg.msg_iter, true);
|
2020-02-27 17:15:42 +00:00
|
|
|
if (ret < 0)
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2020-07-12 17:41:05 +00:00
|
|
|
static int io_recvmsg_copy_hdr(struct io_kiocb *req,
|
|
|
|
struct io_async_msghdr *iomsg)
|
2020-02-27 17:15:42 +00:00
|
|
|
{
|
2020-07-12 17:41:05 +00:00
|
|
|
iomsg->msg.msg_name = &iomsg->addr;
|
2020-02-27 17:15:42 +00:00
|
|
|
|
|
|
|
#ifdef CONFIG_COMPAT
|
|
|
|
if (req->ctx->compat)
|
2020-07-12 17:41:05 +00:00
|
|
|
return __io_compat_recvmsg_copy_hdr(req, iomsg);
|
2020-01-05 03:19:44 +00:00
|
|
|
#endif
|
2020-02-27 17:15:42 +00:00
|
|
|
|
2020-07-12 17:41:05 +00:00
|
|
|
return __io_recvmsg_copy_hdr(req, iomsg);
|
2020-02-27 17:15:42 +00:00
|
|
|
}
|
|
|
|
|
io_uring: support buffer selection for OP_READ and OP_RECV
If a server process has tons of pending socket connections, generally
it uses epoll to wait for activity. When the socket is ready for reading
(or writing), the task can select a buffer and issue a recv/send on the
given fd.
Now that we have fast (non-async thread) support, a task can have tons
of reads or writes pending. But that means they need buffers to
back that data, and if the number of connections is high enough, having
them preallocated for all possible connections is infeasible.
With IORING_OP_PROVIDE_BUFFERS, an application can register buffers to
use for any request. The request then sets IOSQE_BUFFER_SELECT in the
sqe, and a given group ID in sqe->buf_group. When the fd becomes ready,
a free buffer from the specified group is selected. If none are
available, the request is terminated with -ENOBUFS. If successful, the
CQE on completion will contain the buffer ID chosen in the cqe->flags
member, encoded as:
(buffer_id << IORING_CQE_BUFFER_SHIFT) | IORING_CQE_F_BUFFER;
Once a buffer has been consumed by a request, it is no longer available
and must be registered again with IORING_OP_PROVIDE_BUFFERS.
Requests need to support this feature. For now, IORING_OP_READ and
IORING_OP_RECV support it. This is checked on SQE submission; a CQE with
res == -EOPNOTSUPP will be posted if it is attempted on unsupported requests.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-23 23:42:51 +00:00
|
|
|
static struct io_buffer *io_recv_buffer_select(struct io_kiocb *req,
|
2020-07-16 20:28:05 +00:00
|
|
|
bool needs_lock)
|
2020-02-23 23:42:51 +00:00
|
|
|
{
|
|
|
|
struct io_sr_msg *sr = &req->sr_msg;
|
|
|
|
struct io_buffer *kbuf;
|
|
|
|
|
|
|
|
kbuf = io_buffer_select(req, &sr->len, sr->bgid, sr->kbuf, needs_lock);
|
|
|
|
if (IS_ERR(kbuf))
|
|
|
|
return kbuf;
|
|
|
|
|
|
|
|
sr->kbuf = kbuf;
|
|
|
|
req->flags |= REQ_F_BUFFER_SELECTED;
|
|
|
|
return kbuf;
|
2020-01-05 03:19:44 +00:00
|
|
|
}
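The commit message above spells out how the selected buffer is reported back:
the CQE carries (buffer_id << IORING_CQE_BUFFER_SHIFT) | IORING_CQE_F_BUFFER
in cqe->flags. A small consumer-side sketch of decoding that encoding;
cqe_buffer_id() is a hypothetical helper, the constants are the uapi ones.

#include <linux/io_uring.h>

/* Returns the provided-buffer ID consumed by this completion, or -1 if
 * no provided buffer was used. */
static inline int cqe_buffer_id(const struct io_uring_cqe *cqe)
{
        if (!(cqe->flags & IORING_CQE_F_BUFFER))
                return -1;
        return cqe->flags >> IORING_CQE_BUFFER_SHIFT;
}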
|
|
|
|
|
2020-07-16 20:28:05 +00:00
|
|
|
static inline unsigned int io_put_recv_kbuf(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
return io_put_kbuf(req, req->sr_msg.kbuf);
|
|
|
|
}
|
|
|
|
|
2021-02-18 18:29:44 +00:00
|
|
|
static int io_recvmsg_prep_async(struct io_kiocb *req)
|
2019-04-19 19:38:09 +00:00
|
|
|
{
|
2020-02-07 19:04:45 +00:00
|
|
|
int ret;
|
2019-12-20 01:24:38 +00:00
|
|
|
|
2021-02-18 18:29:44 +00:00
|
|
|
ret = io_recvmsg_copy_hdr(req, req->async_data);
|
|
|
|
if (!ret)
|
|
|
|
req->flags |= REQ_F_NEED_CLEANUP;
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
|
|
|
|
{
|
|
|
|
struct io_sr_msg *sr = &req->sr_msg;
|
|
|
|
|
2020-06-03 15:03:25 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return -EINVAL;
|
|
|
|
|
2020-07-12 17:41:04 +00:00
|
|
|
sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
|
2020-01-31 15:34:59 +00:00
|
|
|
sr->len = READ_ONCE(sqe->len);
|
2020-02-23 23:42:51 +00:00
|
|
|
sr->bgid = READ_ONCE(sqe->buf_group);
|
2021-04-01 14:44:00 +00:00
|
|
|
sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
|
|
|
|
if (sr->msg_flags & MSG_DONTWAIT)
|
|
|
|
req->flags |= REQ_F_NOWAIT;
|
2019-12-19 21:44:26 +00:00
|
|
|
|
2020-02-27 21:17:49 +00:00
|
|
|
#ifdef CONFIG_COMPAT
|
|
|
|
if (req->ctx->compat)
|
|
|
|
sr->msg_flags |= MSG_CMSG_COMPAT;
|
|
|
|
#endif
|
2021-02-18 18:29:44 +00:00
|
|
|
return 0;
|
2019-04-19 19:38:09 +00:00
|
|
|
}
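On the submission side, the prep handler above reads sqe->buf_group into
sr->bgid when buffer selection is requested. Below is a sketch of a recv that
asks the kernel to pick a buffer from a previously provided group;
prep_recv_select_sqe() is a hypothetical helper, and the buffers are assumed
to have been registered earlier with IORING_OP_PROVIDE_BUFFERS.

#include <string.h>
#include <linux/io_uring.h>

/* Illustrative only: buffer-selected recv on a socket. */
static void prep_recv_select_sqe(struct io_uring_sqe *sqe, int sockfd,
                                 unsigned int max_len, __u16 gid)
{
        memset(sqe, 0, sizeof(*sqe));
        sqe->opcode = IORING_OP_RECV;
        sqe->fd = sockfd;
        sqe->addr = 0;                  /* no user buffer; the kernel selects one */
        sqe->len = max_len;
        sqe->flags = IOSQE_BUFFER_SELECT;
        sqe->buf_group = gid;
}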
|
|
|
|
|
2021-02-10 00:03:09 +00:00
|
|
|
static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
|
2019-04-19 19:38:09 +00:00
|
|
|
{
|
2020-07-16 20:28:00 +00:00
|
|
|
struct io_async_msghdr iomsg, *kmsg;
|
2019-12-03 01:50:25 +00:00
|
|
|
struct socket *sock;
|
2020-07-16 20:28:05 +00:00
|
|
|
struct io_buffer *kbuf;
|
2020-07-16 20:27:59 +00:00
|
|
|
unsigned flags;
|
2021-03-20 19:33:36 +00:00
|
|
|
int min_ret = 0;
|
2020-02-27 17:15:42 +00:00
|
|
|
int ret, cflags = 0;
|
2021-02-10 00:03:07 +00:00
|
|
|
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
|
2019-12-03 01:50:25 +00:00
|
|
|
|
2020-12-04 11:36:04 +00:00
|
|
|
sock = sock_from_file(req->file);
|
2020-07-16 20:27:59 +00:00
|
|
|
if (unlikely(!sock))
|
2020-12-04 11:36:04 +00:00
|
|
|
return -ENOTSOCK;
|
2019-12-20 01:24:38 +00:00
|
|
|
|
2021-02-05 00:58:00 +00:00
|
|
|
kmsg = req->async_data;
|
|
|
|
if (!kmsg) {
|
2020-07-16 20:27:59 +00:00
|
|
|
ret = io_recvmsg_copy_hdr(req, &iomsg);
|
|
|
|
if (ret)
|
2020-07-15 19:20:45 +00:00
|
|
|
return ret;
|
2020-07-16 20:27:59 +00:00
|
|
|
kmsg = &iomsg;
|
|
|
|
}
|
2019-12-03 01:50:25 +00:00
|
|
|
|
2020-07-16 20:28:03 +00:00
|
|
|
if (req->flags & REQ_F_BUFFER_SELECT) {
|
2020-07-16 20:28:05 +00:00
|
|
|
kbuf = io_recv_buffer_select(req, !force_nonblock);
|
2020-07-16 20:28:03 +00:00
|
|
|
if (IS_ERR(kbuf))
|
2020-02-27 17:15:42 +00:00
|
|
|
return PTR_ERR(kbuf);
|
2020-07-16 20:27:59 +00:00
|
|
|
kmsg->fast_iov[0].iov_base = u64_to_user_ptr(kbuf->addr);
|
2021-02-05 00:57:59 +00:00
|
|
|
kmsg->fast_iov[0].iov_len = req->sr_msg.len;
|
|
|
|
iov_iter_init(&kmsg->msg.msg_iter, READ, kmsg->fast_iov,
|
2020-07-16 20:27:59 +00:00
|
|
|
1, req->sr_msg.len);
|
|
|
|
}
|
2020-02-27 17:15:42 +00:00
|
|
|
|
2021-04-01 14:44:00 +00:00
|
|
|
flags = req->sr_msg.msg_flags;
|
|
|
|
if (force_nonblock)
|
2020-07-16 20:27:59 +00:00
|
|
|
flags |= MSG_DONTWAIT;
|
2021-03-20 19:33:36 +00:00
|
|
|
if (flags & MSG_WAITALL)
|
|
|
|
min_ret = iov_iter_count(&kmsg->msg.msg_iter);
|
|
|
|
|
2020-07-16 20:27:59 +00:00
|
|
|
ret = __sys_recvmsg_sock(sock, &kmsg->msg, req->sr_msg.umsg,
|
|
|
|
kmsg->uaddr, flags);
|
2020-07-16 20:28:02 +00:00
|
|
|
if (force_nonblock && ret == -EAGAIN)
|
|
|
|
return io_setup_async_msg(req, kmsg);
|
2020-07-16 20:27:59 +00:00
|
|
|
if (ret == -ERESTARTSYS)
|
|
|
|
ret = -EINTR;
|
2019-12-03 01:50:25 +00:00
|
|
|
|
2020-07-16 20:28:05 +00:00
|
|
|
if (req->flags & REQ_F_BUFFER_SELECTED)
|
|
|
|
cflags = io_put_recv_kbuf(req);
|
2021-02-05 00:58:00 +00:00
|
|
|
/* fast path, check for non-NULL to avoid function call */
|
|
|
|
if (kmsg->free_iov)
|
|
|
|
kfree(kmsg->free_iov);
|
2020-02-07 19:04:45 +00:00
|
|
|
req->flags &= ~REQ_F_NEED_CLEANUP;
|
2021-03-20 19:33:36 +00:00
|
|
|
if (ret < min_ret || ((flags & MSG_WAITALL) && (kmsg->msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))))
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-02-10 00:03:09 +00:00
|
|
|
__io_req_complete(req, issue_flags, ret, cflags);
|
2019-12-03 01:50:25 +00:00
|
|
|
return 0;
|
2019-04-19 19:34:07 +00:00
|
|
|
}
|
2019-04-09 20:56:44 +00:00
|
|
|
|
2021-02-10 00:03:09 +00:00
|
|
|
static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
|
2020-01-05 03:19:44 +00:00
|
|
|
{
|
2020-07-16 20:28:00 +00:00
|
|
|
struct io_buffer *kbuf;
|
2020-07-16 20:27:59 +00:00
|
|
|
struct io_sr_msg *sr = &req->sr_msg;
|
|
|
|
struct msghdr msg;
|
|
|
|
void __user *buf = sr->buf;
|
2020-01-05 03:19:44 +00:00
|
|
|
struct socket *sock;
|
2020-07-16 20:27:59 +00:00
|
|
|
struct iovec iov;
|
|
|
|
unsigned flags;
|
2021-03-20 19:33:36 +00:00
|
|
|
int min_ret = 0;
|
2020-02-23 23:42:51 +00:00
|
|
|
int ret, cflags = 0;
|
2021-02-10 00:03:07 +00:00
|
|
|
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
|
2020-01-05 03:19:44 +00:00
|
|
|
|
2020-12-04 11:36:04 +00:00
|
|
|
sock = sock_from_file(req->file);
|
2020-07-16 20:27:59 +00:00
|
|
|
if (unlikely(!sock))
|
2020-12-04 11:36:04 +00:00
|
|
|
return -ENOTSOCK;
|
2020-01-05 03:19:44 +00:00
|
|
|
|
2020-07-16 20:28:03 +00:00
|
|
|
if (req->flags & REQ_F_BUFFER_SELECT) {
|
2020-07-16 20:28:05 +00:00
|
|
|
kbuf = io_recv_buffer_select(req, !force_nonblock);
|
2020-02-23 23:42:51 +00:00
|
|
|
if (IS_ERR(kbuf))
|
|
|
|
return PTR_ERR(kbuf);
|
2020-07-16 20:27:59 +00:00
|
|
|
buf = u64_to_user_ptr(kbuf->addr);
|
2020-07-16 20:28:03 +00:00
|
|
|
}
|
2020-02-23 23:42:51 +00:00
|
|
|
|
2020-07-16 20:27:59 +00:00
|
|
|
ret = import_single_range(READ, buf, sr->len, &iov, &msg.msg_iter);
|
2020-07-16 20:28:01 +00:00
|
|
|
if (unlikely(ret))
|
|
|
|
goto out_free;
|
2020-01-05 03:19:44 +00:00
|
|
|
|
2020-07-16 20:27:59 +00:00
|
|
|
msg.msg_name = NULL;
|
|
|
|
msg.msg_control = NULL;
|
|
|
|
msg.msg_controllen = 0;
|
|
|
|
msg.msg_namelen = 0;
|
|
|
|
msg.msg_iocb = NULL;
|
|
|
|
msg.msg_flags = 0;
|
2020-01-05 03:19:44 +00:00
|
|
|
|
2021-04-01 14:44:00 +00:00
|
|
|
flags = req->sr_msg.msg_flags;
|
|
|
|
if (force_nonblock)
|
2020-07-16 20:27:59 +00:00
|
|
|
flags |= MSG_DONTWAIT;
|
2021-03-20 19:33:36 +00:00
|
|
|
if (flags & MSG_WAITALL)
|
|
|
|
min_ret = iov_iter_count(&msg.msg_iter);
|
|
|
|
|
2020-07-16 20:27:59 +00:00
|
|
|
ret = sock_recvmsg(sock, &msg, flags);
|
|
|
|
if (force_nonblock && ret == -EAGAIN)
|
|
|
|
return -EAGAIN;
|
|
|
|
if (ret == -ERESTARTSYS)
|
|
|
|
ret = -EINTR;
|
2020-07-16 20:28:01 +00:00
|
|
|
out_free:
|
2020-07-16 20:28:05 +00:00
|
|
|
if (req->flags & REQ_F_BUFFER_SELECTED)
|
|
|
|
cflags = io_put_recv_kbuf(req);
|
2021-03-20 19:33:36 +00:00
|
|
|
if (ret < min_ret || ((flags & MSG_WAITALL) && (msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))))
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-02-10 00:03:09 +00:00
|
|
|
__io_req_complete(req, issue_flags, ret, cflags);
|
2020-01-05 03:19:44 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2019-12-20 01:24:38 +00:00
|
|
|
static int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
|
2019-10-17 20:42:58 +00:00
|
|
|
{
|
2019-12-16 18:55:28 +00:00
|
|
|
struct io_accept *accept = &req->accept;
|
|
|
|
|
2020-09-05 17:36:08 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
2019-10-17 20:42:58 +00:00
|
|
|
return -EINVAL;
|
2019-11-25 19:40:22 +00:00
|
|
|
if (sqe->ioprio || sqe->len || sqe->buf_index)
|
2019-10-17 20:42:58 +00:00
|
|
|
return -EINVAL;
|
|
|
|
|
2019-12-11 23:12:15 +00:00
|
|
|
accept->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
|
|
|
|
accept->addr_len = u64_to_user_ptr(READ_ONCE(sqe->addr2));
|
2019-12-16 18:55:28 +00:00
|
|
|
accept->flags = READ_ONCE(sqe->accept_flags);
|
2020-03-20 02:16:56 +00:00
|
|
|
accept->nofile = rlimit(RLIMIT_NOFILE);
|
2019-12-16 18:55:28 +00:00
|
|
|
return 0;
|
|
|
|
}
|
2019-10-17 20:42:58 +00:00
|
|
|
|
2021-02-10 00:03:09 +00:00
|
|
|
static int io_accept(struct io_kiocb *req, unsigned int issue_flags)
|
2019-12-16 18:55:28 +00:00
|
|
|
{
|
|
|
|
struct io_accept *accept = &req->accept;
|
2021-02-10 00:03:07 +00:00
|
|
|
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
|
2020-06-08 18:08:18 +00:00
|
|
|
unsigned int file_flags = force_nonblock ? O_NONBLOCK : 0;
|
2019-12-16 18:55:28 +00:00
|
|
|
int ret;
|
|
|
|
|
2020-06-10 05:41:59 +00:00
|
|
|
if (req->file->f_flags & O_NONBLOCK)
|
|
|
|
req->flags |= REQ_F_NOWAIT;
|
|
|
|
|
2019-12-16 18:55:28 +00:00
|
|
|
ret = __sys_accept4_file(req->file, file_flags, accept->addr,
|
2020-03-20 02:16:56 +00:00
|
|
|
accept->addr_len, accept->flags,
|
|
|
|
accept->nofile);
|
2019-12-16 18:55:28 +00:00
|
|
|
if (ret == -EAGAIN && force_nonblock)
|
2019-10-17 20:42:58 +00:00
|
|
|
return -EAGAIN;
|
2020-06-08 18:08:18 +00:00
|
|
|
if (ret < 0) {
|
|
|
|
if (ret == -ERESTARTSYS)
|
|
|
|
ret = -EINTR;
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2020-06-08 18:08:18 +00:00
|
|
|
}
|
2021-02-10 00:03:09 +00:00
|
|
|
__io_req_complete(req, issue_flags, ret, 0);
|
2019-10-17 20:42:58 +00:00
|
|
|
return 0;
|
2019-12-16 18:55:28 +00:00
|
|
|
}
|
|
|
|
|
2021-02-18 18:29:44 +00:00
|
|
|
static int io_connect_prep_async(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
struct io_async_connect *io = req->async_data;
|
|
|
|
struct io_connect *conn = &req->connect;
|
|
|
|
|
|
|
|
return move_addr_to_kernel(conn->addr, conn->addr_len, &io->address);
|
|
|
|
}
|
|
|
|
|
2019-12-20 01:24:38 +00:00
|
|
|
static int io_connect_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
|
2019-12-02 23:28:46 +00:00
|
|
|
{
|
2019-12-20 01:24:38 +00:00
|
|
|
struct io_connect *conn = &req->connect;
|
2019-12-02 23:28:46 +00:00
|
|
|
|
2020-09-05 17:36:08 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
2019-12-20 15:51:52 +00:00
|
|
|
return -EINVAL;
|
|
|
|
if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->rw_flags)
|
|
|
|
return -EINVAL;
|
|
|
|
|
2019-12-20 01:24:38 +00:00
|
|
|
conn->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
|
|
|
|
conn->addr_len = READ_ONCE(sqe->addr2);
|
2021-02-18 18:29:44 +00:00
|
|
|
return 0;
|
2019-12-02 23:28:46 +00:00
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:09 +00:00
|
|
|
static int io_connect(struct io_kiocb *req, unsigned int issue_flags)
|
2019-11-23 21:24:24 +00:00
|
|
|
{
|
2020-08-16 01:44:09 +00:00
|
|
|
struct io_async_connect __io, *io;
|
2019-11-23 21:24:24 +00:00
|
|
|
unsigned file_flags;
|
2019-12-20 15:51:52 +00:00
|
|
|
int ret;
|
2021-02-10 00:03:07 +00:00
|
|
|
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
|
2019-11-23 21:24:24 +00:00
|
|
|
|
2020-08-16 01:44:09 +00:00
|
|
|
if (req->async_data) {
|
|
|
|
io = req->async_data;
|
2019-12-02 23:28:46 +00:00
|
|
|
} else {
|
2019-12-20 01:24:38 +00:00
|
|
|
ret = move_addr_to_kernel(req->connect.addr,
|
|
|
|
req->connect.addr_len,
|
2020-08-16 01:44:09 +00:00
|
|
|
&__io.address);
|
2019-12-02 23:28:46 +00:00
|
|
|
if (ret)
|
|
|
|
goto out;
|
|
|
|
io = &__io;
|
|
|
|
}
|
|
|
|
|
2019-12-20 15:51:52 +00:00
|
|
|
file_flags = force_nonblock ? O_NONBLOCK : 0;
|
|
|
|
|
2020-08-16 01:44:09 +00:00
|
|
|
ret = __sys_connect_file(req->file, &io->address,
|
2019-12-20 15:51:52 +00:00
|
|
|
req->connect.addr_len, file_flags);
|
2019-12-03 18:23:54 +00:00
|
|
|
if ((ret == -EAGAIN || ret == -EINPROGRESS) && force_nonblock) {
|
2020-08-16 01:44:09 +00:00
|
|
|
if (req->async_data)
|
2019-12-16 05:13:43 +00:00
|
|
|
return -EAGAIN;
|
2020-08-16 01:44:09 +00:00
|
|
|
if (io_alloc_async_data(req)) {
|
2019-12-02 23:28:46 +00:00
|
|
|
ret = -ENOMEM;
|
|
|
|
goto out;
|
|
|
|
}
|
2020-08-16 01:44:09 +00:00
|
|
|
memcpy(req->async_data, &__io, sizeof(__io));
|
2019-11-23 21:24:24 +00:00
|
|
|
return -EAGAIN;
|
2019-12-02 23:28:46 +00:00
|
|
|
}
|
2019-11-23 21:24:24 +00:00
|
|
|
if (ret == -ERESTARTSYS)
|
|
|
|
ret = -EINTR;
|
2019-12-02 23:28:46 +00:00
|
|
|
out:
|
2019-12-08 03:59:47 +00:00
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-02-10 00:03:09 +00:00
|
|
|
__io_req_complete(req, issue_flags, ret, 0);
|
2019-11-23 21:24:24 +00:00
|
|
|
return 0;
|
2020-03-04 07:53:52 +00:00
|
|
|
}
|
|
|
|
#else /* !CONFIG_NET */
|
2021-02-19 16:35:19 +00:00
|
|
|
#define IO_NETOP_FN(op) \
|
|
|
|
static int io_##op(struct io_kiocb *req, unsigned int issue_flags) \
|
|
|
|
{ \
|
|
|
|
return -EOPNOTSUPP; \
|
|
|
|
}
|
|
|
|
|
|
|
|
#define IO_NETOP_PREP(op) \
|
|
|
|
IO_NETOP_FN(op) \
|
|
|
|
static int io_##op##_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) \
|
|
|
|
{ \
|
|
|
|
return -EOPNOTSUPP; \
|
|
|
|
} \
|
|
|
|
|
|
|
|
#define IO_NETOP_PREP_ASYNC(op) \
|
|
|
|
IO_NETOP_PREP(op) \
|
|
|
|
static int io_##op##_prep_async(struct io_kiocb *req) \
|
|
|
|
{ \
|
|
|
|
return -EOPNOTSUPP; \
|
|
|
|
}
|
|
|
|
|
|
|
|
IO_NETOP_PREP_ASYNC(sendmsg);
|
|
|
|
IO_NETOP_PREP_ASYNC(recvmsg);
|
|
|
|
IO_NETOP_PREP_ASYNC(connect);
|
|
|
|
IO_NETOP_PREP(accept);
|
|
|
|
IO_NETOP_FN(send);
|
|
|
|
IO_NETOP_FN(recv);
|
2020-03-04 07:53:52 +00:00
|
|
|
#endif /* CONFIG_NET */
|
2019-11-23 21:24:24 +00:00
|
|
|
|
io_uring: use poll driven retry for files that support it
Currently io_uring tries any request in a non-blocking manner, if it can,
and then retries from a worker thread if we get -EAGAIN. Now that we have
a new and fancy poll based retry backend, use that to retry requests if
the file supports it.
This means that, for example, an IORING_OP_RECVMSG on a socket no longer
requires an async thread to complete the IO. If we get -EAGAIN reading
from the socket in a non-blocking manner, we arm a poll handler for
notification on when the socket becomes readable. When it does, the
pending read is executed directly by the task again, through the io_uring
task work handlers. Not only is this faster and more efficient, it also
means we're not generating potentially tons of async threads that just
sit and block, waiting for the IO to complete.
The feature is marked with IORING_FEAT_FAST_POLL, meaning that async
pollable IO is fast, and that poll<link>other_op is fast as well.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-15 05:23:12 +00:00
|
|
|
struct io_poll_table {
|
|
|
|
struct poll_table_struct pt;
|
|
|
|
struct io_kiocb *req;
|
2021-07-20 09:50:43 +00:00
|
|
|
int nr_entries;
|
2020-02-15 05:23:12 +00:00
|
|
|
int error;
|
|
|
|
};
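The IORING_FEAT_FAST_POLL bit mentioned in the poll-driven-retry commit
message above is advertised through io_uring_setup(2). A sketch of probing for
it with the raw syscall, assuming the headers define __NR_io_uring_setup;
ring_has_fast_poll() is a made-up helper, and liburing exposes the same bit
via io_uring_params.features.

#include <string.h>
#include <unistd.h>
#include <stdbool.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

/* Set up a throwaway ring just to read back the advertised feature bits. */
static bool ring_has_fast_poll(void)
{
        struct io_uring_params p;
        int fd;

        memset(&p, 0, sizeof(p));
        fd = syscall(__NR_io_uring_setup, 8, &p);
        if (fd < 0)
                return false;
        close(fd);
        return p.features & IORING_FEAT_FAST_POLL;
}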
|
2020-06-30 18:39:05 +00:00
|
|
|
|
2020-02-15 05:23:12 +00:00
|
|
|
static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
|
2021-06-30 20:54:04 +00:00
|
|
|
__poll_t mask, io_req_tw_func_t func)
|
2020-02-15 05:23:12 +00:00
|
|
|
{
|
|
|
|
/* for instances that support it check for an event match first: */
|
|
|
|
if (mask && !(mask & poll->events))
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
trace_io_uring_task_add(req->ctx, req->opcode, req->user_data, mask);
|
|
|
|
|
|
|
|
list_del_init(&poll->wait.entry);
|
|
|
|
|
|
|
|
req->result = mask;
|
2021-06-30 20:54:04 +00:00
|
|
|
req->io_task_work.func = func;
|
2020-08-11 14:04:14 +00:00
|
|
|
|
2020-02-15 05:23:12 +00:00
|
|
|
/*
|
2020-05-18 17:04:17 +00:00
|
|
|
* If this fails, then the task is exiting. When a task exits, the
|
|
|
|
* work gets canceled, so just cancel this request as well instead
|
|
|
|
* of executing it. We can't safely execute it anyway, as we may not
|
|
|
|
* have the state needed for it.
|
2020-02-15 05:23:12 +00:00
|
|
|
*/
|
2021-07-01 12:26:05 +00:00
|
|
|
io_req_task_work_add(req);
|
2020-02-15 05:23:12 +00:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2020-04-13 17:09:12 +00:00
|
|
|
static bool io_poll_rewait(struct io_kiocb *req, struct io_poll_iocb *poll)
|
|
|
|
__acquires(&req->ctx->completion_lock)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
2021-07-01 12:26:05 +00:00
|
|
|
if (unlikely(req->task->flags & PF_EXITING))
|
|
|
|
WRITE_ONCE(poll->canceled, true);
|
|
|
|
|
2020-04-13 17:09:12 +00:00
|
|
|
if (!req->result && !READ_ONCE(poll->canceled)) {
|
|
|
|
struct poll_table_struct pt = { ._key = poll->events };
|
|
|
|
|
|
|
|
req->result = vfs_poll(req->file, &pt) & poll->events;
|
|
|
|
}
|
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2020-04-13 17:09:12 +00:00
|
|
|
if (!req->result && !READ_ONCE(poll->canceled)) {
|
|
|
|
add_wait_queue(poll->head, &poll->wait);
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
2020-08-15 18:44:50 +00:00
|
|
|
static struct io_poll_iocb *io_poll_get_double(struct io_kiocb *req)
|
2020-05-15 17:56:54 +00:00
|
|
|
{
|
2020-08-16 01:44:09 +00:00
|
|
|
/* pure poll stashes this in ->async_data, poll driven retry elsewhere */
|
2020-08-15 18:44:50 +00:00
|
|
|
if (req->opcode == IORING_OP_POLL_ADD)
|
2020-08-16 01:44:09 +00:00
|
|
|
return req->async_data;
|
2020-08-15 18:44:50 +00:00
|
|
|
return req->apoll->double_poll;
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct io_poll_iocb *io_poll_get_single(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
if (req->opcode == IORING_OP_POLL_ADD)
|
|
|
|
return &req->poll;
|
|
|
|
return &req->apoll->poll;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void io_poll_remove_double(struct io_kiocb *req)
|
2021-04-01 14:43:57 +00:00
|
|
|
__must_hold(&req->ctx->completion_lock)
|
2020-08-15 18:44:50 +00:00
|
|
|
{
|
|
|
|
struct io_poll_iocb *poll = io_poll_get_double(req);
|
2020-05-15 17:56:54 +00:00
|
|
|
|
|
|
|
lockdep_assert_held(&req->ctx->completion_lock);
|
|
|
|
|
|
|
|
if (poll && poll->head) {
|
|
|
|
struct wait_queue_head *head = poll->head;
|
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock_irq(&head->lock);
|
2020-05-15 17:56:54 +00:00
|
|
|
list_del_init(&poll->wait.entry);
|
|
|
|
if (poll->wait.private)
|
2021-02-24 20:28:27 +00:00
|
|
|
req_ref_put(req);
|
2020-05-15 17:56:54 +00:00
|
|
|
poll->head = NULL;
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock_irq(&head->lock);
|
2020-05-15 17:56:54 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-04-09 08:13:20 +00:00
|
|
|
static bool io_poll_complete(struct io_kiocb *req, __poll_t mask)
|
2021-04-01 14:43:57 +00:00
|
|
|
__must_hold(&req->ctx->completion_lock)
|
2020-05-15 17:56:54 +00:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2021-02-23 05:08:01 +00:00
|
|
|
unsigned flags = IORING_CQE_F_MORE;
|
2021-04-09 08:13:20 +00:00
|
|
|
int error;
|
2020-05-15 17:56:54 +00:00
|
|
|
|
2021-04-09 08:13:20 +00:00
|
|
|
if (READ_ONCE(req->poll.canceled)) {
|
2021-02-23 15:19:33 +00:00
|
|
|
error = -ECANCELED;
|
2021-02-23 05:08:01 +00:00
|
|
|
req->poll.events |= EPOLLONESHOT;
|
2021-04-09 08:13:20 +00:00
|
|
|
} else {
|
2021-02-23 16:02:26 +00:00
|
|
|
error = mangle_poll(mask);
|
2021-04-09 08:13:20 +00:00
|
|
|
}
|
io_uring: allow events and user_data update of running poll requests
This adds two new POLL_ADD flags, IORING_POLL_UPDATE_EVENTS and
IORING_POLL_UPDATE_USER_DATA. As with the other POLL_ADD flag, these are
masked into sqe->len. If set, the POLL_ADD will have the following
behavior:
- sqe->addr must contain the user_data of the poll request that
needs to be modified. This field is otherwise invalid for a POLL_ADD
command.
- If IORING_POLL_UPDATE_EVENTS is set, sqe->poll_events must contain the
new mask for the existing poll request. There are no checks for whether
these are identical or not, if a matching poll request is found, then it
is re-armed with the new mask.
- If IORING_POLL_UPDATE_USER_DATA is set, sqe->off must contain the new
user_data for the existing poll request.
A POLL_ADD with any of these flags set may complete with any of the
following results:
1) 0, which means that we successfully found the existing poll request
specified, and performed the re-arm procedure. Any error from that
re-arm will be exposed as a completion event for that original poll
request, not for the update request.
2) -ENOENT, if no existing poll request was found with the given
user_data.
3) -EALREADY, if the existing poll request was already in the process of
being removed/canceled/completing.
4) -EACCES, if an attempt was made to modify an internal poll request
(e.g. not one originally issued as IORING_OP_POLL_ADD).
The usual -EINVAL cases apply as well, if any invalid fields are set
in the sqe for this command type.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-03-17 14:37:41 +00:00
|
|
|
if (req->poll.events & EPOLLONESHOT)
|
|
|
|
flags = 0;
|
2021-04-25 13:32:17 +00:00
|
|
|
if (!io_cqring_fill_event(ctx, req->user_data, error, flags)) {
|
2021-02-23 05:08:01 +00:00
|
|
|
req->poll.done = true;
|
|
|
|
flags = 0;
|
|
|
|
}
|
2021-04-13 07:20:39 +00:00
|
|
|
if (flags & IORING_CQE_F_MORE)
|
|
|
|
ctx->cq_extra++;
|
2020-05-15 17:56:54 +00:00
|
|
|
|
|
|
|
io_commit_cqring(ctx);
|
2021-02-23 05:08:01 +00:00
|
|
|
return !(flags & IORING_CQE_F_MORE);
|
2020-05-15 17:56:54 +00:00
|
|
|
}
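To make the POLL_ADD update flags described in the embedded commit message
above concrete, here is a sketch of the SQE an application would build, using
only the fields that message names. prep_poll_update_sqe() is a hypothetical
helper, and the update interface was reworked in later kernels, so treat this
as an illustration of the description above rather than the current ABI.

#include <string.h>
#include <linux/io_uring.h>

/* Illustrative only: re-arm the poll request originally submitted with
 * user_data == old_ud, giving it a new event mask and new user_data. */
static void prep_poll_update_sqe(struct io_uring_sqe *sqe, __u64 old_ud,
                                 __u64 new_ud, __u32 new_events)
{
        memset(sqe, 0, sizeof(*sqe));
        sqe->opcode = IORING_OP_POLL_ADD;
        sqe->addr = old_ud;     /* user_data of the request to update */
        sqe->len = IORING_POLL_UPDATE_EVENTS | IORING_POLL_UPDATE_USER_DATA;
        sqe->poll32_events = new_events;        /* the new mask (32-bit poll_events) */
        sqe->off = new_ud;      /* new user_data for the existing request */
}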
|
|
|
|
|
2021-06-30 20:54:04 +00:00
|
|
|
static void io_poll_task_func(struct io_kiocb *req)
|
2020-05-15 17:56:54 +00:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2020-10-18 09:17:42 +00:00
|
|
|
struct io_kiocb *nxt;
|
2020-05-15 17:56:54 +00:00
|
|
|
|
|
|
|
if (io_poll_rewait(req, &req->poll)) {
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2020-10-18 09:17:42 +00:00
|
|
|
} else {
|
2021-04-09 08:13:19 +00:00
|
|
|
bool done;
|
2020-05-15 17:56:54 +00:00
|
|
|
|
2021-04-09 08:13:20 +00:00
|
|
|
done = io_poll_complete(req, req->result);
|
2021-02-23 05:08:01 +00:00
|
|
|
if (done) {
|
2021-07-28 03:03:22 +00:00
|
|
|
io_poll_remove_double(req);
|
2021-02-23 05:08:01 +00:00
|
|
|
hash_del(&req->hash_node);
|
2021-04-09 08:13:19 +00:00
|
|
|
} else {
|
2021-02-23 05:08:01 +00:00
|
|
|
req->result = 0;
|
|
|
|
add_wait_queue(req->poll.head, &req->poll.wait);
|
|
|
|
}
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2020-10-18 09:17:42 +00:00
|
|
|
io_cqring_ev_posted(ctx);
|
2020-05-15 17:56:54 +00:00
|
|
|
|
2021-02-23 05:08:01 +00:00
|
|
|
if (done) {
|
|
|
|
nxt = io_put_req_find_next(req);
|
|
|
|
if (nxt)
|
2021-06-30 20:54:04 +00:00
|
|
|
io_req_task_submit(nxt);
|
2021-02-23 05:08:01 +00:00
|
|
|
}
|
2020-10-18 09:17:42 +00:00
|
|
|
}
|
2020-05-15 17:56:54 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static int io_poll_double_wake(struct wait_queue_entry *wait, unsigned mode,
|
|
|
|
int sync, void *key)
|
|
|
|
{
|
|
|
|
struct io_kiocb *req = wait->private;
|
2020-08-15 18:44:50 +00:00
|
|
|
struct io_poll_iocb *poll = io_poll_get_single(req);
|
2020-05-15 17:56:54 +00:00
|
|
|
__poll_t mask = key_to_poll(key);
|
2021-08-10 21:18:27 +00:00
|
|
|
unsigned long flags;
|
2020-05-15 17:56:54 +00:00
|
|
|
|
|
|
|
/* for instances that support it check for an event match first: */
|
|
|
|
if (mask && !(mask & poll->events))
|
|
|
|
return 0;
|
2021-02-23 05:08:01 +00:00
|
|
|
if (!(poll->events & EPOLLONESHOT))
|
|
|
|
return poll->wait.func(&poll->wait, mode, sync, key);
|
2020-05-15 17:56:54 +00:00
|
|
|
|
2020-09-28 14:38:54 +00:00
|
|
|
list_del_init(&wait->entry);
|
|
|
|
|
2021-07-09 14:20:28 +00:00
|
|
|
if (poll->head) {
|
2020-05-15 17:56:54 +00:00
|
|
|
bool done;
|
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock_irqsave(&poll->head->lock, flags);
|
2020-07-17 23:09:27 +00:00
|
|
|
done = list_empty(&poll->wait.entry);
|
2020-05-15 17:56:54 +00:00
|
|
|
if (!done)
|
2020-07-17 23:09:27 +00:00
|
|
|
list_del_init(&poll->wait.entry);
|
2020-08-15 18:44:50 +00:00
|
|
|
/* make sure double remove sees this as being gone */
|
|
|
|
wait->private = NULL;
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock_irqrestore(&poll->head->lock, flags);
|
2020-10-25 19:53:26 +00:00
|
|
|
if (!done) {
|
|
|
|
/* use wait func handler, so it matches the rq type */
|
|
|
|
poll->wait.func(&poll->wait, mode, sync, key);
|
|
|
|
}
|
2020-05-15 17:56:54 +00:00
|
|
|
}
|
2021-02-24 20:28:27 +00:00
|
|
|
req_ref_put(req);
|
2020-05-15 17:56:54 +00:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void io_init_poll_iocb(struct io_poll_iocb *poll, __poll_t events,
|
|
|
|
wait_queue_func_t wake_func)
|
|
|
|
{
|
|
|
|
poll->head = NULL;
|
|
|
|
poll->done = false;
|
|
|
|
poll->canceled = false;
|
2021-03-19 20:06:24 +00:00
|
|
|
#define IO_POLL_UNMASK (EPOLLERR|EPOLLHUP|EPOLLNVAL|EPOLLRDHUP)
|
|
|
|
/* mask in events that we always want/need */
|
|
|
|
poll->events = events | IO_POLL_UNMASK;
|
2020-05-15 17:56:54 +00:00
|
|
|
INIT_LIST_HEAD(&poll->wait.entry);
|
|
|
|
init_waitqueue_func_entry(&poll->wait, wake_func);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
|
2020-07-17 23:09:27 +00:00
|
|
|
struct wait_queue_head *head,
|
|
|
|
struct io_poll_iocb **poll_ptr)
|
2020-05-15 17:56:54 +00:00
|
|
|
{
|
|
|
|
struct io_kiocb *req = pt->req;
|
|
|
|
|
|
|
|
/*
|
2021-07-20 09:50:43 +00:00
|
|
|
* The file being polled uses multiple waitqueues for poll handling
|
|
|
|
* (e.g. one for read, one for write). Setup a separate io_poll_iocb
|
|
|
|
* if this happens.
|
2020-05-15 17:56:54 +00:00
|
|
|
*/
|
2021-07-20 09:50:43 +00:00
|
|
|
if (unlikely(pt->nr_entries)) {
|
2020-10-16 19:55:56 +00:00
|
|
|
struct io_poll_iocb *poll_one = poll;
|
|
|
|
|
2020-05-15 17:56:54 +00:00
|
|
|
/* already have a 2nd entry, fail a third attempt */
|
2020-07-17 23:09:27 +00:00
|
|
|
if (*poll_ptr) {
|
2020-05-15 17:56:54 +00:00
|
|
|
pt->error = -EINVAL;
|
|
|
|
return;
|
|
|
|
}
|
2021-04-15 15:47:13 +00:00
|
|
|
/*
|
|
|
|
* Can't handle multishot for double wait for now, turn it
|
|
|
|
* into one-shot mode.
|
|
|
|
*/
|
2021-05-17 11:43:34 +00:00
|
|
|
if (!(poll_one->events & EPOLLONESHOT))
|
|
|
|
poll_one->events |= EPOLLONESHOT;
|
io_uring: ignore double poll add on the same waitqueue head
syzbot reports a deadlock, attempting to lock the same spinlock twice:
============================================
WARNING: possible recursive locking detected
5.11.0-syzkaller #0 Not tainted
--------------------------------------------
swapper/1/0 is trying to acquire lock:
ffff88801b2b1130 (&runtime->sleep){..-.}-{2:2}, at: spin_lock include/linux/spinlock.h:354 [inline]
ffff88801b2b1130 (&runtime->sleep){..-.}-{2:2}, at: io_poll_double_wake+0x25f/0x6a0 fs/io_uring.c:4960
but task is already holding lock:
ffff88801b2b3130 (&runtime->sleep){..-.}-{2:2}, at: __wake_up_common_lock+0xb4/0x130 kernel/sched/wait.c:137
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&runtime->sleep);
lock(&runtime->sleep);
*** DEADLOCK ***
May be due to missing lock nesting notation
2 locks held by swapper/1/0:
#0: ffff888147474908 (&group->lock){..-.}-{2:2}, at: _snd_pcm_stream_lock_irqsave+0x9f/0xd0 sound/core/pcm_native.c:170
#1: ffff88801b2b3130 (&runtime->sleep){..-.}-{2:2}, at: __wake_up_common_lock+0xb4/0x130 kernel/sched/wait.c:137
stack backtrace:
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.11.0-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
<IRQ>
__dump_stack lib/dump_stack.c:79 [inline]
dump_stack+0xfa/0x151 lib/dump_stack.c:120
print_deadlock_bug kernel/locking/lockdep.c:2829 [inline]
check_deadlock kernel/locking/lockdep.c:2872 [inline]
validate_chain kernel/locking/lockdep.c:3661 [inline]
__lock_acquire.cold+0x14c/0x3b4 kernel/locking/lockdep.c:4900
lock_acquire kernel/locking/lockdep.c:5510 [inline]
lock_acquire+0x1ab/0x730 kernel/locking/lockdep.c:5475
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
spin_lock include/linux/spinlock.h:354 [inline]
io_poll_double_wake+0x25f/0x6a0 fs/io_uring.c:4960
__wake_up_common+0x147/0x650 kernel/sched/wait.c:108
__wake_up_common_lock+0xd0/0x130 kernel/sched/wait.c:138
snd_pcm_update_state+0x46a/0x540 sound/core/pcm_lib.c:203
snd_pcm_update_hw_ptr0+0xa75/0x1a50 sound/core/pcm_lib.c:464
snd_pcm_period_elapsed+0x160/0x250 sound/core/pcm_lib.c:1805
dummy_hrtimer_callback+0x94/0x1b0 sound/drivers/dummy.c:378
__run_hrtimer kernel/time/hrtimer.c:1519 [inline]
__hrtimer_run_queues+0x609/0xe40 kernel/time/hrtimer.c:1583
hrtimer_run_softirq+0x17b/0x360 kernel/time/hrtimer.c:1600
__do_softirq+0x29b/0x9f6 kernel/softirq.c:345
invoke_softirq kernel/softirq.c:221 [inline]
__irq_exit_rcu kernel/softirq.c:422 [inline]
irq_exit_rcu+0x134/0x200 kernel/softirq.c:434
sysvec_apic_timer_interrupt+0x93/0xc0 arch/x86/kernel/apic/apic.c:1100
</IRQ>
asm_sysvec_apic_timer_interrupt+0x12/0x20 arch/x86/include/asm/idtentry.h:632
RIP: 0010:native_save_fl arch/x86/include/asm/irqflags.h:29 [inline]
RIP: 0010:arch_local_save_flags arch/x86/include/asm/irqflags.h:70 [inline]
RIP: 0010:arch_irqs_disabled arch/x86/include/asm/irqflags.h:137 [inline]
RIP: 0010:acpi_safe_halt drivers/acpi/processor_idle.c:111 [inline]
RIP: 0010:acpi_idle_do_entry+0x1c9/0x250 drivers/acpi/processor_idle.c:516
Code: dd 38 6e f8 84 db 75 ac e8 54 32 6e f8 e8 0f 1c 74 f8 e9 0c 00 00 00 e8 45 32 6e f8 0f 00 2d 4e 4a c5 00 e8 39 32 6e f8 fb f4 <9c> 5b 81 e3 00 02 00 00 fa 31 ff 48 89 de e8 14 3a 6e f8 48 85 db
RSP: 0018:ffffc90000d47d18 EFLAGS: 00000293
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
RDX: ffff8880115c3780 RSI: ffffffff89052537 RDI: 0000000000000000
RBP: ffff888141127064 R08: 0000000000000001 R09: 0000000000000001
R10: ffffffff81794168 R11: 0000000000000000 R12: 0000000000000001
R13: ffff888141127000 R14: ffff888141127064 R15: ffff888143331804
acpi_idle_enter+0x361/0x500 drivers/acpi/processor_idle.c:647
cpuidle_enter_state+0x1b1/0xc80 drivers/cpuidle/cpuidle.c:237
cpuidle_enter+0x4a/0xa0 drivers/cpuidle/cpuidle.c:351
call_cpuidle kernel/sched/idle.c:158 [inline]
cpuidle_idle_call kernel/sched/idle.c:239 [inline]
do_idle+0x3e1/0x590 kernel/sched/idle.c:300
cpu_startup_entry+0x14/0x20 kernel/sched/idle.c:397
start_secondary+0x274/0x350 arch/x86/kernel/smpboot.c:272
secondary_startup_64_no_verify+0xb0/0xbb
which is due to the driver doing poll_wait() twice on the same
wait_queue_head. That is perfectly valid, but from checking the rest
of the kernel tree, it's the only driver that does this.
We can handle this just fine, we just need to ignore the second addition
as we'll get woken just fine on the first one.
Cc: stable@vger.kernel.org # 5.8+
Fixes: 18bceab101ad ("io_uring: allow POLL_ADD with double poll_wait() users")
Reported-by: syzbot+28abd693db9e92c160d8@syzkaller.appspotmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-28 23:07:30 +00:00
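For context, the pattern the report reduces to is a driver ->poll handler that registers the same waitqueue head twice. A hypothetical sketch (not the actual sound driver code) of what __io_queue_proc() then sees:

#include <linux/fs.h>
#include <linux/poll.h>

/* Hypothetical device, for illustration only. */
struct demo_dev {
	wait_queue_head_t waitq;
	bool ready;
};

static __poll_t demo_poll(struct file *file, poll_table *wait)
{
	struct demo_dev *dev = file->private_data;

	/*
	 * Both calls pass the same head, so io_uring's queue proc is
	 * invoked twice for one wait_queue_head; the head comparison
	 * below simply ignores the second registration.
	 */
	poll_wait(file, &dev->waitq, wait);
	poll_wait(file, &dev->waitq, wait);

	return dev->ready ? (EPOLLIN | EPOLLRDNORM) : 0;
}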
|
|
|
/* double add on the same waitqueue head, ignore */
|
2021-05-17 11:43:34 +00:00
|
|
|
if (poll_one->head == head)
|
2021-02-28 23:07:30 +00:00
|
|
|
return;
|
2020-05-15 17:56:54 +00:00
|
|
|
poll = kmalloc(sizeof(*poll), GFP_ATOMIC);
|
|
|
|
if (!poll) {
|
|
|
|
pt->error = -ENOMEM;
|
|
|
|
return;
|
|
|
|
}
|
2020-10-16 19:55:56 +00:00
|
|
|
io_init_poll_iocb(poll, poll_one->events, io_poll_double_wake);
|
2021-02-24 20:28:27 +00:00
|
|
|
req_ref_get(req);
|
2020-05-15 17:56:54 +00:00
|
|
|
poll->wait.private = req;
|
2020-07-17 23:09:27 +00:00
|
|
|
*poll_ptr = poll;
|
2020-05-15 17:56:54 +00:00
|
|
|
}
|
|
|
|
|
2021-07-20 09:50:43 +00:00
|
|
|
pt->nr_entries++;
|
2020-05-15 17:56:54 +00:00
|
|
|
poll->head = head;
|
2020-06-17 09:53:56 +00:00
|
|
|
|
|
|
|
if (poll->events & EPOLLEXCLUSIVE)
|
|
|
|
add_wait_queue_exclusive(head, &poll->wait);
|
|
|
|
else
|
|
|
|
add_wait_queue(head, &poll->wait);
|
2020-05-15 17:56:54 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void io_async_queue_proc(struct file *file, struct wait_queue_head *head,
|
|
|
|
struct poll_table_struct *p)
|
|
|
|
{
|
|
|
|
struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);
|
2020-07-17 23:09:27 +00:00
|
|
|
struct async_poll *apoll = pt->req->apoll;
|
2020-05-15 17:56:54 +00:00
|
|
|
|
2020-07-17 23:09:27 +00:00
|
|
|
__io_queue_proc(&apoll->poll, pt, head, &apoll->double_poll);
|
2020-05-15 17:56:54 +00:00
|
|
|
}
|
|
|
|
|
2021-06-30 20:54:04 +00:00
|
|
|
static void io_async_task_func(struct io_kiocb *req)
|
io_uring: use poll driven retry for files that support it
Currently io_uring tries any request in a non-blocking manner, if it can,
and then retries from a worker thread if we get -EAGAIN. Now that we have
a new and fancy poll based retry backend, use that to retry requests if
the file supports it.
This means that, for example, an IORING_OP_RECVMSG on a socket no longer
requires an async thread to complete the IO. If we get -EAGAIN reading
from the socket in a non-blocking manner, we arm a poll handler for
notification on when the socket becomes readable. When it does, the
pending read is executed directly by the task again, through the io_uring
task work handlers. Not only is this faster and more efficient, it also
means we're not potentially generating tons of async threads that just
sit and block, waiting for the IO to complete.
The feature is marked with IORING_FEAT_FAST_POLL, meaning that async
pollable IO is fast, and that poll<link>other_op is fast as well.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-15 05:23:12 +00:00
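A minimal userspace sketch of probing for this feature at setup time (raw syscall, no liburing; it assumes __NR_io_uring_setup is exposed by the libc syscall headers):

#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	struct io_uring_params p;
	int fd;

	memset(&p, 0, sizeof(p));
	fd = syscall(__NR_io_uring_setup, 8, &p);
	if (fd < 0) {
		perror("io_uring_setup");
		return 1;
	}
	if (p.features & IORING_FEAT_FAST_POLL)
		printf("pollable files retried via internal poll, not worker threads\n");
	close(fd);
	return 0;
}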
|
|
|
{
|
|
|
|
struct async_poll *apoll = req->apoll;
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
2021-05-31 06:36:37 +00:00
|
|
|
trace_io_uring_task_run(req->ctx, req, req->opcode, req->user_data);
|
2020-02-15 05:23:12 +00:00
|
|
|
|
2020-04-13 17:09:12 +00:00
|
|
|
if (io_poll_rewait(req, &apoll->poll)) {
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2020-04-13 17:09:12 +00:00
|
|
|
return;
|
2020-02-15 05:23:12 +00:00
|
|
|
}
|
|
|
|
|
2021-04-09 08:13:21 +00:00
|
|
|
hash_del(&req->hash_node);
|
2020-08-15 18:44:50 +00:00
|
|
|
io_poll_remove_double(req);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2020-04-13 17:09:12 +00:00
|
|
|
|
2020-06-30 12:20:42 +00:00
|
|
|
if (!READ_ONCE(apoll->poll.canceled))
|
2021-06-30 20:54:04 +00:00
|
|
|
io_req_task_submit(req);
|
2020-06-30 12:20:42 +00:00
|
|
|
else
|
2021-03-19 17:22:40 +00:00
|
|
|
io_req_complete_failed(req, -ECANCELED);
|
2020-02-15 05:23:12 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static int io_async_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
|
|
|
|
void *key)
|
|
|
|
{
|
|
|
|
struct io_kiocb *req = wait->private;
|
|
|
|
struct io_poll_iocb *poll = &req->apoll->poll;
|
|
|
|
|
|
|
|
trace_io_uring_poll_wake(req->ctx, req->opcode, req->user_data,
|
|
|
|
key_to_poll(key));
|
|
|
|
|
|
|
|
return __io_async_wake(req, poll, key_to_poll(key), io_async_task_func);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void io_poll_req_insert(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
struct hlist_head *list;
|
|
|
|
|
|
|
|
list = &ctx->cancel_hash[hash_long(req->user_data, ctx->cancel_hash_bits)];
|
|
|
|
hlist_add_head(&req->hash_node, list);
|
|
|
|
}
|
|
|
|
|
|
|
|
static __poll_t __io_arm_poll_handler(struct io_kiocb *req,
|
|
|
|
struct io_poll_iocb *poll,
|
|
|
|
struct io_poll_table *ipt, __poll_t mask,
|
|
|
|
wait_queue_func_t wake_func)
|
|
|
|
__acquires(&ctx->completion_lock)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
bool cancel = false;
|
|
|
|
|
2020-10-18 09:17:43 +00:00
|
|
|
INIT_HLIST_NODE(&req->hash_node);
|
2020-05-15 17:56:54 +00:00
|
|
|
io_init_poll_iocb(poll, mask, wake_func);
|
2020-06-21 10:09:52 +00:00
|
|
|
poll->file = req->file;
|
2020-05-15 17:56:54 +00:00
|
|
|
poll->wait.private = req;
|
2020-02-15 05:23:12 +00:00
|
|
|
|
|
|
|
ipt->pt._key = mask;
|
|
|
|
ipt->req = req;
|
2021-07-20 09:50:43 +00:00
|
|
|
ipt->error = 0;
|
|
|
|
ipt->nr_entries = 0;
|
2020-02-15 05:23:12 +00:00
|
|
|
|
|
|
|
mask = vfs_poll(req->file, &ipt->pt) & poll->events;
|
2021-07-20 09:50:43 +00:00
|
|
|
if (unlikely(!ipt->nr_entries) && !ipt->error)
|
|
|
|
ipt->error = -EINVAL;
|
2020-02-15 05:23:12 +00:00
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2021-07-28 03:03:22 +00:00
|
|
|
if (ipt->error || (mask && (poll->events & EPOLLONESHOT)))
|
2021-07-20 09:50:44 +00:00
|
|
|
io_poll_remove_double(req);
|
2020-02-15 05:23:12 +00:00
|
|
|
if (likely(poll->head)) {
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock_irq(&poll->head->lock);
|
2020-02-15 05:23:12 +00:00
|
|
|
if (unlikely(list_empty(&poll->wait.entry))) {
|
|
|
|
if (ipt->error)
|
|
|
|
cancel = true;
|
|
|
|
ipt->error = 0;
|
|
|
|
mask = 0;
|
|
|
|
}
|
2021-02-23 05:08:01 +00:00
|
|
|
if ((mask && (poll->events & EPOLLONESHOT)) || ipt->error)
|
2020-02-15 05:23:12 +00:00
|
|
|
list_del_init(&poll->wait.entry);
|
|
|
|
else if (cancel)
|
|
|
|
WRITE_ONCE(poll->canceled, true);
|
|
|
|
else if (!poll->done) /* actually waiting for an event */
|
|
|
|
io_poll_req_insert(req);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock_irq(&poll->head->lock);
|
2020-02-15 05:23:12 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
return mask;
|
|
|
|
}
|
|
|
|
|
2021-06-22 12:17:39 +00:00
|
|
|
enum {
|
|
|
|
IO_APOLL_OK,
|
|
|
|
IO_APOLL_ABORTED,
|
|
|
|
IO_APOLL_READY
|
|
|
|
};
|
|
|
|
|
|
|
|
static int io_arm_poll_handler(struct io_kiocb *req)
|
2020-02-15 05:23:12 +00:00
|
|
|
{
|
|
|
|
const struct io_op_def *def = &io_op_defs[req->opcode];
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
struct async_poll *apoll;
|
|
|
|
struct io_poll_table ipt;
|
2021-06-26 20:40:44 +00:00
|
|
|
__poll_t ret, mask = EPOLLONESHOT | POLLERR | POLLPRI;
|
2020-08-25 18:27:50 +00:00
|
|
|
int rw;
|
2020-02-15 05:23:12 +00:00
|
|
|
|
|
|
|
if (!req->file || !file_can_poll(req->file))
|
2021-06-22 12:17:39 +00:00
|
|
|
return IO_APOLL_ABORTED;
|
2020-06-21 10:09:51 +00:00
|
|
|
if (req->flags & REQ_F_POLLED)
|
2021-06-22 12:17:39 +00:00
|
|
|
return IO_APOLL_ABORTED;
|
2021-06-26 20:40:44 +00:00
|
|
|
if (!def->pollin && !def->pollout)
|
|
|
|
return IO_APOLL_ABORTED;
|
|
|
|
|
|
|
|
if (def->pollin) {
|
2020-08-25 18:27:50 +00:00
|
|
|
rw = READ;
|
2021-06-26 20:40:44 +00:00
|
|
|
mask |= POLLIN | POLLRDNORM;
|
|
|
|
|
|
|
|
/* If reading from MSG_ERRQUEUE using recvmsg, ignore POLLIN */
|
|
|
|
if ((req->opcode == IORING_OP_RECVMSG) &&
|
|
|
|
(req->sr_msg.msg_flags & MSG_ERRQUEUE))
|
|
|
|
mask &= ~POLLIN;
|
|
|
|
} else {
|
2020-08-25 18:27:50 +00:00
|
|
|
rw = WRITE;
|
2021-06-26 20:40:44 +00:00
|
|
|
mask |= POLLOUT | POLLWRNORM;
|
|
|
|
}
|
|
|
|
|
2020-08-25 18:27:50 +00:00
|
|
|
/* if we can't nonblock try, then no point in arming a poll handler */
|
2021-08-09 12:04:03 +00:00
|
|
|
if (!io_file_supports_nowait(req, rw))
|
2021-06-22 12:17:39 +00:00
|
|
|
return IO_APOLL_ABORTED;
|
2020-02-15 05:23:12 +00:00
|
|
|
|
|
|
|
apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
|
|
|
|
if (unlikely(!apoll))
|
2021-06-22 12:17:39 +00:00
|
|
|
return IO_APOLL_ABORTED;
|
2020-07-17 23:09:27 +00:00
|
|
|
apoll->double_poll = NULL;
|
2020-02-15 05:23:12 +00:00
|
|
|
req->apoll = apoll;
|
2021-06-26 20:40:44 +00:00
|
|
|
req->flags |= REQ_F_POLLED;
|
2020-02-15 05:23:12 +00:00
|
|
|
ipt.pt._qproc = io_async_queue_proc;
|
2021-08-15 09:40:18 +00:00
|
|
|
io_req_set_refcount(req);
|
2020-02-15 05:23:12 +00:00
|
|
|
|
|
|
|
ret = __io_arm_poll_handler(req, &apoll->poll, &ipt, mask,
|
|
|
|
io_async_wake);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-08-12 07:47:02 +00:00
|
|
|
if (ret || ipt.error)
|
|
|
|
return ret ? IO_APOLL_READY : IO_APOLL_ABORTED;
|
|
|
|
|
2021-05-31 06:36:37 +00:00
|
|
|
trace_io_uring_poll_arm(ctx, req, req->opcode, req->user_data,
|
|
|
|
mask, apoll->poll.events);
|
2021-06-22 12:17:39 +00:00
|
|
|
return IO_APOLL_OK;
|
2020-02-15 05:23:12 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static bool __io_poll_remove_one(struct io_kiocb *req,
|
2021-03-31 15:03:03 +00:00
|
|
|
struct io_poll_iocb *poll, bool do_cancel)
|
2021-04-01 14:43:57 +00:00
|
|
|
__must_hold(&req->ctx->completion_lock)
|
2019-01-17 16:41:58 +00:00
|
|
|
{
|
io_uring: add per-task callback handler
For poll requests, it's not uncommon to link a read (or write) after
the poll to execute immediately after the file is marked as ready.
Since the poll completion is called inside the waitqueue wake up handler,
we have to punt that linked request to async context. This slows down
the processing, and actually means it's faster to not use a link for this
use case.
We also run into problems if the completion_lock is contended, as we're
doing a different lock ordering than the issue side is. Hence we have
to do trylock for completion, and if that fails, go async. Poll removal
needs to go async as well, for the same reason.
eventfd notification needs special-casing as well, to avoid stack-blowing
recursion or deadlocks.
These are all deficiencies that were inherited from the aio poll
implementation, but I think we can do better. When a poll completes,
simply queue it up in the task poll list. When the task completes the
list, we can run dependent links inline as well. This means we never
have to go async, and we can remove a bunch of code associated with
that, and optimizations to try and make that run faster. The diffstat
speaks for itself.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-17 16:52:41 +00:00
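The linked-request pattern the message refers to, sketched as raw SQE setup (a hypothetical helper, not taken from this file): a POLL_ADD chained via IOSQE_IO_LINK to a READ, so the read is only issued once the poll reports the fd readable. With the per-task callback handler described above, that dependent read runs from task context instead of being punted to an async worker.

#include <string.h>
#include <poll.h>
#include <linux/io_uring.h>

/* Hypothetical: fill two consecutive SQEs so the READ only runs after
 * the linked poll completes. */
static void prep_poll_then_read(struct io_uring_sqe *sqe, int fd,
				void *buf, unsigned int len)
{
	memset(&sqe[0], 0, 2 * sizeof(*sqe));

	sqe[0].opcode = IORING_OP_POLL_ADD;
	sqe[0].fd = fd;
	sqe[0].poll32_events = POLLIN;
	sqe[0].flags = IOSQE_IO_LINK;	/* chain the next SQE to this one */
	sqe[0].user_data = 1;

	sqe[1].opcode = IORING_OP_READ;
	sqe[1].fd = fd;
	sqe[1].addr = (unsigned long)buf;
	sqe[1].len = len;
	sqe[1].user_data = 2;
}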
|
|
|
bool do_complete = false;
|
2019-01-17 16:41:58 +00:00
|
|
|
|
2021-02-23 16:02:26 +00:00
|
|
|
if (!poll->head)
|
|
|
|
return false;
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock_irq(&poll->head->lock);
|
2021-03-31 15:03:03 +00:00
|
|
|
if (do_cancel)
|
|
|
|
WRITE_ONCE(poll->canceled, true);
|
2019-12-10 00:52:20 +00:00
|
|
|
if (!list_empty(&poll->wait.entry)) {
|
|
|
|
list_del_init(&poll->wait.entry);
|
2020-02-17 16:52:41 +00:00
|
|
|
do_complete = true;
|
2019-01-17 16:41:58 +00:00
|
|
|
}
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock_irq(&poll->head->lock);
|
2020-05-17 19:54:12 +00:00
|
|
|
hash_del(&req->hash_node);
|
2020-02-15 05:23:12 +00:00
|
|
|
return do_complete;
|
|
|
|
}
|
|
|
|
|
2021-08-09 19:18:13 +00:00
|
|
|
static bool io_poll_remove_one(struct io_kiocb *req)
|
2021-04-01 14:43:57 +00:00
|
|
|
__must_hold(&req->ctx->completion_lock)
|
2020-02-15 05:23:12 +00:00
|
|
|
{
|
|
|
|
bool do_complete;
|
|
|
|
|
2020-08-15 18:44:50 +00:00
|
|
|
io_poll_remove_double(req);
|
2021-04-13 01:58:43 +00:00
|
|
|
do_complete = __io_poll_remove_one(req, io_poll_get_single(req), true);
|
2020-08-15 18:44:50 +00:00
|
|
|
|
2020-02-17 16:52:41 +00:00
|
|
|
if (do_complete) {
|
2021-04-25 13:32:17 +00:00
|
|
|
io_cqring_fill_event(req->ctx, req->user_data, -ECANCELED, 0);
|
2020-02-17 16:52:41 +00:00
|
|
|
io_commit_cqring(req->ctx);
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-08-11 18:28:28 +00:00
|
|
|
io_put_req_deferred(req);
|
2021-08-09 19:18:13 +00:00
|
|
|
}
|
2020-02-17 16:52:41 +00:00
|
|
|
return do_complete;
|
2019-01-17 16:41:58 +00:00
|
|
|
}
|
|
|
|
|
2020-09-26 21:05:03 +00:00
|
|
|
/*
|
|
|
|
* Returns true if we found and killed one or more poll requests
|
|
|
|
*/
|
2020-11-06 13:00:25 +00:00
|
|
|
static bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk,
|
2021-05-16 21:58:04 +00:00
|
|
|
bool cancel_all)
|
2019-01-17 16:41:58 +00:00
|
|
|
{
|
2019-12-05 02:56:40 +00:00
|
|
|
struct hlist_node *tmp;
|
2019-01-17 16:41:58 +00:00
|
|
|
struct io_kiocb *req;
|
2020-04-13 23:05:14 +00:00
|
|
|
int posted = 0, i;
|
2019-01-17 16:41:58 +00:00
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2019-12-05 02:56:40 +00:00
|
|
|
for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
|
|
|
|
struct hlist_head *list;
|
|
|
|
|
|
|
|
list = &ctx->cancel_hash[i];
|
2020-09-22 14:18:24 +00:00
|
|
|
hlist_for_each_entry_safe(req, tmp, list, hash_node) {
|
2021-05-16 21:58:04 +00:00
|
|
|
if (io_match_task(req, tsk, cancel_all))
|
2020-09-22 14:18:24 +00:00
|
|
|
posted += io_poll_remove_one(req);
|
|
|
|
}
|
2019-01-17 16:41:58 +00:00
|
|
|
}
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2020-02-17 16:52:41 +00:00
|
|
|
|
2020-04-13 23:05:14 +00:00
|
|
|
if (posted)
|
|
|
|
io_cqring_ev_posted(ctx);
|
2020-09-26 21:05:03 +00:00
|
|
|
|
|
|
|
return posted != 0;
|
2019-01-17 16:41:58 +00:00
|
|
|
}
|
|
|
|
|
2021-04-14 12:38:35 +00:00
|
|
|
static struct io_kiocb *io_poll_find(struct io_ring_ctx *ctx, __u64 sqe_addr,
|
|
|
|
bool poll_only)
|
2021-04-01 14:43:57 +00:00
|
|
|
__must_hold(&ctx->completion_lock)
|
2019-11-10 00:43:02 +00:00
|
|
|
{
|
2019-12-05 02:56:40 +00:00
|
|
|
struct hlist_head *list;
|
2019-11-10 00:43:02 +00:00
|
|
|
struct io_kiocb *req;
|
|
|
|
|
2019-12-05 02:56:40 +00:00
|
|
|
list = &ctx->cancel_hash[hash_long(sqe_addr, ctx->cancel_hash_bits)];
|
|
|
|
hlist_for_each_entry(req, list, hash_node) {
|
2020-02-17 16:52:41 +00:00
|
|
|
if (sqe_addr != req->user_data)
|
|
|
|
continue;
|
2021-04-14 12:38:35 +00:00
|
|
|
if (poll_only && req->opcode != IORING_OP_POLL_ADD)
|
|
|
|
continue;
|
2021-03-17 14:17:19 +00:00
|
|
|
return req;
|
2019-11-10 00:43:02 +00:00
|
|
|
}
|
2021-03-17 14:17:19 +00:00
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2021-04-14 12:38:35 +00:00
|
|
|
static int io_poll_cancel(struct io_ring_ctx *ctx, __u64 sqe_addr,
|
|
|
|
bool poll_only)
|
2021-04-01 14:43:57 +00:00
|
|
|
__must_hold(&ctx->completion_lock)
|
2021-03-17 14:17:19 +00:00
|
|
|
{
|
|
|
|
struct io_kiocb *req;
|
|
|
|
|
2021-04-14 12:38:35 +00:00
|
|
|
req = io_poll_find(ctx, sqe_addr, poll_only);
|
2021-03-17 14:17:19 +00:00
|
|
|
if (!req)
|
|
|
|
return -ENOENT;
|
|
|
|
if (io_poll_remove_one(req))
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
return -EALREADY;
|
2019-11-10 00:43:02 +00:00
|
|
|
}
|
|
|
|
|
2021-04-14 12:38:36 +00:00
|
|
|
static __poll_t io_poll_parse_events(const struct io_uring_sqe *sqe,
|
|
|
|
unsigned int flags)
|
|
|
|
{
|
|
|
|
u32 events;
|
2019-11-10 00:43:02 +00:00
|
|
|
|
2021-04-14 12:38:36 +00:00
|
|
|
events = READ_ONCE(sqe->poll32_events);
|
|
|
|
#ifdef __BIG_ENDIAN
|
|
|
|
events = swahw32(events);
|
|
|
|
#endif
|
|
|
|
if (!(flags & IORING_POLL_ADD_MULTI))
|
|
|
|
events |= EPOLLONESHOT;
|
|
|
|
return demangle_poll(events) | (events & (EPOLLEXCLUSIVE|EPOLLONESHOT));
|
2019-11-10 00:43:02 +00:00
|
|
|
}
|
|
|
|
|
2021-04-14 12:38:37 +00:00
|
|
|
static int io_poll_update_prep(struct io_kiocb *req,
|
2019-12-20 01:24:38 +00:00
|
|
|
const struct io_uring_sqe *sqe)
|
2019-12-18 01:40:57 +00:00
|
|
|
{
|
2021-04-14 12:38:37 +00:00
|
|
|
struct io_poll_update *upd = &req->poll_update;
|
|
|
|
u32 flags;
|
|
|
|
|
2019-12-18 01:40:57 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return -EINVAL;
|
2021-04-14 12:38:37 +00:00
|
|
|
if (sqe->ioprio || sqe->buf_index)
|
|
|
|
return -EINVAL;
|
|
|
|
flags = READ_ONCE(sqe->len);
|
|
|
|
if (flags & ~(IORING_POLL_UPDATE_EVENTS | IORING_POLL_UPDATE_USER_DATA |
|
|
|
|
IORING_POLL_ADD_MULTI))
|
|
|
|
return -EINVAL;
|
|
|
|
/* meaningless without update */
|
|
|
|
if (flags == IORING_POLL_ADD_MULTI)
|
2019-12-18 01:40:57 +00:00
|
|
|
return -EINVAL;
|
|
|
|
|
2021-04-14 12:38:37 +00:00
|
|
|
upd->old_user_data = READ_ONCE(sqe->addr);
|
|
|
|
upd->update_events = flags & IORING_POLL_UPDATE_EVENTS;
|
|
|
|
upd->update_user_data = flags & IORING_POLL_UPDATE_USER_DATA;
|
2019-01-17 16:41:58 +00:00
|
|
|
|
2021-04-14 12:38:37 +00:00
|
|
|
upd->new_user_data = READ_ONCE(sqe->off);
|
|
|
|
if (!upd->update_user_data && upd->new_user_data)
|
|
|
|
return -EINVAL;
|
|
|
|
if (upd->update_events)
|
|
|
|
upd->events = io_poll_parse_events(sqe, flags);
|
|
|
|
else if (sqe->poll32_events)
|
|
|
|
return -EINVAL;
|
2019-01-17 16:41:58 +00:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
|
|
|
|
void *key)
|
|
|
|
{
|
2020-02-10 16:07:05 +00:00
|
|
|
struct io_kiocb *req = wait->private;
|
|
|
|
struct io_poll_iocb *poll = &req->poll;
|
2019-01-17 16:41:58 +00:00
|
|
|
|
io_uring: use poll driven retry for files that support it
Currently io_uring tries any request in a non-blocking manner, if it can,
and then retries from a worker thread if we get -EAGAIN. Now that we have
a new and fancy poll based retry backend, use that to retry requests if
the file supports it.
This means that, for example, an IORING_OP_RECVMSG on a socket no longer
requires an async thread to complete the IO. If we get -EAGAIN reading
from the socket in a non-blocking manner, we arm a poll handler for
notification on when the socket becomes readable. When it does, the
pending read is executed directly by the task again, through the io_uring
task work handlers. Not only is this faster and more efficient, it also
means we're not generating potentially tons of async threads that just
sit and block, waiting for the IO to complete.
The feature is marked with IORING_FEAT_FAST_POLL, meaning that async
pollable IO is fast, and that a poll linked with another op is fast as well.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-15 05:23:12 +00:00
|
|
|
return __io_async_wake(req, poll, key_to_poll(key), io_poll_task_func);
|
2019-01-17 16:41:58 +00:00
|
|
|
}
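A minimal userspace sketch of the fast-poll behaviour described in the changelog above, assuming liburing is available; the helper name, buffer size and error handling are illustrative only and not part of this file. A recv on a socket that is not yet readable completes through the kernel's internal poll handler instead of an async worker, and IORING_FEAT_FAST_POLL in the setup features advertises that.

#include <liburing.h>
#include <stdio.h>
#include <string.h>

static int recv_with_fast_poll(int sockfd)
{
	struct io_uring_params params;
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	static char buf[4096];
	int ret;

	memset(&params, 0, sizeof(params));
	ret = io_uring_queue_init_params(8, &ring, &params);
	if (ret < 0)
		return ret;
	if (!(params.features & IORING_FEAT_FAST_POLL))
		fprintf(stderr, "no FAST_POLL, recv may punt to a worker\n");

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_recv(sqe, sockfd, buf, sizeof(buf), 0);
	io_uring_submit(&ring);

	/* completion arrives via task work once the socket turns readable */
	ret = io_uring_wait_cqe(&ring, &cqe);
	if (!ret) {
		ret = cqe->res;			/* bytes received, or -errno */
		io_uring_cqe_seen(&ring, cqe);
	}
	io_uring_queue_exit(&ring);
	return ret;
}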
|
|
|
|
|
|
|
|
static void io_poll_queue_proc(struct file *file, struct wait_queue_head *head,
|
|
|
|
struct poll_table_struct *p)
|
|
|
|
{
|
|
|
|
struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);
|
|
|
|
|
2020-08-16 01:44:09 +00:00
|
|
|
__io_queue_proc(&pt->req->poll, pt, head, (struct io_poll_iocb **) &pt->req->async_data);
|
2019-11-14 19:09:58 +00:00
|
|
|
}
|
|
|
|
|
2019-12-20 01:24:38 +00:00
|
|
|
static int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
|
2019-01-17 16:41:58 +00:00
|
|
|
{
|
|
|
|
struct io_poll_iocb *poll = &req->poll;
|
2021-04-14 12:38:37 +00:00
|
|
|
u32 flags;
|
2019-01-17 16:41:58 +00:00
|
|
|
|
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return -EINVAL;
|
2021-04-14 12:38:37 +00:00
|
|
|
if (sqe->ioprio || sqe->buf_index || sqe->off || sqe->addr)
|
2021-02-23 05:08:01 +00:00
|
|
|
return -EINVAL;
|
|
|
|
flags = READ_ONCE(sqe->len);
|
2021-04-14 12:38:37 +00:00
|
|
|
if (flags & ~IORING_POLL_ADD_MULTI)
|
2019-01-17 16:41:58 +00:00
|
|
|
return -EINVAL;
|
|
|
|
|
2021-08-15 09:40:18 +00:00
|
|
|
io_req_set_refcount(req);
|
2021-04-14 12:38:37 +00:00
|
|
|
poll->events = io_poll_parse_events(sqe, flags);
|
2019-12-18 01:40:57 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:08 +00:00
|
|
|
static int io_poll_add(struct io_kiocb *req, unsigned int issue_flags)
|
2019-12-18 01:40:57 +00:00
|
|
|
{
|
|
|
|
struct io_poll_iocb *poll = &req->poll;
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
struct io_poll_table ipt;
|
|
|
|
__poll_t mask;
|
|
|
|
|
2020-02-15 05:23:12 +00:00
|
|
|
ipt.pt._qproc = io_poll_queue_proc;
|
2019-07-25 16:20:18 +00:00
|
|
|
|
2020-02-15 05:23:12 +00:00
|
|
|
mask = __io_arm_poll_handler(req, &req->poll, &ipt, poll->events,
|
|
|
|
io_poll_wake);
|
2019-01-17 16:41:58 +00:00
|
|
|
|
io_uring: fix poll races
This is a straight port of Al's fix for the aio poll implementation,
since the io_uring version is heavily based on that. The below
description is almost straight from that patch, just modified to
fit the io_uring situation.
io_poll() has to cope with several unpleasant problems:
* requests that might stay around indefinitely need to
be made visible for io_cancel(2); that must not be done to
a request already completed, though.
* in cases when ->poll() has placed us on a waitqueue,
wakeup might have happened (and request completed) before ->poll()
returns.
* worse, in some early wakeup cases request might end
up re-added into the queue later - we can't treat "woken up and
currently not in the queue" as "it's not going to stick around
indefinitely"
* ... moreover, ->poll() might have decided not to
put it on any queues to start with, and that needs to be distinguished
from the previous case
* ->poll() might have tried to put us on more than one queue.
Only the first will succeed for io poll, so we might end up missing
wakeups. OTOH, we might very well notice that only after the
wakeup hits and request gets completed (all before ->poll() gets
around to the second poll_wait()). In that case it's too late to
decide that we have an error.
req->woken was an attempt to deal with that. Unfortunately, it was
broken. What we need to keep track of is not that wakeup has happened -
the thing might come back after that. It's that async reference is
already gone and won't come back, so we can't (and needn't) put the
request on the list of cancellables.
The easiest case is "request hadn't been put on any waitqueues"; we
can tell by seeing NULL apt.head, and in that case there won't be
anything async. We should either complete the request ourselves
(if vfs_poll() reports anything of interest) or return an error.
In all other cases we get exclusion with wakeups by grabbing the
queue lock.
If request is currently on queue and we have something interesting
from vfs_poll(), we can steal it and complete the request ourselves.
If it's on queue and vfs_poll() has not reported anything interesting,
we either put it on the cancellable list, or, if we know that it
hadn't been put on all queues ->poll() wanted it on, we steal it and
return an error.
If it's _not_ on queue, it's either been already dealt with (in which
case we do nothing), or there's io_poll_complete_work() about to be
executed. In that case we either put it on the cancellable list,
or, if we know it hadn't been put on all queues ->poll() wanted it on,
simulate what cancel would've done.
Fixes: 221c5eb23382 ("io_uring: add support for IORING_OP_POLL")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-03-12 21:48:16 +00:00
|
|
|
if (mask) { /* no async, we'd stolen it */
|
2019-01-17 16:41:58 +00:00
|
|
|
ipt.error = 0;
|
2021-04-09 08:13:20 +00:00
|
|
|
io_poll_complete(req, mask);
|
2019-01-17 16:41:58 +00:00
|
|
|
}
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2019-01-17 16:41:58 +00:00
|
|
|
|
2019-03-12 21:48:16 +00:00
|
|
|
if (mask) {
|
|
|
|
io_cqring_ev_posted(ctx);
|
2021-02-23 05:08:01 +00:00
|
|
|
if (poll->events & EPOLLONESHOT)
|
|
|
|
io_put_req(req);
|
2019-01-17 16:41:58 +00:00
|
|
|
}
|
2019-03-12 21:48:16 +00:00
|
|
|
return ipt.error;
|
2019-01-17 16:41:58 +00:00
|
|
|
}
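The case analysis in the "fix poll races" changelog above can be condensed into a small decision table. The standalone sketch below only restates that text for readers following the logic; every name is invented and none of it is kernel code.

#include <stdbool.h>

enum arm_outcome {
	COMPLETE_INLINE,	/* complete the request ourselves */
	RETURN_ERROR,		/* fail the submission */
	MAKE_CANCELLABLE,	/* expose the request for cancellation */
	SIMULATE_CANCEL,	/* act as if cancel had already run */
	NOTHING,		/* already dealt with elsewhere */
};

static enum arm_outcome classify(bool queued_on_any_wq, bool still_on_queue,
				 bool mask_from_vfs_poll,
				 bool on_all_wanted_queues, bool already_done)
{
	if (!queued_on_any_wq)		/* NULL apt.head: nothing async coming */
		return mask_from_vfs_poll ? COMPLETE_INLINE : RETURN_ERROR;

	if (still_on_queue) {
		if (mask_from_vfs_poll)	/* steal it and complete inline */
			return COMPLETE_INLINE;
		return on_all_wanted_queues ? MAKE_CANCELLABLE : RETURN_ERROR;
	}

	/* off the queue: either already done, or completion work is pending */
	if (already_done)
		return NOTHING;
	return on_all_wanted_queues ? MAKE_CANCELLABLE : SIMULATE_CANCEL;
}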
|
|
|
|
|
2021-04-14 12:38:37 +00:00
|
|
|
static int io_poll_update(struct io_kiocb *req, unsigned int issue_flags)
|
io_uring: allow events and user_data update of running poll requests
This adds two new POLL_ADD flags, IORING_POLL_UPDATE_EVENTS and
IORING_POLL_UPDATE_USER_DATA. As with the other POLL_ADD flag, these are
masked into sqe->len. If set, the POLL_ADD will have the following
behavior:
- sqe->addr must contain the user_data of the poll request that
needs to be modified. This field is otherwise invalid for a POLL_ADD
command.
- If IORING_POLL_UPDATE_EVENTS is set, sqe->poll_events must contain the
new mask for the existing poll request. There are no checks for whether
these are identical or not, if a matching poll request is found, then it
is re-armed with the new mask.
- If IORING_POLL_UPDATE_USER_DATA is set, sqe->off must contain the new
user_data for the existing poll request.
A POLL_ADD with any of these flags set may complete with any of the
following results:
1) 0, which means that we successfully found the existing poll request
specified, and performed the re-arm procedure. Any error from that
re-arm will be exposed as a completion event for that original poll
request, not for the update request.
2) -ENOENT, if no existing poll request was found with the given
user_data.
3) -EALREADY, if the existing poll request was already in the process of
being removed/canceled/completing.
4) -EACCES, if an attempt was made to modify an internal poll request
(e.g. not one originally issued as IORING_OP_POLL_ADD).
The usual -EINVAL cases apply as well, if any invalid fields are set
in the sqe for this command type.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-03-17 14:37:41 +00:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
struct io_kiocb *preq;
|
2021-04-06 15:49:31 +00:00
|
|
|
bool completing;
|
2021-03-17 14:37:41 +00:00
|
|
|
int ret;
|
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2021-04-14 12:38:35 +00:00
|
|
|
preq = io_poll_find(ctx, req->poll_update.old_user_data, true);
|
2021-03-17 14:37:41 +00:00
|
|
|
if (!preq) {
|
|
|
|
ret = -ENOENT;
|
|
|
|
goto err;
|
|
|
|
}
|
2021-04-06 15:49:31 +00:00
|
|
|
|
2021-04-14 12:38:37 +00:00
|
|
|
if (!req->poll_update.update_events && !req->poll_update.update_user_data) {
|
|
|
|
completing = true;
|
|
|
|
ret = io_poll_remove_one(preq) ? 0 : -EALREADY;
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
|
2021-04-06 15:49:31 +00:00
|
|
|
/*
|
|
|
|
* Don't allow racy completion with singleshot, as we cannot safely
|
|
|
|
* update those. For multishot, if we're racing with completion, just
|
|
|
|
* let completion re-add it.
|
|
|
|
*/
|
|
|
|
completing = !__io_poll_remove_one(preq, &preq->poll, false);
|
|
|
|
if (completing && (preq->poll.events & EPOLLONESHOT)) {
|
|
|
|
ret = -EALREADY;
|
|
|
|
goto err;
|
2021-03-17 14:37:41 +00:00
|
|
|
}
|
|
|
|
/* we now have a detached poll request. reissue. */
|
|
|
|
ret = 0;
|
|
|
|
err:
|
|
|
|
if (ret < 0) {
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-03-17 14:37:41 +00:00
|
|
|
io_req_complete(req, ret);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
/* only mask one event flags, keep behavior flags */
|
2021-04-13 01:58:40 +00:00
|
|
|
if (req->poll_update.update_events) {
|
2021-03-17 14:37:41 +00:00
|
|
|
preq->poll.events &= ~0xffff;
|
2021-04-13 01:58:40 +00:00
|
|
|
preq->poll.events |= req->poll_update.events & 0xffff;
|
2021-03-17 14:37:41 +00:00
|
|
|
preq->poll.events |= IO_POLL_UNMASK;
|
|
|
|
}
|
2021-04-13 01:58:40 +00:00
|
|
|
if (req->poll_update.update_user_data)
|
|
|
|
preq->user_data = req->poll_update.new_user_data;
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-04-06 15:49:31 +00:00
|
|
|
|
2021-03-17 14:37:41 +00:00
|
|
|
/* complete update request, we're done with it */
|
|
|
|
io_req_complete(req, ret);
|
|
|
|
|
2021-04-06 15:49:31 +00:00
|
|
|
if (!completing) {
|
2021-04-14 12:38:37 +00:00
|
|
|
ret = io_poll_add(preq, issue_flags);
|
2021-04-06 15:49:31 +00:00
|
|
|
if (ret < 0) {
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(preq);
|
2021-04-06 15:49:31 +00:00
|
|
|
io_req_complete(preq, ret);
|
|
|
|
}
|
2021-03-17 14:37:41 +00:00
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
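The update interface parsed by io_poll_update_prep() above can be driven with raw SQE fields from userspace. The sketch below follows the field layout given in the changelog: old user_data in sqe->addr, flags in sqe->len, replacement user_data in sqe->off, replacement mask in sqe->poll32_events. The helper name, the POLLOUT mask and the reuse of the original fd are assumptions, and the big-endian halfword swap performed by io_poll_parse_events() is omitted.

#include <linux/io_uring.h>
#include <poll.h>
#include <string.h>

/* re-arm an existing poll request and give it a new user_data */
static void prep_poll_update(struct io_uring_sqe *sqe, int fd,
			     __u64 old_user_data, __u64 new_user_data)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_POLL_ADD;
	sqe->fd = fd;				/* fd the original poll was armed on */
	sqe->addr = old_user_data;		/* which poll request to modify */
	sqe->len = IORING_POLL_UPDATE_EVENTS |
		   IORING_POLL_UPDATE_USER_DATA;
	sqe->off = new_user_data;		/* replacement user_data */
	sqe->poll32_events = POLLOUT;		/* replacement event mask */
}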
|
|
|
|
|
2021-08-10 21:11:51 +00:00
|
|
|
static void io_req_task_timeout(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2021-08-10 21:11:51 +00:00
|
|
|
io_cqring_fill_event(ctx, req->user_data, -ETIME, 0);
|
|
|
|
io_commit_cqring(ctx);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-08-10 21:11:51 +00:00
|
|
|
|
|
|
|
io_cqring_ev_posted(ctx);
|
|
|
|
req_set_fail(req);
|
|
|
|
io_put_req(req);
|
|
|
|
}
|
|
|
|
|
2019-09-17 18:26:57 +00:00
|
|
|
static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
|
|
|
|
{
|
2019-11-15 15:49:11 +00:00
|
|
|
struct io_timeout_data *data = container_of(timer,
|
|
|
|
struct io_timeout_data, timer);
|
|
|
|
struct io_kiocb *req = data->req;
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2019-09-17 18:26:57 +00:00
|
|
|
unsigned long flags;
|
|
|
|
|
2021-08-10 21:11:51 +00:00
|
|
|
spin_lock_irqsave(&ctx->timeout_lock, flags);
|
2020-10-10 17:34:11 +00:00
|
|
|
list_del_init(&req->timeout.list);
|
2020-07-30 15:43:50 +00:00
|
|
|
atomic_set(&req->ctx->cq_timeouts,
|
|
|
|
atomic_read(&req->ctx->cq_timeouts) + 1);
|
2021-08-10 21:11:51 +00:00
|
|
|
spin_unlock_irqrestore(&ctx->timeout_lock, flags);
|
2020-07-30 15:43:50 +00:00
|
|
|
|
2021-08-10 21:11:51 +00:00
|
|
|
req->io_task_work.func = io_req_task_timeout;
|
|
|
|
io_req_task_work_add(req);
|
2019-09-17 18:26:57 +00:00
|
|
|
return HRTIMER_NORESTART;
|
|
|
|
}
|
|
|
|
|
2020-11-30 19:11:15 +00:00
|
|
|
static struct io_kiocb *io_timeout_extract(struct io_ring_ctx *ctx,
|
|
|
|
__u64 user_data)
|
2021-08-10 21:11:51 +00:00
|
|
|
__must_hold(&ctx->timeout_lock)
|
2020-08-12 23:33:30 +00:00
|
|
|
{
|
2020-11-30 19:11:15 +00:00
|
|
|
struct io_timeout_data *io;
|
2019-11-10 00:43:02 +00:00
|
|
|
struct io_kiocb *req;
|
2021-04-13 01:58:42 +00:00
|
|
|
bool found = false;
|
2020-08-12 23:33:30 +00:00
|
|
|
|
2020-07-13 20:37:12 +00:00
|
|
|
list_for_each_entry(req, &ctx->timeout_list, timeout.list) {
|
2021-04-13 01:58:42 +00:00
|
|
|
found = user_data == req->user_data;
|
|
|
|
if (found)
|
2019-11-10 00:43:02 +00:00
|
|
|
break;
|
|
|
|
}
|
2021-04-13 01:58:42 +00:00
|
|
|
if (!found)
|
|
|
|
return ERR_PTR(-ENOENT);
|
2020-11-30 19:11:15 +00:00
|
|
|
|
|
|
|
io = req->async_data;
|
2021-04-13 01:58:42 +00:00
|
|
|
if (hrtimer_try_to_cancel(&io->timer) == -1)
|
2020-11-30 19:11:15 +00:00
|
|
|
return ERR_PTR(-EALREADY);
|
2020-10-10 17:34:11 +00:00
|
|
|
list_del_init(&req->timeout.list);
|
2020-11-30 19:11:15 +00:00
|
|
|
return req;
|
|
|
|
}
|
2019-11-10 00:43:02 +00:00
|
|
|
|
2020-11-30 19:11:15 +00:00
|
|
|
static int io_timeout_cancel(struct io_ring_ctx *ctx, __u64 user_data)
|
2021-08-10 21:11:51 +00:00
|
|
|
__must_hold(&ctx->timeout_lock)
|
2020-11-30 19:11:15 +00:00
|
|
|
{
|
|
|
|
struct io_kiocb *req = io_timeout_extract(ctx, user_data);
|
|
|
|
|
|
|
|
if (IS_ERR(req))
|
|
|
|
return PTR_ERR(req);
|
2020-08-12 23:33:30 +00:00
|
|
|
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-04-25 13:32:17 +00:00
|
|
|
io_cqring_fill_event(ctx, req->user_data, -ECANCELED, 0);
|
2021-08-11 18:28:28 +00:00
|
|
|
io_put_req_deferred(req);
|
2020-08-12 23:33:30 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2020-11-30 19:11:16 +00:00
|
|
|
static int io_timeout_update(struct io_ring_ctx *ctx, __u64 user_data,
|
|
|
|
struct timespec64 *ts, enum hrtimer_mode mode)
|
2021-08-10 21:11:51 +00:00
|
|
|
__must_hold(&ctx->timeout_lock)
|
2019-11-10 00:43:02 +00:00
|
|
|
{
|
2020-11-30 19:11:16 +00:00
|
|
|
struct io_kiocb *req = io_timeout_extract(ctx, user_data);
|
|
|
|
struct io_timeout_data *data;
|
2019-11-10 00:43:02 +00:00
|
|
|
|
2020-11-30 19:11:16 +00:00
|
|
|
if (IS_ERR(req))
|
|
|
|
return PTR_ERR(req);
|
2019-11-10 00:43:02 +00:00
|
|
|
|
2020-11-30 19:11:16 +00:00
|
|
|
req->timeout.off = 0; /* noseq */
|
|
|
|
data = req->async_data;
|
|
|
|
list_add_tail(&req->timeout.list, &ctx->timeout_list);
|
|
|
|
hrtimer_init(&data->timer, CLOCK_MONOTONIC, mode);
|
|
|
|
data->timer.function = io_timeout_fn;
|
|
|
|
hrtimer_start(&data->timer, timespec64_to_ktime(*ts), mode);
|
|
|
|
return 0;
|
2019-11-10 00:43:02 +00:00
|
|
|
}
|
|
|
|
|
2019-12-20 01:24:38 +00:00
|
|
|
static int io_timeout_remove_prep(struct io_kiocb *req,
|
|
|
|
const struct io_uring_sqe *sqe)
|
2019-12-18 01:50:29 +00:00
|
|
|
{
|
2020-11-30 19:11:16 +00:00
|
|
|
struct io_timeout_rem *tr = &req->timeout_rem;
|
|
|
|
|
2019-12-18 01:50:29 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return -EINVAL;
|
2020-07-18 20:15:16 +00:00
|
|
|
if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
|
|
|
|
return -EINVAL;
|
2020-11-30 19:11:16 +00:00
|
|
|
if (sqe->ioprio || sqe->buf_index || sqe->len)
|
2019-12-18 01:50:29 +00:00
|
|
|
return -EINVAL;
|
|
|
|
|
2020-11-30 19:11:16 +00:00
|
|
|
tr->addr = READ_ONCE(sqe->addr);
|
|
|
|
tr->flags = READ_ONCE(sqe->timeout_flags);
|
|
|
|
if (tr->flags & IORING_TIMEOUT_UPDATE) {
|
|
|
|
if (tr->flags & ~(IORING_TIMEOUT_UPDATE|IORING_TIMEOUT_ABS))
|
|
|
|
return -EINVAL;
|
|
|
|
if (get_timespec64(&tr->ts, u64_to_user_ptr(sqe->addr2)))
|
|
|
|
return -EFAULT;
|
|
|
|
} else if (tr->flags) {
|
|
|
|
/* timeout removal doesn't support flags */
|
2019-12-18 01:50:29 +00:00
|
|
|
return -EINVAL;
|
2020-11-30 19:11:16 +00:00
|
|
|
}
|
2019-12-18 01:50:29 +00:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-01-19 13:32:44 +00:00
|
|
|
static inline enum hrtimer_mode io_translate_timeout_mode(unsigned int flags)
|
|
|
|
{
|
|
|
|
return (flags & IORING_TIMEOUT_ABS) ? HRTIMER_MODE_ABS
|
|
|
|
: HRTIMER_MODE_REL;
|
|
|
|
}
|
|
|
|
|
2019-10-16 15:08:32 +00:00
|
|
|
/*
|
|
|
|
* Remove or update an existing timeout command
|
|
|
|
*/
|
2021-02-10 00:03:08 +00:00
|
|
|
static int io_timeout_remove(struct io_kiocb *req, unsigned int issue_flags)
|
2019-10-16 15:08:32 +00:00
|
|
|
{
|
2020-11-30 19:11:16 +00:00
|
|
|
struct io_timeout_rem *tr = &req->timeout_rem;
|
2019-10-16 15:08:32 +00:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2019-11-10 00:43:02 +00:00
|
|
|
int ret;
|
2019-10-16 15:08:32 +00:00
|
|
|
|
2021-08-10 21:11:51 +00:00
|
|
|
spin_lock_irq(&ctx->timeout_lock);
|
2021-01-19 13:32:44 +00:00
|
|
|
if (!(req->timeout_rem.flags & IORING_TIMEOUT_UPDATE))
|
2020-11-30 19:11:16 +00:00
|
|
|
ret = io_timeout_cancel(ctx, tr->addr);
|
2021-01-19 13:32:44 +00:00
|
|
|
else
|
|
|
|
ret = io_timeout_update(ctx, tr->addr, &tr->ts,
|
|
|
|
io_translate_timeout_mode(tr->flags));
|
2021-08-10 21:11:51 +00:00
|
|
|
spin_unlock_irq(&ctx->timeout_lock);
|
2019-10-16 15:08:32 +00:00
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2021-04-25 13:32:17 +00:00
|
|
|
io_cqring_fill_event(ctx, req->user_data, ret, 0);
|
2019-10-16 15:08:32 +00:00
|
|
|
io_commit_cqring(ctx);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2019-09-17 18:26:57 +00:00
|
|
|
io_cqring_ev_posted(ctx);
|
2019-12-08 03:59:47 +00:00
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2019-11-08 15:50:36 +00:00
|
|
|
io_put_req(req);
|
2019-10-16 15:08:32 +00:00
|
|
|
return 0;
|
2019-09-17 18:26:57 +00:00
|
|
|
}
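io_timeout_remove_prep() above also accepts an update: sqe->addr names the armed timeout, IORING_TIMEOUT_UPDATE goes into sqe->timeout_flags, and sqe->addr2 points at the new timespec. A raw-SQE userspace sketch follows; the helper name is invented and ts is filled by the caller with the new relative timeout.

#include <linux/io_uring.h>
#include <linux/time_types.h>
#include <stdint.h>
#include <string.h>

static void prep_timeout_update(struct io_uring_sqe *sqe, __u64 target_user_data,
				const struct __kernel_timespec *ts)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_TIMEOUT_REMOVE;
	sqe->fd = -1;					/* no file needed */
	sqe->addr = target_user_data;			/* which timeout to touch */
	sqe->timeout_flags = IORING_TIMEOUT_UPDATE;	/* update, don't remove */
	sqe->addr2 = (__u64)(uintptr_t)ts;		/* new timeout value */
}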
|
|
|
|
|
2019-12-20 01:24:38 +00:00
|
|
|
static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
|
2019-12-04 18:08:05 +00:00
|
|
|
bool is_timeout_link)
|
2019-09-17 18:26:57 +00:00
|
|
|
{
|
2019-11-15 15:49:11 +00:00
|
|
|
struct io_timeout_data *data;
|
2019-10-15 22:48:15 +00:00
|
|
|
unsigned flags;
|
2020-05-26 17:34:04 +00:00
|
|
|
u32 off = READ_ONCE(sqe->off);
|
2019-09-17 18:26:57 +00:00
|
|
|
|
2019-11-15 15:49:11 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
2019-09-17 18:26:57 +00:00
|
|
|
return -EINVAL;
|
2019-11-15 15:49:11 +00:00
|
|
|
if (sqe->ioprio || sqe->buf_index || sqe->len != 1)
|
2019-10-15 22:48:15 +00:00
|
|
|
return -EINVAL;
|
2020-05-26 17:34:04 +00:00
|
|
|
if (off && is_timeout_link)
|
2019-12-04 18:08:05 +00:00
|
|
|
return -EINVAL;
|
2019-10-15 22:48:15 +00:00
|
|
|
flags = READ_ONCE(sqe->timeout_flags);
|
|
|
|
if (flags & ~IORING_TIMEOUT_ABS)
|
2019-09-17 18:26:57 +00:00
|
|
|
return -EINVAL;
|
2019-10-01 15:53:29 +00:00
|
|
|
|
2020-05-30 11:54:18 +00:00
|
|
|
req->timeout.off = off;
|
2021-06-14 22:37:25 +00:00
|
|
|
if (unlikely(off && !req->ctx->off_timeout_used))
|
|
|
|
req->ctx->off_timeout_used = true;
|
2019-12-20 16:02:01 +00:00
|
|
|
|
2020-08-16 01:44:09 +00:00
|
|
|
if (!req->async_data && io_alloc_async_data(req))
|
2019-12-20 16:02:01 +00:00
|
|
|
return -ENOMEM;
|
|
|
|
|
2020-08-16 01:44:09 +00:00
|
|
|
data = req->async_data;
|
2019-11-15 15:49:11 +00:00
|
|
|
data->req = req;
|
|
|
|
|
|
|
|
if (get_timespec64(&data->ts, u64_to_user_ptr(sqe->addr)))
|
2019-09-17 18:26:57 +00:00
|
|
|
return -EFAULT;
|
|
|
|
|
2021-01-19 13:32:44 +00:00
|
|
|
data->mode = io_translate_timeout_mode(flags);
|
2019-11-15 15:49:11 +00:00
|
|
|
hrtimer_init(&data->timer, CLOCK_MONOTONIC, data->mode);
|
2021-08-15 09:40:23 +00:00
|
|
|
|
|
|
|
if (is_timeout_link) {
|
|
|
|
struct io_submit_link *link = &req->ctx->submit_state.link;
|
|
|
|
|
|
|
|
if (!link->head)
|
|
|
|
return -EINVAL;
|
|
|
|
if (link->last->opcode == IORING_OP_LINK_TIMEOUT)
|
|
|
|
return -EINVAL;
|
2021-08-15 09:40:24 +00:00
|
|
|
req->timeout.head = link->last;
|
|
|
|
link->last->flags |= REQ_F_ARM_LTIMEOUT;
|
2021-08-15 09:40:23 +00:00
|
|
|
}
|
2019-11-15 15:49:11 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:08 +00:00
|
|
|
static int io_timeout(struct io_kiocb *req, unsigned int issue_flags)
|
2019-11-15 15:49:11 +00:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2020-08-16 01:44:09 +00:00
|
|
|
struct io_timeout_data *data = req->async_data;
|
2019-11-15 15:49:11 +00:00
|
|
|
struct list_head *entry;
|
2020-05-30 11:54:18 +00:00
|
|
|
u32 tail, off = req->timeout.off;
|
2019-11-15 15:49:11 +00:00
|
|
|
|
2021-08-10 21:11:51 +00:00
|
|
|
spin_lock_irq(&ctx->timeout_lock);
|
2019-11-12 06:34:31 +00:00
|
|
|
|
2019-09-17 18:26:57 +00:00
|
|
|
/*
|
|
|
|
* sqe->off holds how many events that need to occur for this
|
2019-11-12 06:34:31 +00:00
|
|
|
* timeout event to be satisfied. If it isn't set, then this is
|
|
|
|
* a pure timeout request, sequence isn't used.
|
2019-09-17 18:26:57 +00:00
|
|
|
*/
|
2020-06-29 10:13:02 +00:00
|
|
|
if (io_is_timeout_noseq(req)) {
|
2019-11-12 06:34:31 +00:00
|
|
|
entry = ctx->timeout_list.prev;
|
|
|
|
goto add;
|
|
|
|
}
|
2019-09-17 18:26:57 +00:00
|
|
|
|
2020-05-30 11:54:18 +00:00
|
|
|
tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
|
|
|
|
req->timeout.target_seq = tail + off;
|
2019-09-17 18:26:57 +00:00
|
|
|
|
2021-01-15 16:54:40 +00:00
|
|
|
/* Update the last seq here in case io_flush_timeouts() hasn't.
|
|
|
|
* This is safe because ->completion_lock is held, and submissions
|
|
|
|
* and completions are never mixed in the same ->completion_lock section.
|
|
|
|
*/
|
|
|
|
ctx->cq_last_tm_flush = tail;
|
|
|
|
|
2019-09-17 18:26:57 +00:00
|
|
|
/*
|
|
|
|
* Insertion sort, ensuring the first entry in the list is always
|
|
|
|
* the one we need first.
|
|
|
|
*/
|
|
|
|
list_for_each_prev(entry, &ctx->timeout_list) {
|
2020-07-13 20:37:12 +00:00
|
|
|
struct io_kiocb *nxt = list_entry(entry, struct io_kiocb,
|
|
|
|
timeout.list);
|
2019-09-17 18:26:57 +00:00
|
|
|
|
2020-06-29 10:13:02 +00:00
|
|
|
if (io_is_timeout_noseq(nxt))
|
2019-11-12 06:34:31 +00:00
|
|
|
continue;
|
2020-05-30 11:54:18 +00:00
|
|
|
/* nxt.seq is behind @tail, otherwise would've been completed */
|
|
|
|
if (off >= nxt->timeout.target_seq - tail)
|
2019-09-17 18:26:57 +00:00
|
|
|
break;
|
|
|
|
}
|
2019-11-12 06:34:31 +00:00
|
|
|
add:
|
2020-07-13 20:37:12 +00:00
|
|
|
list_add(&req->timeout.list, entry);
|
2019-11-15 15:49:11 +00:00
|
|
|
data->timer.function = io_timeout_fn;
|
|
|
|
hrtimer_start(&data->timer, timespec64_to_ktime(data->ts), data->mode);
|
2021-08-10 21:11:51 +00:00
|
|
|
spin_unlock_irq(&ctx->timeout_lock);
|
2019-09-17 18:26:57 +00:00
|
|
|
return 0;
|
|
|
|
}
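A userspace sketch of the timeout command prepared by io_timeout_prep() above, assuming liburing: io_uring_prep_timeout() stores the timespec through sqe->addr and the completion count in sqe->off, so the request below completes with -ETIME after one second unless eight other completions arrive first. The count and duration are arbitrary.

#include <liburing.h>

static int wait_with_timeout(struct io_uring *ring)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct io_uring_cqe *cqe;
	struct __kernel_timespec ts = { .tv_sec = 1, .tv_nsec = 0 };

	/* fires after 1s, or is satisfied by 8 other completions first */
	io_uring_prep_timeout(sqe, &ts, 8, 0);
	io_uring_submit(ring);

	return io_uring_wait_cqe(ring, &cqe);
}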
|
|
|
|
|
2021-03-08 12:14:14 +00:00
|
|
|
struct io_cancel_data {
|
|
|
|
struct io_ring_ctx *ctx;
|
|
|
|
u64 user_data;
|
|
|
|
};
|
|
|
|
|
2019-10-29 03:49:21 +00:00
|
|
|
static bool io_cancel_cb(struct io_wq_work *work, void *data)
|
|
|
|
{
|
|
|
|
struct io_kiocb *req = container_of(work, struct io_kiocb, work);
|
2021-03-08 12:14:14 +00:00
|
|
|
struct io_cancel_data *cd = data;
|
2019-10-29 03:49:21 +00:00
|
|
|
|
2021-03-08 12:14:14 +00:00
|
|
|
return req->ctx == cd->ctx && req->user_data == cd->user_data;
|
2019-10-29 03:49:21 +00:00
|
|
|
}
|
|
|
|
|
2021-03-08 12:14:14 +00:00
|
|
|
static int io_async_cancel_one(struct io_uring_task *tctx, u64 user_data,
|
|
|
|
struct io_ring_ctx *ctx)
|
2019-10-29 03:49:21 +00:00
|
|
|
{
|
2021-03-08 12:14:14 +00:00
|
|
|
struct io_cancel_data data = { .ctx = ctx, .user_data = user_data, };
|
2019-10-29 03:49:21 +00:00
|
|
|
enum io_wq_cancel cancel_ret;
|
|
|
|
int ret = 0;
|
|
|
|
|
2021-03-08 12:14:14 +00:00
|
|
|
if (!tctx || !tctx->io_wq)
|
2021-02-16 19:56:50 +00:00
|
|
|
return -ENOENT;
|
|
|
|
|
2021-03-08 12:14:14 +00:00
|
|
|
cancel_ret = io_wq_cancel_cb(tctx->io_wq, io_cancel_cb, &data, false);
|
2019-10-29 03:49:21 +00:00
|
|
|
switch (cancel_ret) {
|
|
|
|
case IO_WQ_CANCEL_OK:
|
|
|
|
ret = 0;
|
|
|
|
break;
|
|
|
|
case IO_WQ_CANCEL_RUNNING:
|
|
|
|
ret = -EALREADY;
|
|
|
|
break;
|
|
|
|
case IO_WQ_CANCEL_NOTFOUND:
|
|
|
|
ret = -ENOENT;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2019-11-05 19:39:45 +00:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2021-08-15 09:40:22 +00:00
|
|
|
static int io_try_cancel_userdata(struct io_kiocb *req, u64 sqe_addr)
|
|
|
|
__acquires(&req->ctx->completion_lock)
|
2019-11-10 00:43:02 +00:00
|
|
|
{
|
2021-08-15 09:40:22 +00:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2019-11-10 00:43:02 +00:00
|
|
|
int ret;
|
|
|
|
|
2021-08-15 09:40:22 +00:00
|
|
|
WARN_ON_ONCE(req->task != current);
|
|
|
|
|
2021-03-08 12:14:14 +00:00
|
|
|
ret = io_async_cancel_one(req->task->io_uring, sqe_addr, ctx);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2021-04-01 14:43:59 +00:00
|
|
|
if (ret != -ENOENT)
|
2021-08-15 09:40:22 +00:00
|
|
|
return ret;
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock_irq(&ctx->timeout_lock);
|
2019-11-10 00:43:02 +00:00
|
|
|
ret = io_timeout_cancel(ctx, sqe_addr);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock_irq(&ctx->timeout_lock);
|
2019-11-10 00:43:02 +00:00
|
|
|
if (ret != -ENOENT)
|
2021-08-15 09:40:22 +00:00
|
|
|
return ret;
|
|
|
|
return io_poll_cancel(ctx, sqe_addr, false);
|
2019-11-10 00:43:02 +00:00
|
|
|
}
|
|
|
|
|
2019-12-20 01:24:38 +00:00
|
|
|
static int io_async_cancel_prep(struct io_kiocb *req,
|
|
|
|
const struct io_uring_sqe *sqe)
|
2019-11-05 19:39:45 +00:00
|
|
|
{
|
2019-12-18 01:45:56 +00:00
|
|
|
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
|
2019-11-05 19:39:45 +00:00
|
|
|
return -EINVAL;
|
2020-07-18 20:15:16 +00:00
|
|
|
if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
|
|
|
|
return -EINVAL;
|
|
|
|
if (sqe->ioprio || sqe->off || sqe->len || sqe->cancel_flags)
|
2019-11-05 19:39:45 +00:00
|
|
|
return -EINVAL;
|
|
|
|
|
2019-12-18 01:45:56 +00:00
|
|
|
req->cancel.addr = READ_ONCE(sqe->addr);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:08 +00:00
|
|
|
static int io_async_cancel(struct io_kiocb *req, unsigned int issue_flags)
|
2019-12-18 01:45:56 +00:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2021-03-12 16:25:55 +00:00
|
|
|
u64 sqe_addr = req->cancel.addr;
|
|
|
|
struct io_tctx_node *node;
|
|
|
|
int ret;
|
|
|
|
|
2021-08-15 09:40:22 +00:00
|
|
|
ret = io_try_cancel_userdata(req, sqe_addr);
|
2021-03-12 16:25:55 +00:00
|
|
|
if (ret != -ENOENT)
|
|
|
|
goto done;
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-03-12 16:25:55 +00:00
|
|
|
|
|
|
|
/* slow path, try all io-wq's */
|
|
|
|
io_ring_submit_lock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
|
|
|
|
ret = -ENOENT;
|
|
|
|
list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
|
|
|
|
struct io_uring_task *tctx = node->task->io_uring;
|
2019-12-18 01:45:56 +00:00
|
|
|
|
2021-03-12 16:25:55 +00:00
|
|
|
ret = io_async_cancel_one(tctx, req->cancel.addr, ctx);
|
|
|
|
if (ret != -ENOENT)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
io_ring_submit_unlock(ctx, !(issue_flags & IO_URING_F_NONBLOCK));
|
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2021-03-12 16:25:55 +00:00
|
|
|
done:
|
2021-04-25 13:32:17 +00:00
|
|
|
io_cqring_fill_event(ctx, req->user_data, ret, 0);
|
2021-03-12 16:25:55 +00:00
|
|
|
io_commit_cqring(ctx);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-03-12 16:25:55 +00:00
|
|
|
io_cqring_ev_posted(ctx);
|
|
|
|
|
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-03-12 16:25:55 +00:00
|
|
|
io_put_req(req);
|
2019-09-17 18:26:57 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-01-15 17:37:44 +00:00
|
|
|
static int io_rsrc_update_prep(struct io_kiocb *req,
|
2019-12-09 18:22:50 +00:00
|
|
|
const struct io_uring_sqe *sqe)
|
|
|
|
{
|
2020-07-18 20:15:16 +00:00
|
|
|
if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
|
|
|
|
return -EINVAL;
|
|
|
|
if (sqe->ioprio || sqe->rw_flags)
|
2019-12-09 18:22:50 +00:00
|
|
|
return -EINVAL;
|
|
|
|
|
2021-01-15 17:37:44 +00:00
|
|
|
req->rsrc_update.offset = READ_ONCE(sqe->off);
|
|
|
|
req->rsrc_update.nr_args = READ_ONCE(sqe->len);
|
|
|
|
if (!req->rsrc_update.nr_args)
|
2019-12-09 18:22:50 +00:00
|
|
|
return -EINVAL;
|
2021-01-15 17:37:44 +00:00
|
|
|
req->rsrc_update.arg = READ_ONCE(sqe->addr);
|
2019-12-09 18:22:50 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:09 +00:00
|
|
|
static int io_files_update(struct io_kiocb *req, unsigned int issue_flags)
|
2019-12-18 01:45:56 +00:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2021-04-25 13:32:22 +00:00
|
|
|
struct io_uring_rsrc_update2 up;
|
2019-12-09 18:22:50 +00:00
|
|
|
int ret;
|
2019-12-18 01:45:56 +00:00
|
|
|
|
2021-02-10 00:03:07 +00:00
|
|
|
if (issue_flags & IO_URING_F_NONBLOCK)
|
2019-12-09 18:22:50 +00:00
|
|
|
return -EAGAIN;
|
|
|
|
|
2021-01-15 17:37:44 +00:00
|
|
|
up.offset = req->rsrc_update.offset;
|
|
|
|
up.data = req->rsrc_update.arg;
|
2021-04-25 13:32:22 +00:00
|
|
|
up.nr = 0;
|
|
|
|
up.tags = 0;
|
2021-04-26 09:47:35 +00:00
|
|
|
up.resv = 0;
|
2019-12-09 18:22:50 +00:00
|
|
|
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
2021-04-25 13:32:20 +00:00
|
|
|
ret = __io_register_rsrc_update(ctx, IORING_RSRC_FILE,
|
2021-04-25 13:32:19 +00:00
|
|
|
&up, req->rsrc_update.nr_args);
|
2019-12-09 18:22:50 +00:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
|
|
|
|
if (ret < 0)
|
2021-05-16 21:58:05 +00:00
|
|
|
req_set_fail(req);
|
2021-02-10 00:03:09 +00:00
|
|
|
__io_req_complete(req, issue_flags, ret, 0);
|
2019-09-17 18:26:57 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2020-09-30 19:57:55 +00:00
|
|
|
static int io_req_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
|
2019-12-02 18:03:47 +00:00
|
|
|
{
|
2019-12-18 02:53:05 +00:00
|
|
|
switch (req->opcode) {
|
2019-12-18 02:45:06 +00:00
|
|
|
case IORING_OP_NOP:
|
2020-09-30 19:57:55 +00:00
|
|
|
return 0;
|
2019-12-02 18:03:47 +00:00
|
|
|
case IORING_OP_READV:
|
|
|
|
case IORING_OP_READ_FIXED:
|
2019-12-22 22:19:35 +00:00
|
|
|
case IORING_OP_READ:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_read_prep(req, sqe);
|
2019-12-02 18:03:47 +00:00
|
|
|
case IORING_OP_WRITEV:
|
|
|
|
case IORING_OP_WRITE_FIXED:
|
2019-12-22 22:19:35 +00:00
|
|
|
case IORING_OP_WRITE:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_write_prep(req, sqe);
|
2019-12-18 01:40:57 +00:00
|
|
|
case IORING_OP_POLL_ADD:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_poll_add_prep(req, sqe);
|
2019-12-18 01:40:57 +00:00
|
|
|
case IORING_OP_POLL_REMOVE:
|
2021-04-14 12:38:37 +00:00
|
|
|
return io_poll_update_prep(req, sqe);
|
2019-12-16 18:55:28 +00:00
|
|
|
case IORING_OP_FSYNC:
|
2021-02-18 18:29:38 +00:00
|
|
|
return io_fsync_prep(req, sqe);
|
2019-12-16 18:55:28 +00:00
|
|
|
case IORING_OP_SYNC_FILE_RANGE:
|
2021-02-18 18:29:38 +00:00
|
|
|
return io_sfr_prep(req, sqe);
|
2019-12-03 01:50:25 +00:00
|
|
|
case IORING_OP_SENDMSG:
|
2020-01-05 03:19:44 +00:00
|
|
|
case IORING_OP_SEND:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_sendmsg_prep(req, sqe);
|
2019-12-03 01:50:25 +00:00
|
|
|
case IORING_OP_RECVMSG:
|
2020-01-05 03:19:44 +00:00
|
|
|
case IORING_OP_RECV:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_recvmsg_prep(req, sqe);
|
2019-12-02 23:28:46 +00:00
|
|
|
case IORING_OP_CONNECT:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_connect_prep(req, sqe);
|
2019-12-04 18:08:05 +00:00
|
|
|
case IORING_OP_TIMEOUT:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_timeout_prep(req, sqe, false);
|
2019-12-18 01:50:29 +00:00
|
|
|
case IORING_OP_TIMEOUT_REMOVE:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_timeout_remove_prep(req, sqe);
|
2019-12-18 01:45:56 +00:00
|
|
|
case IORING_OP_ASYNC_CANCEL:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_async_cancel_prep(req, sqe);
|
2019-12-04 18:08:05 +00:00
|
|
|
case IORING_OP_LINK_TIMEOUT:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_timeout_prep(req, sqe, true);
|
2019-12-16 18:55:28 +00:00
|
|
|
case IORING_OP_ACCEPT:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_accept_prep(req, sqe);
|
2019-12-10 17:38:56 +00:00
|
|
|
case IORING_OP_FALLOCATE:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_fallocate_prep(req, sqe);
|
2019-12-11 18:20:36 +00:00
|
|
|
case IORING_OP_OPENAT:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_openat_prep(req, sqe);
|
2019-12-11 21:02:38 +00:00
|
|
|
case IORING_OP_CLOSE:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_close_prep(req, sqe);
|
2019-12-09 18:22:50 +00:00
|
|
|
case IORING_OP_FILES_UPDATE:
|
2021-01-15 17:37:44 +00:00
|
|
|
return io_rsrc_update_prep(req, sqe);
|
2019-12-14 04:18:10 +00:00
|
|
|
case IORING_OP_STATX:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_statx_prep(req, sqe);
|
2019-12-26 05:03:45 +00:00
|
|
|
case IORING_OP_FADVISE:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_fadvise_prep(req, sqe);
|
2019-12-26 05:18:28 +00:00
|
|
|
case IORING_OP_MADVISE:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_madvise_prep(req, sqe);
|
2020-01-09 00:59:24 +00:00
|
|
|
case IORING_OP_OPENAT2:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_openat2_prep(req, sqe);
|
2020-01-08 22:18:09 +00:00
|
|
|
case IORING_OP_EPOLL_CTL:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_epoll_ctl_prep(req, sqe);
|
2020-02-24 08:32:45 +00:00
|
|
|
case IORING_OP_SPLICE:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_splice_prep(req, sqe);
|
2020-02-23 23:41:33 +00:00
|
|
|
case IORING_OP_PROVIDE_BUFFERS:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_provide_buffers_prep(req, sqe);
|
2020-03-02 23:32:28 +00:00
|
|
|
case IORING_OP_REMOVE_BUFFERS:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_remove_buffers_prep(req, sqe);
|
2020-05-17 11:18:06 +00:00
|
|
|
case IORING_OP_TEE:
|
2020-09-30 19:57:55 +00:00
|
|
|
return io_tee_prep(req, sqe);
|
2020-09-05 17:14:22 +00:00
|
|
|
case IORING_OP_SHUTDOWN:
|
|
|
|
return io_shutdown_prep(req, sqe);
|
2020-09-28 20:23:58 +00:00
|
|
|
case IORING_OP_RENAMEAT:
|
|
|
|
return io_renameat_prep(req, sqe);
|
2020-09-28 20:27:37 +00:00
|
|
|
case IORING_OP_UNLINKAT:
|
|
|
|
return io_unlinkat_prep(req, sqe);
|
2019-12-02 18:03:47 +00:00
|
|
|
}
|
|
|
|
|
2020-09-30 19:57:55 +00:00
|
|
|
printk_once(KERN_WARNING "io_uring: unhandled opcode %d\n",
|
|
|
|
req->opcode);
|
2021-04-25 13:32:25 +00:00
|
|
|
return -EINVAL;
|
2020-09-30 19:57:55 +00:00
|
|
|
}
|
|
|
|
|
2021-02-18 18:29:44 +00:00
|
|
|
static int io_req_prep_async(struct io_kiocb *req)
|
2020-09-30 19:57:55 +00:00
|
|
|
{
|
2021-02-28 22:35:19 +00:00
|
|
|
if (!io_op_defs[req->opcode].needs_async_setup)
|
|
|
|
return 0;
|
|
|
|
if (WARN_ON_ONCE(req->async_data))
|
|
|
|
return -EFAULT;
|
|
|
|
if (io_alloc_async_data(req))
|
|
|
|
return -EAGAIN;
|
|
|
|
|
2021-02-18 18:29:44 +00:00
|
|
|
switch (req->opcode) {
|
|
|
|
case IORING_OP_READV:
|
|
|
|
return io_rw_prep_async(req, READ);
|
|
|
|
case IORING_OP_WRITEV:
|
|
|
|
return io_rw_prep_async(req, WRITE);
|
|
|
|
case IORING_OP_SENDMSG:
|
|
|
|
return io_sendmsg_prep_async(req);
|
|
|
|
case IORING_OP_RECVMSG:
|
|
|
|
return io_recvmsg_prep_async(req);
|
|
|
|
case IORING_OP_CONNECT:
|
|
|
|
return io_connect_prep_async(req);
|
|
|
|
}
|
2021-02-28 22:35:19 +00:00
|
|
|
printk_once(KERN_WARNING "io_uring: prep_async() bad opcode %d\n",
|
|
|
|
req->opcode);
|
|
|
|
return -EFAULT;
|
2019-12-02 18:03:47 +00:00
|
|
|
}
|
|
|
|
|
2020-07-13 20:37:15 +00:00
|
|
|
static u32 io_get_sequence(struct io_kiocb *req)
|
|
|
|
{
|
2021-06-17 17:14:05 +00:00
|
|
|
u32 seq = req->ctx->cached_sq_head;
|
2020-07-13 20:37:15 +00:00
|
|
|
|
2021-06-17 17:14:05 +00:00
|
|
|
/* need original cached_sq_head, but it was increased for each req */
|
|
|
|
io_for_each_link(req, req)
|
|
|
|
seq--;
|
|
|
|
return seq;
|
2020-07-13 20:37:15 +00:00
|
|
|
}
|
|
|
|
|
2021-06-14 22:37:30 +00:00
|
|
|
static bool io_drain_req(struct io_kiocb *req)
|
2019-04-07 03:51:27 +00:00
|
|
|
{
|
2021-06-15 15:47:57 +00:00
|
|
|
struct io_kiocb *pos;
|
2019-11-08 15:09:12 +00:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2020-07-13 20:37:14 +00:00
|
|
|
struct io_defer_entry *de;
|
2019-12-02 18:03:47 +00:00
|
|
|
int ret;
|
2020-07-13 20:37:15 +00:00
|
|
|
u32 seq;
|
2019-04-07 03:51:27 +00:00
|
|
|
|
2021-06-15 15:47:57 +00:00
|
|
|
/*
|
|
|
|
* If we need to drain a request in the middle of a link, drain the
|
|
|
|
* head request and the next request/link after the current link.
|
|
|
|
* Considering sequential execution of links, IOSQE_IO_DRAIN will be
|
|
|
|
* maintained for every request of our link.
|
|
|
|
*/
|
|
|
|
if (ctx->drain_next) {
|
|
|
|
req->flags |= REQ_F_IO_DRAIN;
|
|
|
|
ctx->drain_next = false;
|
|
|
|
}
|
|
|
|
/* not interested in head, start from the first linked */
|
|
|
|
io_for_each_link(pos, req->link) {
|
|
|
|
if (pos->flags & REQ_F_IO_DRAIN) {
|
|
|
|
ctx->drain_next = true;
|
|
|
|
req->flags |= REQ_F_IO_DRAIN;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-11-13 10:06:25 +00:00
|
|
|
/* Still need defer if there is pending req in defer list. */
|
2020-07-13 20:37:15 +00:00
|
|
|
if (likely(list_empty_careful(&ctx->defer_list) &&
|
2021-06-15 15:47:56 +00:00
|
|
|
!(req->flags & REQ_F_IO_DRAIN))) {
|
|
|
|
ctx->drain_active = false;
|
2021-06-14 22:37:30 +00:00
|
|
|
return false;
|
2021-06-15 15:47:56 +00:00
|
|
|
}
|
2020-07-13 20:37:15 +00:00
|
|
|
|
|
|
|
seq = io_get_sequence(req);
|
|
|
|
/* Still a chance to pass the sequence check */
|
|
|
|
if (!req_need_defer(req, seq) && list_empty_careful(&ctx->defer_list))
|
2021-06-14 22:37:30 +00:00
|
|
|
return false;
|
2019-04-07 03:51:27 +00:00
|
|
|
|
2021-02-28 22:35:19 +00:00
|
|
|
ret = io_req_prep_async(req);
|
2021-02-18 18:29:45 +00:00
|
|
|
if (ret)
|
2021-07-11 21:41:13 +00:00
|
|
|
goto fail;
|
2020-06-29 16:18:43 +00:00
|
|
|
io_prep_async_link(req);
|
2020-07-13 20:37:14 +00:00
|
|
|
de = kmalloc(sizeof(*de), GFP_KERNEL);
|
2021-06-14 22:37:30 +00:00
|
|
|
if (!de) {
|
2021-07-11 21:41:13 +00:00
|
|
|
ret = -ENOMEM;
|
|
|
|
fail:
|
|
|
|
io_req_complete_failed(req, ret);
|
2021-06-14 22:37:30 +00:00
|
|
|
return true;
|
|
|
|
}
|
2019-12-04 18:08:05 +00:00
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2020-07-13 20:37:15 +00:00
|
|
|
if (!req_need_defer(req, seq) && list_empty(&ctx->defer_list)) {
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2020-07-13 20:37:14 +00:00
|
|
|
kfree(de);
|
2020-07-23 17:25:20 +00:00
|
|
|
io_queue_async_work(req);
|
2021-06-14 22:37:30 +00:00
|
|
|
return true;
|
2019-04-07 03:51:27 +00:00
|
|
|
}
|
|
|
|
|
2019-11-21 16:01:20 +00:00
|
|
|
trace_io_uring_defer(ctx, req, req->user_data);
|
2020-07-13 20:37:14 +00:00
|
|
|
de->req = req;
|
2020-07-13 20:37:15 +00:00
|
|
|
de->seq = seq;
|
2020-07-13 20:37:14 +00:00
|
|
|
list_add_tail(&de->list, &ctx->defer_list);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-06-14 22:37:30 +00:00
|
|
|
return true;
|
2019-04-07 03:51:27 +00:00
|
|
|
}
|
|
|
|
|
2021-03-19 17:22:41 +00:00
|
|
|
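/*
 * Drop per-opcode resources a request may still hold: selected buffers,
 * async iovecs/msghdrs, splice input files, open/rename/unlink filenames,
 * poll data, inflight tracking and personality creds.
 */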
static void io_clean_op(struct io_kiocb *req)
|
2020-02-07 19:04:45 +00:00
|
|
|
{
|
2020-07-16 20:28:02 +00:00
|
|
|
if (req->flags & REQ_F_BUFFER_SELECTED) {
|
|
|
|
switch (req->opcode) {
|
|
|
|
case IORING_OP_READV:
|
|
|
|
case IORING_OP_READ_FIXED:
|
|
|
|
case IORING_OP_READ:
|
io_uring: support buffer selection for OP_READ and OP_RECV
If a server process has tons of pending socket connections, generally
it uses epoll to wait for activity. When the socket is ready for reading
(or writing), the task can select a buffer and issue a recv/send on the
given fd.
Now that we have fast (non-async thread) support, a task can have tons
of reads or writes pending. But that means they need buffers to
back that data, and if the number of connections is high enough, having
them preallocated for all possible connections is unfeasible.
With IORING_OP_PROVIDE_BUFFERS, an application can register buffers to
use for any request. The request then sets IOSQE_BUFFER_SELECT in the
sqe, and a given group ID in sqe->buf_group. When the fd becomes ready,
a free buffer from the specified group is selected. If none are
available, the request is terminated with -ENOBUFS. If successful, the
CQE on completion will contain the buffer ID chosen in the cqe->flags
member, encoded as:
(buffer_id << IORING_CQE_BUFFER_SHIFT) | IORING_CQE_F_BUFFER;
Once a buffer has been consumed by a request, it is no longer available
and must be registered again with IORING_OP_PROVIDE_BUFFERS.
Requests need to support this feature. For now, IORING_OP_READ and
IORING_OP_RECV support it. This is checked on SQE submission; a CQE with
res == -EOPNOTSUPP will be posted if attempted on unsupported requests.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-23 23:42:51 +00:00
|
|
|
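/*
 * For buffer-selected reads the chosen kernel buffer descriptor is
 * stashed in rw.addr; free it here. (Illustrative note, assuming the
 * uapi described in the commit message above: userspace recovers the
 * buffer with buf_id = cqe->flags >> IORING_CQE_BUFFER_SHIFT when
 * IORING_CQE_F_BUFFER is set in cqe->flags.)
 */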
kfree((void *)(unsigned long)req->rw.addr);
|
2020-07-16 20:28:02 +00:00
|
|
|
break;
|
|
|
|
case IORING_OP_RECVMSG:
|
|
|
|
case IORING_OP_RECV:
|
2020-02-23 23:42:51 +00:00
|
|
|
kfree(req->sr_msg.kbuf);
|
2020-07-16 20:28:02 +00:00
|
|
|
break;
|
|
|
|
}
|
2020-02-07 19:04:45 +00:00
|
|
|
}
|
|
|
|
|
2020-07-16 20:28:02 +00:00
|
|
|
if (req->flags & REQ_F_NEED_CLEANUP) {
|
|
|
|
switch (req->opcode) {
|
|
|
|
case IORING_OP_READV:
|
|
|
|
case IORING_OP_READ_FIXED:
|
|
|
|
case IORING_OP_READ:
|
|
|
|
case IORING_OP_WRITEV:
|
|
|
|
case IORING_OP_WRITE_FIXED:
|
2020-08-16 01:44:09 +00:00
|
|
|
case IORING_OP_WRITE: {
|
|
|
|
struct io_async_rw *io = req->async_data;
|
2021-06-17 17:14:03 +00:00
|
|
|
|
|
|
|
kfree(io->free_iovec);
|
2020-07-16 20:28:02 +00:00
|
|
|
break;
|
2020-08-16 01:44:09 +00:00
|
|
|
}
|
2020-07-16 20:28:02 +00:00
|
|
|
case IORING_OP_RECVMSG:
|
2020-08-16 01:44:09 +00:00
|
|
|
case IORING_OP_SENDMSG: {
|
|
|
|
struct io_async_msghdr *io = req->async_data;
|
2021-02-05 00:58:00 +00:00
|
|
|
|
|
|
|
kfree(io->free_iov);
|
2020-07-16 20:28:02 +00:00
|
|
|
break;
|
2020-08-16 01:44:09 +00:00
|
|
|
}
|
2020-07-16 20:28:02 +00:00
|
|
|
case IORING_OP_SPLICE:
|
|
|
|
case IORING_OP_TEE:
|
2021-03-19 17:22:43 +00:00
|
|
|
if (!(req->splice.flags & SPLICE_F_FD_IN_FIXED))
|
|
|
|
io_put_file(req->splice.file_in);
|
2020-07-16 20:28:02 +00:00
|
|
|
break;
|
2020-09-24 20:55:54 +00:00
|
|
|
case IORING_OP_OPENAT:
|
|
|
|
case IORING_OP_OPENAT2:
|
|
|
|
if (req->open.filename)
|
|
|
|
putname(req->open.filename);
|
|
|
|
break;
|
2020-09-28 20:23:58 +00:00
|
|
|
case IORING_OP_RENAMEAT:
|
|
|
|
putname(req->rename.oldpath);
|
|
|
|
putname(req->rename.newpath);
|
|
|
|
break;
|
2020-09-28 20:27:37 +00:00
|
|
|
case IORING_OP_UNLINKAT:
|
|
|
|
putname(req->unlink.filename);
|
|
|
|
break;
|
2020-07-16 20:28:02 +00:00
|
|
|
}
|
2020-02-07 19:04:45 +00:00
|
|
|
}
|
2021-04-15 15:52:40 +00:00
|
|
|
if ((req->flags & REQ_F_POLLED) && req->apoll) {
|
|
|
|
kfree(req->apoll->double_poll);
|
|
|
|
kfree(req->apoll);
|
|
|
|
req->apoll = NULL;
|
|
|
|
}
|
2021-04-20 11:03:31 +00:00
|
|
|
if (req->flags & REQ_F_INFLIGHT) {
|
|
|
|
struct io_uring_task *tctx = req->task->io_uring;
|
|
|
|
|
|
|
|
atomic_dec(&tctx->inflight_tracked);
|
|
|
|
}
|
2021-06-17 17:14:04 +00:00
|
|
|
if (req->flags & REQ_F_CREDS)
|
2021-06-17 17:14:02 +00:00
|
|
|
put_cred(req->creds);
|
2021-06-17 17:14:04 +00:00
|
|
|
|
|
|
|
req->flags &= ~IO_REQ_CLEAN_FLAGS;
|
2020-02-07 19:04:45 +00:00
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:09 +00:00
|
|
|
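/*
 * Central dispatch: hand a fully prepared request to its opcode handler.
 * Temporarily assumes the request's personality creds when they differ
 * from current, and for IORING_SETUP_IOPOLL rings adds file-backed
 * requests to the iopoll list after a successful issue.
 */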
static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
2019-11-08 15:09:12 +00:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2021-02-27 22:57:30 +00:00
|
|
|
const struct cred *creds = NULL;
|
2019-12-18 02:53:05 +00:00
|
|
|
int ret;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2021-06-17 17:14:02 +00:00
|
|
|
if ((req->flags & REQ_F_CREDS) && req->creds != current_cred())
|
2021-06-17 17:14:01 +00:00
|
|
|
creds = override_creds(req->creds);
|
2021-02-27 22:57:30 +00:00
|
|
|
|
2019-12-18 02:53:05 +00:00
|
|
|
switch (req->opcode) {
|
2019-01-07 17:46:33 +00:00
|
|
|
case IORING_OP_NOP:
|
2021-02-10 00:03:09 +00:00
|
|
|
ret = io_nop(req, issue_flags);
|
2019-01-07 17:46:33 +00:00
|
|
|
break;
|
|
|
|
case IORING_OP_READV:
|
io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
setup the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having setup an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to set up a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per buffer size is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-09 16:16:05 +00:00
|
|
|
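/*
 * READ_FIXED/WRITE_FIXED operate on buffers that userspace registered up
 * front. Illustrative sketch, assuming the uapi described in the commit
 * message above: register an iovec array via io_uring_register() with
 * IORING_REGISTER_BUFFERS, then submit IORING_OP_READ_FIXED SQEs whose
 * buffer index selects one of the registered iovecs and whose addr/len
 * stay within that registered range.
 */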
case IORING_OP_READ_FIXED:
|
2019-12-22 22:19:35 +00:00
|
|
|
case IORING_OP_READ:
|
2021-02-10 00:03:09 +00:00
|
|
|
ret = io_read(req, issue_flags);
|
2019-01-09 16:16:05 +00:00
|
|
|
break;
|
2019-12-20 01:24:38 +00:00
|
|
|
case IORING_OP_WRITEV:
|
2019-01-09 16:16:05 +00:00
|
|
|
case IORING_OP_WRITE_FIXED:
|
2019-12-22 22:19:35 +00:00
|
|
|
case IORING_OP_WRITE:
|
2021-02-10 00:03:09 +00:00
|
|
|
ret = io_write(req, issue_flags);
|
2019-01-07 17:46:33 +00:00
|
|
|
break;
|
2019-01-11 16:43:02 +00:00
|
|
|
case IORING_OP_FSYNC:
|
2021-02-10 00:03:07 +00:00
|
|
|
ret = io_fsync(req, issue_flags);
|
2019-01-11 16:43:02 +00:00
|
|
|
break;
|
2019-01-17 16:41:58 +00:00
|
|
|
case IORING_OP_POLL_ADD:
|
2021-02-10 00:03:08 +00:00
|
|
|
ret = io_poll_add(req, issue_flags);
|
2019-01-17 16:41:58 +00:00
|
|
|
break;
|
|
|
|
case IORING_OP_POLL_REMOVE:
|
2021-04-14 12:38:37 +00:00
|
|
|
ret = io_poll_update(req, issue_flags);
|
2019-01-17 16:41:58 +00:00
|
|
|
break;
|
2019-04-09 20:56:44 +00:00
|
|
|
case IORING_OP_SYNC_FILE_RANGE:
|
2021-02-10 00:03:07 +00:00
|
|
|
ret = io_sync_file_range(req, issue_flags);
|
2019-04-09 20:56:44 +00:00
|
|
|
break;
|
2019-04-19 19:34:07 +00:00
|
|
|
case IORING_OP_SENDMSG:
|
2021-02-10 00:03:09 +00:00
|
|
|
ret = io_sendmsg(req, issue_flags);
|
2020-10-10 17:34:12 +00:00
|
|
|
break;
|
2020-01-05 03:19:44 +00:00
|
|
|
case IORING_OP_SEND:
|
2021-02-10 00:03:09 +00:00
|
|
|
ret = io_send(req, issue_flags);
|
2019-04-19 19:34:07 +00:00
|
|
|
break;
|
2019-04-19 19:38:09 +00:00
|
|
|
case IORING_OP_RECVMSG:
|
2021-02-10 00:03:09 +00:00
|
|
|
ret = io_recvmsg(req, issue_flags);
|
2020-10-10 17:34:12 +00:00
|
|
|
break;
|
2020-01-05 03:19:44 +00:00
|
|
|
case IORING_OP_RECV:
|
2021-02-10 00:03:09 +00:00
|
|
|
ret = io_recv(req, issue_flags);
|
2019-04-19 19:38:09 +00:00
|
|
|
break;
|
2019-09-17 18:26:57 +00:00
|
|
|
case IORING_OP_TIMEOUT:
|
2021-02-10 00:03:08 +00:00
|
|
|
ret = io_timeout(req, issue_flags);
|
2019-09-17 18:26:57 +00:00
|
|
|
break;
|
2019-10-16 15:08:32 +00:00
|
|
|
case IORING_OP_TIMEOUT_REMOVE:
|
2021-02-10 00:03:08 +00:00
|
|
|
ret = io_timeout_remove(req, issue_flags);
|
2019-10-16 15:08:32 +00:00
|
|
|
break;
|
2019-10-17 20:42:58 +00:00
|
|
|
case IORING_OP_ACCEPT:
|
2021-02-10 00:03:09 +00:00
|
|
|
ret = io_accept(req, issue_flags);
|
2019-10-17 20:42:58 +00:00
|
|
|
break;
|
2019-11-23 21:24:24 +00:00
|
|
|
case IORING_OP_CONNECT:
|
2021-02-10 00:03:09 +00:00
|
|
|
ret = io_connect(req, issue_flags);
|
2019-11-23 21:24:24 +00:00
|
|
|
break;
|
2019-10-29 03:49:21 +00:00
|
|
|
case IORING_OP_ASYNC_CANCEL:
|
2021-02-10 00:03:08 +00:00
|
|
|
ret = io_async_cancel(req, issue_flags);
|
2019-10-29 03:49:21 +00:00
|
|
|
break;
|
2019-12-10 17:38:56 +00:00
|
|
|
case IORING_OP_FALLOCATE:
|
2021-02-10 00:03:07 +00:00
|
|
|
ret = io_fallocate(req, issue_flags);
|
2019-12-10 17:38:56 +00:00
|
|
|
break;
|
2019-12-11 18:20:36 +00:00
|
|
|
case IORING_OP_OPENAT:
|
2021-02-10 00:03:07 +00:00
|
|
|
ret = io_openat(req, issue_flags);
|
2019-12-11 18:20:36 +00:00
|
|
|
break;
|
2019-12-11 21:02:38 +00:00
|
|
|
case IORING_OP_CLOSE:
|
2021-02-10 00:03:09 +00:00
|
|
|
ret = io_close(req, issue_flags);
|
2019-12-11 21:02:38 +00:00
|
|
|
break;
|
2019-12-09 18:22:50 +00:00
|
|
|
case IORING_OP_FILES_UPDATE:
|
2021-02-10 00:03:09 +00:00
|
|
|
ret = io_files_update(req, issue_flags);
|
2019-12-09 18:22:50 +00:00
|
|
|
break;
|
2019-12-14 04:18:10 +00:00
|
|
|
case IORING_OP_STATX:
|
2021-02-10 00:03:07 +00:00
|
|
|
ret = io_statx(req, issue_flags);
|
2019-12-14 04:18:10 +00:00
|
|
|
break;
|
2019-12-26 05:03:45 +00:00
|
|
|
case IORING_OP_FADVISE:
|
2021-02-10 00:03:07 +00:00
|
|
|
ret = io_fadvise(req, issue_flags);
|
2019-12-26 05:03:45 +00:00
|
|
|
break;
|
2019-12-26 05:18:28 +00:00
|
|
|
case IORING_OP_MADVISE:
|
2021-02-10 00:03:07 +00:00
|
|
|
ret = io_madvise(req, issue_flags);
|
2019-12-26 05:18:28 +00:00
|
|
|
break;
|
2020-01-09 00:59:24 +00:00
|
|
|
case IORING_OP_OPENAT2:
|
2021-02-10 00:03:07 +00:00
|
|
|
ret = io_openat2(req, issue_flags);
|
2020-01-09 00:59:24 +00:00
|
|
|
break;
|
2020-01-08 22:18:09 +00:00
|
|
|
case IORING_OP_EPOLL_CTL:
|
2021-02-10 00:03:09 +00:00
|
|
|
ret = io_epoll_ctl(req, issue_flags);
|
2020-01-08 22:18:09 +00:00
|
|
|
break;
|
2020-02-24 08:32:45 +00:00
|
|
|
case IORING_OP_SPLICE:
|
2021-02-10 00:03:07 +00:00
|
|
|
ret = io_splice(req, issue_flags);
|
2020-02-24 08:32:45 +00:00
|
|
|
break;
|
2020-02-23 23:41:33 +00:00
|
|
|
case IORING_OP_PROVIDE_BUFFERS:
|
2021-02-10 00:03:09 +00:00
|
|
|
ret = io_provide_buffers(req, issue_flags);
|
2020-02-23 23:41:33 +00:00
|
|
|
break;
|
2020-03-02 23:32:28 +00:00
|
|
|
case IORING_OP_REMOVE_BUFFERS:
|
2021-02-10 00:03:09 +00:00
|
|
|
ret = io_remove_buffers(req, issue_flags);
|
2020-01-08 22:18:09 +00:00
|
|
|
break;
|
2020-05-17 11:18:06 +00:00
|
|
|
case IORING_OP_TEE:
|
2021-02-10 00:03:07 +00:00
|
|
|
ret = io_tee(req, issue_flags);
|
2020-05-17 11:18:06 +00:00
|
|
|
break;
|
2020-09-05 17:14:22 +00:00
|
|
|
case IORING_OP_SHUTDOWN:
|
2021-02-10 00:03:07 +00:00
|
|
|
ret = io_shutdown(req, issue_flags);
|
2020-09-05 17:14:22 +00:00
|
|
|
break;
|
2020-09-28 20:23:58 +00:00
|
|
|
case IORING_OP_RENAMEAT:
|
2021-02-10 00:03:07 +00:00
|
|
|
ret = io_renameat(req, issue_flags);
|
2020-09-28 20:23:58 +00:00
|
|
|
break;
|
2020-09-28 20:27:37 +00:00
|
|
|
case IORING_OP_UNLINKAT:
|
2021-02-10 00:03:07 +00:00
|
|
|
ret = io_unlinkat(req, issue_flags);
|
2020-09-28 20:27:37 +00:00
|
|
|
break;
|
2019-01-07 17:46:33 +00:00
|
|
|
default:
|
|
|
|
ret = -EINVAL;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2021-02-27 22:57:30 +00:00
|
|
|
if (creds)
|
|
|
|
revert_creds(creds);
|
2019-01-09 15:59:42 +00:00
|
|
|
if (ret)
|
|
|
|
return ret;
|
2020-05-20 03:20:27 +00:00
|
|
|
/* If the op doesn't have a file, we're not polling for it */
|
2021-06-14 01:36:14 +00:00
|
|
|
if ((ctx->flags & IORING_SETUP_IOPOLL) && req->file)
|
|
|
|
io_iopoll_req_issued(req);
|
2019-01-09 15:59:42 +00:00
|
|
|
|
|
|
|
return 0;
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
|
|
|
|
2021-08-09 12:04:05 +00:00
|
|
|
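/*
 * Called by io-wq when a work item is done: drop the io-wq reference and,
 * if that was the last one and a linked request is ready, return its work
 * so the worker can run it next.
 */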
static struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
|
|
|
|
{
|
|
|
|
struct io_kiocb *req = container_of(work, struct io_kiocb, work);
|
|
|
|
|
|
|
|
req = io_put_req_find_next(req);
|
|
|
|
return req ? &req->work : NULL;
|
|
|
|
}
|
|
|
|
|
2021-02-04 13:52:08 +00:00
|
|
|
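/*
 * io-wq worker entry: make sure the request holds a reference for the
 * worker, arm any linked timeout, then issue the request synchronously,
 * retrying on -EAGAIN for polled IO since we can't wait for block-side
 * request slots here. Errors are failed via task_work to avoid locking
 * problems in this context.
 */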
static void io_wq_submit_work(struct io_wq_work *work)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
|
|
|
struct io_kiocb *req = container_of(work, struct io_kiocb, work);
|
2020-07-03 19:15:06 +00:00
|
|
|
struct io_kiocb *timeout;
|
2019-10-24 13:25:42 +00:00
|
|
|
int ret = 0;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2021-08-15 09:40:18 +00:00
|
|
|
/* one will be dropped by ->io_free_work() after returning to io-wq */
|
|
|
|
if (!(req->flags & REQ_F_REFCOUNT))
|
|
|
|
__io_req_set_refcount(req, 2);
|
|
|
|
else
|
|
|
|
req_ref_get(req);
|
io_uring: remove submission references
Requests are by default given with two references, submission and
completion. Completion references are straightforward, they represent
request ownership and are put when a request is completed or so.
Submission references are a bit trickier. They're needed because when
io_issue_sqe() goes deep into the submission stack (e.g. in fs,
block, drivers, etc.), the request may have been given away for concurrent
execution or already completed, and the code unwinding back to
io_issue_sqe() may be accessing some pieces of our request, e.g.
file or iov.
Now, we prevent such async/in-depth completions by pushing requests
through task_work. Punting to io-wq is also done through task_work,
apart from a couple of cases with a pretty well known context. So
there are two cases:
1) io_issue_sqe() from the task context, protected by ->uring_lock.
Requests either return back to io_uring or are handed to task_work, which
won't be executed because we're currently controlling that task. So
we can be sure that requests stay alive all the time and we don't
need submission references to pin them.
2) io_issue_sqe() from io-wq, which doesn't hold the mutex. The role of
the submission reference is played by the io-wq reference, which is put
by io_wq_submit_work(). Hence, it should be fine.
Considering that, we can carefully kill the submission reference.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/6b68f1c763229a590f2a27148aee77767a8d7750.1628705069.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-08-11 18:28:29 +00:00
|
|
|
|
2020-07-03 19:15:06 +00:00
|
|
|
timeout = io_prep_linked_timeout(req);
|
|
|
|
if (timeout)
|
|
|
|
io_queue_linked_timeout(timeout);
|
2020-06-08 18:08:19 +00:00
|
|
|
|
2021-01-19 22:53:54 +00:00
|
|
|
if (work->flags & IO_WQ_WORK_CANCEL)
|
2019-10-24 13:25:42 +00:00
|
|
|
ret = -ECANCELED;
|
2019-01-19 05:56:34 +00:00
|
|
|
|
2019-10-24 13:25:42 +00:00
|
|
|
if (!ret) {
|
|
|
|
do {
|
2021-02-10 00:03:09 +00:00
|
|
|
ret = io_issue_sqe(req, 0);
|
2019-10-24 13:25:42 +00:00
|
|
|
/*
|
|
|
|
* We can get EAGAIN for polled IO even though we're
|
|
|
|
* forcing a sync submission from here, since we can't
|
|
|
|
* wait for request slots on the block side.
|
|
|
|
*/
|
|
|
|
if (ret != -EAGAIN)
|
|
|
|
break;
|
|
|
|
cond_resched();
|
|
|
|
} while (1);
|
|
|
|
}
|
2019-01-19 05:56:34 +00:00
|
|
|
|
2021-02-18 22:32:52 +00:00
|
|
|
/* avoid locking problems by failing it from a clean context */
|
2021-08-11 18:28:29 +00:00
|
|
|
if (ret)
|
2021-02-18 22:32:52 +00:00
|
|
|
io_req_task_queue_fail(req, ret);
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
|
|
|
|
2021-04-11 00:46:37 +00:00
|
|
|
static inline struct io_fixed_file *io_fixed_file_slot(struct io_file_table *table,
|
2021-08-09 12:04:01 +00:00
|
|
|
unsigned i)
|
2019-10-26 13:20:21 +00:00
|
|
|
{
|
2021-08-09 12:04:01 +00:00
|
|
|
return &table->files[i];
|
2021-02-28 22:35:11 +00:00
|
|
|
}
|
|
|
|
|
2019-10-26 13:20:21 +00:00
|
|
|
static inline struct file *io_file_from_index(struct io_ring_ctx *ctx,
|
|
|
|
int index)
|
|
|
|
{
|
2021-04-11 00:46:37 +00:00
|
|
|
struct io_fixed_file *slot = io_fixed_file_slot(&ctx->file_table, index);
|
2019-10-26 13:20:21 +00:00
|
|
|
|
2021-04-01 14:44:04 +00:00
|
|
|
return (struct file *) (slot->file_ptr & FFS_MASK);
|
2019-10-26 13:20:21 +00:00
|
|
|
}
|
|
|
|
|
2021-04-01 14:44:04 +00:00
|
|
|
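/*
 * Store a registered file in its table slot, tagging the low pointer bits
 * (FFS_*) with whether it supports non-blocking reads/writes and whether
 * it is a regular file, so the submission fast path can test these
 * without touching the struct file.
 */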
static void io_fixed_file_set(struct io_fixed_file *file_slot, struct file *file)
|
2021-04-01 14:44:01 +00:00
|
|
|
{
|
|
|
|
unsigned long file_ptr = (unsigned long) file;
|
|
|
|
|
2021-08-09 12:04:03 +00:00
|
|
|
if (__io_file_supports_nowait(file, READ))
|
2021-04-01 14:44:01 +00:00
|
|
|
file_ptr |= FFS_ASYNC_READ;
|
2021-08-09 12:04:03 +00:00
|
|
|
if (__io_file_supports_nowait(file, WRITE))
|
2021-04-01 14:44:01 +00:00
|
|
|
file_ptr |= FFS_ASYNC_WRITE;
|
|
|
|
if (S_ISREG(file_inode(file)->i_mode))
|
|
|
|
file_ptr |= FFS_ISREG;
|
2021-04-01 14:44:04 +00:00
|
|
|
file_slot->file_ptr = file_ptr;
|
2019-10-26 13:20:21 +00:00
|
|
|
}
|
|
|
|
|
2021-08-09 12:04:02 +00:00
|
|
|
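/*
 * Resolve a fixed (registered) file by index, using array_index_nospec()
 * to keep the lookup speculation-safe; the FFS_* bits stored in the slot
 * are folded straight into req->flags (they overlap REQ_F_NOWAIT_READ and
 * friends), and the request is tied to the current rsrc node.
 */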
static inline struct file *io_file_get_fixed(struct io_ring_ctx *ctx,
|
|
|
|
struct io_kiocb *req, int fd)
|
2019-03-13 18:39:28 +00:00
|
|
|
{
|
2020-02-24 08:32:44 +00:00
|
|
|
struct file *file;
|
2021-08-09 12:04:02 +00:00
|
|
|
unsigned long file_ptr;
|
2019-03-13 18:39:28 +00:00
|
|
|
|
2021-08-09 12:04:02 +00:00
|
|
|
if (unlikely((unsigned int)fd >= ctx->nr_user_files))
|
|
|
|
return NULL;
|
|
|
|
fd = array_index_nospec(fd, ctx->nr_user_files);
|
|
|
|
file_ptr = io_fixed_file_slot(&ctx->file_table, fd)->file_ptr;
|
|
|
|
file = (struct file *) (file_ptr & FFS_MASK);
|
|
|
|
file_ptr &= ~FFS_MASK;
|
|
|
|
/* mask in overlapping REQ_F and FFS bits */
|
2021-08-09 12:04:03 +00:00
|
|
|
req->flags |= (file_ptr << REQ_F_NOWAIT_READ_BIT);
|
2021-08-09 12:04:02 +00:00
|
|
|
io_req_set_rsrc_node(req);
|
|
|
|
return file;
|
|
|
|
}
|
2021-03-12 15:27:05 +00:00
|
|
|
|
2021-08-09 12:04:02 +00:00
|
|
|
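/*
 * Plain fget() path for non-registered files; requests that target
 * another io_uring file are tracked as inflight for cancellation
 * purposes.
 */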
static struct file *io_file_get_normal(struct io_ring_ctx *ctx,
|
|
|
|
struct io_kiocb *req, int fd)
|
|
|
|
{
|
io_uring: remove file batch-get optimisation
For requests with non-fixed files, instead of grabbing just one
reference, we grab as many as the number of remaining requests, so
following requests using the same file can take one without atomics.
However, it's not all win. If there is one request in the middle
not using files or having a fixed file, we'll need to put back the
leftover references. Even worse, if an application submits requests
dealing with different files, it will do a put for each new request,
doubling the number of atomics needed. Also, even if not used, it still
takes some cycles in the submission path.
If a file is used many times, it rather makes sense to pre-register it;
if not, we may fall into the described pitfall. So this optimisation is a
matter of use case. Go with the simplest code-wise way and remove it.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-08-10 13:52:47 +00:00
|
|
|
struct file *file = fget(fd);
|
2021-08-09 12:04:02 +00:00
|
|
|
|
|
|
|
trace_io_uring_file_get(ctx, fd);
|
2019-03-13 18:39:28 +00:00
|
|
|
|
2021-08-09 12:04:02 +00:00
|
|
|
/* we don't allow fixed io_uring files */
|
|
|
|
if (file && unlikely(file->f_op == &io_uring_fops))
|
|
|
|
io_req_track_inflight(req);
|
2020-10-10 17:34:08 +00:00
|
|
|
return file;
|
2019-03-13 18:39:28 +00:00
|
|
|
}
|
|
|
|
|
2021-08-09 12:04:02 +00:00
|
|
|
static inline struct file *io_file_get(struct io_ring_ctx *ctx,
|
|
|
|
struct io_kiocb *req, int fd, bool fixed)
|
|
|
|
{
|
|
|
|
if (fixed)
|
|
|
|
return io_file_get_fixed(ctx, req, fd);
|
|
|
|
else
|
2021-08-10 13:52:47 +00:00
|
|
|
return io_file_get_normal(ctx, req, fd);
|
2021-08-09 12:04:02 +00:00
|
|
|
}
|
|
|
|
|
2021-08-10 21:14:18 +00:00
|
|
|
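/*
 * Task-work callback run after a linked timeout fires: try to cancel the
 * request it was guarding, post the result (or -ETIME) to the CQ ring and
 * drop the remaining references.
 */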
static void io_req_task_link_timeout(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
struct io_kiocb *prev = req->timeout.prev;
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2021-08-15 09:40:22 +00:00
|
|
|
int ret;
|
2021-08-10 21:14:18 +00:00
|
|
|
|
|
|
|
if (prev) {
|
2021-08-15 09:40:22 +00:00
|
|
|
ret = io_try_cancel_userdata(req, prev->user_data);
|
|
|
|
if (!ret)
|
|
|
|
ret = -ETIME;
|
|
|
|
io_cqring_fill_event(ctx, req->user_data, ret, 0);
|
|
|
|
io_commit_cqring(ctx);
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
|
|
|
io_cqring_ev_posted(ctx);
|
|
|
|
|
2021-08-10 21:14:18 +00:00
|
|
|
io_put_req(prev);
|
|
|
|
io_put_req(req);
|
|
|
|
} else {
|
|
|
|
io_req_complete_post(req, -ETIME, 0);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-11-05 19:40:47 +00:00
|
|
|
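/*
 * hrtimer callback for a linked timeout: under ->timeout_lock, detach it
 * from the request it was armed against (grabbing a reference if that
 * request is still alive) and punt the completion work to task_work.
 */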
static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
2019-11-15 15:49:11 +00:00
|
|
|
struct io_timeout_data *data = container_of(timer,
|
|
|
|
struct io_timeout_data, timer);
|
2020-10-27 23:25:36 +00:00
|
|
|
struct io_kiocb *prev, *req = data->req;
|
2019-11-05 19:40:47 +00:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
unsigned long flags;
|
|
|
|
|
2021-08-10 21:14:18 +00:00
|
|
|
spin_lock_irqsave(&ctx->timeout_lock, flags);
|
2020-10-27 23:25:36 +00:00
|
|
|
prev = req->timeout.head;
|
|
|
|
req->timeout.head = NULL;
|
2019-11-05 19:40:47 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* We don't expect the list to be empty, that will only happen if we
|
|
|
|
* race with the completion of the linked work.
|
|
|
|
*/
|
2021-05-14 11:02:50 +00:00
|
|
|
if (prev) {
|
2020-10-27 23:25:37 +00:00
|
|
|
io_remove_next_linked(prev);
|
2021-05-14 11:02:50 +00:00
|
|
|
if (!req_ref_inc_not_zero(prev))
|
|
|
|
prev = NULL;
|
|
|
|
}
|
2021-08-10 21:14:18 +00:00
|
|
|
req->timeout.prev = prev;
|
|
|
|
spin_unlock_irqrestore(&ctx->timeout_lock, flags);
|
2019-11-05 19:40:47 +00:00
|
|
|
|
2021-08-10 21:14:18 +00:00
|
|
|
req->io_task_work.func = io_req_task_link_timeout;
|
|
|
|
io_req_task_work_add(req);
|
2019-11-05 19:40:47 +00:00
|
|
|
return HRTIMER_NORESTART;
|
|
|
|
}
|
|
|
|
|
2021-03-19 17:22:33 +00:00
|
|
|
static void io_queue_linked_timeout(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;

	spin_lock_irq(&ctx->timeout_lock);
	/*
	 * If the back reference is NULL, then our linked request finished
	 * before we got a chance to setup the timer
	 */
	if (req->timeout.head) {
		struct io_timeout_data *data = req->async_data;

		data->timer.function = io_link_timeout_fn;
		hrtimer_start(&data->timer, timespec64_to_ktime(data->ts),
				data->mode);
	}
	spin_unlock_irq(&ctx->timeout_lock);
	/* drop submission reference */
	io_put_req(req);
}

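The linked-timeout machinery above is driven from userspace by queueing an IORING_OP_LINK_TIMEOUT SQE directly behind a request that has IOSQE_IO_LINK set: if the timer fires first, the linked request is cancelled and the timeout completes with -ETIME. A hedged liburing sketch, assuming a ring and descriptor are already set up and the helper name is illustrative:

#include <liburing.h>

/* Arm a 100ms timeout against a single read.  If the read does not
 * finish in time it is typically completed with -ECANCELED, while the
 * timeout CQE carries -ETIME; if the read finishes first, the timeout
 * CQE is cancelled instead. */
static int read_with_timeout(struct io_uring *ring, int fd,
			     void *buf, unsigned len)
{
	struct io_uring_sqe *sqe;
	struct __kernel_timespec ts = { .tv_sec = 0, .tv_nsec = 100 * 1000 * 1000 };

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_read(sqe, fd, buf, len, 0);
	sqe->flags |= IOSQE_IO_LINK;		/* next SQE is linked to this one */

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_link_timeout(sqe, &ts, 0);	/* applies to the previous SQE */

	return io_uring_submit(ring);
}
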
static void __io_queue_sqe(struct io_kiocb *req)
	__must_hold(&req->ctx->uring_lock)
{
	struct io_kiocb *linked_timeout;
	int ret;

issue_sqe:
	ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);

	/*
	 * We async punt it if the file wasn't marked NOWAIT, or if the file
	 * doesn't support non-blocking read/write attempts
	 */
	if (likely(!ret)) {
		if (req->flags & REQ_F_COMPLETE_INLINE) {
			struct io_ring_ctx *ctx = req->ctx;
			struct io_submit_state *state = &ctx->submit_state;

			state->compl_reqs[state->compl_nr++] = req;
			if (state->compl_nr == ARRAY_SIZE(state->compl_reqs))
				io_submit_flush_completions(ctx);
			return;
		}

		linked_timeout = io_prep_linked_timeout(req);
		if (linked_timeout)
			io_queue_linked_timeout(linked_timeout);
	} else if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {
		linked_timeout = io_prep_linked_timeout(req);

		switch (io_arm_poll_handler(req)) {
		case IO_APOLL_READY:
			if (linked_timeout)
				io_unprep_linked_timeout(req);
			goto issue_sqe;
		case IO_APOLL_ABORTED:
			/*
			 * Queued up for async execution, worker will release
			 * submit reference when the iocb is actually submitted.
			 */
			io_queue_async_work(req);
			break;
		}

		if (linked_timeout)
			io_queue_linked_timeout(linked_timeout);
	} else {
		io_req_complete_failed(req, ret);
	}
}

static inline void io_queue_sqe(struct io_kiocb *req)
	__must_hold(&req->ctx->uring_lock)
{
	if (unlikely(req->ctx->drain_active) && io_drain_req(req))
		return;

	if (likely(!(req->flags & REQ_F_FORCE_ASYNC))) {
		__io_queue_sqe(req);
	} else {
		int ret = io_req_prep_async(req);

		if (unlikely(ret))
			io_req_complete_failed(req, ret);
		else
			io_queue_async_work(req);
	}
}

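REQ_F_FORCE_ASYNC corresponds to the IOSQE_ASYNC SQE flag: instead of attempting a non-blocking issue inline first, the request is prepared and punted straight to the async worker pool. A small, hedged liburing fragment showing the flag; ring and descriptor setup are assumed to exist elsewhere:

#include <liburing.h>

/* Force a read to be executed by an io-wq worker instead of being
 * attempted non-blocking inline first. */
static int queue_async_read(struct io_uring *ring, int fd, void *buf, unsigned len)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -EAGAIN;	/* SQ ring full, submit and retry */
	io_uring_prep_read(sqe, fd, buf, len, 0);
	io_uring_sqe_set_flags(sqe, IOSQE_ASYNC);
	return io_uring_submit(ring);
}
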
/*
 * Check SQE restrictions (opcode and flags).
 *
 * Returns 'true' if SQE is allowed, 'false' otherwise.
 */
static inline bool io_check_restriction(struct io_ring_ctx *ctx,
					struct io_kiocb *req,
					unsigned int sqe_flags)
{
	if (likely(!ctx->restricted))
		return true;

	if (!test_bit(req->opcode, ctx->restrictions.sqe_op))
		return false;

	if ((sqe_flags & ctx->restrictions.sqe_flags_required) !=
	    ctx->restrictions.sqe_flags_required)
		return false;

	if (sqe_flags & ~(ctx->restrictions.sqe_flags_allowed |
			  ctx->restrictions.sqe_flags_required))
		return false;

	return true;
}

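ctx->restricted and the restriction bitmaps are populated through io_uring_register(IORING_REGISTER_RESTRICTIONS) while the ring is still disabled (IORING_SETUP_R_DISABLED). The following is a hedged sketch of the userspace side via liburing; the exact helper availability depends on the liburing version:

#include <liburing.h>
#include <string.h>

/* Create a ring that may only submit IORING_OP_READV, with no extra
 * SQE flags allowed; violating requests fail io_init_req() with -EACCES. */
static int setup_restricted_ring(struct io_uring *ring)
{
	struct io_uring_restriction res[2];
	int ret;

	ret = io_uring_queue_init(8, ring, IORING_SETUP_R_DISABLED);
	if (ret)
		return ret;

	memset(res, 0, sizeof(res));
	res[0].opcode = IORING_RESTRICTION_SQE_OP;
	res[0].sqe_op = IORING_OP_READV;
	res[1].opcode = IORING_RESTRICTION_SQE_FLAGS_ALLOWED;
	res[1].sqe_flags = 0;

	ret = io_uring_register_restrictions(ring, res, 2);
	if (ret)
		return ret;
	return io_uring_enable_rings(ring);	/* lift IORING_SETUP_R_DISABLED */
}
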
static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
		       const struct io_uring_sqe *sqe)
	__must_hold(&ctx->uring_lock)
{
	struct io_submit_state *state;
	unsigned int sqe_flags;
	int personality, ret = 0;

	/* req is partially pre-initialised, see io_preinit_req() */
	req->opcode = READ_ONCE(sqe->opcode);
	/* same numerical values with corresponding REQ_F_*, safe to copy */
	req->flags = sqe_flags = READ_ONCE(sqe->flags);
	req->user_data = READ_ONCE(sqe->user_data);
	req->file = NULL;
	req->fixed_rsrc_refs = NULL;
	req->task = current;

	/* enforce forwards compatibility on users */
	if (unlikely(sqe_flags & ~SQE_VALID_FLAGS))
		return -EINVAL;
	if (unlikely(req->opcode >= IORING_OP_LAST))
		return -EINVAL;
	if (!io_check_restriction(ctx, req, sqe_flags))
		return -EACCES;

	if ((sqe_flags & IOSQE_BUFFER_SELECT) &&
	    !io_op_defs[req->opcode].buffer_select)
		return -EOPNOTSUPP;
	if (unlikely(sqe_flags & IOSQE_IO_DRAIN))
		ctx->drain_active = true;

	personality = READ_ONCE(sqe->personality);
	if (personality) {
		req->creds = xa_load(&ctx->personalities, personality);
		if (!req->creds)
			return -EINVAL;
		get_cred(req->creds);
		req->flags |= REQ_F_CREDS;
	}
	state = &ctx->submit_state;

	/*
	 * Plug now if we have more than 1 IO left after this, and the target
	 * is potentially a read/write to block based storage.
	 */
	if (!state->plug_started && state->ios_left > 1 &&
	    io_op_defs[req->opcode].plug) {
		blk_start_plug(&state->plug);
		state->plug_started = true;
	}

	if (io_op_defs[req->opcode].needs_file) {
		req->file = io_file_get(ctx, req, READ_ONCE(sqe->fd),
					(sqe_flags & IOSQE_FIXED_FILE));
		if (unlikely(!req->file))
			ret = -EBADF;
	}

	state->ios_left--;
	return ret;
}

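The personality lookup above pairs with IORING_REGISTER_PERSONALITY: a process registers its current credentials once, gets back an id, and can later stamp individual SQEs with that id so they are issued under those credentials even if the submitter has since dropped privileges. A hedged liburing sketch, with the helper name and setup assumed:

#include <liburing.h>

/* Register the caller's current credentials and issue one read under
 * them.  Ring and descriptor setup are assumed to exist already. */
static int submit_with_personality(struct io_uring *ring, int fd,
				   void *buf, unsigned len)
{
	struct io_uring_sqe *sqe;
	int cred_id;

	cred_id = io_uring_register_personality(ring);
	if (cred_id < 0)
		return cred_id;

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_read(sqe, fd, buf, len, 0);
	sqe->personality = cred_id;	/* matched against ctx->personalities */
	return io_uring_submit(ring);
}
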
static int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
			 const struct io_uring_sqe *sqe)
	__must_hold(&ctx->uring_lock)
{
	struct io_submit_link *link = &ctx->submit_state.link;
	int ret;

	ret = io_init_req(ctx, req, sqe);
	if (unlikely(ret)) {
fail_req:
		if (link->head) {
			/* fail even hard links since we don't submit */
			req_set_fail(link->head);
			io_req_complete_failed(link->head, -ECANCELED);
			link->head = NULL;
		}
		io_req_complete_failed(req, ret);
		return ret;
	}

	ret = io_req_prep(req, sqe);
	if (unlikely(ret))
		goto fail_req;

	/* don't need @sqe from now on */
	trace_io_uring_submit_sqe(ctx, req, req->opcode, req->user_data,
				  req->flags, true,
				  ctx->flags & IORING_SETUP_SQPOLL);

	/*
	 * If we already have a head request, queue this one for async
	 * submittal once the head completes. If we don't have a head but
	 * IOSQE_IO_LINK is set in the sqe, start a new head. This one will be
	 * submitted sync once the chain is complete. If none of those
	 * conditions are true (normal request), then just queue it.
	 */
	if (link->head) {
		struct io_kiocb *head = link->head;

		ret = io_req_prep_async(req);
		if (unlikely(ret))
			goto fail_req;
		trace_io_uring_link(ctx, req, head);
		link->last->link = req;
		link->last = req;

		/* last request of a link, enqueue the link */
		if (!(req->flags & (REQ_F_LINK | REQ_F_HARDLINK))) {
			link->head = NULL;
			io_queue_sqe(head);
		}
	} else {
		if (req->flags & (REQ_F_LINK | REQ_F_HARDLINK)) {
			link->head = req;
			link->last = req;
		} else {
			io_queue_sqe(req);
		}
	}

	return 0;
}

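The link bookkeeping above is what makes ordered chains work: every SQE flagged IOSQE_IO_LINK defers its successor until it completes successfully. A common, hedged example is a write followed by an fdatasync that must not run before the write; liburing setup is assumed:

#include <liburing.h>

/* Queue "write, then fdatasync" as one ordered chain.  If the write
 * fails, the linked fsync completes with -ECANCELED instead of running. */
static int write_then_sync(struct io_uring *ring, int fd,
			   const void *buf, unsigned len)
{
	struct io_uring_sqe *sqe;

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_write(sqe, fd, buf, len, 0);
	io_uring_sqe_set_flags(sqe, IOSQE_IO_LINK);

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_fsync(sqe, fd, IORING_FSYNC_DATASYNC);

	return io_uring_submit(ring);	/* both SQEs with one enter */
}
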
/*
 * Batched submission is done, ensure local IO is flushed out.
 */
static void io_submit_state_end(struct io_submit_state *state,
				struct io_ring_ctx *ctx)
{
	if (state->link.head)
		io_queue_sqe(state->link.head);
	if (state->compl_nr)
		io_submit_flush_completions(ctx);
	if (state->plug_started)
		blk_finish_plug(&state->plug);
}

/*
 * Start submission side cache.
 */
static void io_submit_state_start(struct io_submit_state *state,
				  unsigned int max_ios)
{
	state->plug_started = false;
	state->ios_left = max_ios;
	/* set only head, no need to init link_last in advance */
	state->link.head = NULL;
}

static void io_commit_sqring(struct io_ring_ctx *ctx)
{
	struct io_rings *rings = ctx->rings;

	/*
	 * Ensure any loads from the SQEs are done at this point,
	 * since once we write the new head, the application could
	 * write new data to them.
	 */
	smp_store_release(&rings->sq.head, ctx->cached_sq_head);
}

/*
 * Fetch an sqe, if one is available. Note this returns a pointer to memory
 * that is mapped by userspace. This means that care needs to be taken to
 * ensure that reads are stable, as we cannot rely on userspace always
 * being a good citizen. If members of the sqe are validated and then later
 * used, it's important that those reads are done through READ_ONCE() to
 * prevent a re-load down the line.
 */
static const struct io_uring_sqe *io_get_sqe(struct io_ring_ctx *ctx)
{
	unsigned head, mask = ctx->sq_entries - 1;
	unsigned sq_idx = ctx->cached_sq_head++ & mask;

	/*
	 * The cached sq head (or cq tail) serves two purposes:
	 *
	 * 1) allows us to batch the cost of updating the user visible
	 *    head updates.
	 * 2) allows the kernel side to track the head on its own, even
	 *    though the application is the one updating it.
	 */
	head = READ_ONCE(ctx->sq_array[sq_idx]);
	if (likely(head < ctx->sq_entries))
		return &ctx->sq_sqes[head];

	/* drop invalid entries */
	ctx->cq_extra--;
	WRITE_ONCE(ctx->rings->sq_dropped,
		   READ_ONCE(ctx->rings->sq_dropped) + 1);
	return NULL;
}

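io_get_sqe() is the kernel half of the SQ ring handshake; the userspace half fills an io_uring_sqe, publishes its index in the SQ array, and then advances the tail with a release store so the kernel never observes a partially written entry. A hedged raw sketch of that producer side, for a single-threaded submitter; the pointers and names here are assumed to have been derived from the rings mmap'ed off the io_uring fd:

#include <linux/io_uring.h>
#include <string.h>

/* Minimal userspace producer for one SQE.  'sq_tail', 'sq_head',
 * 'sq_array', 'sq_mask' and 'sqes' are assumed to point into the
 * mmap'ed SQ ring and SQE array. */
static int push_sqe(unsigned *sq_tail, const unsigned *sq_head,
		    unsigned *sq_array, unsigned sq_mask,
		    struct io_uring_sqe *sqes, const struct io_uring_sqe *src)
{
	unsigned tail = *sq_tail;	/* only this thread writes the tail */
	unsigned head = __atomic_load_n(sq_head, __ATOMIC_ACQUIRE);

	if (tail - head >= sq_mask + 1)
		return -1;		/* ring full */

	sqes[tail & sq_mask] = *src;			/* fill the entry */
	sq_array[tail & sq_mask] = tail & sq_mask;	/* publish its index */
	/* release store: entry and index must be visible before the new tail */
	__atomic_store_n(sq_tail, tail + 1, __ATOMIC_RELEASE);
	return 0;
}
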
static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
	__must_hold(&ctx->uring_lock)
{
	struct io_uring_task *tctx;
	int submitted = 0;

	/* make sure SQ entry isn't read before tail */
	nr = min3(nr, ctx->sq_entries, io_sqring_entries(ctx));
	if (!percpu_ref_tryget_many(&ctx->refs, nr))
		return -EAGAIN;

	tctx = current->io_uring;
	tctx->cached_refs -= nr;
	if (unlikely(tctx->cached_refs < 0)) {
		unsigned int refill = -tctx->cached_refs + IO_TCTX_REFS_CACHE_NR;

		percpu_counter_add(&tctx->inflight, refill);
		refcount_add(refill, &current->usage);
		tctx->cached_refs += refill;
	}
	io_submit_state_start(&ctx->submit_state, nr);

	while (submitted < nr) {
		const struct io_uring_sqe *sqe;
		struct io_kiocb *req;

		req = io_alloc_req(ctx);
		if (unlikely(!req)) {
			if (!submitted)
				submitted = -EAGAIN;
			break;
		}
		sqe = io_get_sqe(ctx);
		if (unlikely(!sqe)) {
			kmem_cache_free(req_cachep, req);
			break;
		}
		/* will complete beyond this point, count as submitted */
		submitted++;
		if (io_submit_sqe(ctx, req, sqe))
			break;
	}

	if (unlikely(submitted != nr)) {
		int ref_used = (submitted == -EAGAIN) ? 0 : submitted;
		int unused = nr - ref_used;

		current->io_uring->cached_refs += unused;
		percpu_ref_put_many(&ctx->refs, unused);
	}

	io_submit_state_end(&ctx->submit_state, ctx);
	/* Commit SQ ring head once we've consumed and submitted all SQEs */
	io_commit_sqring(ctx);

	return submitted;
}

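io_submit_sqes() consumes up to 'nr' ring entries per call, which is what makes batching cheap: userspace can queue many SQEs and pay for a single system call. A hedged liburing sketch of that pattern:

#include <liburing.h>
#include <stdio.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_cqe *cqe;
	int i, ret;

	io_uring_queue_init(64, &ring, 0);

	/* Queue 32 no-ops, then submit the whole batch with one enter. */
	for (i = 0; i < 32; i++)
		io_uring_prep_nop(io_uring_get_sqe(&ring));

	ret = io_uring_submit_and_wait(&ring, 32);
	printf("submitted %d sqes\n", ret);

	for (i = 0; i < 32; i++) {
		io_uring_wait_cqe(&ring, &cqe);
		io_uring_cqe_seen(&ring, cqe);
	}
	io_uring_queue_exit(&ring);
	return 0;
}
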
static inline bool io_sqd_events_pending(struct io_sq_data *sqd)
{
	return READ_ONCE(sqd->state);
}

static inline void io_ring_set_wakeup_flag(struct io_ring_ctx *ctx)
{
	/* Tell userspace we may need a wakeup call */
	spin_lock(&ctx->completion_lock);
	WRITE_ONCE(ctx->rings->sq_flags,
		   ctx->rings->sq_flags | IORING_SQ_NEED_WAKEUP);
	spin_unlock(&ctx->completion_lock);
}

static inline void io_ring_clear_wakeup_flag(struct io_ring_ctx *ctx)
{
	spin_lock(&ctx->completion_lock);
	WRITE_ONCE(ctx->rings->sq_flags,
		   ctx->rings->sq_flags & ~IORING_SQ_NEED_WAKEUP);
	spin_unlock(&ctx->completion_lock);
}

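IORING_SQ_NEED_WAKEUP, set and cleared above, is the contract with an IORING_SETUP_SQPOLL application: when the kernel polling thread has gone idle, userspace must call io_uring_enter() with IORING_ENTER_SQ_WAKEUP; otherwise no system call is needed to submit. A hedged raw-syscall sketch, assuming the 'sq_flags' pointer was derived from the SQ ring mmap and that the libc exposes __NR_io_uring_enter:

#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Kick the SQPOLL thread only when it has flagged itself as sleeping. */
static void sqpoll_kick(int ring_fd, const unsigned *sq_flags)
{
	unsigned flags = __atomic_load_n(sq_flags, __ATOMIC_ACQUIRE);

	if (flags & IORING_SQ_NEED_WAKEUP)
		syscall(__NR_io_uring_enter, ring_fd, 0, 0,
			IORING_ENTER_SQ_WAKEUP, NULL, 0);
	/* otherwise the polling thread picks the new SQEs up on its own */
}
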
static int __io_sq_thread(struct io_ring_ctx *ctx, bool cap_entries)
|
io_uring: add submission polling
This enables an application to do IO, without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can by changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard it's
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 18:22:30 +00:00
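
A slightly fuller, self-contained sketch of that guard is shown below. It is
illustrative only: 'sq_flags' is assumed to point at the mmap'd SQ ring flags
word, 'ring_fd' is the io_uring file descriptor, the raw wrapper assumes
__NR_io_uring_enter is exposed by <sys/syscall.h>, and the acquire load plays
the role of read_barrier() in the snippet above.

#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

int io_uring_enter_wakeup(int fd, unsigned flags)
{
	return (int) syscall(__NR_io_uring_enter, fd, 0, 0, flags, NULL, 0);
}

/* Only enter the kernel when the SQPOLL thread went idle and needs a kick. */
void kick_sqpoll_if_needed(int ring_fd, unsigned *sq_flags)
{
	unsigned flags = __atomic_load_n(sq_flags, __ATOMIC_ACQUIRE);

	if (flags & IORING_SQ_NEED_WAKEUP)
		io_uring_enter_wakeup(ring_fd, IORING_ENTER_SQ_WAKEUP);
}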

io_uring: fix poll_list race for SETUP_IOPOLL|SETUP_SQPOLL
After making ext4 support the iopoll method (letting
ext4_file_operations' iopoll method be iomap_dio_iopoll()), we found
that fio can easily hang in fio_ioring_getevents() with the fio job
below:
rm -f testfile; sync;
sudo fio -name=fiotest -filename=testfile -iodepth=128 -thread
-rw=write -ioengine=io_uring -hipri=1 -sqthread_poll=1 -direct=1
-bs=4k -size=10G -numjobs=8 -runtime=2000 -group_reporting
with IORING_SETUP_SQPOLL and IORING_SETUP_IOPOLL enabled.
There are two issues that result in this hang. One reason is that
when IORING_SETUP_SQPOLL and IORING_SETUP_IOPOLL are enabled, fio
does not use io_uring_enter to get completed events; it relies on
the kernel io_sq_thread to poll for completed events.
The other reason is a race: when io_submit_sqes() in io_sq_thread()
submits a batch of sqes, the variable 'inflight' records the number
of submitted reqs, then io_sq_thread polls for reqs which have been
added to the poll_list. But note, if some previous reqs have been
punted to an io worker, these reqs won't be in the poll_list in time.
io_sq_thread() will only poll for a part of the previously submitted
reqs, then find the poll_list empty and reset the variable 'inflight'
to zero. If the app just waits for these deferred reqs and does not
wake up io_sq_thread again, the hang happens.
For an app that entirely relies on io_sq_thread to poll completed
requests, let io_iopoll_req_issued() wake up io_sq_thread properly
when adding a new element to the poll_list, and when io_sq_thread
prepares to sleep, check whether the poll_list is empty again; if not
empty, continue to poll.
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-25 14:12:08 +00:00

static int __io_sq_thread(struct io_ring_ctx *ctx, bool cap_entries)
{
	unsigned int to_submit;
	int ret = 0;

	to_submit = io_sqring_entries(ctx);
	/* if we're handling multiple rings, cap submit size for fairness */
	if (cap_entries && to_submit > IORING_SQPOLL_CAP_ENTRIES_VALUE)
		to_submit = IORING_SQPOLL_CAP_ENTRIES_VALUE;

	if (!list_empty(&ctx->iopoll_list) || to_submit) {
		unsigned nr_events = 0;
		const struct cred *creds = NULL;

		if (ctx->sq_creds != current_cred())
			creds = override_creds(ctx->sq_creds);

		mutex_lock(&ctx->uring_lock);
		if (!list_empty(&ctx->iopoll_list))
			io_do_iopoll(ctx, &nr_events, 0);

		/*
		 * Don't submit if refs are dying, good for io_uring_register(),
		 * but also it is relied upon by io_ring_exit_work()
		 */
		if (to_submit && likely(!percpu_ref_is_dying(&ctx->refs)) &&
		    !(ctx->flags & IORING_SETUP_R_DISABLED))
			ret = io_submit_sqes(ctx, to_submit);
		mutex_unlock(&ctx->uring_lock);

		if (to_submit && wq_has_sleeper(&ctx->sqo_sq_wait))
			wake_up(&ctx->sqo_sq_wait);
		if (creds)
			revert_creds(creds);
	}

	return ret;
}

static void io_sqd_update_thread_idle(struct io_sq_data *sqd)
{
	struct io_ring_ctx *ctx;
	unsigned sq_thread_idle = 0;

	list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
		sq_thread_idle = max(sq_thread_idle, ctx->sq_thread_idle);
	sqd->sq_thread_idle = sq_thread_idle;
}
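
/*
 * Check for park/stop requests and pending signals on the SQPOLL task.
 * The sqd->lock is dropped while a signal is consumed, and the return
 * value tells the caller whether the thread should exit its loop.
 */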
static bool io_sqd_handle_event(struct io_sq_data *sqd)
{
	bool did_sig = false;
	struct ksignal ksig;

	if (test_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state) ||
	    signal_pending(current)) {
		mutex_unlock(&sqd->lock);
		if (signal_pending(current))
			did_sig = get_signal(&ksig);
		cond_resched();
		mutex_lock(&sqd->lock);
	}
	return did_sig || test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
}

io_uring: cancel sqpoll via task_work
1) The first problem is io_uring_cancel_sqpoll() ->
io_uring_cancel_task_requests() basically doing park(); park(); and so
hanging.
2) Another one is more subtle: the master task is doing cancellations,
but the SQPOLL task submits in between the end of the cancellation and
finish(), with requests taking a ref to the ctx, and so locks it up
eternally.
3) Yet another is a dying SQPOLL task doing io_uring_cancel_sqpoll()
racing with the same io_uring_cancel_sqpoll() from the owner task over
tctx->wait events. And there are probably more of them.
Instead, do SQPOLL cancellations from within the SQPOLL task context
via task_work, see io_sqpoll_cancel_sync(). With that we don't need the
temporary park()/unpark() during cancellation, which is ugly, subtle,
and anyway doesn't allow io_run_task_work() to run properly.
io_uring_cancel_sqpoll() is called only from SQPOLL task context and
under sqd locking, so all parking is removed from there. Hence,
io_sq_thread_[un]park() and io_sq_thread_stop() are no longer used by
the SQPOLL task, which spares us some headache.
Also remove ctx->sqd_list early to avoid 2). And kill tctx->sqpoll,
which is not used anymore.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-03-11 23:29:38 +00:00
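
/*
 * Main SQPOLL kernel thread: round-robins over the ctxs attached to this
 * io_sq_data, submitting pending sqes and reaping IOPOLL completions, and
 * only goes to sleep (after setting IORING_SQ_NEED_WAKEUP on each ctx)
 * once no ctx has had work for sqd->sq_thread_idle jiffies.
 */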
static int io_sq_thread(void *data)
{
	struct io_sq_data *sqd = data;
	struct io_ring_ctx *ctx;
	unsigned long timeout = 0;
	char buf[TASK_COMM_LEN];
	DEFINE_WAIT(wait);

	snprintf(buf, sizeof(buf), "iou-sqp-%d", sqd->task_pid);
	set_task_comm(current, buf);

	if (sqd->sq_cpu != -1)
		set_cpus_allowed_ptr(current, cpumask_of(sqd->sq_cpu));
	else
		set_cpus_allowed_ptr(current, cpu_online_mask);
	current->flags |= PF_NO_SETAFFINITY;

	mutex_lock(&sqd->lock);
	while (1) {
		bool cap_entries, sqt_spin = false;

		if (io_sqd_events_pending(sqd) || signal_pending(current)) {
			if (io_sqd_handle_event(sqd))
				break;
			timeout = jiffies + sqd->sq_thread_idle;
		}

		cap_entries = !list_is_singular(&sqd->ctx_list);
		list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) {
			int ret = __io_sq_thread(ctx, cap_entries);

			if (!sqt_spin && (ret > 0 || !list_empty(&ctx->iopoll_list)))
				sqt_spin = true;
		}
		if (io_run_task_work())
			sqt_spin = true;

		if (sqt_spin || !time_after(jiffies, timeout)) {
			cond_resched();
			if (sqt_spin)
				timeout = jiffies + sqd->sq_thread_idle;
			continue;
		}

		prepare_to_wait(&sqd->wait, &wait, TASK_INTERRUPTIBLE);
		if (!io_sqd_events_pending(sqd) && !current->task_works) {
			bool needs_sched = true;

			list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) {
				io_ring_set_wakeup_flag(ctx);

				if ((ctx->flags & IORING_SETUP_IOPOLL) &&
				    !list_empty_careful(&ctx->iopoll_list)) {
					needs_sched = false;
					break;
				}
				if (io_sqring_entries(ctx)) {
					needs_sched = false;
					break;
				}
			}

			if (needs_sched) {
				mutex_unlock(&sqd->lock);
				schedule();
				mutex_lock(&sqd->lock);
			}
			list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
				io_ring_clear_wakeup_flag(ctx);
		}

		finish_wait(&sqd->wait, &wait);
		timeout = jiffies + sqd->sq_thread_idle;
	}

	io_uring_cancel_generic(true, sqd);
	sqd->thread = NULL;
	list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
		io_ring_set_wakeup_flag(ctx);
	io_run_task_work();
	mutex_unlock(&sqd->lock);

	complete(&sqd->exited);
	do_exit(0);
}

struct io_wait_queue {
	struct wait_queue_entry wq;
	struct io_ring_ctx *ctx;
	unsigned cq_tail;
	unsigned nr_timeouts;
};

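/*
 * cq_tail is the CQ tail value at which enough completions will have been
 * posted (the CQ head at wait start plus min_events); the signed subtraction
 * below keeps the comparison correct across ring index wraparound.
 */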
static inline bool io_should_wake(struct io_wait_queue *iowq)
{
	struct io_ring_ctx *ctx = iowq->ctx;
	int dist = ctx->cached_cq_tail - (int) iowq->cq_tail;

	/*
	 * Wake up if we have enough events, or if a timeout occurred since we
	 * started waiting. For timeouts, we always want to return to userspace,
	 * regardless of event count.
	 */
	return dist >= 0 || atomic_read(&ctx->cq_timeouts) != iowq->nr_timeouts;
}

static int io_wake_function(struct wait_queue_entry *curr, unsigned int mode,
			    int wake_flags, void *key)
{
	struct io_wait_queue *iowq = container_of(curr, struct io_wait_queue,
							wq);

	/*
	 * Cannot safely flush overflowed CQEs from here, ensure we wake up
	 * the task, and the next invocation will do it.
	 */
	if (io_should_wake(iowq) || test_bit(0, &iowq->ctx->check_cq_overflow))
		return autoremove_wake_function(curr, mode, wake_flags, key);
	return -1;
}
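
/*
 * Returns 1 if task_work was run, 0 if there is nothing pending,
 * -ERESTARTSYS if TIF_NOTIFY_SIGNAL is set, or -EINTR if a signal is pending.
 */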
static int io_run_task_work_sig(void)
{
	if (io_run_task_work())
		return 1;
	if (!signal_pending(current))
		return 0;
	if (test_thread_flag(TIF_NOTIFY_SIGNAL))
		return -ERESTARTSYS;
	return -EINTR;
}

/* when returns >0, the caller should retry */
static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
					  struct io_wait_queue *iowq,
					  signed long *timeout)
{
	int ret;

	/* make sure we run task_work before checking for signals */
	ret = io_run_task_work_sig();
	if (ret || io_should_wake(iowq))
		return ret;
	/* let the caller flush overflows, retry */
	if (test_bit(0, &ctx->check_cq_overflow))
		return 1;

	*timeout = schedule_timeout(*timeout);
	return !*timeout ? -ETIME : 1;
}

io_uring: add per-task callback handler
For poll requests, it's not uncommon to link a read (or write) after
the poll to execute immediately after the file is marked as ready.
Since the poll completion is called inside the waitqueue wakeup handler,
we have to punt that linked request to async context. This slows down
the processing, and actually means it's faster to not use a link for
this use case.
We also run into problems if the completion_lock is contended, as we're
doing a different lock ordering than the issue side does. Hence we have
to do trylock for completion, and if that fails, go async. Poll removal
needs to go async as well, for the same reason.
eventfd notification needs special casing as well, to avoid stack
blowing recursion or deadlocks.
These are all deficiencies that were inherited from the aio poll
implementation, but I think we can do better. When a poll completes,
simply queue it up in the task poll list. When the task completes the
list, we can run dependent links inline as well. This means we never
have to go async, and we can remove a bunch of code associated with
that, and optimizations to try and make that run faster. The diffstat
speaks for itself.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-17 16:52:41 +00:00

/*
 * Wait until events become available, if we don't already have some. The
 * application must reap them itself, as they reside on the shared cq ring.
 */
static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
			  const sigset_t __user *sig, size_t sigsz,
			  struct __kernel_timespec __user *uts)
{
	struct io_wait_queue iowq;
	struct io_rings *rings = ctx->rings;
	signed long timeout = MAX_SCHEDULE_TIMEOUT;
	int ret;

	do {
		io_cqring_overflow_flush(ctx);
		if (io_cqring_events(ctx) >= min_events)
			return 0;
		if (!io_run_task_work())
			break;
	} while (1);

	if (sig) {
#ifdef CONFIG_COMPAT
		if (in_compat_syscall())
			ret = set_compat_user_sigmask((const compat_sigset_t __user *)sig,
						      sigsz);
		else
#endif
			ret = set_user_sigmask(sig, sigsz);

		if (ret)
			return ret;
	}

	if (uts) {
		struct timespec64 ts;

		if (get_timespec64(&ts, uts))
			return -EFAULT;
		timeout = timespec64_to_jiffies(&ts);
	}

	init_waitqueue_func_entry(&iowq.wq, io_wake_function);
	iowq.wq.private = current;
	INIT_LIST_HEAD(&iowq.wq.entry);
	iowq.ctx = ctx;
	iowq.nr_timeouts = atomic_read(&ctx->cq_timeouts);
	iowq.cq_tail = READ_ONCE(ctx->rings->cq.head) + min_events;
io_uring: add set of tracing events
To trace io_uring activity one can get some information from the workqueue
and io trace events, but some parts can be hard to identify via that
approach. Making what happens inside io_uring more transparent is
important for reasoning about many aspects of it, hence introduce
a set of tracing events.
All such events can be roughly divided into two categories:
* those that help to understand correctness (from both the kernel
and an application point of view). E.g. a ring creation, file
registration, or waiting for an available CQE. The proposed approach is to
get a pointer to the original structure of interest (ring context, or
request), and then find the relevant events. io_uring_queue_async_work
also exposes a pointer to the work_struct, to be able to track down
the corresponding workqueue events.
* those that provide performance-related information. Mostly these are
events that change the flow of requests, e.g. whether an async work
was queued, or delayed due to some dependencies. Another important
case is how io_uring optimizations (e.g. registered files) are
utilized.
Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-10-15 17:02:01 +00:00
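As a hedged illustration of how one of these tracepoints is wired up, the
sketch below shows a TRACE_EVENT declaration roughly matching the
trace_io_uring_cqring_wait(ctx, min_events) call used further down. The
entry fields and format string are assumptions for illustration, not the
exact definition from the upstream trace header.

/*
 * Minimal sketch of a tracepoint declaration for io_uring_cqring_wait.
 * The entry fields and TP_printk format are illustrative assumptions.
 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM io_uring

#if !defined(_TRACE_IO_URING_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_IO_URING_H

#include <linux/tracepoint.h>

TRACE_EVENT(io_uring_cqring_wait,

	TP_PROTO(void *ctx, int min_events),

	TP_ARGS(ctx, min_events),

	TP_STRUCT__entry(
		__field(void *, ctx)
		__field(int, min_events)
	),

	TP_fast_assign(
		__entry->ctx = ctx;
		__entry->min_events = min_events;
	),

	TP_printk("ring %p, min_events %d", __entry->ctx, __entry->min_events)
);

#endif /* _TRACE_IO_URING_H */

/* This part must be outside the include guard */
#include <trace/define_trace.h>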
|
|
|
trace_io_uring_cqring_wait(ctx, min_events);
|
2019-09-24 19:47:15 +00:00
|
|
|
do {
|
2021-03-05 00:15:48 +00:00
|
|
|
/* if we can't even flush overflow, don't wait for more */
|
2021-08-09 19:18:12 +00:00
|
|
|
if (!io_cqring_overflow_flush(ctx)) {
|
2021-03-05 00:15:48 +00:00
|
|
|
ret = -EBUSY;
|
|
|
|
break;
|
|
|
|
}
|
2021-06-14 22:37:28 +00:00
|
|
|
prepare_to_wait_exclusive(&ctx->cq_wait, &iowq.wq,
|
2019-09-24 19:47:15 +00:00
|
|
|
TASK_INTERRUPTIBLE);
|
2021-02-04 13:51:58 +00:00
|
|
|
ret = io_cqring_wait_schedule(ctx, &iowq, &timeout);
|
2021-06-14 22:37:28 +00:00
|
|
|
finish_wait(&ctx->cq_wait, &iowq.wq);
|
2021-03-05 00:15:48 +00:00
|
|
|
cond_resched();
|
2021-02-04 13:51:58 +00:00
|
|
|
} while (ret > 0);
|
2019-09-24 19:47:15 +00:00
|
|
|
|
2020-07-04 14:55:50 +00:00
|
|
|
restore_saved_sigmask_unless(ret == -EINTR);
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2019-08-26 17:23:46 +00:00
|
|
|
return READ_ONCE(rings->cq.head) == READ_ONCE(rings->cq.tail) ? ret : 0;
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
|
|
|
|
2021-06-14 01:36:20 +00:00
|
|
|
static void io_free_page_table(void **table, size_t size)
|
2019-12-09 18:22:50 +00:00
|
|
|
{
|
2021-06-14 01:36:20 +00:00
|
|
|
unsigned i, nr_tables = DIV_ROUND_UP(size, PAGE_SIZE);
|
2019-12-09 18:22:50 +00:00
|
|
|
|
2021-04-01 14:44:03 +00:00
|
|
|
for (i = 0; i < nr_tables; i++)
|
2021-06-14 01:36:20 +00:00
|
|
|
kfree(table[i]);
|
|
|
|
kfree(table);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void **io_alloc_page_table(size_t size)
|
|
|
|
{
|
|
|
|
unsigned i, nr_tables = DIV_ROUND_UP(size, PAGE_SIZE);
|
|
|
|
size_t init_size = size;
|
|
|
|
void **table;
|
|
|
|
|
|
|
|
table = kcalloc(nr_tables, sizeof(*table), GFP_KERNEL);
|
|
|
|
if (!table)
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
for (i = 0; i < nr_tables; i++) {
|
2021-06-15 12:20:13 +00:00
|
|
|
unsigned int this_size = min_t(size_t, size, PAGE_SIZE);
|
2021-06-14 01:36:20 +00:00
|
|
|
|
|
|
|
table[i] = kzalloc(this_size, GFP_KERNEL);
|
|
|
|
if (!table[i]) {
|
|
|
|
io_free_page_table(table, init_size);
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
size -= this_size;
|
|
|
|
}
|
|
|
|
return table;
|
2019-12-09 18:22:50 +00:00
|
|
|
}
|
|
|
|
|
2021-04-01 14:43:47 +00:00
|
|
|
static void io_rsrc_node_destroy(struct io_rsrc_node *ref_node)
|
2020-12-30 21:34:14 +00:00
|
|
|
{
|
2021-04-01 14:43:47 +00:00
|
|
|
percpu_ref_exit(&ref_node->refs);
|
|
|
|
kfree(ref_node);
|
2020-12-30 21:34:14 +00:00
|
|
|
}
|
|
|
|
|
2021-08-09 15:09:47 +00:00
|
|
|
static void io_rsrc_node_ref_zero(struct percpu_ref *ref)
|
|
|
|
{
|
|
|
|
struct io_rsrc_node *node = container_of(ref, struct io_rsrc_node, refs);
|
|
|
|
struct io_ring_ctx *ctx = node->rsrc_data->ctx;
|
|
|
|
unsigned long flags;
|
|
|
|
bool first_add = false;
|
|
|
|
|
|
|
|
spin_lock_irqsave(&ctx->rsrc_ref_lock, flags);
|
|
|
|
node->done = true;
|
|
|
|
|
|
|
|
while (!list_empty(&ctx->rsrc_ref_list)) {
|
|
|
|
node = list_first_entry(&ctx->rsrc_ref_list,
|
|
|
|
struct io_rsrc_node, node);
|
|
|
|
/* recycle ref nodes in order */
|
|
|
|
if (!node->done)
|
|
|
|
break;
|
|
|
|
list_del(&node->node);
|
|
|
|
first_add |= llist_add(&node->llist, &ctx->rsrc_put_llist);
|
|
|
|
}
|
|
|
|
spin_unlock_irqrestore(&ctx->rsrc_ref_lock, flags);
|
|
|
|
|
|
|
|
if (first_add)
|
|
|
|
mod_delayed_work(system_wq, &ctx->rsrc_put_work, HZ);
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct io_rsrc_node *io_rsrc_node_alloc(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
struct io_rsrc_node *ref_node;
|
|
|
|
|
|
|
|
ref_node = kzalloc(sizeof(*ref_node), GFP_KERNEL);
|
|
|
|
if (!ref_node)
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
if (percpu_ref_init(&ref_node->refs, io_rsrc_node_ref_zero,
|
|
|
|
0, GFP_KERNEL)) {
|
|
|
|
kfree(ref_node);
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
INIT_LIST_HEAD(&ref_node->node);
|
|
|
|
INIT_LIST_HEAD(&ref_node->rsrc_list);
|
|
|
|
ref_node->done = false;
|
|
|
|
return ref_node;
|
|
|
|
}
|
|
|
|
|
2021-04-01 14:43:46 +00:00
|
|
|
static void io_rsrc_node_switch(struct io_ring_ctx *ctx,
|
|
|
|
struct io_rsrc_data *data_to_kill)
|
2019-01-11 05:13:58 +00:00
|
|
|
{
|
2021-04-01 14:43:46 +00:00
|
|
|
WARN_ON_ONCE(!ctx->rsrc_backup_node);
|
|
|
|
WARN_ON_ONCE(data_to_kill && !ctx->rsrc_node);
|
2019-01-11 05:13:58 +00:00
|
|
|
|
2021-04-01 14:43:46 +00:00
|
|
|
if (data_to_kill) {
|
|
|
|
struct io_rsrc_node *rsrc_node = ctx->rsrc_node;
|
2021-04-01 14:43:43 +00:00
|
|
|
|
2021-04-01 14:43:46 +00:00
|
|
|
rsrc_node->rsrc_data = data_to_kill;
|
2021-08-09 13:49:41 +00:00
|
|
|
spin_lock_irq(&ctx->rsrc_ref_lock);
|
2021-04-01 14:43:46 +00:00
|
|
|
list_add_tail(&rsrc_node->node, &ctx->rsrc_ref_list);
|
2021-08-09 13:49:41 +00:00
|
|
|
spin_unlock_irq(&ctx->rsrc_ref_lock);
|
2021-04-01 14:43:43 +00:00
|
|
|
|
2021-04-11 00:46:34 +00:00
|
|
|
atomic_inc(&data_to_kill->refs);
|
2021-04-01 14:43:46 +00:00
|
|
|
percpu_ref_kill(&rsrc_node->refs);
|
|
|
|
ctx->rsrc_node = NULL;
|
|
|
|
}
|
2019-01-11 05:13:58 +00:00
|
|
|
|
2021-04-01 14:43:46 +00:00
|
|
|
if (!ctx->rsrc_node) {
|
|
|
|
ctx->rsrc_node = ctx->rsrc_backup_node;
|
|
|
|
ctx->rsrc_backup_node = NULL;
|
|
|
|
}
|
2021-02-19 09:19:36 +00:00
|
|
|
}
|
|
|
|
|
2021-04-01 14:43:46 +00:00
|
|
|
static int io_rsrc_node_switch_start(struct io_ring_ctx *ctx)
|
2021-03-19 17:22:36 +00:00
|
|
|
{
|
|
|
|
if (ctx->rsrc_backup_node)
|
|
|
|
return 0;
|
2021-04-01 14:43:40 +00:00
|
|
|
ctx->rsrc_backup_node = io_rsrc_node_alloc(ctx);
|
2021-03-19 17:22:36 +00:00
|
|
|
return ctx->rsrc_backup_node ? 0 : -ENOMEM;
|
2021-02-19 09:19:36 +00:00
|
|
|
}
|
|
|
|
|
2021-04-01 14:43:44 +00:00
|
|
|
static int io_rsrc_ref_quiesce(struct io_rsrc_data *data, struct io_ring_ctx *ctx)
|
2021-02-19 09:19:36 +00:00
|
|
|
{
|
|
|
|
int ret;
|
io_uring: refactor file register/unregister/update handling
While diving into the io_uring fileset register/unregister/update code, we
found a bug in the fileset update handling. io_uring fileset update
uses a percpu_ref variable to check whether we can put the previously
registered file: only when the refcount of the percpu_ref variable
reaches zero can we safely put these files. But this doesn't work so
well. If applications keep issuing requests continually, this
percpu_ref will never get a chance to reach zero, and it'll always be
in atomic mode, which also defeats the gains introduced by the fileset
register/unregister/update feature, intended to reduce the atomic
operation overhead of fput/fget.
To fix this issue, when applications do IORING_REGISTER_FILES or
IORING_REGISTER_FILES_UPDATE operations, we allocate a new percpu_ref
and kill the old percpu_ref; new requests will use the new percpu_ref.
Once all previous old requests complete, the old percpu_refs will be dropped
and the registered files will be put safely.
Link: https://lore.kernel.org/io-uring/5a8dac33-4ca2-4847-b091-f7dcd3ad0ff3@linux.alibaba.com/T/#t
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-03-31 06:05:18 +00:00
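As a hedged userspace-side sketch of the register/update flow this change
refines, the snippet below registers a two-entry fixed-file set and later
replaces one slot with IORING_REGISTER_FILES_UPDATE. The struct
io_uring_files_update layout and the __NR_io_uring_register syscall number
come from the uapi headers of this era; ring_fd, fd_a, fd_b and fd_c are
assumed to be valid descriptors obtained elsewhere.

/* Hedged sketch: registering fixed files and updating one slot from
 * userspace. Error handling is omitted for brevity.
 */
#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

static int register_files(int ring_fd, int fd_a, int fd_b)
{
	__s32 fds[2] = { fd_a, fd_b };

	/* pin the whole set up front */
	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_FILES, fds, 2);
}

static int update_slot(int ring_fd, int fd_c)
{
	__s32 new_fd = fd_c;
	struct io_uring_files_update up = {
		.offset = 1,	/* replace slot 1 */
		.fds = (unsigned long)&new_fd,
	};

	/* swap the entry without a full unregister/register cycle; the
	 * old file is only put once the kernel has quiesced outstanding
	 * users, which is what the percpu_ref switch above provides
	 */
	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_FILES_UPDATE, &up, 1);
}

After register_files(ring_fd, fd_a, fd_b) succeeds, requests can reference
slots 0 and 1 with IOSQE_FIXED_FILE instead of passing raw descriptors.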
|
|
|
|
2021-04-01 14:43:48 +00:00
|
|
|
/* As we may drop ->uring_lock, other task may have started quiesce */
|
2021-02-19 09:19:36 +00:00
|
|
|
if (data->quiesce)
|
|
|
|
return -ENXIO;
|
2020-03-31 06:05:18 +00:00
|
|
|
|
2021-02-19 09:19:36 +00:00
|
|
|
data->quiesce = true;
|
2020-12-30 21:34:15 +00:00
|
|
|
do {
|
2021-04-01 14:43:46 +00:00
|
|
|
ret = io_rsrc_node_switch_start(ctx);
|
2021-03-19 17:22:36 +00:00
|
|
|
if (ret)
|
2021-02-20 18:03:49 +00:00
|
|
|
break;
|
2021-04-01 14:43:46 +00:00
|
|
|
io_rsrc_node_switch(ctx, data);
|
2021-02-20 18:03:49 +00:00
|
|
|
|
2021-04-11 00:46:34 +00:00
|
|
|
/* kill initial ref, already quiesced if zero */
|
|
|
|
if (atomic_dec_and_test(&data->refs))
|
|
|
|
break;
|
2021-08-09 14:15:50 +00:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2021-02-19 09:19:36 +00:00
|
|
|
flush_delayed_work(&ctx->rsrc_put_work);
|
2020-12-30 21:34:15 +00:00
|
|
|
ret = wait_for_completion_interruptible(&data->done);
|
2021-08-09 14:15:50 +00:00
|
|
|
if (!ret) {
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
2020-12-30 21:34:15 +00:00
|
|
|
break;
|
2021-08-09 14:15:50 +00:00
|
|
|
}
|
2021-02-19 09:19:36 +00:00
|
|
|
|
2021-04-11 00:46:34 +00:00
|
|
|
atomic_inc(&data->refs);
|
|
|
|
/* wait for all works potentially completing data->done */
|
|
|
|
flush_delayed_work(&ctx->rsrc_put_work);
|
2021-02-25 14:37:35 +00:00
|
|
|
reinit_completion(&data->done);
|
2021-03-19 17:22:36 +00:00
|
|
|
|
2020-12-30 21:34:15 +00:00
|
|
|
ret = io_run_task_work_sig();
|
2021-02-19 09:19:36 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
2021-02-20 18:03:49 +00:00
|
|
|
} while (ret >= 0);
|
2021-02-19 09:19:36 +00:00
|
|
|
data->quiesce = false;
|
2019-12-09 18:22:50 +00:00
|
|
|
|
2021-02-19 09:19:36 +00:00
|
|
|
return ret;
|
2021-01-15 17:37:50 +00:00
|
|
|
}
|
|
|
|
|
2021-06-14 01:36:21 +00:00
|
|
|
static u64 *io_get_tag_slot(struct io_rsrc_data *data, unsigned int idx)
|
|
|
|
{
|
|
|
|
unsigned int off = idx & IO_RSRC_TAG_TABLE_MASK;
|
|
|
|
unsigned int table_idx = idx >> IO_RSRC_TAG_TABLE_SHIFT;
|
|
|
|
|
|
|
|
return &data->tags[table_idx][off];
|
|
|
|
}
|
|
|
|
|
2021-04-25 13:32:16 +00:00
|
|
|
static void io_rsrc_data_free(struct io_rsrc_data *data)
|
2021-01-15 17:37:51 +00:00
|
|
|
{
|
2021-06-14 01:36:21 +00:00
|
|
|
size_t size = data->nr * sizeof(data->tags[0][0]);
|
|
|
|
|
|
|
|
if (data->tags)
|
|
|
|
io_free_page_table((void **)data->tags, size);
|
2021-04-25 13:32:16 +00:00
|
|
|
kfree(data);
|
|
|
|
}
|
|
|
|
|
2021-06-14 01:36:18 +00:00
|
|
|
static int io_rsrc_data_alloc(struct io_ring_ctx *ctx, rsrc_put_fn *do_put,
|
|
|
|
u64 __user *utags, unsigned nr,
|
|
|
|
struct io_rsrc_data **pdata)
|
2021-01-15 17:37:51 +00:00
|
|
|
{
|
2021-04-01 14:43:40 +00:00
|
|
|
struct io_rsrc_data *data;
|
2021-06-14 01:36:21 +00:00
|
|
|
int ret = -ENOMEM;
|
2021-06-14 01:36:18 +00:00
|
|
|
unsigned i;
|
2021-01-15 17:37:51 +00:00
|
|
|
|
|
|
|
data = kzalloc(sizeof(*data), GFP_KERNEL);
|
|
|
|
if (!data)
|
2021-06-14 01:36:18 +00:00
|
|
|
return -ENOMEM;
|
2021-06-14 01:36:21 +00:00
|
|
|
data->tags = (u64 **)io_alloc_page_table(nr * sizeof(data->tags[0][0]));
|
2021-04-25 13:32:18 +00:00
|
|
|
if (!data->tags) {
|
2021-01-15 17:37:51 +00:00
|
|
|
kfree(data);
|
2021-06-14 01:36:18 +00:00
|
|
|
return -ENOMEM;
|
|
|
|
}
|
2021-06-14 01:36:21 +00:00
|
|
|
|
|
|
|
data->nr = nr;
|
|
|
|
data->ctx = ctx;
|
|
|
|
data->do_put = do_put;
|
2021-06-14 01:36:18 +00:00
|
|
|
if (utags) {
|
2021-06-14 01:36:21 +00:00
|
|
|
ret = -EFAULT;
|
2021-06-14 01:36:18 +00:00
|
|
|
for (i = 0; i < nr; i++) {
|
2021-06-15 13:00:11 +00:00
|
|
|
u64 *tag_slot = io_get_tag_slot(data, i);
|
|
|
|
|
|
|
|
if (copy_from_user(tag_slot, &utags[i],
|
|
|
|
sizeof(*tag_slot)))
|
2021-06-14 01:36:21 +00:00
|
|
|
goto fail;
|
2021-06-14 01:36:18 +00:00
|
|
|
}
|
2021-01-15 17:37:51 +00:00
|
|
|
}
|
2021-04-25 13:32:18 +00:00
|
|
|
|
2021-04-11 00:46:34 +00:00
|
|
|
atomic_set(&data->refs, 1);
|
2021-01-15 17:37:51 +00:00
|
|
|
init_completion(&data->done);
|
2021-06-14 01:36:18 +00:00
|
|
|
*pdata = data;
|
|
|
|
return 0;
|
2021-06-14 01:36:21 +00:00
|
|
|
fail:
|
|
|
|
io_rsrc_data_free(data);
|
|
|
|
return ret;
|
2021-01-15 17:37:51 +00:00
|
|
|
}
|
|
|
|
|
2021-06-14 01:36:20 +00:00
|
|
|
static bool io_alloc_file_tables(struct io_file_table *table, unsigned nr_files)
|
|
|
|
{
|
2021-08-09 12:04:01 +00:00
|
|
|
table->files = kvcalloc(nr_files, sizeof(table->files[0]), GFP_KERNEL);
|
2021-06-14 01:36:20 +00:00
|
|
|
return !!table->files;
|
|
|
|
}
|
|
|
|
|
2021-08-09 12:04:01 +00:00
|
|
|
static void io_free_file_tables(struct io_file_table *table)
|
2021-06-14 01:36:20 +00:00
|
|
|
{
|
2021-08-09 12:04:01 +00:00
|
|
|
kvfree(table->files);
|
2021-06-14 01:36:20 +00:00
|
|
|
table->files = NULL;
|
|
|
|
}
|
|
|
|
|
2021-04-25 13:32:15 +00:00
|
|
|
static void __io_sqe_files_unregister(struct io_ring_ctx *ctx)
|
2021-01-15 17:37:51 +00:00
|
|
|
{
|
2021-04-25 13:32:15 +00:00
|
|
|
#if defined(CONFIG_UNIX)
|
|
|
|
if (ctx->ring_sock) {
|
|
|
|
struct sock *sock = ctx->ring_sock->sk;
|
|
|
|
struct sk_buff *skb;
|
|
|
|
|
|
|
|
while ((skb = skb_dequeue(&sock->sk_receive_queue)) != NULL)
|
|
|
|
kfree_skb(skb);
|
|
|
|
}
|
|
|
|
#else
|
|
|
|
int i;
|
|
|
|
|
|
|
|
for (i = 0; i < ctx->nr_user_files; i++) {
|
|
|
|
struct file *file;
|
|
|
|
|
|
|
|
file = io_file_from_index(ctx, i);
|
|
|
|
if (file)
|
|
|
|
fput(file);
|
|
|
|
}
|
|
|
|
#endif
|
2021-08-09 12:04:01 +00:00
|
|
|
io_free_file_tables(&ctx->file_table);
|
2021-04-25 13:32:16 +00:00
|
|
|
io_rsrc_data_free(ctx->file_data);
|
2021-04-25 13:32:15 +00:00
|
|
|
ctx->file_data = NULL;
|
|
|
|
ctx->nr_user_files = 0;
|
2021-01-15 17:37:51 +00:00
|
|
|
}
|
|
|
|
|
2021-01-15 17:37:50 +00:00
|
|
|
static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
2021-04-13 01:58:38 +00:00
|
|
|
if (!ctx->file_data)
|
2021-01-15 17:37:50 +00:00
|
|
|
return -ENXIO;
|
2021-04-13 01:58:38 +00:00
|
|
|
ret = io_rsrc_ref_quiesce(ctx->file_data, ctx);
|
|
|
|
if (!ret)
|
|
|
|
__io_sqe_files_unregister(ctx);
|
|
|
|
return ret;
|
2019-01-11 05:13:58 +00:00
|
|
|
}
|
|
|
|
|
2021-02-18 04:03:43 +00:00
|
|
|
static void io_sq_thread_unpark(struct io_sq_data *sqd)
|
2021-03-14 20:57:10 +00:00
|
|
|
__releases(&sqd->lock)
|
2021-02-18 04:03:43 +00:00
|
|
|
{
|
io_uring: cancel sqpoll via task_work
1) The first problem is io_uring_cancel_sqpoll() ->
io_uring_cancel_task_requests() basically doing park(); park(); and so
hanging.
2) Another one is more subtle: the master task is doing cancellations,
but the SQPOLL task submits in between the end of the cancellation and
finish(), those requests take a ref to the ctx, and so eternally
lock it up.
3) Yet another is a dying SQPOLL task doing io_uring_cancel_sqpoll() and
the same io_uring_cancel_sqpoll() from the owner task; they race for
tctx->wait events. And there are probably more of them.
Instead, do SQPOLL cancellations from within the SQPOLL task context via
task_work, see io_sqpoll_cancel_sync(). With that we don't need the
temporary park()/unpark() during cancellation, which is ugly, subtle and
anyway doesn't allow io_run_task_work() to be done properly.
io_uring_cancel_sqpoll() is called only from SQPOLL task context and
under sqd locking, so all parking is removed from there. And so,
io_sq_thread_[un]park() and io_sq_thread_stop() are no longer used by the
SQPOLL task, which spares us from some headache.
Also remove ctx->sqd_list early to avoid 2). And kill tctx->sqpoll,
which is not used anymore.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-03-11 23:29:38 +00:00
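A generic, hedged sketch of the task_work pattern the message describes
(not the actual io_sqpoll_cancel_sync() implementation): queue a callback
that runs in the target task's own context and wait for it to finish. The
struct and function names below are illustrative only.

/* Hedged sketch of running cancellation work in another task's context
 * via task_work; names are illustrative, not io_uring's.
 */
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/task_work.h>
#include <linux/completion.h>

struct cancel_work {
	struct callback_head	cb;
	struct completion	done;
};

static void cancel_in_target_task(struct callback_head *cb)
{
	struct cancel_work *cw = container_of(cb, struct cancel_work, cb);

	/* runs in the target (e.g. SQPOLL) task's context */
	/* ... do the actual cancellation here ... */
	complete(&cw->done);
}

static void cancel_sync(struct task_struct *task)
{
	struct cancel_work cw;

	init_completion(&cw.done);
	init_task_work(&cw.cb, cancel_in_target_task);

	/* TWA_SIGNAL kicks the task so the callback runs promptly */
	if (!task_work_add(task, &cw.cb, TWA_SIGNAL))
		wait_for_completion(&cw.done);
}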
|
|
|
WARN_ON_ONCE(sqd->thread == current);
|
|
|
|
|
2021-03-14 20:57:12 +00:00
|
|
|
/*
|
|
|
|
* Do the dance but not conditional clear_bit() because it'd race with
|
|
|
|
* other threads incrementing park_pending and setting the bit.
|
|
|
|
*/
|
2021-02-18 04:03:43 +00:00
|
|
|
clear_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
|
2021-03-14 20:57:12 +00:00
|
|
|
if (atomic_dec_return(&sqd->park_pending))
|
|
|
|
set_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
|
2021-03-14 20:57:10 +00:00
|
|
|
mutex_unlock(&sqd->lock);
|
2021-02-18 04:03:43 +00:00
|
|
|
}
|
|
|
|
|
2021-03-05 15:44:39 +00:00
|
|
|
static void io_sq_thread_park(struct io_sq_data *sqd)
|
2021-03-14 20:57:10 +00:00
|
|
|
__acquires(&sqd->lock)
|
2021-02-18 04:03:43 +00:00
|
|
|
{
|
2021-03-11 23:29:38 +00:00
|
|
|
WARN_ON_ONCE(sqd->thread == current);
|
|
|
|
|
2021-03-14 20:57:12 +00:00
|
|
|
atomic_inc(&sqd->park_pending);
|
2021-03-05 15:44:39 +00:00
|
|
|
set_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
|
2021-03-14 20:57:10 +00:00
|
|
|
mutex_lock(&sqd->lock);
|
2021-03-06 20:58:48 +00:00
|
|
|
if (sqd->thread)
|
2021-03-05 15:44:39 +00:00
|
|
|
wake_up_process(sqd->thread);
|
2021-02-18 04:03:43 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void io_sq_thread_stop(struct io_sq_data *sqd)
|
|
|
|
{
|
2021-03-11 23:29:38 +00:00
|
|
|
WARN_ON_ONCE(sqd->thread == current);
|
2021-04-11 00:46:38 +00:00
|
|
|
WARN_ON_ONCE(test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state));
|
2021-03-11 23:29:38 +00:00
|
|
|
|
2021-03-06 20:58:48 +00:00
|
|
|
set_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
|
2021-04-11 00:46:38 +00:00
|
|
|
mutex_lock(&sqd->lock);
|
2021-03-09 23:32:13 +00:00
|
|
|
if (sqd->thread)
|
|
|
|
wake_up_process(sqd->thread);
|
2021-03-14 20:57:10 +00:00
|
|
|
mutex_unlock(&sqd->lock);
|
2021-03-06 20:58:48 +00:00
|
|
|
wait_for_completion(&sqd->exited);
|
2021-02-18 04:03:43 +00:00
|
|
|
}
|
|
|
|
|
2020-09-02 19:52:19 +00:00
|
|
|
static void io_put_sq_data(struct io_sq_data *sqd)
|
io_uring: add submission polling
This enables an application to do IO, without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard it's
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 18:22:30 +00:00
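A hedged userspace sketch of the wakeup guard described above. sq_flags is
assumed to point at the SQ ring flags word in the mmap()ed ring (at the
offset reported in io_uring_params.sq_off.flags); the __atomic_load_n
acquire load stands in for the read_barrier() in the message.

/* Hedged sketch: only enter the kernel when the SQPOLL thread has gone
 * to sleep and set IORING_SQ_NEED_WAKEUP.
 */
#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

static void kick_sqpoll_if_needed(int ring_fd, const unsigned *sq_flags)
{
	unsigned flags = __atomic_load_n(sq_flags, __ATOMIC_ACQUIRE);

	if (flags & IORING_SQ_NEED_WAKEUP)
		syscall(__NR_io_uring_enter, ring_fd, 0, 0,
			IORING_ENTER_SQ_WAKEUP, NULL, 0);
	/* otherwise the kernel-side thread is still spinning and will
	 * pick up new SQEs without any system call
	 */
}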
|
|
|
{
|
2020-09-02 19:52:19 +00:00
|
|
|
if (refcount_dec_and_test(&sqd->refs)) {
|
2021-03-14 20:57:12 +00:00
|
|
|
WARN_ON_ONCE(atomic_read(&sqd->park_pending));
|
|
|
|
|
2021-02-18 04:03:43 +00:00
|
|
|
io_sq_thread_stop(sqd);
|
|
|
|
kfree(sqd);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void io_sq_thread_finish(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
struct io_sq_data *sqd = ctx->sq_data;
|
|
|
|
|
|
|
|
if (sqd) {
|
2021-03-06 20:58:48 +00:00
|
|
|
io_sq_thread_park(sqd);
|
2021-03-11 23:29:38 +00:00
|
|
|
list_del_init(&ctx->sqd_list);
|
2021-02-18 04:03:43 +00:00
|
|
|
io_sqd_update_thread_idle(sqd);
|
2021-03-06 20:58:48 +00:00
|
|
|
io_sq_thread_unpark(sqd);
|
2021-02-18 04:03:43 +00:00
|
|
|
|
|
|
|
io_put_sq_data(sqd);
|
|
|
|
ctx->sq_data = NULL;
|
2020-09-02 19:52:19 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-09-02 20:50:27 +00:00
|
|
|
static struct io_sq_data *io_attach_sq_data(struct io_uring_params *p)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx_attach;
|
|
|
|
struct io_sq_data *sqd;
|
|
|
|
struct fd f;
|
|
|
|
|
|
|
|
f = fdget(p->wq_fd);
|
|
|
|
if (!f.file)
|
|
|
|
return ERR_PTR(-ENXIO);
|
|
|
|
if (f.file->f_op != &io_uring_fops) {
|
|
|
|
fdput(f);
|
|
|
|
return ERR_PTR(-EINVAL);
|
|
|
|
}
|
|
|
|
|
|
|
|
ctx_attach = f.file->private_data;
|
|
|
|
sqd = ctx_attach->sq_data;
|
|
|
|
if (!sqd) {
|
|
|
|
fdput(f);
|
|
|
|
return ERR_PTR(-EINVAL);
|
|
|
|
}
|
2021-03-11 17:17:56 +00:00
|
|
|
if (sqd->task_tgid != current->tgid) {
|
|
|
|
fdput(f);
|
|
|
|
return ERR_PTR(-EPERM);
|
|
|
|
}
|
2020-09-02 20:50:27 +00:00
|
|
|
|
|
|
|
refcount_inc(&sqd->refs);
|
|
|
|
fdput(f);
|
|
|
|
return sqd;
|
|
|
|
}
|
|
|
|
|
2021-03-11 23:29:37 +00:00
|
|
|
static struct io_sq_data *io_get_sq_data(struct io_uring_params *p,
|
|
|
|
bool *attached)
|
2020-09-02 19:52:19 +00:00
|
|
|
{
|
|
|
|
struct io_sq_data *sqd;
|
|
|
|
|
2021-03-11 23:29:37 +00:00
|
|
|
*attached = false;
|
2021-03-11 17:17:56 +00:00
|
|
|
if (p->flags & IORING_SETUP_ATTACH_WQ) {
|
|
|
|
sqd = io_attach_sq_data(p);
|
2021-03-11 23:29:37 +00:00
|
|
|
if (!IS_ERR(sqd)) {
|
|
|
|
*attached = true;
|
2021-03-11 17:17:56 +00:00
|
|
|
return sqd;
|
2021-03-11 23:29:37 +00:00
|
|
|
}
|
2021-03-11 17:17:56 +00:00
|
|
|
/* fall through for EPERM case, setup new sqd/task */
|
|
|
|
if (PTR_ERR(sqd) != -EPERM)
|
|
|
|
return sqd;
|
|
|
|
}
|
2020-09-02 20:50:27 +00:00
|
|
|
|
2020-09-02 19:52:19 +00:00
|
|
|
sqd = kzalloc(sizeof(*sqd), GFP_KERNEL);
|
|
|
|
if (!sqd)
|
|
|
|
return ERR_PTR(-ENOMEM);
|
|
|
|
|
2021-03-14 20:57:12 +00:00
|
|
|
atomic_set(&sqd->park_pending, 0);
|
2020-09-02 19:52:19 +00:00
|
|
|
refcount_set(&sqd->refs, 1);
|
2020-09-14 17:16:23 +00:00
|
|
|
INIT_LIST_HEAD(&sqd->ctx_list);
|
2021-03-14 20:57:10 +00:00
|
|
|
mutex_init(&sqd->lock);
|
2020-09-02 19:52:19 +00:00
|
|
|
init_waitqueue_head(&sqd->wait);
|
2021-02-18 04:03:43 +00:00
|
|
|
init_completion(&sqd->exited);
|
2020-09-02 19:52:19 +00:00
|
|
|
return sqd;
|
|
|
|
}
|
|
|
|
|
2019-01-11 05:13:58 +00:00
|
|
|
#if defined(CONFIG_UNIX)
|
|
|
|
/*
|
|
|
|
* Ensure the UNIX gc is aware of our file set, so we are certain that
|
|
|
|
* the io_uring can be safely unregistered on process exit, even if we have
|
|
|
|
* loops in the file referencing.
|
|
|
|
*/
|
|
|
|
static int __io_sqe_files_scm(struct io_ring_ctx *ctx, int nr, int offset)
|
|
|
|
{
|
|
|
|
struct sock *sk = ctx->ring_sock->sk;
|
|
|
|
struct scm_fp_list *fpl;
|
|
|
|
struct sk_buff *skb;
|
2019-10-03 14:11:03 +00:00
|
|
|
int i, nr_files;
|
2019-01-11 05:13:58 +00:00
|
|
|
|
|
|
|
fpl = kzalloc(sizeof(*fpl), GFP_KERNEL);
|
|
|
|
if (!fpl)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
skb = alloc_skb(0, GFP_KERNEL);
|
|
|
|
if (!skb) {
|
|
|
|
kfree(fpl);
|
|
|
|
return -ENOMEM;
|
|
|
|
}
|
|
|
|
|
|
|
|
skb->sk = sk;
|
|
|
|
|
2019-10-03 14:11:03 +00:00
|
|
|
nr_files = 0;
|
2021-02-21 23:19:37 +00:00
|
|
|
fpl->user = get_uid(current_user());
|
2019-01-11 05:13:58 +00:00
|
|
|
for (i = 0; i < nr; i++) {
|
2019-10-26 13:20:21 +00:00
|
|
|
struct file *file = io_file_from_index(ctx, i + offset);
|
|
|
|
|
|
|
|
if (!file)
|
2019-10-03 14:11:03 +00:00
|
|
|
continue;
|
2019-10-26 13:20:21 +00:00
|
|
|
fpl->fp[nr_files] = get_file(file);
|
2019-10-03 14:11:03 +00:00
|
|
|
unix_inflight(fpl->user, fpl->fp[nr_files]);
|
|
|
|
nr_files++;
|
2019-01-11 05:13:58 +00:00
|
|
|
}
|
|
|
|
|
2019-10-03 14:11:03 +00:00
|
|
|
if (nr_files) {
|
|
|
|
fpl->max = SCM_MAX_FD;
|
|
|
|
fpl->count = nr_files;
|
|
|
|
UNIXCB(skb).fp = fpl;
|
2019-12-09 18:22:50 +00:00
|
|
|
skb->destructor = unix_destruct_scm;
|
2019-10-03 14:11:03 +00:00
|
|
|
refcount_add(skb->truesize, &sk->sk_wmem_alloc);
|
|
|
|
skb_queue_head(&sk->sk_receive_queue, skb);
|
2019-01-11 05:13:58 +00:00
|
|
|
|
2019-10-03 14:11:03 +00:00
|
|
|
for (i = 0; i < nr_files; i++)
|
|
|
|
fput(fpl->fp[i]);
|
|
|
|
} else {
|
|
|
|
kfree_skb(skb);
|
|
|
|
kfree(fpl);
|
|
|
|
}
|
2019-01-11 05:13:58 +00:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If UNIX sockets are enabled, fd passing can cause a reference cycle which
|
|
|
|
* causes regular reference counting to break down. We rely on the UNIX
|
|
|
|
* garbage collection to take care of this problem for us.
|
|
|
|
*/
|
|
|
|
static int io_sqe_files_scm(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
unsigned left, total;
|
|
|
|
int ret = 0;
|
|
|
|
|
|
|
|
total = 0;
|
|
|
|
left = ctx->nr_user_files;
|
|
|
|
while (left) {
|
|
|
|
unsigned this_files = min_t(unsigned, left, SCM_MAX_FD);
|
|
|
|
|
|
|
|
ret = __io_sqe_files_scm(ctx, this_files, total);
|
|
|
|
if (ret)
|
|
|
|
break;
|
|
|
|
left -= this_files;
|
|
|
|
total += this_files;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!ret)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
while (total < ctx->nr_user_files) {
|
2019-10-26 13:20:21 +00:00
|
|
|
struct file *file = io_file_from_index(ctx, total);
|
|
|
|
|
|
|
|
if (file)
|
|
|
|
fput(file);
|
2019-01-11 05:13:58 +00:00
|
|
|
total++;
|
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
#else
|
|
|
|
static int io_sqe_files_scm(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2021-04-01 14:43:56 +00:00
|
|
|
static void io_rsrc_file_put(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc)
|
2019-12-09 18:22:50 +00:00
|
|
|
{
|
2021-01-15 17:37:45 +00:00
|
|
|
struct file *file = prsrc->file;
|
2019-12-09 18:22:50 +00:00
|
|
|
#if defined(CONFIG_UNIX)
|
|
|
|
struct sock *sock = ctx->ring_sock->sk;
|
|
|
|
struct sk_buff_head list, *head = &sock->sk_receive_queue;
|
|
|
|
struct sk_buff *skb;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
__skb_queue_head_init(&list);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Find the skb that holds this file in its SCM_RIGHTS. When found,
|
|
|
|
* remove this entry and rearrange the file array.
|
|
|
|
*/
|
|
|
|
skb = skb_dequeue(head);
|
|
|
|
while (skb) {
|
|
|
|
struct scm_fp_list *fp;
|
|
|
|
|
|
|
|
fp = UNIXCB(skb).fp;
|
|
|
|
for (i = 0; i < fp->count; i++) {
|
|
|
|
int left;
|
|
|
|
|
|
|
|
if (fp->fp[i] != file)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
unix_notinflight(fp->user, fp->fp[i]);
|
|
|
|
left = fp->count - 1 - i;
|
|
|
|
if (left) {
|
|
|
|
memmove(&fp->fp[i], &fp->fp[i + 1],
|
|
|
|
left * sizeof(struct file *));
|
|
|
|
}
|
|
|
|
fp->count--;
|
|
|
|
if (!fp->count) {
|
|
|
|
kfree_skb(skb);
|
|
|
|
skb = NULL;
|
|
|
|
} else {
|
|
|
|
__skb_queue_tail(&list, skb);
|
|
|
|
}
|
|
|
|
fput(file);
|
|
|
|
file = NULL;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!file)
|
|
|
|
break;
|
|
|
|
|
|
|
|
__skb_queue_tail(&list, skb);
|
|
|
|
|
|
|
|
skb = skb_dequeue(head);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (skb_peek(&list)) {
|
|
|
|
spin_lock_irq(&head->lock);
|
|
|
|
while ((skb = __skb_dequeue(&list)) != NULL)
|
|
|
|
__skb_queue_tail(head, skb);
|
|
|
|
spin_unlock_irq(&head->lock);
|
|
|
|
}
|
|
|
|
#else
|
|
|
|
fput(file);
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2021-04-01 14:43:40 +00:00
|
|
|
static void __io_rsrc_put_work(struct io_rsrc_node *ref_node)
|
2019-10-26 13:20:21 +00:00
|
|
|
{
|
2021-04-01 14:43:40 +00:00
|
|
|
struct io_rsrc_data *rsrc_data = ref_node->rsrc_data;
|
2021-01-15 17:37:44 +00:00
|
|
|
struct io_ring_ctx *ctx = rsrc_data->ctx;
|
|
|
|
struct io_rsrc_put *prsrc, *tmp;
|
2020-03-31 06:05:18 +00:00
|
|
|
|
2021-01-15 17:37:44 +00:00
|
|
|
list_for_each_entry_safe(prsrc, tmp, &ref_node->rsrc_list, list) {
|
|
|
|
list_del(&prsrc->list);
|
2021-04-25 13:32:18 +00:00
|
|
|
|
|
|
|
if (prsrc->tag) {
|
|
|
|
bool lock_ring = ctx->flags & IORING_SETUP_IOPOLL;
|
|
|
|
|
|
|
|
io_ring_submit_lock(ctx, lock_ring);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2021-04-25 13:32:18 +00:00
|
|
|
io_cqring_fill_event(ctx, prsrc->tag, 0, 0);
|
2021-04-27 15:13:51 +00:00
|
|
|
ctx->cq_extra++;
|
2021-04-25 13:32:18 +00:00
|
|
|
io_commit_cqring(ctx);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-04-25 13:32:18 +00:00
|
|
|
io_cqring_ev_posted(ctx);
|
|
|
|
io_ring_submit_unlock(ctx, lock_ring);
|
|
|
|
}
|
|
|
|
|
2021-04-01 14:43:44 +00:00
|
|
|
rsrc_data->do_put(ctx, prsrc);
|
2021-01-15 17:37:44 +00:00
|
|
|
kfree(prsrc);
|
2019-10-26 13:20:21 +00:00
|
|
|
}
|
2020-03-31 06:05:18 +00:00
|
|
|
|
2021-04-01 14:43:47 +00:00
|
|
|
io_rsrc_node_destroy(ref_node);
|
2021-04-11 00:46:34 +00:00
|
|
|
if (atomic_dec_and_test(&rsrc_data->refs))
|
|
|
|
complete(&rsrc_data->done);
|
2020-02-05 02:54:55 +00:00
|
|
|
}
|
2019-10-26 13:20:21 +00:00
|
|
|
|
2021-01-15 17:37:44 +00:00
|
|
|
static void io_rsrc_put_work(struct work_struct *work)
|
2020-05-14 23:21:15 +00:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx;
|
|
|
|
struct llist_node *node;
|
|
|
|
|
2021-01-15 17:37:44 +00:00
|
|
|
ctx = container_of(work, struct io_ring_ctx, rsrc_put_work.work);
|
|
|
|
node = llist_del_all(&ctx->rsrc_put_llist);
|
2020-05-14 23:21:15 +00:00
|
|
|
|
|
|
|
while (node) {
|
2021-04-01 14:43:40 +00:00
|
|
|
struct io_rsrc_node *ref_node;
|
2020-05-14 23:21:15 +00:00
|
|
|
struct llist_node *next = node->next;
|
|
|
|
|
2021-04-01 14:43:40 +00:00
|
|
|
ref_node = llist_entry(node, struct io_rsrc_node, llist);
|
2021-01-15 17:37:44 +00:00
|
|
|
__io_rsrc_put_work(ref_node);
|
2020-05-14 23:21:15 +00:00
|
|
|
node = next;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-01-11 05:13:58 +00:00
|
|
|
static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
|
2021-04-25 13:32:21 +00:00
|
|
|
unsigned nr_args, u64 __user *tags)
|
2019-01-11 05:13:58 +00:00
|
|
|
{
|
|
|
|
__s32 __user *fds = (__s32 __user *) arg;
|
2019-12-09 18:22:50 +00:00
|
|
|
struct file *file;
|
2021-04-01 14:43:42 +00:00
|
|
|
int fd, ret;
|
2021-04-01 14:44:03 +00:00
|
|
|
unsigned i;
|
2019-01-11 05:13:58 +00:00
|
|
|
|
2019-12-09 18:22:50 +00:00
|
|
|
if (ctx->file_data)
|
2019-01-11 05:13:58 +00:00
|
|
|
return -EBUSY;
|
|
|
|
if (!nr_args)
|
|
|
|
return -EINVAL;
|
|
|
|
if (nr_args > IORING_MAX_FIXED_FILES)
|
|
|
|
return -EMFILE;
|
2021-04-01 14:43:46 +00:00
|
|
|
ret = io_rsrc_node_switch_start(ctx);
|
2021-04-01 14:43:42 +00:00
|
|
|
if (ret)
|
|
|
|
return ret;
|
2021-06-14 01:36:18 +00:00
|
|
|
ret = io_rsrc_data_alloc(ctx, io_rsrc_file_put, tags, nr_args,
|
|
|
|
&ctx->file_data);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
2019-01-11 05:13:58 +00:00
|
|
|
|
2021-04-01 14:43:42 +00:00
|
|
|
ret = -ENOMEM;
|
2021-04-11 00:46:37 +00:00
|
|
|
if (!io_alloc_file_tables(&ctx->file_table, nr_args))
|
2021-01-15 17:37:51 +00:00
|
|
|
goto out_free;
|
2019-10-26 13:20:21 +00:00
|
|
|
|
2019-10-03 14:11:03 +00:00
|
|
|
for (i = 0; i < nr_args; i++, ctx->nr_user_files++) {
|
2021-06-14 01:36:18 +00:00
|
|
|
if (copy_from_user(&fd, &fds[i], sizeof(fd))) {
|
2020-10-10 17:34:15 +00:00
|
|
|
ret = -EFAULT;
|
|
|
|
goto out_fput;
|
|
|
|
}
|
2019-10-03 14:11:03 +00:00
|
|
|
/* allow sparse sets */
|
2021-04-25 13:32:21 +00:00
|
|
|
if (fd == -1) {
|
|
|
|
ret = -EINVAL;
|
2021-06-14 01:36:21 +00:00
|
|
|
if (unlikely(*io_get_tag_slot(ctx->file_data, i)))
|
2021-04-25 13:32:21 +00:00
|
|
|
goto out_fput;
|
2019-10-03 14:11:03 +00:00
|
|
|
continue;
|
2021-04-25 13:32:21 +00:00
|
|
|
}
|
2019-01-11 05:13:58 +00:00
|
|
|
|
2019-12-09 18:22:50 +00:00
|
|
|
file = fget(fd);
|
2019-01-11 05:13:58 +00:00
|
|
|
ret = -EBADF;
|
2021-04-25 13:32:21 +00:00
|
|
|
if (unlikely(!file))
|
2020-10-10 17:34:15 +00:00
|
|
|
goto out_fput;
|
2019-12-09 18:22:50 +00:00
|
|
|
|
2019-01-11 05:13:58 +00:00
|
|
|
/*
|
|
|
|
* Don't allow io_uring instances to be registered. If UNIX
|
|
|
|
* isn't enabled, then this causes a reference cycle and this
|
|
|
|
* instance can never get freed. If UNIX is enabled we'll
|
|
|
|
* handle it just fine, but there's still no point in allowing
|
|
|
|
* a ring fd as it doesn't support regular read/write anyway.
|
|
|
|
*/
|
2019-12-09 18:22:50 +00:00
|
|
|
if (file->f_op == &io_uring_fops) {
|
|
|
|
fput(file);
|
2020-10-10 17:34:15 +00:00
|
|
|
goto out_fput;
|
2019-01-11 05:13:58 +00:00
|
|
|
}
|
2021-04-11 00:46:37 +00:00
|
|
|
io_fixed_file_set(io_fixed_file_slot(&ctx->file_table, i), file);
|
2019-01-11 05:13:58 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
ret = io_sqe_files_scm(ctx);
|
2020-03-31 06:05:18 +00:00
|
|
|
if (ret) {
|
2021-04-13 01:58:38 +00:00
|
|
|
__io_sqe_files_unregister(ctx);
|
2020-03-31 06:05:18 +00:00
|
|
|
return ret;
|
|
|
|
}
|
2019-01-11 05:13:58 +00:00
|
|
|
|
2021-04-01 14:43:46 +00:00
|
|
|
io_rsrc_node_switch(ctx, NULL);
|
2019-01-11 05:13:58 +00:00
|
|
|
return ret;
|
2020-10-10 17:34:15 +00:00
|
|
|
out_fput:
|
|
|
|
for (i = 0; i < ctx->nr_user_files; i++) {
|
|
|
|
file = io_file_from_index(ctx, i);
|
|
|
|
if (file)
|
|
|
|
fput(file);
|
|
|
|
}
|
2021-08-09 12:04:01 +00:00
|
|
|
io_free_file_tables(&ctx->file_table);
|
2020-10-10 17:34:15 +00:00
|
|
|
ctx->nr_user_files = 0;
|
|
|
|
out_free:
|
2021-04-25 13:32:16 +00:00
|
|
|
io_rsrc_data_free(ctx->file_data);
|
2020-10-14 13:35:57 +00:00
|
|
|
ctx->file_data = NULL;
|
2019-01-11 05:13:58 +00:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2019-10-03 19:59:56 +00:00
|
|
|
static int io_sqe_file_register(struct io_ring_ctx *ctx, struct file *file,
|
|
|
|
int index)
|
|
|
|
{
|
|
|
|
#if defined(CONFIG_UNIX)
|
|
|
|
struct sock *sock = ctx->ring_sock->sk;
|
|
|
|
struct sk_buff_head *head = &sock->sk_receive_queue;
|
|
|
|
struct sk_buff *skb;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* See if we can merge this file into an existing skb SCM_RIGHTS
|
|
|
|
* file set. If there's no room, fall back to allocating a new skb
|
|
|
|
* and filling it in.
|
|
|
|
*/
|
|
|
|
spin_lock_irq(&head->lock);
|
|
|
|
skb = skb_peek(head);
|
|
|
|
if (skb) {
|
|
|
|
struct scm_fp_list *fpl = UNIXCB(skb).fp;
|
|
|
|
|
|
|
|
if (fpl->count < SCM_MAX_FD) {
|
|
|
|
__skb_unlink(skb, head);
|
|
|
|
spin_unlock_irq(&head->lock);
|
|
|
|
fpl->fp[fpl->count] = get_file(file);
|
|
|
|
unix_inflight(fpl->user, fpl->fp[fpl->count]);
|
|
|
|
fpl->count++;
|
|
|
|
spin_lock_irq(&head->lock);
|
|
|
|
__skb_queue_head(head, skb);
|
|
|
|
} else {
|
|
|
|
skb = NULL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
spin_unlock_irq(&head->lock);
|
|
|
|
|
|
|
|
if (skb) {
|
|
|
|
fput(file);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
return __io_sqe_files_scm(ctx, 1, index);
|
|
|
|
#else
|
|
|
|
return 0;
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2021-04-25 13:32:18 +00:00
|
|
|
static int io_queue_rsrc_removal(struct io_rsrc_data *data, unsigned idx,
|
2021-04-01 14:43:45 +00:00
|
|
|
struct io_rsrc_node *node, void *rsrc)
|
2019-12-09 18:22:50 +00:00
|
|
|
{
|
2021-01-15 17:37:44 +00:00
|
|
|
struct io_rsrc_put *prsrc;
|
2019-12-09 18:22:50 +00:00
|
|
|
|
2021-01-15 17:37:44 +00:00
|
|
|
prsrc = kzalloc(sizeof(*prsrc), GFP_KERNEL);
|
|
|
|
if (!prsrc)
|
2020-03-23 09:47:15 +00:00
|
|
|
return -ENOMEM;
|
2019-12-09 18:22:50 +00:00
|
|
|
|
2021-06-14 01:36:21 +00:00
|
|
|
prsrc->tag = *io_get_tag_slot(data, idx);
|
2021-01-15 17:37:45 +00:00
|
|
|
prsrc->rsrc = rsrc;
|
2021-04-01 14:43:45 +00:00
|
|
|
list_add(&prsrc->list, &node->rsrc_list);
|
2020-03-23 09:47:15 +00:00
|
|
|
return 0;
|
2019-12-09 18:22:50 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static int __io_sqe_files_update(struct io_ring_ctx *ctx,
|
2021-04-25 13:32:22 +00:00
|
|
|
struct io_uring_rsrc_update2 *up,
|
2019-12-09 18:22:50 +00:00
|
|
|
unsigned nr_args)
|
|
|
|
{
|
2021-04-25 13:32:22 +00:00
|
|
|
u64 __user *tags = u64_to_user_ptr(up->tags);
|
2021-04-25 13:32:19 +00:00
|
|
|
__s32 __user *fds = u64_to_user_ptr(up->data);
|
2021-04-01 14:43:40 +00:00
|
|
|
struct io_rsrc_data *data = ctx->file_data;
|
2021-04-01 14:44:04 +00:00
|
|
|
struct io_fixed_file *file_slot;
|
|
|
|
struct file *file;
|
2021-04-25 13:32:19 +00:00
|
|
|
int fd, i, err = 0;
|
|
|
|
unsigned int done;
|
2020-03-31 06:05:18 +00:00
	bool needs_switch = false;

	if (!ctx->file_data)
		return -ENXIO;
	if (up->offset + nr_args > ctx->nr_user_files)
		return -EINVAL;

	for (done = 0; done < nr_args; done++) {
		u64 tag = 0;

		if ((tags && copy_from_user(&tag, &tags[done], sizeof(tag))) ||
		    copy_from_user(&fd, &fds[done], sizeof(fd))) {
			err = -EFAULT;
			break;
		}
		if ((fd == IORING_REGISTER_FILES_SKIP || fd == -1) && tag) {
			err = -EINVAL;
			break;
		}
		if (fd == IORING_REGISTER_FILES_SKIP)
			continue;

		i = array_index_nospec(up->offset + done, ctx->nr_user_files);
		file_slot = io_fixed_file_slot(&ctx->file_table, i);

		if (file_slot->file_ptr) {
			file = (struct file *)(file_slot->file_ptr & FFS_MASK);
			err = io_queue_rsrc_removal(data, up->offset + done,
						    ctx->rsrc_node, file);
			if (err)
				break;
			file_slot->file_ptr = 0;
			needs_switch = true;
		}
		if (fd != -1) {
			file = fget(fd);
			if (!file) {
				err = -EBADF;
				break;
			}
			/*
			 * Don't allow io_uring instances to be registered. If
			 * UNIX isn't enabled, then this causes a reference
			 * cycle and this instance can never get freed. If UNIX
			 * is enabled we'll handle it just fine, but there's
			 * still no point in allowing a ring fd as it doesn't
			 * support regular read/write anyway.
			 */
			if (file->f_op == &io_uring_fops) {
				fput(file);
				err = -EBADF;
				break;
			}
			*io_get_tag_slot(data, up->offset + done) = tag;
			io_fixed_file_set(file_slot, file);
			err = io_sqe_file_register(ctx, file, i);
			if (err) {
				file_slot->file_ptr = 0;
				fput(file);
				break;
			}
		}
	}

	if (needs_switch)
		io_rsrc_node_switch(ctx, data);
	return done ? done : err;
}
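
/*
 * Illustrative userspace sketch (not part of this file): the update above is
 * driven from io_uring_register(2), with the field layout mirroring struct
 * io_uring_rsrc_update2 as consumed here. The exact opcode name and how the
 * final length argument is interpreted are assumptions of this sketch.
 *
 *	int fds[2]    = { new_fd, -1 };		// -1 clears the second slot
 *	__u64 tags[2] = { 0xabc, 0 };		// a tag may not pair with -1
 *	struct io_uring_rsrc_update2 up = {
 *		.offset = 5,			// first fixed-file slot to touch
 *		.data   = (__u64)(uintptr_t)fds,
 *		.tags   = (__u64)(uintptr_t)tags,
 *		.nr     = 2,
 *	};
 *	io_uring_register(ring_fd, IORING_REGISTER_FILES_UPDATE2, &up, sizeof(up));
 */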

static struct io_wq *io_init_wq_offload(struct io_ring_ctx *ctx,
					struct task_struct *task)
{
	struct io_wq_hash *hash;
	struct io_wq_data data;
	unsigned int concurrency;

	mutex_lock(&ctx->uring_lock);
	hash = ctx->hash_map;
	if (!hash) {
		hash = kzalloc(sizeof(*hash), GFP_KERNEL);
		if (!hash) {
			mutex_unlock(&ctx->uring_lock);
			return ERR_PTR(-ENOMEM);
		}
		refcount_set(&hash->refs, 1);
		init_waitqueue_head(&hash->wait);
		ctx->hash_map = hash;
	}
	mutex_unlock(&ctx->uring_lock);

	data.hash = hash;
	data.task = task;
	data.free_work = io_wq_free_work;
	data.do_work = io_wq_submit_work;

	/* Do QD, or 4 * CPUS, whatever is smallest */
	concurrency = min(ctx->sq_entries, 4 * num_online_cpus());

	return io_wq_create(concurrency, &data);
}
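
/*
 * Example (illustrative numbers): with sq_entries = 4096 on an 8-CPU box the
 * io-wq is created with concurrency = min(4096, 4 * 8) = 32, i.e. the async
 * worker pool for this ring is capped at 32 regardless of queue depth.
 */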

static int io_uring_alloc_task_context(struct task_struct *task,
				       struct io_ring_ctx *ctx)
{
	struct io_uring_task *tctx;
	int ret;

	tctx = kzalloc(sizeof(*tctx), GFP_KERNEL);
	if (unlikely(!tctx))
		return -ENOMEM;

	ret = percpu_counter_init(&tctx->inflight, 0, GFP_KERNEL);
	if (unlikely(ret)) {
		kfree(tctx);
		return ret;
	}

	tctx->io_wq = io_init_wq_offload(ctx, task);
	if (IS_ERR(tctx->io_wq)) {
		ret = PTR_ERR(tctx->io_wq);
		percpu_counter_destroy(&tctx->inflight);
		kfree(tctx);
		return ret;
	}

	xa_init(&tctx->xa);
	init_waitqueue_head(&tctx->wait);
	atomic_set(&tctx->in_idle, 0);
	atomic_set(&tctx->inflight_tracked, 0);
	task->io_uring = tctx;
	spin_lock_init(&tctx->task_lock);
	INIT_WQ_LIST(&tctx->task_list);
	init_task_work(&tctx->task_work, tctx_task_work);
	return 0;
}

void __io_uring_free(struct task_struct *tsk)
{
	struct io_uring_task *tctx = tsk->io_uring;

	WARN_ON_ONCE(!xa_empty(&tctx->xa));
	WARN_ON_ONCE(tctx->io_wq);
	WARN_ON_ONCE(tctx->cached_refs);

	percpu_counter_destroy(&tctx->inflight);
	kfree(tctx);
	tsk->io_uring = NULL;
}

static int io_sq_offload_create(struct io_ring_ctx *ctx,
				struct io_uring_params *p)
{
	int ret;

	/* Retain compatibility with failing for an invalid attach attempt */
	if ((ctx->flags & (IORING_SETUP_ATTACH_WQ | IORING_SETUP_SQPOLL)) ==
				IORING_SETUP_ATTACH_WQ) {
		struct fd f;

		f = fdget(p->wq_fd);
		if (!f.file)
			return -ENXIO;
		if (f.file->f_op != &io_uring_fops) {
			fdput(f);
			return -EINVAL;
		}
		fdput(f);
	}
	if (ctx->flags & IORING_SETUP_SQPOLL) {
		struct task_struct *tsk;
		struct io_sq_data *sqd;
		bool attached;

		sqd = io_get_sq_data(p, &attached);
		if (IS_ERR(sqd)) {
			ret = PTR_ERR(sqd);
			goto err;
		}

		ctx->sq_creds = get_current_cred();
		ctx->sq_data = sqd;
		ctx->sq_thread_idle = msecs_to_jiffies(p->sq_thread_idle);
		if (!ctx->sq_thread_idle)
			ctx->sq_thread_idle = HZ;

		io_sq_thread_park(sqd);
		list_add(&ctx->sqd_list, &sqd->ctx_list);
		io_sqd_update_thread_idle(sqd);
		/* don't attach to a dying SQPOLL thread, would be racy */
		ret = (attached && !sqd->thread) ? -ENXIO : 0;
		io_sq_thread_unpark(sqd);

		if (ret < 0)
			goto err;
		if (attached)
			return 0;

		if (p->flags & IORING_SETUP_SQ_AFF) {
			int cpu = p->sq_thread_cpu;

			ret = -EINVAL;
			if (cpu >= nr_cpu_ids || !cpu_online(cpu))
				goto err_sqpoll;
			sqd->sq_cpu = cpu;
		} else {
			sqd->sq_cpu = -1;
		}

		sqd->task_pid = current->pid;
		sqd->task_tgid = current->tgid;
		tsk = create_io_thread(io_sq_thread, sqd, NUMA_NO_NODE);
		if (IS_ERR(tsk)) {
			ret = PTR_ERR(tsk);
			goto err_sqpoll;
		}

		sqd->thread = tsk;
		ret = io_uring_alloc_task_context(tsk, ctx);
		wake_up_new_task(tsk);
		if (ret)
			goto err;
	} else if (p->flags & IORING_SETUP_SQ_AFF) {
		/* Can't have SQ_AFF without SQPOLL */
		ret = -EINVAL;
		goto err;
	}

	return 0;
err_sqpoll:
	complete(&ctx->sq_data->exited);
err:
	io_sq_thread_finish(ctx);
	return ret;
}
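
/*
 * Usage note (sketch): with IORING_SETUP_SQPOLL the thread created above
 * polls the SQ ring, so the application normally submits without entering
 * the kernel at all. Once the thread has been idle for sq_thread_idle it
 * sets IORING_SQ_NEED_WAKEUP in the SQ ring flags, and the application is
 * expected to guard its submissions roughly like this (the barrier helper
 * and the enter wrapper shown are illustrative):
 *
 *	read_barrier();
 *	if (*sq_ring_flags & IORING_SQ_NEED_WAKEUP)
 *		io_uring_enter(ring_fd, to_submit, 0, IORING_ENTER_SQ_WAKEUP);
 */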

static inline void __io_unaccount_mem(struct user_struct *user,
				      unsigned long nr_pages)
{
	atomic_long_sub(nr_pages, &user->locked_vm);
}

static inline int __io_account_mem(struct user_struct *user,
				   unsigned long nr_pages)
{
	unsigned long page_limit, cur_pages, new_pages;

	/* Don't allow more pages than we can safely lock */
	page_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;

	do {
		cur_pages = atomic_long_read(&user->locked_vm);
		new_pages = cur_pages + nr_pages;
		if (new_pages > page_limit)
			return -ENOMEM;
	} while (atomic_long_cmpxchg(&user->locked_vm, cur_pages,
					new_pages) != cur_pages);

	return 0;
}
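
/*
 * Worked example (assuming a common RLIMIT_MEMLOCK of 64 KiB and 4 KiB
 * pages): page_limit = 65536 >> 12 = 16 pages. If locked_vm already holds
 * 10 pages, charging 8 more gives new_pages = 18 > 16, so the check above
 * fails with -ENOMEM before any cmpxchg; a racing uncharge simply makes the
 * loop retry against the fresh counter value.
 */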

static void io_unaccount_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
{
	if (ctx->user)
		__io_unaccount_mem(ctx->user, nr_pages);

	if (ctx->mm_account)
		atomic64_sub(nr_pages, &ctx->mm_account->pinned_vm);
}

static int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
{
	int ret;

	if (ctx->user) {
		ret = __io_account_mem(ctx->user, nr_pages);
		if (ret)
			return ret;
	}

	if (ctx->mm_account)
		atomic64_add(nr_pages, &ctx->mm_account->pinned_vm);

	return 0;
}

static void io_mem_free(void *ptr)
{
	struct page *page;

	if (!ptr)
		return;

	page = virt_to_head_page(ptr);
	if (put_page_testzero(page))
		free_compound_page(page);
}

static void *io_mem_alloc(size_t size)
{
	gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO | __GFP_NOWARN | __GFP_COMP |
				__GFP_NORETRY | __GFP_ACCOUNT;

        return (void *) __get_free_pages(gfp_flags, get_order(size));
}

static unsigned long rings_size(unsigned sq_entries, unsigned cq_entries,
                                size_t *sq_offset)
{
        struct io_rings *rings;
        size_t off, sq_array_size;

        off = struct_size(rings, cqes, cq_entries);
        if (off == SIZE_MAX)
                return SIZE_MAX;

#ifdef CONFIG_SMP
        off = ALIGN(off, SMP_CACHE_BYTES);
        if (off == 0)
                return SIZE_MAX;
#endif

        if (sq_offset)
                *sq_offset = off;

        sq_array_size = array_size(sizeof(u32), sq_entries);
        if (sq_array_size == SIZE_MAX)
                return SIZE_MAX;

        if (check_add_overflow(off, sq_array_size, &off))
                return SIZE_MAX;

        return off;
}
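
The function above sizes the kernel-side allocation for the two rings: the io_rings header plus cq_entries CQEs, then (SMP_CACHE_BYTES aligned) an array of sq_entries u32 SQ indices, with *sq_offset reporting where that index array begins. Userspace discovers the same layout from the sq_off/cq_off fields filled in by io_uring_setup() and maps it with mmap(). The following is a minimal userspace sketch, not part of this file; it assumes <linux/io_uring.h> and a libc that exposes __NR_io_uring_setup.

/* Userspace sketch: set up a ring and map SQ ring, CQ ring and SQE array. */
#include <linux/io_uring.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

static int setup_rings(unsigned entries)
{
        struct io_uring_params p = { 0 };
        int fd = syscall(__NR_io_uring_setup, entries, &p);

        if (fd < 0)
                return -1;

        /* SQ ring: header fields plus the u32 index array at sq_off.array */
        size_t sq_sz = p.sq_off.array + p.sq_entries * sizeof(__u32);
        /* CQ ring: header fields plus the CQE array at cq_off.cqes */
        size_t cq_sz = p.cq_off.cqes + p.cq_entries * sizeof(struct io_uring_cqe);

        void *sq = mmap(NULL, sq_sz, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_SQ_RING);
        void *cq = mmap(NULL, cq_sz, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_CQ_RING);
        struct io_uring_sqe *sqes = mmap(NULL,
                        p.sq_entries * sizeof(struct io_uring_sqe),
                        PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
                        fd, IORING_OFF_SQES);

        return (sq == MAP_FAILED || cq == MAP_FAILED ||
                sqes == MAP_FAILED) ? -1 : fd;
}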

static void io_buffer_unmap(struct io_ring_ctx *ctx, struct io_mapped_ubuf **slot)
{
        struct io_mapped_ubuf *imu = *slot;
        unsigned int i;

        if (imu != ctx->dummy_ubuf) {
                for (i = 0; i < imu->nr_bvecs; i++)
                        unpin_user_page(imu->bvec[i].bv_page);
                if (imu->acct_pages)
                        io_unaccount_mem(ctx, imu->acct_pages);
                kvfree(imu);
        }
        *slot = NULL;
}

io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
setup the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having setup an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to setup a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per buffer size is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-09 16:16:05 +00:00

static void io_rsrc_buf_put(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc)
{
        io_buffer_unmap(ctx, &prsrc->buf);
        prsrc->buf = NULL;
}
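
The commit message above describes the userspace contract for fixed buffers. As a rough illustration of that flow (not part of this file), using liburing's wrappers rather than raw io_uring_register() calls:

/* Userspace sketch using liburing; error handling trimmed for brevity. */
#include <liburing.h>
#include <stdlib.h>

int read_with_fixed_buffer(int file_fd)
{
        struct io_uring ring;
        struct iovec iov = { .iov_base = malloc(4096), .iov_len = 4096 };

        io_uring_queue_init(8, &ring, 0);
        /* IORING_REGISTER_BUFFERS under the hood: pin iov once, up front */
        io_uring_register_buffers(&ring, &iov, 1);

        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        /* buf_index 0 selects the registered buffer; the requested range
         * must stay inside the originally registered iovec */
        io_uring_prep_read_fixed(sqe, file_fd, iov.iov_base, 4096, 0, 0);
        io_uring_submit(&ring);

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        int res = cqe->res;
        io_uring_cqe_seen(&ring, cqe);

        /* buffers may be dropped explicitly, or implicitly at ring teardown */
        io_uring_unregister_buffers(&ring);
        io_uring_queue_exit(&ring);
        return res;
}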

static void __io_sqe_buffers_unregister(struct io_ring_ctx *ctx)
{
        unsigned int i;

        for (i = 0; i < ctx->nr_user_bufs; i++)
                io_buffer_unmap(ctx, &ctx->user_bufs[i]);
        kfree(ctx->user_bufs);
        io_rsrc_data_free(ctx->buf_data);
        ctx->user_bufs = NULL;
        ctx->buf_data = NULL;
        ctx->nr_user_bufs = 0;
}

static int io_sqe_buffers_unregister(struct io_ring_ctx *ctx)
{
        int ret;

        if (!ctx->buf_data)
                return -ENXIO;

        ret = io_rsrc_ref_quiesce(ctx->buf_data, ctx);
        if (!ret)
                __io_sqe_buffers_unregister(ctx);
        return ret;
}

static int io_copy_iov(struct io_ring_ctx *ctx, struct iovec *dst,
                       void __user *arg, unsigned index)
{
        struct iovec __user *src;

#ifdef CONFIG_COMPAT
        if (ctx->compat) {
                struct compat_iovec __user *ciovs;
                struct compat_iovec ciov;

                ciovs = (struct compat_iovec __user *) arg;
                if (copy_from_user(&ciov, &ciovs[index], sizeof(ciov)))
                        return -EFAULT;

                dst->iov_base = u64_to_user_ptr((u64)ciov.iov_base);
                dst->iov_len = ciov.iov_len;
                return 0;
        }
#endif
        src = (struct iovec __user *) arg;
        if (copy_from_user(dst, &src[index], sizeof(*dst)))
                return -EFAULT;
        return 0;
}

/*
 * Not super efficient, but this is just a registration time. And we do cache
 * the last compound head, so generally we'll only do a full search if we don't
 * match that one.
 *
 * We check if the given compound head page has already been accounted, to
 * avoid double accounting it. This allows us to account the full size of the
 * page, not just the constituent pages of a huge page.
 */
static bool headpage_already_acct(struct io_ring_ctx *ctx, struct page **pages,
                                  int nr_pages, struct page *hpage)
{
        int i, j;

        /* check current page array */
        for (i = 0; i < nr_pages; i++) {
                if (!PageCompound(pages[i]))
                        continue;
                if (compound_head(pages[i]) == hpage)
                        return true;
        }

        /* check previously registered pages */
        for (i = 0; i < ctx->nr_user_bufs; i++) {
                struct io_mapped_ubuf *imu = ctx->user_bufs[i];

                for (j = 0; j < imu->nr_bvecs; j++) {
                        if (!PageCompound(imu->bvec[j].bv_page))
                                continue;
                        if (compound_head(imu->bvec[j].bv_page) == hpage)
                                return true;
                }
        }

        return false;
}

static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
                                 int nr_pages, struct io_mapped_ubuf *imu,
                                 struct page **last_hpage)
{
        int i, ret;

        imu->acct_pages = 0;
        for (i = 0; i < nr_pages; i++) {
                if (!PageCompound(pages[i])) {
                        imu->acct_pages++;
                } else {
                        struct page *hpage;

                        hpage = compound_head(pages[i]);
                        if (hpage == *last_hpage)
                                continue;
                        *last_hpage = hpage;
                        if (headpage_already_acct(ctx, pages, i, hpage))
                                continue;
                        imu->acct_pages += page_size(hpage) >> PAGE_SHIFT;
                }
        }

        if (!imu->acct_pages)
                return 0;

        ret = io_account_mem(ctx, imu->acct_pages);
        if (ret)
                imu->acct_pages = 0;
        return ret;
}
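
io_buffer_account_pin() charges a huge page once per registration even when many of its base pages are pinned, using the compound head as the dedup key. The snippet below is a standalone toy model of that counting rule, purely illustrative and not kernel code; the head values and the 512-pages-per-huge-page constant are made up for the example.

/* Toy model: head[i] stands in for compound_head(pages[i]); 0 means the
 * page is not part of a compound (huge) page. Each distinct huge page is
 * counted once at its full size, normal pages one at a time. */
#include <stdio.h>

#define BASE_PAGES_PER_HUGE_PAGE 512    /* 2 MiB huge page / 4 KiB pages */

static unsigned long account(const unsigned long *head, int nr)
{
        unsigned long acct = 0, last = 0;

        for (int i = 0; i < nr; i++) {
                if (!head[i]) {                 /* normal page */
                        acct++;
                        continue;
                }
                if (head[i] == last)            /* same huge page as before */
                        continue;
                last = head[i];
                int seen = 0;
                for (int j = 0; j < i; j++)     /* accounted earlier already? */
                        if (head[j] == head[i])
                                seen = 1;
                if (!seen)
                        acct += BASE_PAGES_PER_HUGE_PAGE;
        }
        return acct;
}

int main(void)
{
        /* three base pages backed by one huge page, plus one normal page */
        unsigned long head[] = { 0xabc000, 0xabc000, 0xabc000, 0 };

        printf("%lu\n", account(head, 4));      /* prints 513 */
        return 0;
}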

static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
                                  struct io_mapped_ubuf **pimu,
                                  struct page **last_hpage)
{
        struct io_mapped_ubuf *imu = NULL;
        struct vm_area_struct **vmas = NULL;
        struct page **pages = NULL;
        unsigned long off, start, end, ubuf;
        size_t size;
        int ret, pret, nr_pages, i;

        if (!iov->iov_base) {
                *pimu = ctx->dummy_ubuf;
                return 0;
        }

        ubuf = (unsigned long) iov->iov_base;
        end = (ubuf + iov->iov_len + PAGE_SIZE - 1) >> PAGE_SHIFT;
        start = ubuf >> PAGE_SHIFT;
        nr_pages = end - start;

        *pimu = NULL;
        ret = -ENOMEM;

        pages = kvmalloc_array(nr_pages, sizeof(struct page *), GFP_KERNEL);
        if (!pages)
                goto done;

        vmas = kvmalloc_array(nr_pages, sizeof(struct vm_area_struct *),
                              GFP_KERNEL);
        if (!vmas)
                goto done;

        imu = kvmalloc(struct_size(imu, bvec, nr_pages), GFP_KERNEL);
        if (!imu)
                goto done;

        ret = 0;
        mmap_read_lock(current->mm);
        pret = pin_user_pages(ubuf, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
                              pages, vmas);
        if (pret == nr_pages) {
                /* don't support file backed memory */
                for (i = 0; i < nr_pages; i++) {
                        struct vm_area_struct *vma = vmas[i];

                        if (vma_is_shmem(vma))
                                continue;
                        if (vma->vm_file &&
                            !is_file_hugepages(vma->vm_file)) {
                                ret = -EOPNOTSUPP;
                                break;
                        }
                }
        } else {
                ret = pret < 0 ? pret : -EFAULT;
        }
        mmap_read_unlock(current->mm);
        if (ret) {
                /*
                 * if we did partial map, or found file backed vmas,
                 * release any pages we did get
                 */
                if (pret > 0)
                        unpin_user_pages(pages, pret);
                goto done;
        }

        ret = io_buffer_account_pin(ctx, pages, pret, imu, last_hpage);
        if (ret) {
                unpin_user_pages(pages, pret);
                goto done;
        }

        off = ubuf & ~PAGE_MASK;
        size = iov->iov_len;
        for (i = 0; i < nr_pages; i++) {
                size_t vec_len;

                vec_len = min_t(size_t, size, PAGE_SIZE - off);
                imu->bvec[i].bv_page = pages[i];
                imu->bvec[i].bv_len = vec_len;
                imu->bvec[i].bv_offset = off;
                off = 0;
                size -= vec_len;
        }
        /* store original address for later verification */
        imu->ubuf = ubuf;
        imu->ubuf_end = ubuf + iov->iov_len;
        imu->nr_bvecs = nr_pages;
        *pimu = imu;
        ret = 0;
done:
        if (ret)
                kvfree(imu);
        kvfree(pages);
        kvfree(vmas);
        return ret;
}
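
Per the commit message above, the pinned pages are charged against RLIMIT_MEMLOCK (exactly how the accounting is applied has shifted across kernel versions), so an application registering large buffers may want to sanity-check its limit first. A hedged userspace sketch, not part of this file:

/* Best-effort check that RLIMIT_MEMLOCK can cover the buffers we plan to
 * register; whether a given kernel charges locked_vm or memcg for these
 * pins varies by version, so treat a failure here as advisory. */
#include <stdio.h>
#include <sys/resource.h>

static int check_memlock(size_t bytes_to_register)
{
        struct rlimit rl;

        if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0)
                return -1;
        if (rl.rlim_cur != RLIM_INFINITY && rl.rlim_cur < bytes_to_register) {
                fprintf(stderr, "RLIMIT_MEMLOCK too low: %llu < %zu\n",
                        (unsigned long long)rl.rlim_cur, bytes_to_register);
                return -1;
        }
        return 0;
}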

static int io_buffers_map_alloc(struct io_ring_ctx *ctx, unsigned int nr_args)
{
        ctx->user_bufs = kcalloc(nr_args, sizeof(*ctx->user_bufs), GFP_KERNEL);
        return ctx->user_bufs ? 0 : -ENOMEM;
}

static int io_buffer_validate(struct iovec *iov)
{
        unsigned long tmp, acct_len = iov->iov_len + (PAGE_SIZE - 1);

        /*
         * Don't impose further limits on the size and buffer
         * constraints here, we'll -EINVAL later when IO is
         * submitted if they are wrong.
         */
        if (!iov->iov_base)
                return iov->iov_len ? -EFAULT : 0;
        if (!iov->iov_len)
                return -EFAULT;

        /* arbitrary limit, but we need something */
        if (iov->iov_len > SZ_1G)
                return -EFAULT;

        if (check_add_overflow((unsigned long)iov->iov_base, acct_len, &tmp))
                return -EOVERFLOW;

        return 0;
}
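
The outcomes implied by io_buffer_validate() are: a NULL base with zero length is an allowed sparse slot, a NULL base with a length or a zero-length buffer is rejected, anything over 1 GiB is rejected, and a range that wraps the address space is rejected. A small illustrative set of inputs, annotated with the result the function above would return (64-bit userspace assumed, not part of this file):

#include <stddef.h>
#include <stdint.h>
#include <sys/uio.h>

static char buf[4096];

/* Each entry annotated with the expected io_buffer_validate() result. */
static const struct iovec examples[] = {
        { .iov_base = buf,  .iov_len = sizeof(buf) },      /* 0: accepted       */
        { .iov_base = NULL, .iov_len = 0 },                /* 0: sparse slot    */
        { .iov_base = NULL, .iov_len = 4096 },             /* -EFAULT           */
        { .iov_base = buf,  .iov_len = 0 },                /* -EFAULT           */
        { .iov_base = buf,  .iov_len = 2ULL << 30 },       /* -EFAULT: over 1G  */
        { .iov_base = (void *)UINTPTR_MAX, .iov_len = 2 }, /* -EOVERFLOW: wraps */
};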

static int io_sqe_buffers_register(struct io_ring_ctx *ctx, void __user *arg,
                                   unsigned int nr_args, u64 __user *tags)
{
        struct page *last_hpage = NULL;
        struct io_rsrc_data *data;
        int i, ret;
        struct iovec iov;

        if (ctx->user_bufs)
                return -EBUSY;
        if (!nr_args || nr_args > IORING_MAX_REG_BUFFERS)
                return -EINVAL;
        ret = io_rsrc_node_switch_start(ctx);
        if (ret)
                return ret;
        ret = io_rsrc_data_alloc(ctx, io_rsrc_buf_put, tags, nr_args, &data);
        if (ret)
                return ret;
        ret = io_buffers_map_alloc(ctx, nr_args);
        if (ret) {
                io_rsrc_data_free(data);
                return ret;
        }

        for (i = 0; i < nr_args; i++, ctx->nr_user_bufs++) {
                ret = io_copy_iov(ctx, &iov, arg, i);
                if (ret)
                        break;
                ret = io_buffer_validate(&iov);
                if (ret)
                        break;
                if (!iov.iov_base && *io_get_tag_slot(data, i)) {
                        ret = -EINVAL;
                        break;
                }

                ret = io_sqe_buffer_register(ctx, &iov, &ctx->user_bufs[i],
                                             &last_hpage);
                if (ret)
                        break;
        }

        WARN_ON_ONCE(ctx->buf_data);

        ctx->buf_data = data;
        if (ret)
                __io_sqe_buffers_unregister(ctx);
        else
                io_rsrc_node_switch(ctx, NULL);
        return ret;
}
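
As the loop above shows, a NULL iovec is accepted as a deliberately empty slot (it gets ctx->dummy_ubuf) provided no tag is attached to it. Roughly, from userspace with the raw syscall and no tags (liburing not required; __NR_io_uring_register assumed available from <sys/syscall.h>):

#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>

static char big[1 << 20];

/* Register slot 0 as a real 1 MiB buffer and slot 1 as an empty
 * placeholder that can be filled in later via a buffer update. */
static int register_with_hole(int ring_fd)
{
        struct iovec iovs[2] = {
                { .iov_base = big,  .iov_len = sizeof(big) },
                { .iov_base = NULL, .iov_len = 0 },
        };

        return syscall(__NR_io_uring_register, ring_fd,
                       IORING_REGISTER_BUFFERS, iovs, 2);
}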

static int __io_sqe_buffers_update(struct io_ring_ctx *ctx,
                                   struct io_uring_rsrc_update2 *up,
                                   unsigned int nr_args)
{
        u64 __user *tags = u64_to_user_ptr(up->tags);
        struct iovec iov, __user *iovs = u64_to_user_ptr(up->data);
        struct page *last_hpage = NULL;
        bool needs_switch = false;
        __u32 done;
        int i, err;

        if (!ctx->buf_data)
                return -ENXIO;
        if (up->offset + nr_args > ctx->nr_user_bufs)
                return -EINVAL;

        for (done = 0; done < nr_args; done++) {
                struct io_mapped_ubuf *imu;
                int offset = up->offset + done;
                u64 tag = 0;

                err = io_copy_iov(ctx, &iov, iovs, done);
                if (err)
                        break;
                if (tags && copy_from_user(&tag, &tags[done], sizeof(tag))) {
                        err = -EFAULT;
                        break;
                }
                err = io_buffer_validate(&iov);
                if (err)
                        break;
                if (!iov.iov_base && tag) {
                        err = -EINVAL;
                        break;
                }
                err = io_sqe_buffer_register(ctx, &iov, &imu, &last_hpage);
                if (err)
                        break;

                i = array_index_nospec(offset, ctx->nr_user_bufs);
                if (ctx->user_bufs[i] != ctx->dummy_ubuf) {
                        err = io_queue_rsrc_removal(ctx->buf_data, offset,
                                                    ctx->rsrc_node, ctx->user_bufs[i]);
                        if (unlikely(err)) {
                                io_buffer_unmap(ctx, &imu);
                                break;
                        }
                        ctx->user_bufs[i] = NULL;
                        needs_switch = true;
                }

                ctx->user_bufs[i] = imu;
                *io_get_tag_slot(ctx->buf_data, offset) = tag;
        }

        if (needs_switch)
                io_rsrc_node_switch(ctx, ctx->buf_data);
        return done ? done : err;
}
|
|
|
|
|
2019-04-11 17:45:41 +00:00
|
|
|
static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg)
|
|
|
|
{
|
|
|
|
__s32 __user *fds = arg;
|
|
|
|
int fd;
|
|
|
|
|
|
|
|
if (ctx->cq_ev_fd)
|
|
|
|
return -EBUSY;
|
|
|
|
|
|
|
|
if (copy_from_user(&fd, fds, sizeof(*fds)))
|
|
|
|
return -EFAULT;
|
|
|
|
|
|
|
|
ctx->cq_ev_fd = eventfd_ctx_fdget(fd);
|
|
|
|
if (IS_ERR(ctx->cq_ev_fd)) {
|
|
|
|
int ret = PTR_ERR(ctx->cq_ev_fd);
|
2021-06-24 14:09:57 +00:00
|
|
|
|
2019-04-11 17:45:41 +00:00
|
|
|
ctx->cq_ev_fd = NULL;
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int io_eventfd_unregister(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
if (ctx->cq_ev_fd) {
|
|
|
|
eventfd_ctx_put(ctx->cq_ev_fd);
|
|
|
|
ctx->cq_ev_fd = NULL;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
return -ENXIO;
|
|
|
|
}
|
|
|
|
|
2020-02-23 23:23:11 +00:00
|
|
|
static void io_destroy_buffers(struct io_ring_ctx *ctx)
|
|
|
|
{
|
2021-03-13 19:29:43 +00:00
|
|
|
struct io_buffer *buf;
|
|
|
|
unsigned long index;
|
|
|
|
|
|
|
|
xa_for_each(&ctx->io_buffers, index, buf)
|
|
|
|
__io_remove_buffers(ctx, buf, index, -1U);
|
2020-02-23 23:23:11 +00:00
|
|
|
}
|
|
|
|
|
2021-08-09 19:18:09 +00:00
|
|
|
static void io_req_cache_free(struct list_head *list)
|
2021-02-10 00:03:19 +00:00
|
|
|
{
|
2021-02-13 16:00:02 +00:00
|
|
|
struct io_kiocb *req, *nxt;
|
2021-02-10 00:03:19 +00:00
|
|
|
|
2021-08-09 19:18:10 +00:00
|
|
|
list_for_each_entry_safe(req, nxt, list, inflight_entry) {
|
|
|
|
list_del(&req->inflight_entry);
|
2021-02-10 00:03:19 +00:00
|
|
|
kmem_cache_free(req_cachep, req);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-02-27 22:04:18 +00:00
|
|
|
static void io_req_caches_free(struct io_ring_ctx *ctx)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
2021-08-09 19:18:11 +00:00
|
|
|
struct io_submit_state *state = &ctx->submit_state;
|
2021-02-10 00:03:17 +00:00
|
|
|
|
2021-02-13 16:09:44 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
|
2021-08-09 19:18:11 +00:00
|
|
|
if (state->free_reqs) {
|
|
|
|
kmem_cache_free_bulk(req_cachep, state->free_reqs, state->reqs);
|
|
|
|
state->free_reqs = 0;
|
2021-02-22 11:45:55 +00:00
|
|
|
}
|
2021-02-13 16:09:44 +00:00
|
|
|
|
2021-08-09 19:18:11 +00:00
|
|
|
io_flush_cached_locked_reqs(ctx, state);
|
|
|
|
io_req_cache_free(&state->free_list);
|
2021-02-13 16:09:44 +00:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
}
|
|
|
|
|
2021-08-10 01:44:23 +00:00
|
|
|
static void io_wait_rsrc_data(struct io_rsrc_data *data)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
2021-08-10 01:44:23 +00:00
|
|
|
if (data && !atomic_dec_and_test(&data->refs))
|
2021-04-25 13:32:25 +00:00
|
|
|
wait_for_completion(&data->done);
|
|
|
|
}
|
2021-02-12 03:23:54 +00:00
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
static void io_ring_ctx_free(struct io_ring_ctx *ctx)
|
|
|
|
{
|
2021-02-18 04:03:43 +00:00
|
|
|
io_sq_thread_finish(ctx);
|
2020-09-14 16:45:53 +00:00
|
|
|
|
2021-02-18 04:03:43 +00:00
|
|
|
if (ctx->mm_account) {
|
2020-09-14 16:45:53 +00:00
|
|
|
mmdrop(ctx->mm_account);
|
|
|
|
ctx->mm_account = NULL;
|
2020-06-16 23:36:09 +00:00
|
|
|
}
|
2019-01-09 15:59:42 +00:00
|
|
|
|
2021-08-10 01:44:23 +00:00
|
|
|
/* __io_rsrc_put_work() may need uring_lock to progress, wait w/o it */
|
|
|
|
io_wait_rsrc_data(ctx->buf_data);
|
|
|
|
io_wait_rsrc_data(ctx->file_data);
|
|
|
|
|
2021-02-19 09:19:36 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
2021-08-10 01:44:23 +00:00
|
|
|
if (ctx->buf_data)
|
2021-04-25 13:32:25 +00:00
|
|
|
__io_sqe_buffers_unregister(ctx);
|
2021-08-10 01:44:23 +00:00
|
|
|
if (ctx->file_data)
|
2021-04-13 01:58:38 +00:00
|
|
|
__io_sqe_files_unregister(ctx);
|
2021-04-01 14:43:58 +00:00
|
|
|
if (ctx->rings)
|
|
|
|
__io_cqring_overflow_flush(ctx, true);
|
2021-02-19 09:19:36 +00:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2019-04-11 17:45:41 +00:00
|
|
|
io_eventfd_unregister(ctx);
|
2020-02-23 23:23:11 +00:00
|
|
|
io_destroy_buffers(ctx);
|
2021-04-20 11:03:32 +00:00
|
|
|
if (ctx->sq_creds)
|
|
|
|
put_cred(ctx->sq_creds);
|
2019-01-09 15:59:42 +00:00
|
|
|
|
2021-04-01 14:43:46 +00:00
|
|
|
/* there are no registered resources left, nobody uses it */
|
|
|
|
if (ctx->rsrc_node)
|
|
|
|
io_rsrc_node_destroy(ctx->rsrc_node);
|
2021-03-19 17:22:36 +00:00
|
|
|
if (ctx->rsrc_backup_node)
|
2021-04-01 14:43:40 +00:00
|
|
|
io_rsrc_node_destroy(ctx->rsrc_backup_node);
|
2021-04-01 14:43:46 +00:00
|
|
|
flush_delayed_work(&ctx->rsrc_put_work);
|
|
|
|
|
|
|
|
WARN_ON_ONCE(!list_empty(&ctx->rsrc_ref_list));
|
|
|
|
WARN_ON_ONCE(!llist_empty(&ctx->rsrc_put_llist));
|
2019-01-09 15:59:42 +00:00
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
#if defined(CONFIG_UNIX)
|
2019-06-12 21:58:43 +00:00
|
|
|
if (ctx->ring_sock) {
|
|
|
|
ctx->ring_sock->file = NULL; /* so that iput() is called */
|
2019-01-07 17:46:33 +00:00
|
|
|
sock_release(ctx->ring_sock);
|
2019-06-12 21:58:43 +00:00
|
|
|
}
|
2019-01-07 17:46:33 +00:00
|
|
|
#endif
|
|
|
|
|
2019-08-26 17:23:46 +00:00
|
|
|
io_mem_free(ctx->rings);
|
2019-01-07 17:46:33 +00:00
|
|
|
io_mem_free(ctx->sq_sqes);
|
|
|
|
|
|
|
|
percpu_ref_exit(&ctx->refs);
|
|
|
|
free_uid(ctx->user);
|
2021-02-27 22:04:18 +00:00
|
|
|
io_req_caches_free(ctx);
|
2021-02-19 19:33:30 +00:00
|
|
|
if (ctx->hash_map)
|
|
|
|
io_wq_put_hash(ctx->hash_map);
|
2019-12-05 02:56:40 +00:00
|
|
|
kfree(ctx->cancel_hash);
|
2021-04-28 12:11:29 +00:00
|
|
|
kfree(ctx->dummy_ubuf);
|
2019-01-07 17:46:33 +00:00
|
|
|
kfree(ctx);
|
|
|
|
}
|
|
|
|
|
|
|
|
static __poll_t io_uring_poll(struct file *file, poll_table *wait)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = file->private_data;
|
|
|
|
__poll_t mask = 0;
|
|
|
|
|
2021-06-14 22:37:28 +00:00
|
|
|
poll_wait(file, &ctx->poll_wait, wait);
|
2019-04-24 21:54:17 +00:00
|
|
|
/*
|
|
|
|
* synchronizes with barrier from wq_has_sleeper call in
|
|
|
|
* io_commit_cqring
|
|
|
|
*/
|
2019-01-07 17:46:33 +00:00
|
|
|
smp_rmb();
|
2020-09-03 18:12:41 +00:00
|
|
|
if (!io_sqring_full(ctx))
|
2019-01-07 17:46:33 +00:00
|
|
|
mask |= EPOLLOUT | EPOLLWRNORM;
|
2021-02-05 08:34:21 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Don't flush cqring overflow list here, just do a simple check.
|
|
|
|
* Otherwise there could possibly be an ABBA deadlock:
|
|
|
|
* CPU0 CPU1
|
|
|
|
* ---- ----
|
|
|
|
* lock(&ctx->uring_lock);
|
|
|
|
* lock(&ep->mtx);
|
|
|
|
* lock(&ctx->uring_lock);
|
|
|
|
* lock(&ep->mtx);
|
|
|
|
*
|
|
|
|
* Users may get EPOLLIN meanwhile seeing nothing in cqring, this
|
|
|
|
* pushes them to do the flush.
|
|
|
|
*/
|
2021-06-14 22:37:27 +00:00
|
|
|
if (io_cqring_events(ctx) || test_bit(0, &ctx->check_cq_overflow))
|
2019-01-07 17:46:33 +00:00
|
|
|
mask |= EPOLLIN | EPOLLRDNORM;
|
|
|
|
|
|
|
|
return mask;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int io_uring_fasync(int fd, struct file *file, int on)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = file->private_data;
|
|
|
|
|
|
|
|
return fasync_helper(fd, file, on, &ctx->cq_fasync);
|
|
|
|
}
|
|
|
|
|
2020-12-24 03:02:20 +00:00
|
|
|
static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
|
2020-01-28 17:04:42 +00:00
|
|
|
{
|
2021-02-15 20:40:22 +00:00
|
|
|
const struct cred *creds;
|
2020-01-28 17:04:42 +00:00
|
|
|
|
2021-03-08 14:16:16 +00:00
|
|
|
creds = xa_erase(&ctx->personalities, id);
|
2021-02-15 20:40:22 +00:00
|
|
|
if (creds) {
|
|
|
|
put_cred(creds);
|
2020-12-24 03:02:20 +00:00
|
|
|
return 0;
|
2020-10-15 14:46:24 +00:00
|
|
|
}
|
2020-12-24 03:02:20 +00:00
|
|
|
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2021-03-06 11:02:13 +00:00
|
|
|
struct io_tctx_exit {
|
|
|
|
struct callback_head task_work;
|
|
|
|
struct completion completion;
|
2021-03-06 11:02:15 +00:00
|
|
|
struct io_ring_ctx *ctx;
|
2021-03-06 11:02:13 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
static void io_tctx_exit_cb(struct callback_head *cb)
|
|
|
|
{
|
|
|
|
struct io_uring_task *tctx = current->io_uring;
|
|
|
|
struct io_tctx_exit *work;
|
|
|
|
|
|
|
|
work = container_of(cb, struct io_tctx_exit, task_work);
|
|
|
|
/*
|
|
|
|
* When @in_idle, we're in cancellation and it's racy to remove the
|
|
|
|
* node. It'll be removed by the end of cancellation, just ignore it.
|
|
|
|
*/
|
|
|
|
if (!atomic_read(&tctx->in_idle))
|
2021-06-14 01:36:15 +00:00
|
|
|
io_uring_del_tctx_node((unsigned long)work->ctx);
|
2021-03-06 11:02:13 +00:00
|
|
|
complete(&work->completion);
|
|
|
|
}
|
|
|
|
|
2021-04-25 22:34:45 +00:00
|
|
|
static bool io_cancel_ctx_cb(struct io_wq_work *work, void *data)
|
|
|
|
{
|
|
|
|
struct io_kiocb *req = container_of(work, struct io_kiocb, work);
|
|
|
|
|
|
|
|
return req->ctx == data;
|
|
|
|
}
|
|
|
|
|
2020-04-10 00:14:00 +00:00
|
|
|
static void io_ring_exit_work(struct work_struct *work)
|
|
|
|
{
|
2021-03-06 11:02:13 +00:00
|
|
|
struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx, exit_work);
|
2021-03-06 11:02:16 +00:00
|
|
|
unsigned long timeout = jiffies + HZ * 60 * 5;
|
2021-08-09 12:04:17 +00:00
|
|
|
unsigned long interval = HZ / 20;
|
2021-03-06 11:02:13 +00:00
|
|
|
struct io_tctx_exit exit;
|
|
|
|
struct io_tctx_node *node;
|
|
|
|
int ret;
|
2020-04-10 00:14:00 +00:00
|
|
|
|
2020-06-17 21:00:04 +00:00
|
|
|
/*
|
|
|
|
* If we're doing polled IO and end up having requests being
|
|
|
|
* submitted async (out-of-line), then completions can come in while
|
|
|
|
* we're waiting for refs to drop. We need to reap these manually,
|
|
|
|
* as nobody else will be looking for them.
|
|
|
|
*/
|
2020-07-07 13:36:22 +00:00
|
|
|
do {
|
2021-05-16 21:58:04 +00:00
|
|
|
io_uring_try_cancel_requests(ctx, NULL, true);
|
2021-04-25 22:34:45 +00:00
|
|
|
if (ctx->sq_data) {
|
|
|
|
struct io_sq_data *sqd = ctx->sq_data;
|
|
|
|
struct task_struct *tsk;
|
|
|
|
|
|
|
|
io_sq_thread_park(sqd);
|
|
|
|
tsk = sqd->thread;
|
|
|
|
if (tsk && tsk->io_uring && tsk->io_uring->io_wq)
|
|
|
|
io_wq_cancel_cb(tsk->io_uring->io_wq,
|
|
|
|
io_cancel_ctx_cb, ctx, true);
|
|
|
|
io_sq_thread_unpark(sqd);
|
|
|
|
}
|
2021-03-06 11:02:16 +00:00
|
|
|
|
2021-08-09 12:04:17 +00:00
|
|
|
if (WARN_ON_ONCE(time_after(jiffies, timeout))) {
|
|
|
|
/* there is little hope left, don't run it too often */
|
|
|
|
interval = HZ * 60;
|
|
|
|
}
|
|
|
|
} while (!wait_for_completion_timeout(&ctx->ref_comp, interval));
|
2021-03-06 11:02:13 +00:00
|
|
|
|
2021-04-14 12:38:34 +00:00
|
|
|
init_completion(&exit.completion);
|
|
|
|
init_task_work(&exit.task_work, io_tctx_exit_cb);
|
|
|
|
exit.ctx = ctx;
|
2021-04-01 14:43:50 +00:00
|
|
|
/*
|
|
|
|
* Some may use context even when all refs and requests have been put,
|
|
|
|
* and they are free to do so while still holding uring_lock or
|
2021-06-30 20:54:04 +00:00
|
|
|
* completion_lock, see io_req_task_submit(). Apart from other work,
|
2021-04-01 14:43:50 +00:00
|
|
|
* this lock/unlock section also waits them to finish.
|
|
|
|
*/
|
2021-03-06 11:02:13 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
while (!list_empty(&ctx->tctx_list)) {
|
2021-03-06 11:02:16 +00:00
|
|
|
WARN_ON_ONCE(time_after(jiffies, timeout));
|
|
|
|
|
2021-03-06 11:02:13 +00:00
|
|
|
node = list_first_entry(&ctx->tctx_list, struct io_tctx_node,
|
|
|
|
ctx_node);
|
2021-04-14 12:38:34 +00:00
|
|
|
/* don't spin on a single task if cancellation failed */
|
|
|
|
list_rotate_left(&ctx->tctx_list);
|
2021-03-06 11:02:13 +00:00
|
|
|
ret = task_work_add(node->task, &exit.task_work, TWA_SIGNAL);
|
|
|
|
if (WARN_ON_ONCE(ret))
|
|
|
|
continue;
|
|
|
|
wake_up_process(node->task);
|
|
|
|
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
wait_for_completion(&exit.completion);
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
}
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-03-06 11:02:13 +00:00
|
|
|
|
2020-04-10 00:14:00 +00:00
|
|
|
io_ring_ctx_free(ctx);
|
|
|
|
}
|
|
|
|
|
2021-03-25 18:32:43 +00:00
|
|
|
/* Returns true if we found and killed one or more timeouts */
|
|
|
|
static bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
|
2021-05-16 21:58:04 +00:00
|
|
|
bool cancel_all)
|
2021-03-25 18:32:43 +00:00
|
|
|
{
|
|
|
|
struct io_kiocb *req, *tmp;
|
|
|
|
int canceled = 0;
|
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
spin_lock_irq(&ctx->timeout_lock);
|
2021-03-25 18:32:43 +00:00
|
|
|
list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
|
2021-05-16 21:58:04 +00:00
|
|
|
if (io_match_task(req, tsk, cancel_all)) {
|
2021-03-25 18:32:43 +00:00
|
|
|
io_kill_timeout(req, -ECANCELED);
|
|
|
|
canceled++;
|
|
|
|
}
|
|
|
|
}
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock_irq(&ctx->timeout_lock);
|
2021-03-29 10:39:29 +00:00
|
|
|
if (canceled != 0)
|
|
|
|
io_commit_cqring(ctx);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-03-25 18:32:43 +00:00
|
|
|
if (canceled != 0)
|
|
|
|
io_cqring_ev_posted(ctx);
|
|
|
|
return canceled != 0;
|
|
|
|
}
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
|
|
|
|
{
|
2021-03-08 14:16:16 +00:00
|
|
|
unsigned long index;
|
|
|
|
struct creds *creds;
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
percpu_ref_kill(&ctx->refs);
|
2020-12-06 22:22:44 +00:00
|
|
|
if (ctx->rings)
|
2021-02-23 12:40:22 +00:00
|
|
|
__io_cqring_overflow_flush(ctx, true);
|
2021-03-08 14:16:16 +00:00
|
|
|
xa_for_each(&ctx->personalities, index, creds)
|
|
|
|
io_unregister_personality(ctx, index);
|
2019-01-07 17:46:33 +00:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
|
2021-05-16 21:58:04 +00:00
|
|
|
io_kill_timeouts(ctx, NULL, true);
|
|
|
|
io_poll_remove_all(ctx, NULL, true);
|
2019-10-24 13:25:42 +00:00
|
|
|
|
2019-11-13 16:09:23 +00:00
|
|
|
/* if we failed setting up the ctx, we might not have any rings */
|
2020-07-07 13:36:22 +00:00
|
|
|
io_iopoll_try_reap_events(ctx);
|
2020-07-10 15:13:34 +00:00
|
|
|
|
2020-04-10 00:14:00 +00:00
|
|
|
INIT_WORK(&ctx->exit_work, io_ring_exit_work);
|
2020-08-19 17:10:51 +00:00
|
|
|
/*
|
|
|
|
* Use system_unbound_wq to avoid spawning tons of event kworkers
|
|
|
|
* if we're exiting a ton of rings at the same time. It just adds
|
|
|
|
* noise and overhead, there's no discernible change in runtime
|
|
|
|
* over using system_wq.
|
|
|
|
*/
|
|
|
|
queue_work(system_unbound_wq, &ctx->exit_work);
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static int io_uring_release(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = file->private_data;
|
|
|
|
|
|
|
|
file->private_data = NULL;
|
|
|
|
io_ring_ctx_wait_and_kill(ctx);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2020-11-06 13:00:26 +00:00
|
|
|
struct io_task_cancel {
|
|
|
|
struct task_struct *task;
|
2021-05-16 21:58:04 +00:00
|
|
|
bool all;
|
2020-11-06 13:00:26 +00:00
|
|
|
};
|
2020-08-12 23:33:30 +00:00
|
|
|
|
2020-11-06 13:00:26 +00:00
|
|
|
static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
|
2020-08-16 15:23:05 +00:00
|
|
|
{
|
2020-11-05 22:31:37 +00:00
|
|
|
struct io_kiocb *req = container_of(work, struct io_kiocb, work);
|
2020-11-06 13:00:26 +00:00
|
|
|
struct io_task_cancel *cancel = data;
|
2020-11-05 22:31:37 +00:00
|
|
|
bool ret;
|
|
|
|
|
2021-05-16 21:58:04 +00:00
|
|
|
if (!cancel->all && (req->flags & REQ_F_LINK_TIMEOUT)) {
|
2020-11-05 22:31:37 +00:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
|
|
|
/* protect against races with linked timeouts */
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2021-05-16 21:58:04 +00:00
|
|
|
ret = io_match_task(req, cancel->task, cancel->all);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2020-11-05 22:31:37 +00:00
|
|
|
} else {
|
2021-05-16 21:58:04 +00:00
|
|
|
ret = io_match_task(req, cancel->task, cancel->all);
|
2020-11-05 22:31:37 +00:00
|
|
|
}
|
|
|
|
return ret;
|
2020-08-16 15:23:05 +00:00
|
|
|
}
|
|
|
|
|
2021-03-11 23:29:35 +00:00
|
|
|
static bool io_cancel_defer_files(struct io_ring_ctx *ctx,
|
2021-05-16 21:58:04 +00:00
|
|
|
struct task_struct *task, bool cancel_all)
|
2020-09-05 21:45:14 +00:00
|
|
|
{
|
2021-03-11 23:29:35 +00:00
|
|
|
struct io_defer_entry *de;
|
2020-09-05 21:45:14 +00:00
|
|
|
LIST_HEAD(list);
|
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2020-09-05 21:45:14 +00:00
|
|
|
list_for_each_entry_reverse(de, &ctx->defer_list, list) {
|
2021-05-16 21:58:04 +00:00
|
|
|
if (io_match_task(de->req, task, cancel_all)) {
|
2020-09-05 21:45:14 +00:00
|
|
|
list_cut_position(&list, &ctx->defer_list, &de->list);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-03-11 23:29:35 +00:00
|
|
|
if (list_empty(&list))
|
|
|
|
return false;
|
2020-09-05 21:45:14 +00:00
|
|
|
|
|
|
|
while (!list_empty(&list)) {
|
|
|
|
de = list_first_entry(&list, struct io_defer_entry, list);
|
|
|
|
list_del_init(&de->list);
|
2021-02-28 22:35:12 +00:00
|
|
|
io_req_complete_failed(de->req, -ECANCELED);
|
2020-09-05 21:45:14 +00:00
|
|
|
kfree(de);
|
|
|
|
}
|
2021-03-11 23:29:35 +00:00
|
|
|
return true;
|
2020-09-05 21:45:14 +00:00
|
|
|
}
static bool io_uring_try_cancel_iowq(struct io_ring_ctx *ctx)
{
	struct io_tctx_node *node;
	enum io_wq_cancel cret;
	bool ret = false;

	mutex_lock(&ctx->uring_lock);
	list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
		struct io_uring_task *tctx = node->task->io_uring;

		/*
		 * io_wq will stay alive while we hold uring_lock, because it's
		 * killed after ctx nodes, which requires to take the lock.
		 */
		if (!tctx || !tctx->io_wq)
			continue;
		cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_ctx_cb, ctx, true);
		ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
	}
	mutex_unlock(&ctx->uring_lock);

	return ret;
}
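
/*
 * Per-ctx cancellation loop: repeatedly cancel matching io-wq work, deferred
 * requests, poll requests and timeouts until a full pass finds nothing left.
 */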
static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
					 struct task_struct *task,
					 bool cancel_all)
{
	struct io_task_cancel cancel = { .task = task, .all = cancel_all, };
	struct io_uring_task *tctx = task ? task->io_uring : NULL;

	while (1) {
		enum io_wq_cancel cret;
		bool ret = false;

		if (!task) {
			ret |= io_uring_try_cancel_iowq(ctx);
		} else if (tctx && tctx->io_wq) {
			/*
			 * Cancels requests of all rings, not only @ctx, but
			 * it's fine as the task is in exit/exec.
			 */
			cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_task_cb,
					       &cancel, true);
			ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
		}

		/* SQPOLL thread does its own polling */
		if ((!(ctx->flags & IORING_SETUP_SQPOLL) && cancel_all) ||
		    (ctx->sq_data && ctx->sq_data->thread == current)) {
			while (!list_empty_careful(&ctx->iopoll_list)) {
				io_iopoll_try_reap_events(ctx);
				ret = true;
			}
		}

		ret |= io_cancel_defer_files(ctx, task, cancel_all);
		ret |= io_poll_remove_all(ctx, task, cancel_all);
		ret |= io_kill_timeouts(ctx, task, cancel_all);
		if (task)
			ret |= io_run_task_work();
		if (!ret)
			break;
		cond_resched();
	}
}
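
/*
 * Slow path of io_uring_add_tctx_node(): allocate the task context if this
 * is the task's first ring, and link a node for @ctx into tctx->xa and
 * ctx->tctx_list.
 */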
static int __io_uring_add_tctx_node(struct io_ring_ctx *ctx)
{
	struct io_uring_task *tctx = current->io_uring;
	struct io_tctx_node *node;
	int ret;

	if (unlikely(!tctx)) {
		ret = io_uring_alloc_task_context(current, ctx);
		if (unlikely(ret))
			return ret;
		tctx = current->io_uring;
	}
	if (!xa_load(&tctx->xa, (unsigned long)ctx)) {
		node = kmalloc(sizeof(*node), GFP_KERNEL);
		if (!node)
			return -ENOMEM;
		node->ctx = ctx;
		node->task = current;

		ret = xa_err(xa_store(&tctx->xa, (unsigned long)ctx,
					node, GFP_KERNEL));
		if (ret) {
			kfree(node);
			return ret;
		}

		mutex_lock(&ctx->uring_lock);
		list_add(&node->ctx_node, &ctx->tctx_list);
		mutex_unlock(&ctx->uring_lock);
	}
	tctx->last = ctx;
	return 0;
}

/*
 * Note that this task has used io_uring. We use it for cancelation purposes.
 */
static inline int io_uring_add_tctx_node(struct io_ring_ctx *ctx)
{
	struct io_uring_task *tctx = current->io_uring;

	if (likely(tctx && tctx->last == ctx))
		return 0;
	return __io_uring_add_tctx_node(ctx);
}

/*
 * Remove this io_uring_file -> task mapping.
 */
static void io_uring_del_tctx_node(unsigned long index)
{
	struct io_uring_task *tctx = current->io_uring;
	struct io_tctx_node *node;

	if (!tctx)
		return;
	node = xa_erase(&tctx->xa, index);
	if (!node)
		return;

	WARN_ON_ONCE(current != node->task);
	WARN_ON_ONCE(list_empty(&node->ctx_node));

	mutex_lock(&node->ctx->uring_lock);
	list_del(&node->ctx_node);
	mutex_unlock(&node->ctx->uring_lock);

	if (tctx->last == node->ctx)
		tctx->last = NULL;
	kfree(node);
}
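
/*
 * Drop all ctx nodes this task still holds, then shut down its io-wq once
 * the nodes are gone.
 */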
static void io_uring_clean_tctx(struct io_uring_task *tctx)
{
	struct io_wq *wq = tctx->io_wq;
	struct io_tctx_node *node;
	unsigned long index;

	xa_for_each(&tctx->xa, index, node)
		io_uring_del_tctx_node(index);
	if (wq) {
		/*
		 * Must be after io_uring_del_task_file() (removes nodes under
		 * uring_lock) to avoid race with io_uring_try_cancel_iowq().
		 */
		tctx->io_wq = NULL;
		io_wq_put_and_exit(wq);
	}
}
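
/*
 * Number of requests the task currently has in flight; with @tracked set,
 * only the explicitly tracked requests are counted.
 */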
static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)
{
	if (tracked)
		return atomic_read(&tctx->inflight_tracked);
	return percpu_counter_sum(&tctx->inflight);
}
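
/*
 * Return the task's cached request refs to the inflight counter and drop
 * the matching task_struct references.
 */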
static void io_uring_drop_tctx_refs(struct task_struct *task)
{
	struct io_uring_task *tctx = task->io_uring;
	unsigned int refs = tctx->cached_refs;

	if (refs) {
		tctx->cached_refs = 0;
		percpu_counter_sub(&tctx->inflight, refs);
		put_task_struct_many(task, refs);
	}
}

/*
 * Find any io_uring ctx that this task has registered or done IO on, and cancel
 * requests. @sqd should be not-null IFF it's an SQPOLL thread cancellation.
 */
static void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
{
	struct io_uring_task *tctx = current->io_uring;
	struct io_ring_ctx *ctx;
	s64 inflight;
	DEFINE_WAIT(wait);

	WARN_ON_ONCE(sqd && sqd->thread != current);

	if (!current->io_uring)
		return;
	if (tctx->io_wq)
		io_wq_exit_start(tctx->io_wq);

	atomic_inc(&tctx->in_idle);
	do {
		io_uring_drop_tctx_refs(current);
		/* read completions before cancelations */
		inflight = tctx_inflight(tctx, !cancel_all);
		if (!inflight)
			break;

		if (!sqd) {
			struct io_tctx_node *node;
			unsigned long index;

			xa_for_each(&tctx->xa, index, node) {
				/* sqpoll task will cancel all its requests */
				if (node->ctx->sq_data)
					continue;
				io_uring_try_cancel_requests(node->ctx, current,
							     cancel_all);
			}
		} else {
			list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
				io_uring_try_cancel_requests(ctx, current,
							     cancel_all);
		}

		prepare_to_wait(&tctx->wait, &wait, TASK_UNINTERRUPTIBLE);
		io_uring_drop_tctx_refs(current);
		/*
		 * If we've seen completions, retry without waiting. This
		 * avoids a race where a completion comes in before we did
		 * prepare_to_wait().
		 */
		if (inflight == tctx_inflight(tctx, !cancel_all))
			schedule();
		finish_wait(&tctx->wait, &wait);
	} while (1);
	atomic_dec(&tctx->in_idle);

	io_uring_clean_tctx(tctx);
	if (cancel_all) {
		/* for exec all current's requests should be gone, kill tctx */
		__io_uring_free(current);
	}
}
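
/* Cancel the current task's requests; used on the task exit/exec paths. */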
void __io_uring_cancel(bool cancel_all)
{
	io_uring_cancel_generic(cancel_all, NULL);
}
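
/*
 * Turn an mmap offset on the ring fd into the kernel address of the SQ/CQ
 * rings or the SQE array, rejecting out-of-range sizes.
 */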
static void *io_uring_validate_mmap_request(struct file *file,
					    loff_t pgoff, size_t sz)
{
	struct io_ring_ctx *ctx = file->private_data;
	loff_t offset = pgoff << PAGE_SHIFT;
	struct page *page;
	void *ptr;

	switch (offset) {
	case IORING_OFF_SQ_RING:
	case IORING_OFF_CQ_RING:
		ptr = ctx->rings;
		break;
	case IORING_OFF_SQES:
		ptr = ctx->sq_sqes;
		break;
	default:
		return ERR_PTR(-EINVAL);
	}

	page = virt_to_head_page(ptr);
	if (sz > page_size(page))
		return ERR_PTR(-EINVAL);

	return ptr;
}
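
/*
 * mmap handling: with an MMU the rings are remapped into userspace with
 * remap_pfn_range(); the !CONFIG_MMU variants below provide the
 * direct-mapping hooks instead.
 */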
#ifdef CONFIG_MMU

static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
{
	size_t sz = vma->vm_end - vma->vm_start;
	unsigned long pfn;
	void *ptr;

	ptr = io_uring_validate_mmap_request(file, vma->vm_pgoff, sz);
	if (IS_ERR(ptr))
		return PTR_ERR(ptr);

	pfn = virt_to_phys(ptr) >> PAGE_SHIFT;
	return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
}

#else /* !CONFIG_MMU */

static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
{
	return vma->vm_flags & (VM_SHARED | VM_MAYSHARE) ? 0 : -EINVAL;
}

static unsigned int io_uring_nommu_mmap_capabilities(struct file *file)
{
	return NOMMU_MAP_DIRECT | NOMMU_MAP_READ | NOMMU_MAP_WRITE;
}

static unsigned long io_uring_nommu_get_unmapped_area(struct file *file,
	unsigned long addr, unsigned long len,
	unsigned long pgoff, unsigned long flags)
{
	void *ptr;

	ptr = io_uring_validate_mmap_request(file, pgoff, len);
	if (IS_ERR(ptr))
		return PTR_ERR(ptr);

	return (unsigned long) ptr;
}

#endif /* !CONFIG_MMU */
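
/*
 * IORING_ENTER_SQ_WAIT: wait until the SQ ring has free space again, giving
 * up if a signal becomes pending.
 */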
static int io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
{
	DEFINE_WAIT(wait);

	do {
		if (!io_sqring_full(ctx))
			break;
		prepare_to_wait(&ctx->sqo_sq_wait, &wait, TASK_INTERRUPTIBLE);

		if (!io_sqring_full(ctx))
			break;
		schedule();
	} while (!signal_pending(current));

	finish_wait(&ctx->sqo_sq_wait, &wait);
	return 0;
}
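
/*
 * Decode the sigset/timeout arguments to io_uring_enter(), using the
 * extended layout when IORING_ENTER_EXT_ARG is set.
 */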
static int io_get_ext_arg(unsigned flags, const void __user *argp, size_t *argsz,
			  struct __kernel_timespec __user **ts,
			  const sigset_t __user **sig)
{
	struct io_uring_getevents_arg arg;

	/*
	 * If EXT_ARG isn't set, then we have no timespec and the argp pointer
	 * is just a pointer to the sigset_t.
	 */
	if (!(flags & IORING_ENTER_EXT_ARG)) {
		*sig = (const sigset_t __user *) argp;
		*ts = NULL;
		return 0;
	}

	/*
	 * EXT_ARG is set - ensure we agree on the size of it and copy in our
	 * timespec and sigset_t pointers if good.
	 */
	if (*argsz != sizeof(arg))
		return -EINVAL;
	if (copy_from_user(&arg, argp, sizeof(arg)))
		return -EFAULT;
	*sig = u64_to_user_ptr(arg.sigmask);
	*argsz = arg.sigmask_sz;
	*ts = u64_to_user_ptr(arg.ts);
	return 0;
}
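
/*
 * io_uring_enter(2): submit new SQ entries and/or wait for completions,
 * depending on @to_submit, @min_complete and @flags.
 */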
SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
		u32, min_complete, u32, flags, const void __user *, argp,
		size_t, argsz)
{
	struct io_ring_ctx *ctx;
	int submitted = 0;
	struct fd f;
	long ret;

	io_run_task_work();

	if (unlikely(flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP |
			       IORING_ENTER_SQ_WAIT | IORING_ENTER_EXT_ARG)))
		return -EINVAL;

	f = fdget(fd);
	if (unlikely(!f.file))
		return -EBADF;

	ret = -EOPNOTSUPP;
	if (unlikely(f.file->f_op != &io_uring_fops))
		goto out_fput;

	ret = -ENXIO;
	ctx = f.file->private_data;
	if (unlikely(!percpu_ref_tryget(&ctx->refs)))
		goto out_fput;

	ret = -EBADFD;
	if (unlikely(ctx->flags & IORING_SETUP_R_DISABLED))
		goto out;
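
	/*
	 * With IORING_SETUP_SQPOLL the application normally stays out of the
	 * kernel entirely and only enters it when the SQ thread has gone
	 * idle, roughly:
	 *
	 *	read_barrier();
	 *	if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
	 *		io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
	 */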
|
|
|
|
|
io_uring: add submission polling
This enables an application to do IO, without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can by changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard it's
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 18:22:30 +00:00
|
|
|
/*
|
|
|
|
* For SQ polling, the thread will do all submissions and completions.
|
|
|
|
* Just return the requested submit count, and wake the thread if
|
|
|
|
* we were asked to.
|
|
|
|
*/
|
2019-09-12 20:19:16 +00:00
|
|
|
ret = 0;
|
io_uring: add submission polling
This enables an application to do IO, without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can by changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard it's
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 18:22:30 +00:00
|
|
|
if (ctx->flags & IORING_SETUP_SQPOLL) {
|
2021-08-09 19:18:12 +00:00
|
|
|
io_cqring_overflow_flush(ctx);
|
2020-12-17 00:24:39 +00:00
|
|
|
|
2021-08-14 15:04:40 +00:00
|
|
|
if (unlikely(ctx->sq_data->thread == NULL)) {
|
|
|
|
ret = -EOWNERDEAD;
|
2021-03-07 10:54:29 +00:00
|
|
|
goto out;
|
2021-08-14 15:04:40 +00:00
|
|
|
}
|
io_uring: add submission polling
This enables an application to do IO, without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can by changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard it's
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 18:22:30 +00:00
|
|
|
if (flags & IORING_ENTER_SQ_WAKEUP)
|
2020-09-02 19:52:19 +00:00
|
|
|
wake_up(&ctx->sq_data->wait);
|
io_uring: stop SQPOLL submit on creator's death
When the creator of an SQPOLL io_uring dies (i.e. sqo_task), we don't
want its internals like ->files and ->mm to be poked by the SQPOLL
task; that has never been nice and recently got racy. It can happen
when the owner undergoes destruction while the SQPOLL task tries to
submit new requests in parallel, and so calls io_sq_thread_acquire*().
This patch halts SQPOLL submissions when sqo_task dies by introducing
an sqo_dead flag. Once set, the SQPOLL task must not do any submission,
which is synchronised by uring_lock as well as the new flag.
The tricky part is to make sure that disabling always happens: either
the ring is discovered by the creator's do_exit() -> cancel, or, if the
final close() happens before that, it is done by the creator. The
latter is guaranteed by the fact that for SQPOLL the creator task, and
only it, holds exactly one file note, so it either pins the ring up to
do_exit() or removes it on the final put in flush (see the comments in
uring_flush() around file->f_count == 2).
One more place that can trigger io_sq_thread_acquire_*() is
__io_req_task_submit(). Shoot off requests on sqo_dead there, even
though strictly we don't need to: cancellation of sqo_task should wait
for the request before going any further.
note 1: io_disable_sqo_submit() does io_ring_set_wakeup_flag(), so the
caller would enter the ring to get an error, but it still doesn't
guarantee that the flag won't be cleared.
note 2: if the final __userspace__ close does not happen from the
creator task, the file note will pin the ring until that task dies.
Fixes: b1b6b5a30dce8 ("kernel/io_uring: cancel io_uring before task works")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-01-08 20:57:25 +00:00
|
|
|
if (flags & IORING_ENTER_SQ_WAIT) {
|
|
|
|
ret = io_sqpoll_wait_sq(ctx);
|
|
|
|
if (ret)
|
|
|
|
goto out;
|
|
|
|
}
|
2019-01-10 18:22:30 +00:00
|
|
|
submitted = to_submit;
|
2019-09-12 20:19:16 +00:00
|
|
|
} else if (to_submit) {
|
2021-06-14 01:36:15 +00:00
|
|
|
ret = io_uring_add_tctx_node(ctx);
|
2020-09-13 19:09:39 +00:00
|
|
|
if (unlikely(ret))
|
|
|
|
goto out;
|
2019-01-07 17:46:33 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
2020-09-13 19:09:39 +00:00
|
|
|
submitted = io_submit_sqes(ctx, to_submit);
|
2019-01-07 17:46:33 +00:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2019-12-18 16:53:45 +00:00
|
|
|
|
|
|
|
if (submitted != to_submit)
|
|
|
|
goto out;
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
|
|
|
if (flags & IORING_ENTER_GETEVENTS) {
|
2020-11-03 02:54:37 +00:00
|
|
|
const sigset_t __user *sig;
|
|
|
|
struct __kernel_timespec __user *ts;
|
|
|
|
|
|
|
|
ret = io_get_ext_arg(flags, argp, &argsz, &ts, &sig);
|
|
|
|
if (unlikely(ret))
|
|
|
|
goto out;
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
min_complete = min(min_complete, ctx->cq_entries);
|
|
|
|
|
io_uring: io_uring_enter(2) don't poll while SETUP_IOPOLL|SETUP_SQPOLL enabled
When SETUP_IOPOLL and SETUP_SQPOLL are both enabled, applications no
longer need to poll for IO completion events themselves; they can rely
on io_sq_thread to do the polling work, which reduces cpu usage and
uring_lock contention.
I modified the fio io_uring engine code a bit to evaluate the performance:
static int fio_ioring_getevents(struct thread_data *td, unsigned int min,
			continue;
		}
-		if (!o->sqpoll_thread) {
+		if (o->sqpoll_thread && o->hipri) {
			r = io_uring_enter(ld, 0, actual_min,
						IORING_ENTER_GETEVENTS);
			if (r < 0) {
and ran "fio -name=fiotest -filename=/dev/nvme0n1 -iodepth=$depth -thread
-rw=read -ioengine=io_uring -hipri=1 -sqthread_poll=1 -direct=1 -bs=4k
-size=10G -numjobs=1 -time_based -runtime=120"
original code
--------------------------------------------------------------------
iodepth       | 4        | 8        | 16       | 32       | 64
bw            | 1133MB/s | 1519MB/s | 2090MB/s | 2710MB/s | 3012MB/s
fio cpu usage | 100%     | 100%     | 100%     | 100%     | 100%
--------------------------------------------------------------------
with patch
--------------------------------------------------------------------
iodepth       | 4        | 8        | 16       | 32       | 64
bw            | 1196MB/s | 1721MB/s | 2351MB/s | 2977MB/s | 3357MB/s
fio cpu usage | 63.8%    | 74.4%    | 81.1%    | 83.7%    | 82.4%
--------------------------------------------------------------------
bw improve    | 5.5%     | 13.2%    | 12.3%    | 9.8%     | 11.5%
--------------------------------------------------------------------
From the above test results, bandwidth improves by about 5.5%~13% and
the fio process's cpu usage also drops significantly. Note this won't
improve io_sq_thread's cpu usage when SETUP_IOPOLL|SETUP_SQPOLL are
both enabled; in that case, io_sq_thread always has 100% cpu usage.
I think this patch will be friendly to applications that often use
io_uring_wait_cqe() or similar from liburing. A userspace sketch of
this SQPOLL+IOPOLL mode follows after this commit message.
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-03-11 01:26:09 +00:00
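As referenced in the commit message above, a minimal sketch of combining SQPOLL with IOPOLL from userspace and reaping completions by peeking the CQ ring instead of asking io_uring_enter(2) for events. It assumes liburing, an O_DIRECT-capable block device path, and registered (fixed) files as the submission-polling commit requires; the helper name, queue depth, and block size are illustrative assumptions, not from this patch.

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <liburing.h>

/* Illustrative helper; not kernel API. Reads one 4k block at offset 0. */
static int read_one_polled(const char *dev_path)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	void *buf = NULL;
	int fd, ret;

	ret = io_uring_queue_init(8, &ring,
				  IORING_SETUP_SQPOLL | IORING_SETUP_IOPOLL);
	if (ret < 0)
		return ret;

	ret = fd = open(dev_path, O_RDONLY | O_DIRECT);
	if (fd < 0)
		goto out;
	ret = -ENOMEM;
	if (posix_memalign(&buf, 4096, 4096))
		goto out;
	/* SQPOLL historically requires registered (fixed) files */
	ret = io_uring_register_files(&ring, &fd, 1);
	if (ret < 0)
		goto out;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, 0, buf, 4096, 0);	/* index 0, not raw fd */
	io_uring_sqe_set_flags(sqe, IOSQE_FIXED_FILE);
	io_uring_submit(&ring);		/* the SQ thread picks this sqe up */

	/* No IORING_ENTER_GETEVENTS: io_sq_thread does the IOPOLL work,
	 * so userspace simply spins on the CQ ring. */
	do {
		ret = io_uring_peek_cqe(&ring, &cqe);
	} while (ret == -EAGAIN);
	if (!ret) {
		ret = cqe->res;
		io_uring_cqe_seen(&ring, cqe);
	}
out:
	free(buf);
	io_uring_queue_exit(&ring);
	return ret;
}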
|
|
|
/*
|
|
|
|
* When SETUP_IOPOLL and SETUP_SQPOLL are both enabled, user
|
|
|
|
* space applications don't need to do io completion events
|
|
|
|
* polling again, they can rely on io_sq_thread to do polling
|
|
|
|
* work, which can reduce cpu usage and uring_lock contention.
|
|
|
|
*/
|
|
|
|
if (ctx->flags & IORING_SETUP_IOPOLL &&
|
|
|
|
!(ctx->flags & IORING_SETUP_SQPOLL)) {
|
2020-07-07 13:36:21 +00:00
|
|
|
ret = io_iopoll_check(ctx, min_complete);
|
2019-01-09 15:59:42 +00:00
|
|
|
} else {
|
2020-11-03 02:54:37 +00:00
|
|
|
ret = io_cqring_wait(ctx, min_complete, sig, argsz, ts);
|
2019-01-09 15:59:42 +00:00
|
|
|
}
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
|
|
|
|
2019-12-18 16:53:45 +00:00
|
|
|
out:
|
2019-10-07 23:18:42 +00:00
|
|
|
percpu_ref_put(&ctx->refs);
|
2019-01-07 17:46:33 +00:00
|
|
|
out_fput:
|
|
|
|
fdput(f);
|
|
|
|
return submitted ? submitted : ret;
|
|
|
|
}
|
|
|
|
|
2020-02-26 17:38:32 +00:00
|
|
|
#ifdef CONFIG_PROC_FS
|
2021-03-08 14:16:16 +00:00
|
|
|
static int io_uring_show_cred(struct seq_file *m, unsigned int id,
|
|
|
|
const struct cred *cred)
|
2020-01-30 15:25:34 +00:00
|
|
|
{
|
|
|
|
struct user_namespace *uns = seq_user_ns(m);
|
|
|
|
struct group_info *gi;
|
|
|
|
kernel_cap_t cap;
|
|
|
|
unsigned __capi;
|
|
|
|
int g;
|
|
|
|
|
|
|
|
seq_printf(m, "%5d\n", id);
|
|
|
|
seq_put_decimal_ull(m, "\tUid:\t", from_kuid_munged(uns, cred->uid));
|
|
|
|
seq_put_decimal_ull(m, "\t\t", from_kuid_munged(uns, cred->euid));
|
|
|
|
seq_put_decimal_ull(m, "\t\t", from_kuid_munged(uns, cred->suid));
|
|
|
|
seq_put_decimal_ull(m, "\t\t", from_kuid_munged(uns, cred->fsuid));
|
|
|
|
seq_put_decimal_ull(m, "\n\tGid:\t", from_kgid_munged(uns, cred->gid));
|
|
|
|
seq_put_decimal_ull(m, "\t\t", from_kgid_munged(uns, cred->egid));
|
|
|
|
seq_put_decimal_ull(m, "\t\t", from_kgid_munged(uns, cred->sgid));
|
|
|
|
seq_put_decimal_ull(m, "\t\t", from_kgid_munged(uns, cred->fsgid));
|
|
|
|
seq_puts(m, "\n\tGroups:\t");
|
|
|
|
gi = cred->group_info;
|
|
|
|
for (g = 0; g < gi->ngroups; g++) {
|
|
|
|
seq_put_decimal_ull(m, g ? " " : "",
|
|
|
|
from_kgid_munged(uns, gi->gid[g]));
|
|
|
|
}
|
|
|
|
seq_puts(m, "\n\tCapEff:\t");
|
|
|
|
cap = cred->cap_effective;
|
|
|
|
CAP_FOR_EACH_U32(__capi)
|
|
|
|
seq_put_hex_ll(m, NULL, cap.cap[CAP_LAST_U32 - __capi], 8);
|
|
|
|
seq_putc(m, '\n');
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
|
|
|
|
{
|
2020-09-29 15:01:22 +00:00
|
|
|
struct io_sq_data *sq = NULL;
|
2020-09-28 14:57:48 +00:00
|
|
|
bool has_lock;
|
2020-01-30 15:25:34 +00:00
|
|
|
int i;
|
|
|
|
|
2020-09-28 14:57:48 +00:00
|
|
|
/*
|
|
|
|
* Avoid ABBA deadlock between the seq lock and the io_uring mutex,
|
|
|
|
* since fdinfo case grabs it in the opposite direction of normal use
|
|
|
|
* cases. If we fail to get the lock, we just don't iterate any
|
|
|
|
* structures that could be going away outside the io_uring mutex.
|
|
|
|
*/
|
|
|
|
has_lock = mutex_trylock(&ctx->uring_lock);
|
|
|
|
|
2021-02-25 17:17:46 +00:00
|
|
|
if (has_lock && (ctx->flags & IORING_SETUP_SQPOLL)) {
|
2020-09-29 15:01:22 +00:00
|
|
|
sq = ctx->sq_data;
|
2021-02-25 17:17:46 +00:00
|
|
|
if (!sq->thread)
|
|
|
|
sq = NULL;
|
|
|
|
}
|
2020-09-29 15:01:22 +00:00
|
|
|
|
|
|
|
seq_printf(m, "SqThread:\t%d\n", sq ? task_pid_nr(sq->thread) : -1);
|
|
|
|
seq_printf(m, "SqThreadCpu:\t%d\n", sq ? task_cpu(sq->thread) : -1);
|
2020-01-30 15:25:34 +00:00
|
|
|
seq_printf(m, "UserFiles:\t%u\n", ctx->nr_user_files);
|
2020-09-28 14:57:48 +00:00
|
|
|
for (i = 0; has_lock && i < ctx->nr_user_files; i++) {
|
2021-03-12 15:30:14 +00:00
|
|
|
struct file *f = io_file_from_index(ctx, i);
|
2020-01-30 15:25:34 +00:00
|
|
|
|
|
|
|
if (f)
|
|
|
|
seq_printf(m, "%5u: %s\n", i, file_dentry(f)->d_iname);
|
|
|
|
else
|
|
|
|
seq_printf(m, "%5u: <none>\n", i);
|
|
|
|
}
|
|
|
|
seq_printf(m, "UserBufs:\t%u\n", ctx->nr_user_bufs);
|
2020-09-28 14:57:48 +00:00
|
|
|
for (i = 0; has_lock && i < ctx->nr_user_bufs; i++) {
|
2021-04-25 13:32:23 +00:00
|
|
|
struct io_mapped_ubuf *buf = ctx->user_bufs[i];
|
2021-04-01 14:43:55 +00:00
|
|
|
unsigned int len = buf->ubuf_end - buf->ubuf;
|
2020-01-30 15:25:34 +00:00
|
|
|
|
2021-04-01 14:43:55 +00:00
|
|
|
seq_printf(m, "%5u: 0x%llx/%u\n", i, buf->ubuf, len);
|
2020-01-30 15:25:34 +00:00
|
|
|
}
|
2021-03-08 14:16:16 +00:00
|
|
|
if (has_lock && !xa_empty(&ctx->personalities)) {
|
|
|
|
unsigned long index;
|
|
|
|
const struct cred *cred;
|
|
|
|
|
2020-01-30 15:25:34 +00:00
|
|
|
seq_printf(m, "Personalities:\n");
|
2021-03-08 14:16:16 +00:00
|
|
|
xa_for_each(&ctx->personalities, index, cred)
|
|
|
|
io_uring_show_cred(m, index, cred);
|
2020-01-30 15:25:34 +00:00
|
|
|
}
|
io_uring: use poll driven retry for files that support it
Currently io_uring tries any request in a non-blocking manner, if it
can, and then retries from a worker thread if we get -EAGAIN. Now that
we have a new and fancy poll-based retry backend, use that to retry
requests if the file supports it.
This means that, for example, an IORING_OP_RECVMSG on a socket no longer
requires an async thread to complete the IO. If we get -EAGAIN reading
from the socket in a non-blocking manner, we arm a poll handler for
notification on when the socket becomes readable. When it does, the
pending read is executed directly by the task again, through the
io_uring task work handlers. Not only is this faster and more efficient,
it also means we're not generating potentially tons of async threads
that just sit and block, waiting for the IO to complete.
The feature is marked with IORING_FEAT_FAST_POLL, meaning that async
pollable IO is fast, and that poll<link>other_op is fast as well. A
sketch of detecting this feature flag follows after this commit message.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-15 05:23:12 +00:00
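As referenced above, a minimal sketch, assuming liburing, of probing the IORING_FEAT_FAST_POLL feature bit that this commit advertises before relying on poll-driven retries; the printed strings are only descriptive.

#include <stdio.h>
#include <string.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_params p;

	memset(&p, 0, sizeof(p));
	if (io_uring_queue_init_params(8, &ring, &p) < 0)
		return 1;

	/* the kernel fills p.features during setup */
	if (p.features & IORING_FEAT_FAST_POLL)
		printf("pollable -EAGAIN retries stay in-task (fast poll)\n");
	else
		printf("kernel punts -EAGAIN retries to async workers\n");

	io_uring_queue_exit(&ring);
	return 0;
}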
|
|
|
seq_printf(m, "PollList:\n");
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2020-02-15 05:23:12 +00:00
|
|
|
for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
|
|
|
|
struct hlist_head *list = &ctx->cancel_hash[i];
|
|
|
|
struct io_kiocb *req;
|
|
|
|
|
|
|
|
hlist_for_each_entry(req, list, hash_node)
|
|
|
|
seq_printf(m, " op=%d, task_works=%d\n", req->opcode,
|
|
|
|
req->task->task_works != NULL);
|
|
|
|
}
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2020-09-28 14:57:48 +00:00
|
|
|
if (has_lock)
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2020-01-30 15:25:34 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void io_uring_show_fdinfo(struct seq_file *m, struct file *f)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = f->private_data;
|
|
|
|
|
|
|
|
if (percpu_ref_tryget(&ctx->refs)) {
|
|
|
|
__io_uring_show_fdinfo(ctx, m);
|
|
|
|
percpu_ref_put(&ctx->refs);
|
|
|
|
}
|
|
|
|
}
|
2020-02-26 17:38:32 +00:00
|
|
|
#endif
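The fdinfo handlers above are what back /proc/<pid>/fdinfo for a ring file. A minimal sketch, assuming liburing and CONFIG_PROC_FS, that dumps that text (the SqThread, UserFiles, UserBufs, Personalities, and PollList lines emitted above) for a ring owned by the current process:

#include <stdio.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	char path[64], line[256];
	FILE *f;

	if (io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", ring.ring_fd);
	f = fopen(path, "r");
	if (f) {
		while (fgets(line, sizeof(line), f))
			fputs(line, stdout);	/* fields printed by the handler above */
		fclose(f);
	}
	io_uring_queue_exit(&ring);
	return 0;
}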
|
2020-01-30 15:25:34 +00:00
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
static const struct file_operations io_uring_fops = {
|
|
|
|
.release = io_uring_release,
|
|
|
|
.mmap = io_uring_mmap,
|
2019-11-28 11:53:22 +00:00
|
|
|
#ifndef CONFIG_MMU
|
|
|
|
.get_unmapped_area = io_uring_nommu_get_unmapped_area,
|
|
|
|
.mmap_capabilities = io_uring_nommu_mmap_capabilities,
|
|
|
|
#endif
|
2019-01-07 17:46:33 +00:00
|
|
|
.poll = io_uring_poll,
|
|
|
|
.fasync = io_uring_fasync,
|
2020-02-26 17:38:32 +00:00
|
|
|
#ifdef CONFIG_PROC_FS
|
2020-01-30 15:25:34 +00:00
|
|
|
.show_fdinfo = io_uring_show_fdinfo,
|
2020-02-26 17:38:32 +00:00
|
|
|
#endif
|
2019-01-07 17:46:33 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
|
|
|
|
struct io_uring_params *p)
|
|
|
|
{
|
2019-08-26 17:23:46 +00:00
|
|
|
struct io_rings *rings;
|
|
|
|
size_t size, sq_array_offset;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2020-08-05 18:58:23 +00:00
|
|
|
/* make sure these are sane, as we already accounted them */
|
|
|
|
ctx->sq_entries = p->sq_entries;
|
|
|
|
ctx->cq_entries = p->cq_entries;
|
|
|
|
|
2019-08-26 17:23:46 +00:00
|
|
|
size = rings_size(p->sq_entries, p->cq_entries, &sq_array_offset);
|
|
|
|
if (size == SIZE_MAX)
|
|
|
|
return -EOVERFLOW;
|
|
|
|
|
|
|
|
rings = io_mem_alloc(size);
|
|
|
|
if (!rings)
|
2019-01-07 17:46:33 +00:00
|
|
|
return -ENOMEM;
|
|
|
|
|
2019-08-26 17:23:46 +00:00
|
|
|
ctx->rings = rings;
|
|
|
|
ctx->sq_array = (u32 *)((char *)rings + sq_array_offset);
|
|
|
|
rings->sq_ring_mask = p->sq_entries - 1;
|
|
|
|
rings->cq_ring_mask = p->cq_entries - 1;
|
|
|
|
rings->sq_ring_entries = p->sq_entries;
|
|
|
|
rings->cq_ring_entries = p->cq_entries;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
|
|
|
size = array_size(sizeof(struct io_uring_sqe), p->sq_entries);
|
2019-11-20 16:26:29 +00:00
|
|
|
if (size == SIZE_MAX) {
|
|
|
|
io_mem_free(ctx->rings);
|
|
|
|
ctx->rings = NULL;
|
2019-01-07 17:46:33 +00:00
|
|
|
return -EOVERFLOW;
|
2019-11-20 16:26:29 +00:00
|
|
|
}
|
2019-01-07 17:46:33 +00:00
|
|
|
|
|
|
|
ctx->sq_sqes = io_mem_alloc(size);
|
2019-11-20 16:26:29 +00:00
|
|
|
if (!ctx->sq_sqes) {
|
|
|
|
io_mem_free(ctx->rings);
|
|
|
|
ctx->rings = NULL;
|
2019-01-07 17:46:33 +00:00
|
|
|
return -ENOMEM;
|
2019-11-20 16:26:29 +00:00
|
|
|
}
|
2019-01-07 17:46:33 +00:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2020-12-21 18:34:05 +00:00
|
|
|
static int io_uring_install_fd(struct io_ring_ctx *ctx, struct file *file)
|
|
|
|
{
|
|
|
|
int ret, fd;
|
|
|
|
|
|
|
|
fd = get_unused_fd_flags(O_RDWR | O_CLOEXEC);
|
|
|
|
if (fd < 0)
|
|
|
|
return fd;
|
|
|
|
|
2021-06-14 01:36:15 +00:00
|
|
|
ret = io_uring_add_tctx_node(ctx);
|
2020-12-21 18:34:05 +00:00
|
|
|
if (ret) {
|
|
|
|
put_unused_fd(fd);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
fd_install(fd, file);
|
|
|
|
return fd;
|
|
|
|
}
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
/*
 * Allocate an anonymous fd, this is what constitutes the application
 * visible backing of an io_uring instance. The application mmaps this
 * fd to gain access to the SQ/CQ ring details. If UNIX sockets are enabled,
 * we have to tie this fd to a socket for file garbage collection purposes.
 */
static struct file *io_uring_get_file(struct io_ring_ctx *ctx)
{
	struct file *file;
#if defined(CONFIG_UNIX)
	int ret;

	ret = sock_create_kern(&init_net, PF_UNIX, SOCK_RAW, IPPROTO_IP,
				&ctx->ring_sock);
	if (ret)
		return ERR_PTR(ret);
#endif

	file = anon_inode_getfile("[io_uring]", &io_uring_fops, ctx,
					O_RDWR | O_CLOEXEC);
#if defined(CONFIG_UNIX)
	if (IS_ERR(file)) {
		sock_release(ctx->ring_sock);
		ctx->ring_sock = NULL;
	} else {
		ctx->ring_sock->file = file;
	}
#endif
	return file;
}

static int io_uring_create(unsigned entries, struct io_uring_params *p,
			   struct io_uring_params __user *params)
{
	struct io_ring_ctx *ctx;
	struct file *file;
	int ret;

	if (!entries)
		return -EINVAL;
	if (entries > IORING_MAX_ENTRIES) {
		if (!(p->flags & IORING_SETUP_CLAMP))
			return -EINVAL;
		entries = IORING_MAX_ENTRIES;
	}

	/*
	 * Use twice as many entries for the CQ ring. It's possible for the
	 * application to drive a higher depth than the size of the SQ ring,
	 * since the sqes are only used at submission time. This allows for
	 * some flexibility in overcommitting a bit. If the application has
	 * set IORING_SETUP_CQSIZE, it will have passed in the desired number
	 * of CQ ring entries manually.
	 */
	p->sq_entries = roundup_pow_of_two(entries);
	if (p->flags & IORING_SETUP_CQSIZE) {
		/*
		 * If IORING_SETUP_CQSIZE is set, we do the same roundup
		 * to a power-of-two, if it isn't already. We do NOT impose
		 * any cq vs sq ring sizing.
		 */
		if (!p->cq_entries)
			return -EINVAL;
		if (p->cq_entries > IORING_MAX_CQ_ENTRIES) {
			if (!(p->flags & IORING_SETUP_CLAMP))
				return -EINVAL;
			p->cq_entries = IORING_MAX_CQ_ENTRIES;
		}
		p->cq_entries = roundup_pow_of_two(p->cq_entries);
		if (p->cq_entries < p->sq_entries)
			return -EINVAL;
	} else {
		p->cq_entries = 2 * p->sq_entries;
	}

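	/*
	 * Worked example (illustrative numbers, not taken from the code
	 * above): an application asking for entries == 100 without
	 * IORING_SETUP_CQSIZE ends up with sq_entries =
	 * roundup_pow_of_two(100) = 128 and cq_entries = 2 * 128 = 256.
	 * With IORING_SETUP_CQSIZE and cq_entries == 100, the CQ ring is
	 * rounded up to 128, which is accepted because it does not fall
	 * below sq_entries.
	 */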
	ctx = io_ring_ctx_alloc(p);
	if (!ctx)
		return -ENOMEM;
	ctx->compat = in_compat_syscall();
	if (!capable(CAP_IPC_LOCK))
		ctx->user = get_uid(current_user());

	/*
	 * This is just grabbed for accounting purposes. When a process exits,
	 * the mm is exited and dropped before the files, hence we need to hang
	 * on to this mm purely for the purposes of being able to unaccount
	 * memory (locked/pinned vm). It's not used for anything else.
	 */
	mmgrab(current->mm);
	ctx->mm_account = current->mm;

	ret = io_allocate_scq_urings(ctx, p);
	if (ret)
		goto err;

	ret = io_sq_offload_create(ctx, p);
	if (ret)
		goto err;
	/* always set a rsrc node */
	ret = io_rsrc_node_switch_start(ctx);
	if (ret)
		goto err;
	io_rsrc_node_switch(ctx, NULL);

	memset(&p->sq_off, 0, sizeof(p->sq_off));
	p->sq_off.head = offsetof(struct io_rings, sq.head);
	p->sq_off.tail = offsetof(struct io_rings, sq.tail);
	p->sq_off.ring_mask = offsetof(struct io_rings, sq_ring_mask);
	p->sq_off.ring_entries = offsetof(struct io_rings, sq_ring_entries);
	p->sq_off.flags = offsetof(struct io_rings, sq_flags);
	p->sq_off.dropped = offsetof(struct io_rings, sq_dropped);
	p->sq_off.array = (char *)ctx->sq_array - (char *)ctx->rings;

	memset(&p->cq_off, 0, sizeof(p->cq_off));
	p->cq_off.head = offsetof(struct io_rings, cq.head);
	p->cq_off.tail = offsetof(struct io_rings, cq.tail);
	p->cq_off.ring_mask = offsetof(struct io_rings, cq_ring_mask);
	p->cq_off.ring_entries = offsetof(struct io_rings, cq_ring_entries);
	p->cq_off.overflow = offsetof(struct io_rings, cq_overflow);
	p->cq_off.cqes = offsetof(struct io_rings, cqes);
	p->cq_off.flags = offsetof(struct io_rings, cq_flags);

	p->features = IORING_FEAT_SINGLE_MMAP | IORING_FEAT_NODROP |
			IORING_FEAT_SUBMIT_STABLE | IORING_FEAT_RW_CUR_POS |
			IORING_FEAT_CUR_PERSONALITY | IORING_FEAT_FAST_POLL |
			IORING_FEAT_POLL_32BITS | IORING_FEAT_SQPOLL_NONFIXED |
			IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS |
			IORING_FEAT_RSRC_TAGS;

	if (copy_to_user(params, p, sizeof(*p))) {
		ret = -EFAULT;
		goto err;
	}

	file = io_uring_get_file(ctx);
	if (IS_ERR(file)) {
		ret = PTR_ERR(file);
		goto err;
	}

	/*
	 * Install ring fd as the very last thing, so we don't risk someone
	 * having closed it before we finish setup
	 */
	ret = io_uring_install_fd(ctx, file);
	if (ret < 0) {
		/* fput will clean it up */
		fput(file);
		return ret;
	}

	/*
	 * Tracepoint: io_uring's trace events expose ring creation, file
	 * registration, queued async work and CQE waits, keyed by the ctx
	 * and request pointers, so activity can be correlated with the
	 * corresponding workqueue events.
	 */
	trace_io_uring_create(ret, ctx, p->sq_entries, p->cq_entries, p->flags);
	return ret;
err:
	io_ring_ctx_wait_and_kill(ctx);
	return ret;
}

/*
 * Sets up an aio uring context, and returns the fd. Applications ask for a
 * ring size; we return the actual sq/cq ring sizes (among other things) in
 * the params structure passed in.
 */
static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
{
	struct io_uring_params p;
	int i;

	if (copy_from_user(&p, params, sizeof(p)))
		return -EFAULT;
	for (i = 0; i < ARRAY_SIZE(p.resv); i++) {
		if (p.resv[i])
			return -EINVAL;
	}

	/*
	 * IORING_SETUP_SQPOLL enables kernel-side submission polling: a
	 * kernel thread reaps new sqes (and, for HIPRI/polled IO, also
	 * completions) so the application can submit without any system
	 * calls. When the thread idles past its grace period it sets
	 * IORING_SQ_NEED_WAKEUP in the SQ ring flags, and the application
	 * then guards its submissions with
	 *
	 *	if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
	 *		io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
	 *
	 * after a read barrier, instead of calling io_uring_enter()
	 * unconditionally.
	 */
	if (p.flags & ~(IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL |
			IORING_SETUP_SQ_AFF | IORING_SETUP_CQSIZE |
			IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ |
			IORING_SETUP_R_DISABLED))
return -EINVAL;
|
|
|
|
|
2020-05-05 08:28:53 +00:00
|
|
|
return io_uring_create(entries, &p, params);
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
SYSCALL_DEFINE2(io_uring_setup, u32, entries,
|
|
|
|
struct io_uring_params __user *, params)
|
|
|
|
{
|
|
|
|
return io_uring_setup(entries, params);
|
|
|
|
}
|
|
|
|
|
2020-01-16 22:36:52 +00:00
|
|
|
static int io_probe(struct io_ring_ctx *ctx, void __user *arg, unsigned nr_args)
|
|
|
|
{
|
|
|
|
struct io_uring_probe *p;
|
|
|
|
size_t size;
|
|
|
|
int i, ret;
|
|
|
|
|
|
|
|
size = struct_size(p, ops, nr_args);
|
|
|
|
if (size == SIZE_MAX)
|
|
|
|
return -EOVERFLOW;
|
|
|
|
p = kzalloc(size, GFP_KERNEL);
|
|
|
|
if (!p)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
ret = -EFAULT;
|
|
|
|
if (copy_from_user(p, arg, size))
|
|
|
|
goto out;
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (memchr_inv(p, 0, size))
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
p->last_op = IORING_OP_LAST - 1;
|
|
|
|
if (nr_args > IORING_OP_LAST)
|
|
|
|
nr_args = IORING_OP_LAST;
|
|
|
|
|
|
|
|
for (i = 0; i < nr_args; i++) {
|
|
|
|
p->ops[i].op = i;
|
|
|
|
if (!io_op_defs[i].not_supported)
|
|
|
|
p->ops[i].flags = IO_URING_OP_SUPPORTED;
|
|
|
|
}
|
|
|
|
p->ops_len = i;
|
|
|
|
|
|
|
|
ret = 0;
|
|
|
|
if (copy_to_user(arg, p, size))
|
|
|
|
ret = -EFAULT;
|
|
|
|
out:
|
|
|
|
kfree(p);
|
|
|
|
return ret;
|
|
|
|
}
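For reference, a hedged user-space sketch of driving the probe interface
above; 'ring_fd' is an assumed, already set up io_uring fd, and the layout
comes from struct io_uring_probe / io_uring_probe_op in <linux/io_uring.h>:

/*
 * Sketch only: ask the kernel which opcodes this ring supports. The probe
 * buffer must be zeroed and sized for the trailing ops[] entries, and
 * nr_args may not exceed 256 (see the IORING_REGISTER_PROBE case below).
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

static void probe_supported_ops(int ring_fd)
{
	unsigned nr = IORING_OP_LAST;	/* may exceed what this kernel knows */
	size_t len = sizeof(struct io_uring_probe) +
		     nr * sizeof(struct io_uring_probe_op);
	struct io_uring_probe *p = calloc(1, len);
	unsigned i;

	if (!p)
		return;
	if (syscall(__NR_io_uring_register, ring_fd, IORING_REGISTER_PROBE,
		    p, nr) >= 0) {
		for (i = 0; i < p->ops_len; i++)
			if (p->ops[i].flags & IO_URING_OP_SUPPORTED)
				printf("opcode %u supported\n",
				       (unsigned)p->ops[i].op);
	}
	free(p);
}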
|
|
|
|
|
2020-01-28 17:04:42 +00:00
|
|
|
static int io_register_personality(struct io_ring_ctx *ctx)
|
|
|
|
{
|
2021-02-15 20:40:22 +00:00
|
|
|
const struct cred *creds;
|
2021-03-08 14:16:16 +00:00
|
|
|
u32 id;
|
2020-10-15 14:46:24 +00:00
|
|
|
int ret;
|
2020-01-28 17:04:42 +00:00
|
|
|
|
2021-02-15 20:40:22 +00:00
|
|
|
creds = get_current_cred();
|
2020-10-15 14:46:24 +00:00
|
|
|
|
2021-03-08 14:16:16 +00:00
|
|
|
ret = xa_alloc_cyclic(&ctx->personalities, &id, (void *)creds,
|
|
|
|
XA_LIMIT(0, USHRT_MAX), &ctx->pers_next, GFP_KERNEL);
|
2021-08-20 20:53:59 +00:00
|
|
|
if (ret < 0) {
|
|
|
|
put_cred(creds);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
return id;
|
2020-01-28 17:04:42 +00:00
|
|
|
}
|
|
|
|
|
2020-08-27 14:58:30 +00:00
|
|
|
static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
|
|
|
|
unsigned int nr_args)
|
|
|
|
{
|
|
|
|
struct io_uring_restriction *res;
|
|
|
|
size_t size;
|
|
|
|
int i, ret;
|
|
|
|
|
2020-08-27 14:58:31 +00:00
|
|
|
/* Restrictions allowed only if rings started disabled */
|
|
|
|
if (!(ctx->flags & IORING_SETUP_R_DISABLED))
|
|
|
|
return -EBADFD;
|
|
|
|
|
2020-08-27 14:58:30 +00:00
|
|
|
/* We allow only a single restrictions registration */
|
2020-08-27 14:58:31 +00:00
|
|
|
if (ctx->restrictions.registered)
|
2020-08-27 14:58:30 +00:00
|
|
|
return -EBUSY;
|
|
|
|
|
|
|
|
if (!arg || nr_args > IORING_MAX_RESTRICTIONS)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
size = array_size(nr_args, sizeof(*res));
|
|
|
|
if (size == SIZE_MAX)
|
|
|
|
return -EOVERFLOW;
|
|
|
|
|
|
|
|
res = memdup_user(arg, size);
|
|
|
|
if (IS_ERR(res))
|
|
|
|
return PTR_ERR(res);
|
|
|
|
|
|
|
|
ret = 0;
|
|
|
|
|
|
|
|
for (i = 0; i < nr_args; i++) {
|
|
|
|
switch (res[i].opcode) {
|
|
|
|
case IORING_RESTRICTION_REGISTER_OP:
|
|
|
|
if (res[i].register_op >= IORING_REGISTER_LAST) {
|
|
|
|
ret = -EINVAL;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
__set_bit(res[i].register_op,
|
|
|
|
ctx->restrictions.register_op);
|
|
|
|
break;
|
|
|
|
case IORING_RESTRICTION_SQE_OP:
|
|
|
|
if (res[i].sqe_op >= IORING_OP_LAST) {
|
|
|
|
ret = -EINVAL;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
__set_bit(res[i].sqe_op, ctx->restrictions.sqe_op);
|
|
|
|
break;
|
|
|
|
case IORING_RESTRICTION_SQE_FLAGS_ALLOWED:
|
|
|
|
ctx->restrictions.sqe_flags_allowed = res[i].sqe_flags;
|
|
|
|
break;
|
|
|
|
case IORING_RESTRICTION_SQE_FLAGS_REQUIRED:
|
|
|
|
ctx->restrictions.sqe_flags_required = res[i].sqe_flags;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
ret = -EINVAL;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
out:
|
|
|
|
/* Reset all restrictions if an error happened */
|
|
|
|
if (ret != 0)
|
|
|
|
memset(&ctx->restrictions, 0, sizeof(ctx->restrictions));
|
|
|
|
else
|
2020-08-27 14:58:31 +00:00
|
|
|
ctx->restrictions.registered = true;
|
2020-08-27 14:58:30 +00:00
|
|
|
|
|
|
|
kfree(res);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2020-08-27 14:58:31 +00:00
|
|
|
static int io_register_enable_rings(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
if (!(ctx->flags & IORING_SETUP_R_DISABLED))
|
|
|
|
return -EBADFD;
|
|
|
|
|
|
|
|
if (ctx->restrictions.registered)
|
|
|
|
ctx->restricted = 1;
|
|
|
|
|
2021-03-08 13:20:57 +00:00
|
|
|
ctx->flags &= ~IORING_SETUP_R_DISABLED;
|
|
|
|
if (ctx->sq_data && wq_has_sleeper(&ctx->sq_data->wait))
|
|
|
|
wake_up(&ctx->sq_data->wait);
|
2020-08-27 14:58:31 +00:00
|
|
|
return 0;
|
|
|
|
}
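Tying the two helpers above together, a hedged sketch of the intended
user-space flow: create the ring with IORING_SETUP_R_DISABLED, register the
restrictions while it is still disabled, then enable it. 'ring_fd' and the
particular allow-list are illustrative assumptions only:

/*
 * Sketch only: restrict a ring created with IORING_SETUP_R_DISABLED to a
 * small allow-list, then enable submissions. Restrictions can be
 * registered only once, and only before IORING_REGISTER_ENABLE_RINGS.
 */
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

static int lock_down_ring(int ring_fd)
{
	struct io_uring_restriction res[2];
	int ret;

	memset(res, 0, sizeof(res));
	res[0].opcode = IORING_RESTRICTION_SQE_OP;
	res[0].sqe_op = IORING_OP_READV;		/* only readv SQEs */
	res[1].opcode = IORING_RESTRICTION_REGISTER_OP;
	res[1].register_op = IORING_REGISTER_ENABLE_RINGS;

	ret = syscall(__NR_io_uring_register, ring_fd,
		      IORING_REGISTER_RESTRICTIONS, res, 2);
	if (ret < 0)
		return ret;

	/* no argument: this just clears IORING_SETUP_R_DISABLED */
	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_ENABLE_RINGS, NULL, 0);
}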
|
|
|
|
|
2021-04-25 13:32:20 +00:00
|
|
|
static int __io_register_rsrc_update(struct io_ring_ctx *ctx, unsigned type,
|
2021-04-25 13:32:22 +00:00
|
|
|
struct io_uring_rsrc_update2 *up,
|
2021-04-25 13:32:19 +00:00
|
|
|
unsigned nr_args)
|
|
|
|
{
|
|
|
|
__u32 tmp;
|
|
|
|
int err;
|
|
|
|
|
2021-04-25 13:32:22 +00:00
|
|
|
if (up->resv)
|
|
|
|
return -EINVAL;
|
2021-04-25 13:32:19 +00:00
|
|
|
if (check_add_overflow(up->offset, nr_args, &tmp))
|
|
|
|
return -EOVERFLOW;
|
|
|
|
err = io_rsrc_node_switch_start(ctx);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
|
2021-04-25 13:32:20 +00:00
|
|
|
switch (type) {
|
|
|
|
case IORING_RSRC_FILE:
|
2021-04-25 13:32:19 +00:00
|
|
|
return __io_sqe_files_update(ctx, up, nr_args);
|
2021-04-25 13:32:26 +00:00
|
|
|
case IORING_RSRC_BUFFER:
|
|
|
|
return __io_sqe_buffers_update(ctx, up, nr_args);
|
2021-04-25 13:32:19 +00:00
|
|
|
}
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2021-04-25 13:32:22 +00:00
|
|
|
static int io_register_files_update(struct io_ring_ctx *ctx, void __user *arg,
|
|
|
|
unsigned nr_args)
|
2021-04-25 13:32:19 +00:00
|
|
|
{
|
2021-04-25 13:32:22 +00:00
|
|
|
struct io_uring_rsrc_update2 up;
|
2021-04-25 13:32:19 +00:00
|
|
|
|
|
|
|
if (!nr_args)
|
|
|
|
return -EINVAL;
|
2021-04-25 13:32:22 +00:00
|
|
|
memset(&up, 0, sizeof(up));
|
|
|
|
if (copy_from_user(&up, arg, sizeof(struct io_uring_rsrc_update)))
|
|
|
|
return -EFAULT;
|
|
|
|
return __io_register_rsrc_update(ctx, IORING_RSRC_FILE, &up, nr_args);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int io_register_rsrc_update(struct io_ring_ctx *ctx, void __user *arg,
|
io_uring: change registration/upd/rsrc tagging ABI
There are ABI aspects of the recently added rsrc registration/update and
tagging that might become a nuisance in the future. First,
IORING_REGISTER_RSRC[_UPD] hides different types of resources under a
single opcode, which breaks fine-grained control over them via
restrictions. It works for now, but once those are wanted under
restrictions it would require a rework. It was also inconvenient trying
to fit a new resource that doesn't support all the features (e.g.
dynamic update) into the interface, so it is better to return to
IORING_REGISTER_* top-level dispatching.
Second, register/update were designed to accept a resource type,
however that's not a good idea because there might be several ways of
registering a single resource type, e.g. we may want to add non-contig
buffers or something more exotic such as dma-mapped memory.
So, remove IORING_RSRC_[FILE,BUFFER] from the ABI, and keep them
internal for now to limit changes.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/9b554897a7c17ad6e3becc48dfed2f7af9f423d5.1623339162.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-06-10 15:37:37 +00:00
|
|
|
unsigned size, unsigned type)
|
2021-04-25 13:32:22 +00:00
|
|
|
{
|
|
|
|
struct io_uring_rsrc_update2 up;
|
|
|
|
|
|
|
|
if (size != sizeof(up))
|
|
|
|
return -EINVAL;
|
2021-04-25 13:32:19 +00:00
|
|
|
if (copy_from_user(&up, arg, sizeof(up)))
|
|
|
|
return -EFAULT;
|
io_uring: change registration/upd/rsrc tagging ABI
2021-06-10 15:37:37 +00:00
|
|
|
if (!up.nr || up.resv)
|
2021-04-25 13:32:19 +00:00
|
|
|
return -EINVAL;
|
io_uring: change registration/upd/rsrc tagging ABI
2021-06-10 15:37:37 +00:00
|
|
|
return __io_register_rsrc_update(ctx, type, &up, up.nr);
|
2021-04-25 13:32:19 +00:00
|
|
|
}
|
|
|
|
|
2021-04-25 13:32:21 +00:00
|
|
|
static int io_register_rsrc(struct io_ring_ctx *ctx, void __user *arg,
|
io_uring: change registration/upd/rsrc tagging ABI
2021-06-10 15:37:37 +00:00
|
|
|
unsigned int size, unsigned int type)
|
2021-04-25 13:32:21 +00:00
|
|
|
{
|
|
|
|
struct io_uring_rsrc_register rr;
|
|
|
|
|
|
|
|
/* keep it extendible */
|
|
|
|
if (size != sizeof(rr))
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
memset(&rr, 0, sizeof(rr));
|
|
|
|
if (copy_from_user(&rr, arg, size))
|
|
|
|
return -EFAULT;
|
io_uring: change registration/upd/rsrc tagging ABI
2021-06-10 15:37:37 +00:00
|
|
|
if (!rr.nr || rr.resv || rr.resv2)
|
2021-04-25 13:32:21 +00:00
|
|
|
return -EINVAL;
|
|
|
|
|
io_uring: change registration/upd/rsrc tagging ABI
2021-06-10 15:37:37 +00:00
|
|
|
switch (type) {
|
2021-04-25 13:32:21 +00:00
|
|
|
case IORING_RSRC_FILE:
|
|
|
|
return io_sqe_files_register(ctx, u64_to_user_ptr(rr.data),
|
|
|
|
rr.nr, u64_to_user_ptr(rr.tags));
|
2021-04-25 13:32:26 +00:00
|
|
|
case IORING_RSRC_BUFFER:
|
|
|
|
return io_sqe_buffers_register(ctx, u64_to_user_ptr(rr.data),
|
|
|
|
rr.nr, u64_to_user_ptr(rr.tags));
|
2021-04-25 13:32:21 +00:00
|
|
|
}
|
|
|
|
return -EINVAL;
|
|
|
|
}
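For the user-space side of the function above, a hedged sketch of
registering a tagged file table through IORING_REGISTER_FILES2; note that
the syscall's nr_args slot carries sizeof(struct io_uring_rsrc_register),
matching the "keep it extendible" size check. 'ring_fd', 'fds' and 'tags'
are assumed application state:

/*
 * Sketch only: register 'nr' file descriptors with per-file tags via the
 * top-level IORING_REGISTER_FILES2 opcode. resv/resv2 must stay zero.
 */
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

static int register_tagged_files(int ring_fd, const int *fds,
				 const uint64_t *tags, unsigned nr)
{
	struct io_uring_rsrc_register rr;

	memset(&rr, 0, sizeof(rr));
	rr.nr = nr;
	rr.data = (uint64_t)(uintptr_t)fds;	/* array of __s32 fds */
	rr.tags = (uint64_t)(uintptr_t)tags;	/* one __u64 tag per file */

	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_FILES2, &rr, sizeof(rr));
}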
|
|
|
|
|
2021-06-17 16:19:54 +00:00
|
|
|
static int io_register_iowq_aff(struct io_ring_ctx *ctx, void __user *arg,
|
|
|
|
unsigned len)
|
|
|
|
{
|
|
|
|
struct io_uring_task *tctx = current->io_uring;
|
|
|
|
cpumask_var_t new_mask;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
if (!tctx || !tctx->io_wq)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
cpumask_clear(new_mask);
|
|
|
|
if (len > cpumask_size())
|
|
|
|
len = cpumask_size();
|
|
|
|
|
|
|
|
if (copy_from_user(new_mask, arg, len)) {
|
|
|
|
free_cpumask_var(new_mask);
|
|
|
|
return -EFAULT;
|
|
|
|
}
|
|
|
|
|
|
|
|
ret = io_wq_cpu_affinity(tctx->io_wq, new_mask);
|
|
|
|
free_cpumask_var(new_mask);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int io_unregister_iowq_aff(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
struct io_uring_task *tctx = current->io_uring;
|
|
|
|
|
|
|
|
if (!tctx || !tctx->io_wq)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
return io_wq_cpu_affinity(tctx->io_wq, NULL);
|
|
|
|
}
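Illustrating the affinity hooks above, a hedged user-space sketch; the CPU
choice is arbitrary and 'ring_fd' is an assumed, already set up ring fd:

/*
 * Sketch only: pin this ring's io-wq workers to CPUs 0-1. The kernel
 * clamps the copied length to its own cpumask size, so passing
 * sizeof(cpu_set_t) is fine; IORING_UNREGISTER_IOWQ_AFF (no argument)
 * resets the mask.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

static int pin_iowq_workers(int ring_fd)
{
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(0, &mask);
	CPU_SET(1, &mask);

	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_IOWQ_AFF, &mask, sizeof(mask));
}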
|
|
|
|
|
2020-01-28 17:04:42 +00:00
|
|
|
static bool io_register_op_must_quiesce(int op)
|
|
|
|
{
|
|
|
|
switch (op) {
|
2021-04-25 13:32:25 +00:00
|
|
|
case IORING_REGISTER_BUFFERS:
|
|
|
|
case IORING_UNREGISTER_BUFFERS:
|
2021-04-01 14:44:02 +00:00
|
|
|
case IORING_REGISTER_FILES:
|
2020-01-28 17:04:42 +00:00
|
|
|
case IORING_UNREGISTER_FILES:
|
|
|
|
case IORING_REGISTER_FILES_UPDATE:
|
|
|
|
case IORING_REGISTER_PROBE:
|
|
|
|
case IORING_REGISTER_PERSONALITY:
|
|
|
|
case IORING_UNREGISTER_PERSONALITY:
|
io_uring: change registration/upd/rsrc tagging ABI
2021-06-10 15:37:37 +00:00
|
|
|
case IORING_REGISTER_FILES2:
|
|
|
|
case IORING_REGISTER_FILES_UPDATE2:
|
|
|
|
case IORING_REGISTER_BUFFERS2:
|
|
|
|
case IORING_REGISTER_BUFFERS_UPDATE:
|
2021-06-17 16:19:54 +00:00
|
|
|
case IORING_REGISTER_IOWQ_AFF:
|
|
|
|
case IORING_UNREGISTER_IOWQ_AFF:
|
2020-01-28 17:04:42 +00:00
|
|
|
return false;
|
|
|
|
default:
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-08-09 12:04:12 +00:00
|
|
|
static int io_ctx_quiesce(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
long ret;
|
|
|
|
|
|
|
|
percpu_ref_kill(&ctx->refs);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Drop uring mutex before waiting for references to exit. If another
|
|
|
|
* thread is currently inside io_uring_enter() it might need to grab the
|
|
|
|
* uring_lock to make progress. If we hold it here across the drain
|
|
|
|
* wait, then we can deadlock. It's safe to drop the mutex here, since
|
|
|
|
* no new references will come in after we've killed the percpu ref.
|
|
|
|
*/
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
do {
|
|
|
|
ret = wait_for_completion_interruptible(&ctx->ref_comp);
|
|
|
|
if (!ret)
|
|
|
|
break;
|
|
|
|
ret = io_run_task_work_sig();
|
|
|
|
} while (ret >= 0);
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
|
|
|
|
if (ret)
|
|
|
|
io_refs_resurrect(&ctx->refs, &ctx->ref_comp);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
setup the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having setup an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->buf_index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to set up a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per-buffer size limit is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-09 16:16:05 +00:00
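A hedged user-space sketch of the registration described above; 'ring_fd',
'buf' and 'len' are assumed application state, and actually issuing an
IORING_OP_READ_FIXED request (filling an SQE with buf_index set) is omitted:

/*
 * Sketch only: pin one application buffer for fixed-buffer IO. Indexes
 * into the registered iovec array are later placed in sqe->buf_index for
 * IORING_OP_READ_FIXED / IORING_OP_WRITE_FIXED.
 */
#include <sys/uio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

static int register_one_buffer(int ring_fd, void *buf, size_t len)
{
	struct iovec iov = {
		.iov_base = buf,	/* must not be file backed */
		.iov_len  = len,	/* subject to the ~1G per-buffer cap */
	};

	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_BUFFERS, &iov, 1);
}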
|
|
|
static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
|
|
|
|
void __user *arg, unsigned nr_args)
|
2019-04-15 16:49:38 +00:00
|
|
|
__releases(ctx->uring_lock)
|
|
|
|
__acquires(ctx->uring_lock)
|
io_uring: add support for pre-mapped user IO buffers
2019-01-09 16:16:05 +00:00
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
2019-04-22 16:23:23 +00:00
|
|
|
/*
|
|
|
|
* We're inside the ring mutex, if the ref is already dying, then
|
|
|
|
* someone else killed the ctx or is already going through
|
|
|
|
* io_uring_register().
|
|
|
|
*/
|
|
|
|
if (percpu_ref_is_dying(&ctx->refs))
|
|
|
|
return -ENXIO;
|
|
|
|
|
2021-04-15 12:07:40 +00:00
|
|
|
if (ctx->restricted) {
|
|
|
|
if (opcode >= IORING_REGISTER_LAST)
|
|
|
|
return -EINVAL;
|
|
|
|
opcode = array_index_nospec(opcode, IORING_REGISTER_LAST);
|
|
|
|
if (!test_bit(opcode, ctx->restrictions.register_op))
|
|
|
|
return -EACCES;
|
|
|
|
}
|
|
|
|
|
2020-01-28 17:04:42 +00:00
|
|
|
if (io_register_op_must_quiesce(opcode)) {
|
2021-08-09 12:04:12 +00:00
|
|
|
ret = io_ctx_quiesce(ctx);
|
|
|
|
if (ret)
|
2021-04-11 00:46:40 +00:00
|
|
|
return ret;
|
2019-12-09 18:22:50 +00:00
|
|
|
}
|
io_uring: add support for pre-mapped user IO buffers
2019-01-09 16:16:05 +00:00
|
|
|
|
|
|
|
switch (opcode) {
|
|
|
|
case IORING_REGISTER_BUFFERS:
|
2021-04-25 13:32:26 +00:00
|
|
|
ret = io_sqe_buffers_register(ctx, arg, nr_args, NULL);
|
io_uring: add support for pre-mapped user IO buffers
2019-01-09 16:16:05 +00:00
|
|
|
break;
|
|
|
|
case IORING_UNREGISTER_BUFFERS:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (arg || nr_args)
|
|
|
|
break;
|
2021-01-06 20:39:10 +00:00
|
|
|
ret = io_sqe_buffers_unregister(ctx);
|
io_uring: add support for pre-mapped user IO buffers
2019-01-09 16:16:05 +00:00
|
|
|
break;
|
2019-01-11 05:13:58 +00:00
|
|
|
case IORING_REGISTER_FILES:
|
2021-04-25 13:32:21 +00:00
|
|
|
ret = io_sqe_files_register(ctx, arg, nr_args, NULL);
|
2019-01-11 05:13:58 +00:00
|
|
|
break;
|
|
|
|
case IORING_UNREGISTER_FILES:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (arg || nr_args)
|
|
|
|
break;
|
|
|
|
ret = io_sqe_files_unregister(ctx);
|
|
|
|
break;
|
2019-10-03 19:59:56 +00:00
|
|
|
case IORING_REGISTER_FILES_UPDATE:
|
2021-04-25 13:32:22 +00:00
|
|
|
ret = io_register_files_update(ctx, arg, nr_args);
|
2019-10-03 19:59:56 +00:00
|
|
|
break;
|
2019-04-11 17:45:41 +00:00
|
|
|
case IORING_REGISTER_EVENTFD:
|
2020-01-08 18:04:00 +00:00
|
|
|
case IORING_REGISTER_EVENTFD_ASYNC:
|
2019-04-11 17:45:41 +00:00
|
|
|
ret = -EINVAL;
|
|
|
|
if (nr_args != 1)
|
|
|
|
break;
|
|
|
|
ret = io_eventfd_register(ctx, arg);
|
2020-01-08 18:04:00 +00:00
|
|
|
if (ret)
|
|
|
|
break;
|
|
|
|
if (opcode == IORING_REGISTER_EVENTFD_ASYNC)
|
|
|
|
ctx->eventfd_async = 1;
|
|
|
|
else
|
|
|
|
ctx->eventfd_async = 0;
|
2019-04-11 17:45:41 +00:00
|
|
|
break;
|
|
|
|
case IORING_UNREGISTER_EVENTFD:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (arg || nr_args)
|
|
|
|
break;
|
|
|
|
ret = io_eventfd_unregister(ctx);
|
|
|
|
break;
|
2020-01-16 22:36:52 +00:00
|
|
|
case IORING_REGISTER_PROBE:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (!arg || nr_args > 256)
|
|
|
|
break;
|
|
|
|
ret = io_probe(ctx, arg, nr_args);
|
|
|
|
break;
|
2020-01-28 17:04:42 +00:00
|
|
|
case IORING_REGISTER_PERSONALITY:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (arg || nr_args)
|
|
|
|
break;
|
|
|
|
ret = io_register_personality(ctx);
|
|
|
|
break;
|
|
|
|
case IORING_UNREGISTER_PERSONALITY:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (arg)
|
|
|
|
break;
|
|
|
|
ret = io_unregister_personality(ctx, nr_args);
|
|
|
|
break;
|
2020-08-27 14:58:31 +00:00
|
|
|
case IORING_REGISTER_ENABLE_RINGS:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (arg || nr_args)
|
|
|
|
break;
|
|
|
|
ret = io_register_enable_rings(ctx);
|
|
|
|
break;
|
2020-08-27 14:58:30 +00:00
|
|
|
case IORING_REGISTER_RESTRICTIONS:
|
|
|
|
ret = io_register_restrictions(ctx, arg, nr_args);
|
|
|
|
break;
|
io_uring: change registration/upd/rsrc tagging ABI
2021-06-10 15:37:37 +00:00
|
|
|
case IORING_REGISTER_FILES2:
|
|
|
|
ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_FILE);
|
|
|
|
break;
|
|
|
|
case IORING_REGISTER_FILES_UPDATE2:
|
|
|
|
ret = io_register_rsrc_update(ctx, arg, nr_args,
|
|
|
|
IORING_RSRC_FILE);
|
|
|
|
break;
|
|
|
|
case IORING_REGISTER_BUFFERS2:
|
|
|
|
ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_BUFFER);
|
2021-04-25 13:32:21 +00:00
|
|
|
break;
|
io_uring: change registration/upd/rsrc tagging ABI
2021-06-10 15:37:37 +00:00
|
|
|
case IORING_REGISTER_BUFFERS_UPDATE:
|
|
|
|
ret = io_register_rsrc_update(ctx, arg, nr_args,
|
|
|
|
IORING_RSRC_BUFFER);
|
2021-04-25 13:32:22 +00:00
|
|
|
break;
|
2021-06-17 16:19:54 +00:00
|
|
|
case IORING_REGISTER_IOWQ_AFF:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (!arg || !nr_args)
|
|
|
|
break;
|
|
|
|
ret = io_register_iowq_aff(ctx, arg, nr_args);
|
|
|
|
break;
|
|
|
|
case IORING_UNREGISTER_IOWQ_AFF:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (arg || nr_args)
|
|
|
|
break;
|
|
|
|
ret = io_unregister_iowq_aff(ctx);
|
|
|
|
break;
|
io_uring: add support for pre-mapped user IO buffers
2019-01-09 16:16:05 +00:00
|
|
|
default:
|
|
|
|
ret = -EINVAL;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2020-01-28 17:04:42 +00:00
|
|
|
if (io_register_op_must_quiesce(opcode)) {
|
2019-12-09 18:22:50 +00:00
|
|
|
/* bring the ctx back to life */
|
|
|
|
percpu_ref_reinit(&ctx->refs);
|
2020-05-14 23:18:39 +00:00
|
|
|
reinit_completion(&ctx->ref_comp);
|
2019-12-09 18:22:50 +00:00
|
|
|
}
|
io_uring: add support for pre-mapped user IO buffers
2019-01-09 16:16:05 +00:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
|
|
|
|
void __user *, arg, unsigned int, nr_args)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx;
|
|
|
|
long ret = -EBADF;
|
|
|
|
struct fd f;
|
|
|
|
|
|
|
|
f = fdget(fd);
|
|
|
|
if (!f.file)
|
|
|
|
return -EBADF;
|
|
|
|
|
|
|
|
ret = -EOPNOTSUPP;
|
|
|
|
if (f.file->f_op != &io_uring_fops)
|
|
|
|
goto out_fput;
|
|
|
|
|
|
|
|
ctx = f.file->private_data;
|
|
|
|
|
2021-02-20 15:17:18 +00:00
|
|
|
io_run_task_work();
|
|
|
|
|
io_uring: add support for pre-mapped user IO buffers
2019-01-09 16:16:05 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
ret = __io_uring_register(ctx, opcode, arg, nr_args);
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
io_uring: add set of tracing events
To trace io_uring activity one can get information from workqueue and
io trace events, but some parts could be hard to identify via this
approach. Making what happens inside io_uring more transparent is
important to be able to reason about many aspects of it, hence introduce
the set of tracing events.
All such events could be roughly divided into two categories:
* those that help to understand correctness (from both a kernel
and an application point of view), e.g. ring creation, file
registration, or waiting for an available CQE. The proposed approach is to
get a pointer to an original structure of interest (ring context, or
request), and then find relevant events. io_uring_queue_async_work
also exposes a pointer to work_struct, to be able to track down
corresponding workqueue events.
* those that provide performance-related information. Mostly these are
events that change the flow of requests, e.g. whether an async work
item was queued, or delayed due to some dependencies. Another important
case is how io_uring optimizations (e.g. registered files) are
utilized.
Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-10-15 17:02:01 +00:00
|
|
|
trace_io_uring_register(ctx, opcode, ctx->nr_user_files, ctx->nr_user_bufs,
|
|
|
|
ctx->cq_ev_fd != NULL, ret);
|
io_uring: add support for pre-mapped user IO buffers
2019-01-09 16:16:05 +00:00
|
|
|
out_fput:
|
|
|
|
fdput(f);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
static int __init io_uring_init(void)
|
|
|
|
{
|
2020-01-29 13:39:41 +00:00
|
|
|
#define __BUILD_BUG_VERIFY_ELEMENT(stype, eoffset, etype, ename) do { \
|
|
|
|
BUILD_BUG_ON(offsetof(stype, ename) != eoffset); \
|
|
|
|
BUILD_BUG_ON(sizeof(etype) != sizeof_field(stype, ename)); \
|
|
|
|
} while (0)
|
|
|
|
|
|
|
|
#define BUILD_BUG_SQE_ELEM(eoffset, etype, ename) \
|
|
|
|
__BUILD_BUG_VERIFY_ELEMENT(struct io_uring_sqe, eoffset, etype, ename)
|
|
|
|
BUILD_BUG_ON(sizeof(struct io_uring_sqe) != 64);
|
|
|
|
BUILD_BUG_SQE_ELEM(0, __u8, opcode);
|
|
|
|
BUILD_BUG_SQE_ELEM(1, __u8, flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(2, __u16, ioprio);
|
|
|
|
BUILD_BUG_SQE_ELEM(4, __s32, fd);
|
|
|
|
BUILD_BUG_SQE_ELEM(8, __u64, off);
|
|
|
|
BUILD_BUG_SQE_ELEM(8, __u64, addr2);
|
|
|
|
BUILD_BUG_SQE_ELEM(16, __u64, addr);
|
2020-02-24 08:32:45 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(16, __u64, splice_off_in);
|
2020-01-29 13:39:41 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(24, __u32, len);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __kernel_rwf_t, rw_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, /* compat */ int, rw_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, /* compat */ __u32, rw_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, fsync_flags);
|
2020-06-17 09:53:55 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(28, /* compat */ __u16, poll_events);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, poll32_events);
|
2020-01-29 13:39:41 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, sync_range_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, msg_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, timeout_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, accept_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, cancel_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, open_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, statx_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, fadvise_advice);
|
2020-02-24 08:32:45 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, splice_flags);
|
2020-01-29 13:39:41 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(32, __u64, user_data);
|
|
|
|
BUILD_BUG_SQE_ELEM(40, __u16, buf_index);
|
2021-06-24 14:09:58 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(40, __u16, buf_group);
|
2020-01-29 13:39:41 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(42, __u16, personality);
|
2020-02-24 08:32:45 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(44, __s32, splice_fd_in);
|
2020-01-29 13:39:41 +00:00
|
|
|
|
2021-04-27 15:13:53 +00:00
|
|
|
BUILD_BUG_ON(sizeof(struct io_uring_files_update) !=
|
|
|
|
sizeof(struct io_uring_rsrc_update));
|
|
|
|
BUILD_BUG_ON(sizeof(struct io_uring_rsrc_update) >
|
|
|
|
sizeof(struct io_uring_rsrc_update2));
|
|
|
|
/* should fit into one byte */
|
|
|
|
BUILD_BUG_ON(SQE_VALID_FLAGS >= (1 << 8));
|
|
|
|
|
2019-12-18 16:50:26 +00:00
|
|
|
BUILD_BUG_ON(ARRAY_SIZE(io_op_defs) != IORING_OP_LAST);
|
2020-03-03 22:28:17 +00:00
|
|
|
BUILD_BUG_ON(__REQ_F_LAST_BIT >= 8 * sizeof(int));
|
2021-06-24 14:09:58 +00:00
|
|
|
|
2021-02-09 20:48:50 +00:00
|
|
|
req_cachep = KMEM_CACHE(io_kiocb, SLAB_HWCACHE_ALIGN | SLAB_PANIC |
|
|
|
|
SLAB_ACCOUNT);
|
2019-01-07 17:46:33 +00:00
|
|
|
return 0;
|
|
|
|
};
|
|
|
|
__initcall(io_uring_init);
|