Add io_uring IO interface

The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.

IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.

Two new system calls are added for this:

io_uring_setup(entries, params)
	Sets up an io_uring instance for doing async IO. On success,
	it returns a file descriptor that the application can mmap to
	gain access to the SQ ring, CQ ring, and io_uring_sqes.

io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
	Initiates IO against the rings mapped to this fd, waits for
	them to complete, or both. The behavior is controlled by the
	parameters passed in. If 'to_submit' is non-zero, the kernel
	will try to submit new IO. If IORING_ENTER_GETEVENTS is set,
	the kernel will wait for 'min_complete' events, if they aren't
	already available. It's valid to set IORING_ENTER_GETEVENTS
	with 'min_complete' == 0; this allows the kernel to return
	already completed events without waiting for them. That is
	only useful for polling, as for IRQ driven IO the application
	can just check the CQ ring without entering the kernel.

With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.

For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.

Each io_uring is backed by a workqueue, to support buffered async IO
as well. We only punt to an async context if the command would need
to wait for IO on the device side. Any data that can be accessed
directly in the page cache is served inline. This avoids the slowness
of the usual threadpool approach, since cached data is accessed as
quickly as with a sync interface.

Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c

Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
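For reference, a minimal userspace sketch of driving the two syscalls (not
part of this file). It assumes the uapi <linux/io_uring.h> header is
installed and uses raw syscall(2), since no libc wrappers exist yet; the
helper name is made up, error paths are trimmed, and the CQ/sqe mappings
and sqe setup are elided.

	#include <linux/io_uring.h>
	#include <sys/syscall.h>
	#include <sys/mman.h>
	#include <unistd.h>
	#include <string.h>

	static int setup_and_submit(void)
	{
		struct io_uring_params p;
		void *sq_ptr;
		int ring_fd;

		memset(&p, 0, sizeof(p));

		/* Ask for a 64-entry SQ; the kernel fills in ring offsets in 'p'. */
		ring_fd = syscall(__NR_io_uring_setup, 64, &p);
		if (ring_fd < 0)
			return -1;

		/* Map the SQ ring; p.sq_off locates head/tail/mask/array in it. */
		sq_ptr = mmap(NULL, p.sq_off.array + p.sq_entries * sizeof(__u32),
			      PROT_READ | PROT_WRITE, MAP_SHARED, ring_fd,
			      IORING_OFF_SQ_RING);
		if (sq_ptr == MAP_FAILED)
			return -1;

		/* ... map the CQ ring and sqe array, fill sqes, bump the tail ... */

		/* Submit one queued sqe and wait for at least one completion. */
		return syscall(__NR_io_uring_enter, ring_fd, 1, 1,
			       IORING_ENTER_GETEVENTS, NULL, 0);
	}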
// SPDX-License-Identifier: GPL-2.0
/*
 * Shared application/kernel submission and completion ring pairs, for
 * supporting fast/efficient IO.
 *
 * A note on the read/write ordering memory barriers that are matched between
 * the application and kernel side. When the application reads the CQ ring
 * tail, it must use an appropriate smp_rmb() to order with the smp_wmb()
 * the kernel uses after writing the tail. Failure to do so could cause a
 * delay in when the application notices that completion events are
 * available. This isn't a fatal condition. Likewise, the application must
 * use an appropriate smp_wmb() both before writing the SQ tail, and after
 * writing the SQ tail. The first one orders the sqe writes with the tail
 * write, and the latter is paired with the smp_rmb() the kernel will issue
 * before reading the SQ tail on submission.
 *
 * Also see the examples in the liburing library:
 *
 *	git://git.kernel.dk/liburing
 *
 * io_uring also uses READ/WRITE_ONCE() for _any_ store or load that happens
 * from data shared between the kernel and application. This is done both
 * for ordering purposes, but also to ensure that once a value is loaded from
 * data that the application could potentially modify, it remains stable.
 *
 * Copyright (C) 2018-2019 Jens Axboe
 * Copyright (c) 2018-2019 Christoph Hellwig
*/
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/syscalls.h>
#include <linux/compat.h>
#include <linux/refcount.h>
#include <linux/uio.h>

#include <linux/sched/signal.h>
#include <linux/fs.h>
#include <linux/file.h>
#include <linux/fdtable.h>
#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/mmu_context.h>
#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/workqueue.h>
io_uring: add submission polling

This enables an application to do IO without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.

By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:

	sq_ring->flags |= IORING_SQ_NEED_WAKEUP

The application will then have to call io_uring_enter() to start things
back up again. If IO is kept busy, that will never be needed. Basically
an application that has this feature enabled will guard its
io_uring_enter(2) call with:

	read_barrier();
	if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
		io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);

instead of calling it unconditionally.

It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.

Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
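Expanded into a self-contained application-side helper (a sketch only: a
GCC/C11 acquire fence stands in for the read_barrier() above, 'sq_flags' is
assumed to point at the flags field of the mmap'ed SQ ring, and the helper
name is made up):

	#include <linux/io_uring.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/*
	 * Call after publishing new sqes when IORING_SETUP_SQPOLL is in use.
	 * Only when the kernel-side submission thread has gone idle and set
	 * IORING_SQ_NEED_WAKEUP is a system call needed to kick it.
	 */
	static void kick_sq_thread_if_needed(int ring_fd, const unsigned *sq_flags)
	{
		/* Acquire fence in place of read_barrier() above. */
		__atomic_thread_fence(__ATOMIC_ACQUIRE);
		if (*sq_flags & IORING_SQ_NEED_WAKEUP)
			syscall(__NR_io_uring_enter, ring_fd, 0, 0,
				IORING_ENTER_SQ_WAKEUP, NULL, 0);
	}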
#include <linux/kthread.h>
#include <linux/blkdev.h>
io_uring: add support for pre-mapped user IO buffers

If we have fixed user buffers, we can map them into the kernel when we
set up the io_uring. That avoids the need to do get_user_pages() for
each and every IO.

To utilize this feature, the application must call io_uring_register()
after having set up an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and nr_args should contain how many iovecs the
application wishes to map.

If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.

The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.

It's perfectly valid to set up a larger buffer and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.

For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.

RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G cap per buffer is also imposed.

Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
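A sketch of registering a single fixed buffer from the application side
(illustration only; raw syscall(2) as before, the helper name is made up,
and error handling is trimmed):

	#include <linux/io_uring.h>
	#include <sys/syscall.h>
	#include <sys/uio.h>
	#include <stdlib.h>
	#include <unistd.h>

	static int register_one_buffer(int ring_fd, size_t len)
	{
		struct iovec iov;

		/* Anonymous memory only; file backed buffers are rejected
		 * with EOPNOTSUPP for now, and each buffer is capped at 1G. */
		iov.iov_base = malloc(len);
		iov.iov_len = len;
		if (!iov.iov_base)
			return -1;

		/* On success, IORING_OP_READ_FIXED/WRITE_FIXED sqes can
		 * reference this registered buffer by its index (0 here),
		 * with sqe->addr/len pointing inside the registered range. */
		return syscall(__NR_io_uring_register, ring_fd,
			       IORING_REGISTER_BUFFERS, &iov, 1);
	}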
#include <linux/bvec.h>
#include <linux/net.h>
#include <net/sock.h>
#include <net/af_unix.h>
#include <net/scm.h>
#include <linux/anon_inodes.h>
#include <linux/sched/mm.h>
#include <linux/uaccess.h>
#include <linux/nospec.h>
#include <linux/sizes.h>
#include <linux/hugetlb.h>

#include <uapi/linux/io_uring.h>

#include "internal.h"

#define IORING_MAX_ENTRIES	4096
#define IORING_MAX_FIXED_FILES	1024
struct io_uring {
	u32 head ____cacheline_aligned_in_smp;
	u32 tail ____cacheline_aligned_in_smp;
};

struct io_sq_ring {
	struct io_uring		r;
	u32			ring_mask;
	u32			ring_entries;
	u32			dropped;
	u32			flags;
	u32			array[];
};
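To make the indirection concrete, here is a simplified sketch of how the
kernel side can consume what the application published (illustration only,
not the file's actual submission helper): the application-written tail is
re-read behind the barrier pairing described in the header comment, and
the SQ ring entry is used as an index into the contiguous io_uring_sqe
array, so batched submissions need not be contiguous in the ring.

	/* Illustration only: not the real io_uring submission path. */
	static bool example_get_next_sqe_index(struct io_sq_ring *ring,
					       unsigned *cached_head,
					       u32 *sqe_index)
	{
		unsigned head = *cached_head;

		/* Pairs with the smp_wmb() the application issues after
		 * writing the SQ tail; see the comment atop this file. */
		smp_rmb();
		if (head == READ_ONCE(ring->r.tail))
			return false;	/* nothing new submitted */

		/* The SQ ring holds indices into the sqe array. */
		*sqe_index = READ_ONCE(ring->array[head & ring->ring_mask]);
		*cached_head = head + 1;
		return true;
	}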
struct io_cq_ring {
	struct io_uring		r;
	u32			ring_mask;
	u32			ring_entries;
	u32			overflow;
	struct io_uring_cqe	cqes[];
};
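The completion side mirrors this. A simplified sketch of posting a cqe
(again illustration only; it skips the overflow accounting the real path
does via ->overflow): the cqe fields are written first, then the new tail
is published behind a write barrier that pairs with the application's
read barrier.

	/* Illustration only: not the real io_uring completion path. */
	static void example_post_cqe(struct io_cq_ring *ring,
				     unsigned *cached_tail,
				     u64 user_data, s32 res)
	{
		unsigned tail = (*cached_tail)++;
		struct io_uring_cqe *cqe = &ring->cqes[tail & ring->ring_mask];

		cqe->user_data = user_data;
		cqe->res = res;
		cqe->flags = 0;

		/* Order the cqe stores before publishing the new tail; pairs
		 * with the smp_rmb() the application uses on the CQ side. */
		smp_wmb();
		WRITE_ONCE(ring->r.tail, *cached_tail);
	}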
struct io_mapped_ubuf {
	u64		ubuf;
	size_t		len;
	struct bio_vec	*bvec;
	unsigned int	nr_bvecs;
};

struct async_list {
	spinlock_t		lock;
	atomic_t		cnt;
	struct list_head	list;

	struct file		*file;
	off_t			io_end;
	size_t			io_pages;
};
struct io_ring_ctx {
	struct {
		struct percpu_ref	refs;
	} ____cacheline_aligned_in_smp;

	struct {
		unsigned int		flags;
		bool			compat;
		bool			account_mem;

		/* SQ ring */
		struct io_sq_ring	*sq_ring;
		unsigned		cached_sq_head;
		unsigned		sq_entries;
		unsigned		sq_mask;
		unsigned		sq_thread_idle;
		struct io_uring_sqe	*sq_sqes;
	} ____cacheline_aligned_in_smp;

	/* IO offload */
	struct workqueue_struct	*sqo_wq;
	struct task_struct	*sqo_thread;	/* if using sq thread polling */
	struct mm_struct	*sqo_mm;
	wait_queue_head_t	sqo_wait;
	unsigned		sqo_stop;

	struct {
		/* CQ ring */
		struct io_cq_ring	*cq_ring;
		unsigned		cached_cq_tail;
		unsigned		cq_entries;
		unsigned		cq_mask;
		struct wait_queue_head	cq_wait;
		struct fasync_struct	*cq_fasync;
	} ____cacheline_aligned_in_smp;

	/*
	 * If used, fixed file set. Writers must ensure that ->refs is dead,
	 * readers must ensure that ->refs is alive as long as the file* is
	 * used. Only updated through io_uring_register(2).
	 */
	struct file		**user_files;
	unsigned		nr_user_files;

	/* if used, fixed mapped user buffers */
	unsigned		nr_user_bufs;
	struct io_mapped_ubuf	*user_bufs;

	struct user_struct	*user;

	struct completion	ctx_done;

	struct {
		struct mutex		uring_lock;
		wait_queue_head_t	wait;
	} ____cacheline_aligned_in_smp;

	struct {
		spinlock_t		completion_lock;
		bool			poll_multi_file;
		/*
		 * ->poll_list is protected by the ctx->uring_lock for
		 * io_uring instances that don't use IORING_SETUP_SQPOLL.
		 * For SQPOLL, only the single threaded io_sq_thread() will
		 * manipulate the list, hence no extra locking is needed there.
		 */
		struct list_head	poll_list;
		struct list_head	cancel_list;
	} ____cacheline_aligned_in_smp;

	struct async_list	pending_async[2];

#if defined(CONFIG_UNIX)
	struct socket		*ring_sock;
#endif
};
|
|
|
|
|
|
|
|
struct sqe_submit {
|
|
|
|
const struct io_uring_sqe *sqe;
|
|
|
|
unsigned short index;
|
|
|
|
bool has_user;
|
2019-01-09 15:59:42 +00:00
|
|
|
bool needs_lock;
|
io_uring: add submission polling
This enables an application to do IO, without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can by changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard it's
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 18:22:30 +00:00
|
|
|
bool needs_fixed_file;
|
};

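/*
 * Per-request poll state: 'wait' is the entry queued on the target file's
 * waitqueue ('head'), 'events' is the mask of interest, and 'woken' and
 * 'canceled' record whether the entry has fired or been torn down.
 */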
struct io_poll_iocb {
	struct file			*file;
	struct wait_queue_head		*head;
	__poll_t			events;
	bool				woken;
	bool				canceled;
	struct wait_queue_entry		wait;
};

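/*
 * In-kernel state for one submitted request. The union reflects that a
 * request is either a read/write (kiocb) or a poll request, never both.
 */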
struct io_kiocb {
	union {
		struct kiocb		rw;
		struct io_poll_iocb	poll;
	};

	struct sqe_submit	submit;

	struct io_ring_ctx	*ctx;
	struct list_head	list;
	unsigned int		flags;
	refcount_t		refs;
#define REQ_F_FORCE_NONBLOCK	1	/* inline submission attempt */
#define REQ_F_IOPOLL_COMPLETED	2	/* polled IO has completed */
#define REQ_F_FIXED_FILE	4	/* ctx owns file */
#define REQ_F_SEQ_PREV		8	/* sequential with previous */
	u64			user_data;
	u64			error;

	struct work_struct	work;
};

#define IO_PLUG_THRESHOLD		2
#define IO_IOPOLL_BATCH			8

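/*
 * State kept across one batch of submissions: a block plug, a small cache
 * of preallocated io_kiocbs, and a file reference cache so back-to-back
 * requests against the same fd take the file reference only once.
 */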
struct io_submit_state {
	struct blk_plug		plug;

	/*
	 * io_kiocb alloc cache
	 */
	void			*reqs[IO_IOPOLL_BATCH];
	unsigned int		free_reqs;
	unsigned int		cur_req;

	/*
	 * File reference cache
	 */
	struct file		*file;
	unsigned int		fd;
	unsigned int		has_refs;
	unsigned int		used_refs;
	unsigned int		ios_left;
};

static struct kmem_cache *req_cachep;

static const struct file_operations io_uring_fops;

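/*
 * Map an io_uring fd back to the socket that backs the ring, so that other
 * parts of the kernel (notably the AF_UNIX garbage collector) can recognise
 * io_uring files. Returns NULL for anything that isn't an io_uring fd.
 */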
struct sock *io_uring_get_socket(struct file *file)
{
#if defined(CONFIG_UNIX)
	if (file->f_op == &io_uring_fops) {
		struct io_ring_ctx *ctx = file->private_data;

		return ctx->ring_sock->sk;
	}
#endif
	return NULL;
}
EXPORT_SYMBOL(io_uring_get_socket);

static void io_ring_ctx_ref_free(struct percpu_ref *ref)
{
	struct io_ring_ctx *ctx = container_of(ref, struct io_ring_ctx, refs);

	complete(&ctx->ctx_done);
}

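/*
 * Allocate and initialise a ring context. The percpu ref keeps the ctx
 * alive while requests are in flight; once the last reference is dropped,
 * io_ring_ctx_ref_free() completes ->ctx_done so teardown can proceed.
 */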
static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
{
	struct io_ring_ctx *ctx;
	int i;

	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return NULL;

	if (percpu_ref_init(&ctx->refs, io_ring_ctx_ref_free, 0, GFP_KERNEL)) {
		kfree(ctx);
		return NULL;
	}

	ctx->flags = p->flags;
	init_waitqueue_head(&ctx->cq_wait);
	init_completion(&ctx->ctx_done);
	mutex_init(&ctx->uring_lock);
	init_waitqueue_head(&ctx->wait);
	for (i = 0; i < ARRAY_SIZE(ctx->pending_async); i++) {
		spin_lock_init(&ctx->pending_async[i].lock);
		INIT_LIST_HEAD(&ctx->pending_async[i].list);
		atomic_set(&ctx->pending_async[i].cnt, 0);
	}
	spin_lock_init(&ctx->completion_lock);
	INIT_LIST_HEAD(&ctx->poll_list);
	INIT_LIST_HEAD(&ctx->cancel_list);
	return ctx;
}

static void io_commit_cqring(struct io_ring_ctx *ctx)
{
	struct io_cq_ring *ring = ctx->cq_ring;

	if (ctx->cached_cq_tail != READ_ONCE(ring->r.tail)) {
		/* order cqe stores with ring update */
		smp_store_release(&ring->r.tail, ctx->cached_cq_tail);

		/*
		 * Write side barrier of tail update, app has read side. See
		 * comment at the top of this file.
		 */
		smp_wmb();

		if (wq_has_sleeper(&ctx->cq_wait)) {
			wake_up_interruptible(&ctx->cq_wait);
			kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN);
		}
	}
}

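/*
 * Return the next free CQE slot, or NULL if the CQ ring is full. The new
 * tail is only made visible to userspace by io_commit_cqring().
 */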
static struct io_uring_cqe *io_get_cqring(struct io_ring_ctx *ctx)
{
	struct io_cq_ring *ring = ctx->cq_ring;
	unsigned tail;

	tail = ctx->cached_cq_tail;
	/* See comment at the top of the file */
	smp_rmb();
	if (tail + 1 == READ_ONCE(ring->r.head))
		return NULL;

	ctx->cached_cq_tail++;
	return &ring->cqes[tail & ctx->cq_mask];
}

static void io_cqring_fill_event(struct io_ring_ctx *ctx, u64 ki_user_data,
				 long res, unsigned ev_flags)
{
	struct io_uring_cqe *cqe;

	/*
	 * If we can't get a cq entry, userspace overflowed the
	 * submission (by quite a lot). Increment the overflow count in
	 * the ring.
	 */
	cqe = io_get_cqring(ctx);
	if (cqe) {
		WRITE_ONCE(cqe->user_data, ki_user_data);
		WRITE_ONCE(cqe->res, res);
		WRITE_ONCE(cqe->flags, ev_flags);
	} else {
		unsigned overflow = READ_ONCE(ctx->cq_ring->overflow);

		WRITE_ONCE(ctx->cq_ring->overflow, overflow + 1);
	}
}

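/*
 * Post a completion and publish it: takes ->completion_lock so it is safe
 * from any context, commits the CQ ring tail, and wakes up anyone waiting
 * in the kernel for events (including the submission-polling thread, if
 * the ring has one).
 */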
static void io_cqring_add_event(struct io_ring_ctx *ctx, u64 ki_user_data,
				long res, unsigned ev_flags)
{
	unsigned long flags;

	spin_lock_irqsave(&ctx->completion_lock, flags);
	io_cqring_fill_event(ctx, ki_user_data, res, ev_flags);
	io_commit_cqring(ctx);
	spin_unlock_irqrestore(&ctx->completion_lock, flags);

	if (waitqueue_active(&ctx->wait))
		wake_up(&ctx->wait);
	if (waitqueue_active(&ctx->sqo_wait))
		wake_up(&ctx->sqo_wait);
}

static void io_ring_drop_ctx_refs(struct io_ring_ctx *ctx, unsigned refs)
{
	percpu_ref_put_many(&ctx->refs, refs);

	if (waitqueue_active(&ctx->wait))
		wake_up(&ctx->wait);
}

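/*
 * Grab a free request. With a submit state we allocate io_kiocbs in bulk
 * and hand them out from the per-batch cache; without one we fall back to
 * a plain slab allocation. Every request starts with two references: one
 * dropped once submission is done, the other when the completion is posted.
 */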
static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx,
				   struct io_submit_state *state)
{
	struct io_kiocb *req;

	if (!percpu_ref_tryget(&ctx->refs))
		return NULL;

	if (!state) {
		req = kmem_cache_alloc(req_cachep, __GFP_NOWARN);
		if (unlikely(!req))
			goto out;
	} else if (!state->free_reqs) {
		size_t sz;
		int ret;

		sz = min_t(size_t, state->ios_left, ARRAY_SIZE(state->reqs));
		ret = kmem_cache_alloc_bulk(req_cachep, __GFP_NOWARN, sz,
						state->reqs);
		if (unlikely(ret <= 0))
			goto out;
		state->free_reqs = ret - 1;
		state->cur_req = 1;
		req = state->reqs[0];
	} else {
		req = state->reqs[state->cur_req];
		state->free_reqs--;
		state->cur_req++;
	}

	req->ctx = ctx;
	req->flags = 0;
	/* one is dropped after submission, the other at completion */
	refcount_set(&req->refs, 2);
	return req;
out:
	io_ring_drop_ctx_refs(ctx, 1);
	return NULL;
}

static void io_free_req_many(struct io_ring_ctx *ctx, void **reqs, int *nr)
{
	if (*nr) {
		kmem_cache_free_bulk(req_cachep, *nr, reqs);
		io_ring_drop_ctx_refs(ctx, *nr);
		*nr = 0;
	}
}

static void io_free_req(struct io_kiocb *req)
{
	io_ring_drop_ctx_refs(req->ctx, 1);
	kmem_cache_free(req_cachep, req);
}

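/*
 * Drop one reference to the request; the final put frees it and releases
 * the ctx reference it held.
 */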
static void io_put_req(struct io_kiocb *req)
{
	if (refcount_dec_and_test(&req->refs))
		io_free_req(req);
}

/*
 * Find and free completed poll iocbs
 */
static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
			       struct list_head *done)
{
	void *reqs[IO_IOPOLL_BATCH];
	int file_count, to_free;
	struct file *file = NULL;
	struct io_kiocb *req;

	file_count = to_free = 0;
	while (!list_empty(done)) {
		req = list_first_entry(done, struct io_kiocb, list);
		list_del(&req->list);

		io_cqring_fill_event(ctx, req->user_data, req->error, 0);

		if (refcount_dec_and_test(&req->refs))
			reqs[to_free++] = req;
		(*nr_events)++;

		/*
		 * Batched puts of the same file, to avoid dirtying the
		 * file usage count multiple times, if avoidable.
		 */
		if (!(req->flags & REQ_F_FIXED_FILE)) {
			if (!file) {
				file = req->rw.ki_filp;
				file_count = 1;
			} else if (file == req->rw.ki_filp) {
				file_count++;
			} else {
				fput_many(file, file_count);
				file = req->rw.ki_filp;
				file_count = 1;
			}
		}

		if (to_free == ARRAY_SIZE(reqs))
			io_free_req_many(ctx, reqs, &to_free);
	}
	io_commit_cqring(ctx);

	if (file)
		fput_many(file, file_count);
	io_free_req_many(ctx, reqs, &to_free);
}

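/*
 * One pass over the iopoll list: requests the driver already completed are
 * moved to a local done list and reaped, everything else is actively polled
 * via ->iopoll(). 'spin' tells the driver whether it may busy-wait for the
 * next completion.
 */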
static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
			long min)
{
	struct io_kiocb *req, *tmp;
	LIST_HEAD(done);
	bool spin;
	int ret;

	/*
	 * Only spin for completions if we don't have multiple devices hanging
	 * off our complete list, and we're under the requested amount.
	 */
	spin = !ctx->poll_multi_file && *nr_events < min;

	ret = 0;
	list_for_each_entry_safe(req, tmp, &ctx->poll_list, list) {
		struct kiocb *kiocb = &req->rw;

		/*
		 * Move completed entries to our local list. If we find a
		 * request that requires polling, break out and complete
		 * the done list first, if we have entries there.
		 */
		if (req->flags & REQ_F_IOPOLL_COMPLETED) {
			list_move_tail(&req->list, &done);
			continue;
		}
		if (!list_empty(&done))
			break;

		ret = kiocb->ki_filp->f_op->iopoll(kiocb, spin);
		if (ret < 0)
			break;

		if (ret && spin)
			spin = false;
		ret = 0;
	}

	if (!list_empty(&done))
		io_iopoll_complete(ctx, nr_events, &done);

	return ret;
}

/*
 * Poll for a minimum of 'min' events. Note that if min == 0 we consider that a
 * non-spinning poll check - we'll still enter the driver poll loop, but only
 * as a non-spinning completion check.
 */
static int io_iopoll_getevents(struct io_ring_ctx *ctx, unsigned int *nr_events,
				long min)
{
	while (!list_empty(&ctx->poll_list)) {
		int ret;

		ret = io_do_iopoll(ctx, nr_events, min);
		if (ret < 0)
			return ret;
		if (!min || *nr_events >= min)
			return 0;
	}

	return 1;
}

/*
 * We can't just wait for polled events to come to us, we have to actively
 * find and complete them.
 */
static void io_iopoll_reap_events(struct io_ring_ctx *ctx)
{
	if (!(ctx->flags & IORING_SETUP_IOPOLL))
		return;

	mutex_lock(&ctx->uring_lock);
	while (!list_empty(&ctx->poll_list)) {
		unsigned int nr_events = 0;

		io_iopoll_getevents(ctx, &nr_events, 1);
	}
	mutex_unlock(&ctx->uring_lock);
}

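/*
 * Keep polling until at least one completion has been found (getevents
 * itself keeps going until 'min' are reaped), or until we need to give up
 * the CPU and reschedule.
 */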
static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
			   long min)
{
	int ret = 0;

	do {
		int tmin = 0;

		if (*nr_events < min)
			tmin = min - *nr_events;

		ret = io_iopoll_getevents(ctx, nr_events, tmin);
		if (ret <= 0)
			break;
		ret = 0;
	} while (min && !*nr_events && !need_resched());

	return ret;
}


static void kiocb_end_write(struct kiocb *kiocb)
{
	if (kiocb->ki_flags & IOCB_WRITE) {
		struct inode *inode = file_inode(kiocb->ki_filp);

		/*
		 * Tell lockdep we inherited freeze protection from submission
		 * thread.
		 */
		if (S_ISREG(inode->i_mode))
			__sb_writers_acquired(inode->i_sb, SB_FREEZE_WRITE);
		file_end_write(kiocb->ki_filp);
	}
}

static void io_fput(struct io_kiocb *req)
{
	if (!(req->flags & REQ_F_FIXED_FILE))
		fput(req->rw.ki_filp);
}

static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
{
	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw);

	kiocb_end_write(kiocb);

	io_fput(req);
	io_cqring_add_event(req->ctx, req->user_data, res, 0);
	io_put_req(req);
}

static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
{
	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw);

	kiocb_end_write(kiocb);

	req->error = res;
	if (res != -EAGAIN)
		req->flags |= REQ_F_IOPOLL_COMPLETED;
}

/*
 * After the iocb has been issued, it's safe to be found on the poll list.
 * Adding the kiocb to the list AFTER submission ensures that we don't
 * find it from a io_iopoll_getevents() thread before the issuer is done
 * accessing the kiocb cookie.
 */
static void io_iopoll_req_issued(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;

	/*
	 * Track whether we have multiple files in our lists. This will impact
	 * how we do polling eventually, not spinning if we're on potentially
	 * different devices.
	 */
	if (list_empty(&ctx->poll_list)) {
		ctx->poll_multi_file = false;
	} else if (!ctx->poll_multi_file) {
		struct io_kiocb *list_req;

		list_req = list_first_entry(&ctx->poll_list, struct io_kiocb,
						list);
		if (list_req->rw.ki_filp != req->rw.ki_filp)
			ctx->poll_multi_file = true;
	}

	/*
	 * For fast devices, IO may have already completed. If it has, add
	 * it to the front so we find it first.
	 */
	if (req->flags & REQ_F_IOPOLL_COMPLETED)
		list_add(&req->list, &ctx->poll_list);
	else
		list_add_tail(&req->list, &ctx->poll_list);
}

static void io_file_put(struct io_submit_state *state, struct file *file)
{
	if (!state) {
		fput(file);
	} else if (state->file) {
		int diff = state->has_refs - state->used_refs;

		if (diff)
			fput_many(state->file, diff);
		state->file = NULL;
	}
}

/*
 * Get as many references to a file as we have IOs left in this submission,
 * assuming most submissions are for one file, or at least that each file
 * has more than one submission.
 */
static struct file *io_file_get(struct io_submit_state *state, int fd)
{
	if (!state)
		return fget(fd);

	if (state->file) {
		if (state->fd == fd) {
			state->used_refs++;
			state->ios_left--;
			return state->file;
		}
		io_file_put(state, NULL);
	}
	state->file = fget_many(fd, state->ios_left);
	if (!state->file)
		return NULL;

	state->fd = fd;
	state->has_refs = state->ios_left;
	state->used_refs = 1;
	state->ios_left--;
	return state->file;
}

/*
 * If we tracked the file through the SCM inflight mechanism, we could support
 * any file. For now, just ensure that anything potentially problematic is done
 * inline.
 */
static bool io_file_supports_async(struct file *file)
{
	umode_t mode = file_inode(file)->i_mode;

	if (S_ISBLK(mode) || S_ISCHR(mode))
		return true;
	if (S_ISREG(mode) && file->f_op != &io_uring_fops)
		return true;

	return false;
}

io_uring: add submission polling
This enables an application to do IO, without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard its
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
	io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 18:22:30 +00:00
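
As a rough sketch of the guarded call described above (not part of this patch), the fragment below shows one way a userspace submitter might check the wakeup flag before entering the kernel. It assumes the SQ ring has already been mmap'ed and that sq_flags points at the ring's flags word (located via io_sqring_offsets.flags from io_uring_setup()); ring_fd, sq_flags and maybe_wake_sq_thread() are illustrative names, and the io_uring syscall number is assumed to be exposed through <sys/syscall.h>.

/*
 * Illustrative sketch only: enter the kernel solely when the SQ poll
 * thread has gone idle and set IORING_SQ_NEED_WAKEUP.
 */
#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

static void maybe_wake_sq_thread(int ring_fd, const unsigned *sq_flags)
{
	/* acquire load stands in for the read_barrier() mentioned above */
	unsigned flags = __atomic_load_n(sq_flags, __ATOMIC_ACQUIRE);

	if (flags & IORING_SQ_NEED_WAKEUP)
		syscall(__NR_io_uring_enter, ring_fd, 0, 0,
			IORING_ENTER_SQ_WAKEUP, NULL, 0);
}

With submissions kept busy the flag stays clear and the syscall is skipped entirely, which is what makes fully syscall-free IO possible.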

static int io_prep_rw(struct io_kiocb *req, const struct sqe_submit *s,
		      bool force_nonblock, struct io_submit_state *state)
{
	const struct io_uring_sqe *sqe = s->sqe;
	struct io_ring_ctx *ctx = req->ctx;
	struct kiocb *kiocb = &req->rw;
	unsigned ioprio, flags;
	int fd, ret;

	/* For -EAGAIN retry, everything is already prepped */
	if (kiocb->ki_filp)
		return 0;

	flags = READ_ONCE(sqe->flags);
	fd = READ_ONCE(sqe->fd);

	if (flags & IOSQE_FIXED_FILE) {
		if (unlikely(!ctx->user_files ||
		    (unsigned) fd >= ctx->nr_user_files))
			return -EBADF;
		kiocb->ki_filp = ctx->user_files[fd];
		req->flags |= REQ_F_FIXED_FILE;
	} else {
		if (s->needs_fixed_file)
			return -EBADF;
		kiocb->ki_filp = io_file_get(state, fd);
		if (unlikely(!kiocb->ki_filp))
			return -EBADF;
		if (force_nonblock && !io_file_supports_async(kiocb->ki_filp))
			force_nonblock = false;
	}
	kiocb->ki_pos = READ_ONCE(sqe->off);
	kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
	kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));

	ioprio = READ_ONCE(sqe->ioprio);
	if (ioprio) {
		ret = ioprio_check_cap(ioprio);
		if (ret)
			goto out_fput;

		kiocb->ki_ioprio = ioprio;
	} else
		kiocb->ki_ioprio = get_current_ioprio();

	ret = kiocb_set_rw_flags(kiocb, READ_ONCE(sqe->rw_flags));
	if (unlikely(ret))
		goto out_fput;
	if (force_nonblock) {
		kiocb->ki_flags |= IOCB_NOWAIT;
		req->flags |= REQ_F_FORCE_NONBLOCK;
	}
	if (ctx->flags & IORING_SETUP_IOPOLL) {
		ret = -EOPNOTSUPP;
		if (!(kiocb->ki_flags & IOCB_DIRECT) ||
		    !kiocb->ki_filp->f_op->iopoll)
			goto out_fput;

		req->error = 0;
		kiocb->ki_flags |= IOCB_HIPRI;
		kiocb->ki_complete = io_complete_rw_iopoll;
	} else {
		if (kiocb->ki_flags & IOCB_HIPRI) {
			ret = -EINVAL;
			goto out_fput;
		}
		kiocb->ki_complete = io_complete_rw;
	}
	return 0;
out_fput:
	if (!(flags & IOSQE_FIXED_FILE)) {
		/*
		 * in case of error, we didn't use this file reference. drop it.
		 */
		if (state)
			state->used_refs--;
		io_file_put(state, kiocb->ki_filp);
	}
	return ret;
}

static inline void io_rw_done(struct kiocb *kiocb, ssize_t ret)
{
	switch (ret) {
	case -EIOCBQUEUED:
		break;
	case -ERESTARTSYS:
	case -ERESTARTNOINTR:
	case -ERESTARTNOHAND:
	case -ERESTART_RESTARTBLOCK:
		/*
		 * We can't just restart the syscall, since previously
		 * submitted sqes may already be in progress. Just fail this
		 * IO with EINTR.
		 */
		ret = -EINTR;
		/* fall through */
	default:
		kiocb->ki_complete(kiocb, ret, 0);
	}
}

io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
setup the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having set up an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->buf_index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to set up a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per buffer size is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-09 16:16:05 +00:00
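
The sketch below is a minimal illustration of the userspace side described above, not code from this patch: it registers a single buffer and fills in an sqe for IORING_OP_READ_FIXED. register_one_buffer() and prep_read_fixed() are made-up helper names; ring_fd is assumed to be an fd returned by io_uring_setup(), sqe to point at a free entry in the mmap'ed SQE array, and the register syscall number to be available via <sys/syscall.h>.

/*
 * Illustrative sketch only: register one fixed buffer, then describe a
 * read into it using the fixed-buffer opcode.
 */
#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <string.h>
#include <unistd.h>

static int register_one_buffer(int ring_fd, void *buf, size_t len)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };

	/* pins the pages once, instead of get_user_pages() per IO */
	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_BUFFERS, &iov, 1);
}

static void prep_read_fixed(struct io_uring_sqe *sqe, int file_fd,
			    void *buf, unsigned len, off_t offset)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_READ_FIXED;
	sqe->fd = file_fd;
	sqe->off = offset;
	sqe->addr = (unsigned long) buf;	/* must lie inside the registered buffer */
	sqe->len = len;
	sqe->buf_index = 0;			/* index into the registered iovec array */
}

The kernel resolves buf_index through the registered buffer table, as io_import_fixed() below shows, so addr and len only need to fall inside the originally registered range.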

static int io_import_fixed(struct io_ring_ctx *ctx, int rw,
			   const struct io_uring_sqe *sqe,
			   struct iov_iter *iter)
{
	size_t len = READ_ONCE(sqe->len);
	struct io_mapped_ubuf *imu;
	unsigned index, buf_index;
	size_t offset;
	u64 buf_addr;

	/* attempt to use fixed buffers without having provided iovecs */
	if (unlikely(!ctx->user_bufs))
		return -EFAULT;

	buf_index = READ_ONCE(sqe->buf_index);
	if (unlikely(buf_index >= ctx->nr_user_bufs))
		return -EFAULT;

	index = array_index_nospec(buf_index, ctx->nr_user_bufs);
	imu = &ctx->user_bufs[index];
	buf_addr = READ_ONCE(sqe->addr);

	/* overflow */
	if (buf_addr + len < buf_addr)
		return -EFAULT;
	/* not inside the mapped region */
	if (buf_addr < imu->ubuf || buf_addr + len > imu->ubuf + imu->len)
		return -EFAULT;

	/*
	 * May not be a start of buffer, set size appropriately
	 * and advance us to the beginning.
	 */
	offset = buf_addr - imu->ubuf;
	iov_iter_bvec(iter, rw, imu->bvec, imu->nr_bvecs, offset + len);
	if (offset)
		iov_iter_advance(iter, offset);
	return 0;
}

static int io_import_iovec(struct io_ring_ctx *ctx, int rw,
			   const struct sqe_submit *s, struct iovec **iovec,
			   struct iov_iter *iter)
{
	const struct io_uring_sqe *sqe = s->sqe;
	void __user *buf = u64_to_user_ptr(READ_ONCE(sqe->addr));
	size_t sqe_len = READ_ONCE(sqe->len);
	u8 opcode;

	/*
	 * We're reading ->opcode for the second time, but the first read
	 * doesn't care whether it's _FIXED or not, so it doesn't matter
	 * whether ->opcode changes concurrently. The first read does care
	 * about whether it is a READ or a WRITE, so we don't trust this read
	 * for that purpose and instead let the caller pass in the read/write
	 * flag.
	 */
	opcode = READ_ONCE(sqe->opcode);
	if (opcode == IORING_OP_READ_FIXED ||
	    opcode == IORING_OP_WRITE_FIXED) {
		ssize_t ret = io_import_fixed(ctx, rw, sqe, iter);
		*iovec = NULL;
		return ret;
	}

	if (!s->has_user)
		return -EFAULT;

#ifdef CONFIG_COMPAT
	if (ctx->compat)
		return compat_import_iovec(rw, buf, sqe_len, UIO_FASTIOV,
						iovec, iter);
#endif

	return import_iovec(rw, buf, sqe_len, UIO_FASTIOV, iovec, iter);
}

/*
 * Make a note of the last file/offset/direction we punted to async
 * context. We'll use this information to see if we can piggy back a
 * sequential request onto the previous one, if it still hasn't been
 * completed by the async worker.
 */
static void io_async_list_note(int rw, struct io_kiocb *req, size_t len)
{
	struct async_list *async_list = &req->ctx->pending_async[rw];
	struct kiocb *kiocb = &req->rw;
	struct file *filp = kiocb->ki_filp;
	off_t io_end = kiocb->ki_pos + len;

	if (filp == async_list->file && kiocb->ki_pos == async_list->io_end) {
		unsigned long max_pages;

		/* Use 8x RA size as a decent limiter for both reads/writes */
		max_pages = filp->f_ra.ra_pages;
		if (!max_pages)
			max_pages = VM_MAX_READAHEAD >> (PAGE_SHIFT - 10);
		max_pages *= 8;

		/* If max pages are exceeded, reset the state */
		len >>= PAGE_SHIFT;
		if (async_list->io_pages + len <= max_pages) {
			req->flags |= REQ_F_SEQ_PREV;
			async_list->io_pages += len;
		} else {
			io_end = 0;
			async_list->io_pages = 0;
		}
	}

	/* New file? Reset state. */
	if (async_list->file != filp) {
		async_list->io_pages = 0;
		async_list->file = filp;
	}
	async_list->io_end = io_end;
}
|
|
|
static ssize_t io_read(struct io_kiocb *req, const struct sqe_submit *s,
		       bool force_nonblock, struct io_submit_state *state)
{
	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
	struct kiocb *kiocb = &req->rw;
	struct iov_iter iter;
	struct file *file;
	size_t iov_count;
	ssize_t ret;

	ret = io_prep_rw(req, s, force_nonblock, state);
	if (ret)
		return ret;
	file = kiocb->ki_filp;

	ret = -EBADF;
	if (unlikely(!(file->f_mode & FMODE_READ)))
		goto out_fput;
	ret = -EINVAL;
	if (unlikely(!file->f_op->read_iter))
		goto out_fput;

	ret = io_import_iovec(req->ctx, READ, s, &iovec, &iter);
	if (ret)
		goto out_fput;

	iov_count = iov_iter_count(&iter);
	ret = rw_verify_area(READ, file, &kiocb->ki_pos, iov_count);
	if (!ret) {
		ssize_t ret2;

		/* Catch -EAGAIN return for forced non-blocking submission */
		ret2 = call_read_iter(file, kiocb, &iter);
		if (!force_nonblock || ret2 != -EAGAIN) {
			io_rw_done(kiocb, ret2);
		} else {
			/*
			 * If ->needs_lock is true, we're already in async
			 * context.
			 */
			if (!s->needs_lock)
				io_async_list_note(READ, req, iov_count);
			ret = -EAGAIN;
		}
	}
	kfree(iovec);
out_fput:
	/* Hold on to the file for -EAGAIN */
	if (unlikely(ret && ret != -EAGAIN))
		io_fput(req);
	return ret;
}

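For reference, here is a minimal userspace sketch of the sqe that io_read() consumes; io_write() takes the same layout with IORING_OP_WRITEV. Field names come from the io_uring_sqe definition added by this series, while the helper name and the assumption that 'sqe' already points at a free slot in the mmap'ed SQE array are illustrative only.

#include <string.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <linux/io_uring.h>

static void prep_readv_sqe(struct io_uring_sqe *sqe, int fd,
			   const struct iovec *iovecs, unsigned nr_vecs,
			   off_t offset, unsigned long long tag)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_READV;
	sqe->fd = fd;
	sqe->addr = (unsigned long) iovecs;	/* iovec array; io_import_iovec() walks it */
	sqe->len = nr_vecs;			/* number of iovecs, not bytes */
	sqe->off = offset;			/* file offset used for kiocb->ki_pos */
	sqe->user_data = tag;			/* echoed back verbatim in the matching cqe */
}
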
static ssize_t io_write(struct io_kiocb *req, const struct sqe_submit *s,
			bool force_nonblock, struct io_submit_state *state)
{
	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
	struct kiocb *kiocb = &req->rw;
	struct iov_iter iter;
	struct file *file;
	size_t iov_count;
	ssize_t ret;

	ret = io_prep_rw(req, s, force_nonblock, state);
	if (ret)
		return ret;

	ret = -EBADF;
	file = kiocb->ki_filp;
	if (unlikely(!(file->f_mode & FMODE_WRITE)))
		goto out_fput;
	ret = -EINVAL;
	if (unlikely(!file->f_op->write_iter))
		goto out_fput;

	ret = io_import_iovec(req->ctx, WRITE, s, &iovec, &iter);
	if (ret)
		goto out_fput;

	iov_count = iov_iter_count(&iter);

	ret = -EAGAIN;
	if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT)) {
		/* If ->needs_lock is true, we're already in async context. */
		if (!s->needs_lock)
			io_async_list_note(WRITE, req, iov_count);
		goto out_free;
	}

	ret = rw_verify_area(WRITE, file, &kiocb->ki_pos, iov_count);
	if (!ret) {
		/*
		 * Open-code file_start_write here to grab freeze protection,
		 * which will be released by another thread in
		 * io_complete_rw(). Fool lockdep by telling it the lock got
		 * released so that it doesn't complain about the held lock when
		 * we return to userspace.
		 */
		if (S_ISREG(file_inode(file)->i_mode)) {
			__sb_start_write(file_inode(file)->i_sb,
						SB_FREEZE_WRITE, true);
			__sb_writers_release(file_inode(file)->i_sb,
						SB_FREEZE_WRITE);
		}
		kiocb->ki_flags |= IOCB_WRITE;
		io_rw_done(kiocb, call_write_iter(file, kiocb, &iter));
	}
out_free:
	kfree(iovec);
out_fput:
	/* Hold on to the file for -EAGAIN */
	if (unlikely(ret && ret != -EAGAIN))
		io_fput(req);
	return ret;
}

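When kernel-side submission polling is enabled, the application normally never calls io_uring_enter(2): it fills sqes and advances the SQ tail, and only enters the kernel if the polling thread has idled out and set IORING_SQ_NEED_WAKEUP in the SQ ring flags (that mode also requires fixed, pre-registered files; otherwise submissions complete with -EBADF). A sketch of the wakeup guard, assuming sq_flags points at the mmap'ed SQ ring flags word and the syscall number comes from the installed headers; the barrier is shown with a GCC builtin:

#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

static void wake_sq_thread_if_needed(int ring_fd, const unsigned *sq_flags)
{
	__atomic_thread_fence(__ATOMIC_ACQUIRE);	/* observe the flag set after our SQ tail writes */
	if (*sq_flags & IORING_SQ_NEED_WAKEUP)
		syscall(__NR_io_uring_enter, ring_fd, 0, 0,
			IORING_ENTER_SQ_WAKEUP, NULL, 0);
}
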
/*
 * IORING_OP_NOP just posts a completion event, nothing else.
 */
static int io_nop(struct io_kiocb *req, u64 user_data)
{
	struct io_ring_ctx *ctx = req->ctx;
	long err = 0;

	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
		return -EINVAL;

	/*
	 * Twilight zone - it's possible that someone issued an opcode that
	 * has a file attached, then got -EAGAIN on submission, and changed
	 * the sqe before we retried it from async context. Avoid dropping
	 * a file reference for this malicious case, and flag the error.
	 */
	if (req->rw.ki_filp) {
		err = -EBADF;
		io_fput(req);
	}
	io_cqring_add_event(ctx, user_data, err, 0);
	io_put_req(req);
	return 0;
}

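Since IORING_OP_NOP only posts a completion, it is a convenient way to exercise the CQ ring from userspace. A hedged sketch of reaping one completion; the pointer parameters are assumed to point into the CQ ring mmap'ed from the ring fd, and the barrier pairing follows the description in the header comment at the top of this file:

#include <linux/io_uring.h>

static int reap_one_cqe(unsigned *cq_head, const unsigned *cq_tail,
			const unsigned *cq_ring_mask,
			const struct io_uring_cqe *cqes,
			unsigned long long *user_data, int *res)
{
	unsigned head = *cq_head;
	const struct io_uring_cqe *cqe;

	/* acquire pairs with the kernel's barrier after it updates the tail */
	if (head == __atomic_load_n(cq_tail, __ATOMIC_ACQUIRE))
		return 0;			/* nothing completed yet */

	cqe = &cqes[head & *cq_ring_mask];
	*user_data = cqe->user_data;		/* matches the submitted sqe */
	*res = cqe->res;			/* byte count or -errno; 0 for a NOP */

	/* publish the new head so the kernel may reuse the slot */
	__atomic_store_n(cq_head, head + 1, __ATOMIC_RELEASE);
	return 1;
}
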
static int io_prep_fsync(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
	struct io_ring_ctx *ctx = req->ctx;
	unsigned flags;
	int fd;

	/* Prep already done */
	if (req->rw.ki_filp)
		return 0;

	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
		return -EINVAL;
	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index))
		return -EINVAL;

	fd = READ_ONCE(sqe->fd);
	flags = READ_ONCE(sqe->flags);

	if (flags & IOSQE_FIXED_FILE) {
		if (unlikely(!ctx->user_files || fd >= ctx->nr_user_files))
			return -EBADF;
		req->rw.ki_filp = ctx->user_files[fd];
		req->flags |= REQ_F_FIXED_FILE;
	} else {
		req->rw.ki_filp = fget(fd);
		if (unlikely(!req->rw.ki_filp))
			return -EBADF;
	}

	return 0;
}

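io_prep_fsync() above rejects sqe->buf_index because fixed buffers play no role in an fsync. Elsewhere in this series an application can pre-map buffers with io_uring_register(2) and IORING_REGISTER_BUFFERS, then issue IORING_OP_READ_FIXED / IORING_OP_WRITE_FIXED with sqe->buf_index selecting the registered buffer. A minimal registration sketch, using the raw syscall with the number taken from the installed headers:

#include <sys/uio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

/* Register one pre-mapped buffer; *_FIXED sqes then select it with buf_index 0. */
static int register_one_buffer(int ring_fd, void *buf, size_t len)
{
	struct iovec iov = {
		.iov_base = buf,
		.iov_len  = len,
	};

	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_BUFFERS, &iov, 1);
}
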
static int io_fsync(struct io_kiocb *req, const struct io_uring_sqe *sqe,
		    bool force_nonblock)
{
	loff_t sqe_off = READ_ONCE(sqe->off);
	loff_t sqe_len = READ_ONCE(sqe->len);
	loff_t end = sqe_off + sqe_len;
	unsigned fsync_flags;
	int ret;

	fsync_flags = READ_ONCE(sqe->fsync_flags);
	if (unlikely(fsync_flags & ~IORING_FSYNC_DATASYNC))
		return -EINVAL;

	ret = io_prep_fsync(req, sqe);
	if (ret)
		return ret;

	/* fsync always requires a blocking context */
	if (force_nonblock)
		return -EAGAIN;

	ret = vfs_fsync_range(req->rw.ki_filp, sqe_off,
				end > 0 ? end : LLONG_MAX,
				fsync_flags & IORING_FSYNC_DATASYNC);

	io_fput(req);
	io_cqring_add_event(req->ctx, sqe->user_data, ret, 0);
	io_put_req(req);
	return 0;
}

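A matching userspace sketch for the fsync handler above, under the same assumptions as the earlier sqe helpers; leaving off/len at zero means the whole file, per the LLONG_MAX fallback in io_fsync():

#include <string.h>
#include <linux/io_uring.h>

static void prep_fsync_sqe(struct io_uring_sqe *sqe, int fd, int datasync,
			   unsigned long long tag)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_FSYNC;
	sqe->fd = fd;
	/* off/len left at 0: io_fsync() then syncs to end of file */
	if (datasync)
		sqe->fsync_flags = IORING_FSYNC_DATASYNC;
	sqe->user_data = tag;
}
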
static void io_poll_remove_one(struct io_kiocb *req)
{
	struct io_poll_iocb *poll = &req->poll;

	spin_lock(&poll->head->lock);
	WRITE_ONCE(poll->canceled, true);
	if (!list_empty(&poll->wait.entry)) {
		list_del_init(&poll->wait.entry);
		queue_work(req->ctx->sqo_wq, &req->work);
	}
	spin_unlock(&poll->head->lock);

	list_del_init(&req->list);
}

static void io_poll_remove_all(struct io_ring_ctx *ctx)
{
	struct io_kiocb *req;

	spin_lock_irq(&ctx->completion_lock);
	while (!list_empty(&ctx->cancel_list)) {
		req = list_first_entry(&ctx->cancel_list, struct io_kiocb, list);
		io_poll_remove_one(req);
	}
	spin_unlock_irq(&ctx->completion_lock);
}

/*
 * Find a running poll command that matches one specified in sqe->addr,
 * and remove it if found.
 */
static int io_poll_remove(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
	struct io_ring_ctx *ctx = req->ctx;
	struct io_kiocb *poll_req, *next;
	int ret = -ENOENT;

	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
		return -EINVAL;
	if (sqe->ioprio || sqe->off || sqe->len || sqe->buf_index ||
	    sqe->poll_events)
		return -EINVAL;

	spin_lock_irq(&ctx->completion_lock);
	list_for_each_entry_safe(poll_req, next, &ctx->cancel_list, list) {
		if (READ_ONCE(sqe->addr) == poll_req->user_data) {
			io_poll_remove_one(poll_req);
			ret = 0;
			break;
		}
	}
	spin_unlock_irq(&ctx->completion_lock);

	io_cqring_add_event(req->ctx, sqe->user_data, ret, 0);
	io_put_req(req);
	return 0;
}

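io_poll_remove() keys the cancellation purely off sqe->addr, so userspace only has to echo the user_data of the poll request it wants to drop. A hedged sketch, same assumptions as above:

#include <string.h>
#include <linux/io_uring.h>

static void prep_poll_remove_sqe(struct io_uring_sqe *sqe,
				 unsigned long long poll_tag,
				 unsigned long long tag)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_POLL_REMOVE;
	sqe->addr = poll_tag;		/* user_data of the poll request to cancel */
	sqe->user_data = tag;		/* completion tag for the remove itself */
}
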
static void io_poll_complete(struct io_kiocb *req, __poll_t mask)
|
|
|
|
{
|
|
|
|
io_cqring_add_event(req->ctx, req->user_data, mangle_poll(mask), 0);
|
|
|
|
io_fput(req);
|
2019-03-12 16:16:44 +00:00
|
|
|
io_put_req(req);
|
2019-01-17 16:41:58 +00:00
|
|
|
}

static void io_poll_complete_work(struct work_struct *work)
{
        struct io_kiocb *req = container_of(work, struct io_kiocb, work);
        struct io_poll_iocb *poll = &req->poll;
        struct poll_table_struct pt = { ._key = poll->events };
        struct io_ring_ctx *ctx = req->ctx;
        __poll_t mask = 0;

        if (!READ_ONCE(poll->canceled))
                mask = vfs_poll(poll->file, &pt) & poll->events;

        /*
         * Note that ->ki_cancel callers also delete iocb from active_reqs after
         * calling ->ki_cancel. We need the ctx_lock roundtrip here to
         * synchronize with them. In the cancellation case the list_del_init
         * itself is not actually needed, but harmless so we keep it in to
         * avoid further branches in the fast path.
         */
        spin_lock_irq(&ctx->completion_lock);
        if (!mask && !READ_ONCE(poll->canceled)) {
                add_wait_queue(poll->head, &poll->wait);
                spin_unlock_irq(&ctx->completion_lock);
                return;
        }
        list_del_init(&req->list);
        spin_unlock_irq(&ctx->completion_lock);

        io_poll_complete(req, mask);
}

static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
                        void *key)
{
        struct io_poll_iocb *poll = container_of(wait, struct io_poll_iocb,
                                                        wait);
        struct io_kiocb *req = container_of(poll, struct io_kiocb, poll);
        struct io_ring_ctx *ctx = req->ctx;
        __poll_t mask = key_to_poll(key);

        poll->woken = true;

        /* for instances that support it check for an event match first: */
        if (mask) {
                unsigned long flags;

                if (!(mask & poll->events))
                        return 0;

                /* try to complete the iocb inline if we can: */
                if (spin_trylock_irqsave(&ctx->completion_lock, flags)) {
                        list_del(&req->list);
                        spin_unlock_irqrestore(&ctx->completion_lock, flags);

                        list_del_init(&poll->wait.entry);
                        io_poll_complete(req, mask);
                        return 1;
                }
        }

        list_del_init(&poll->wait.entry);
        queue_work(ctx->sqo_wq, &req->work);
        return 1;
}

struct io_poll_table {
        struct poll_table_struct pt;
        struct io_kiocb *req;
        int error;
};

static void io_poll_queue_proc(struct file *file, struct wait_queue_head *head,
                               struct poll_table_struct *p)
{
        struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);

        if (unlikely(pt->req->poll.head)) {
                pt->error = -EINVAL;
                return;
        }

        pt->error = 0;
        pt->req->poll.head = head;
        add_wait_queue(head, &pt->req->poll.wait);
}

static int io_poll_add(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
        struct io_poll_iocb *poll = &req->poll;
        struct io_ring_ctx *ctx = req->ctx;
        struct io_poll_table ipt;
        unsigned flags;
        __poll_t mask;
        u16 events;
        int fd;

        if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
                return -EINVAL;
        if (sqe->addr || sqe->ioprio || sqe->off || sqe->len || sqe->buf_index)
                return -EINVAL;

        INIT_WORK(&req->work, io_poll_complete_work);
        events = READ_ONCE(sqe->poll_events);
        poll->events = demangle_poll(events) | EPOLLERR | EPOLLHUP;

        flags = READ_ONCE(sqe->flags);
        fd = READ_ONCE(sqe->fd);

        if (flags & IOSQE_FIXED_FILE) {
                if (unlikely(!ctx->user_files || fd >= ctx->nr_user_files))
                        return -EBADF;
                poll->file = ctx->user_files[fd];
                req->flags |= REQ_F_FIXED_FILE;
        } else {
                poll->file = fget(fd);
        }
        if (unlikely(!poll->file))
                return -EBADF;

        poll->head = NULL;
        poll->woken = false;
        poll->canceled = false;

        ipt.pt._qproc = io_poll_queue_proc;
        ipt.pt._key = poll->events;
        ipt.req = req;
        ipt.error = -EINVAL; /* same as no support for IOCB_CMD_POLL */

        /* initialize the list so that we can do list_empty checks */
        INIT_LIST_HEAD(&poll->wait.entry);
        init_waitqueue_func_entry(&poll->wait, io_poll_wake);

        mask = vfs_poll(poll->file, &ipt.pt) & poll->events;
        if (unlikely(!poll->head)) {
                /* we did not manage to set up a waitqueue, done */
                goto out;
        }

        spin_lock_irq(&ctx->completion_lock);
        spin_lock(&poll->head->lock);
        if (poll->woken) {
                /* wake_up context handles the rest */
                mask = 0;
                ipt.error = 0;
        } else if (mask || ipt.error) {
                /* if we get an error or a mask we are done */
                WARN_ON_ONCE(list_empty(&poll->wait.entry));
                list_del_init(&poll->wait.entry);
        } else {
                /* actually waiting for an event */
                list_add_tail(&req->list, &ctx->cancel_list);
        }
        spin_unlock(&poll->head->lock);
        spin_unlock_irq(&ctx->completion_lock);

out:
        if (unlikely(ipt.error)) {
                if (!(flags & IOSQE_FIXED_FILE))
                        fput(poll->file);
                /*
                 * Drop one of our refs to this req, __io_submit_sqe() will
                 * drop the other one since we're returning an error.
                 */
                io_put_req(req);
                return ipt.error;
        }

        if (mask)
                io_poll_complete(req, mask);
        return 0;
}

static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
                           const struct sqe_submit *s, bool force_nonblock,
                           struct io_submit_state *state)
{
        ssize_t ret;
        int opcode;

        if (unlikely(s->index >= ctx->sq_entries))
                return -EINVAL;
        req->user_data = READ_ONCE(s->sqe->user_data);

        opcode = READ_ONCE(s->sqe->opcode);
        switch (opcode) {
        case IORING_OP_NOP:
                ret = io_nop(req, req->user_data);
                break;
        case IORING_OP_READV:
                if (unlikely(s->sqe->buf_index))
                        return -EINVAL;
                ret = io_read(req, s, force_nonblock, state);
                break;
        case IORING_OP_WRITEV:
                if (unlikely(s->sqe->buf_index))
                        return -EINVAL;
                ret = io_write(req, s, force_nonblock, state);
                break;
        case IORING_OP_READ_FIXED:
                ret = io_read(req, s, force_nonblock, state);
                break;
        case IORING_OP_WRITE_FIXED:
                ret = io_write(req, s, force_nonblock, state);
                break;
        case IORING_OP_FSYNC:
                ret = io_fsync(req, s->sqe, force_nonblock);
                break;
        case IORING_OP_POLL_ADD:
                ret = io_poll_add(req, s->sqe);
                break;
        case IORING_OP_POLL_REMOVE:
                ret = io_poll_remove(req, s->sqe);
                break;
        default:
                ret = -EINVAL;
                break;
        }

        if (ret)
                return ret;

        if (ctx->flags & IORING_SETUP_IOPOLL) {
                if (req->error == -EAGAIN)
                        return -EAGAIN;

                /* workqueue context doesn't hold uring_lock, grab it now */
                if (s->needs_lock)
                        mutex_lock(&ctx->uring_lock);
                io_iopoll_req_issued(req);
                if (s->needs_lock)
                        mutex_unlock(&ctx->uring_lock);
        }

        return 0;
}
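
The buf_index checks in the switch above reject a registered-buffer index on plain READV/WRITEV; only IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED consume buffers that were previously pinned through io_uring_register() with IORING_REGISTER_BUFFERS. A minimal userspace sketch, assuming liburing's io_uring_register_buffers() and io_uring_prep_read_fixed() wrappers; illustration only, error handling omitted.

/* Illustrative only: register one fixed buffer, then read into it. */
#include <stdlib.h>
#include <sys/uio.h>
#include <liburing.h>

int read_with_fixed_buffer(struct io_uring *ring, int fd)
{
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        struct iovec iov;

        iov.iov_len = 4096;
        iov.iov_base = malloc(iov.iov_len);

        /* pin the buffer up front so per-IO get_user_pages() is not needed */
        io_uring_register_buffers(ring, &iov, 1);

        sqe = io_uring_get_sqe(ring);
        /* buf_index 0 refers to the iovec registered above */
        io_uring_prep_read_fixed(sqe, fd, iov.iov_base, iov.iov_len, 0, 0);

        io_uring_submit(ring);
        io_uring_wait_cqe(ring, &cqe);
        io_uring_cqe_seen(ring, cqe);
        return 0;
}

Registering up front is also what lets io_sqe_needs_user() below skip borrowing the user mm for the fixed opcodes.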

static struct async_list *io_async_list_from_sqe(struct io_ring_ctx *ctx,
                                                 const struct io_uring_sqe *sqe)
{
        switch (sqe->opcode) {
        case IORING_OP_READV:
        case IORING_OP_READ_FIXED:
                return &ctx->pending_async[READ];
        case IORING_OP_WRITEV:
        case IORING_OP_WRITE_FIXED:
                return &ctx->pending_async[WRITE];
        default:
                return NULL;
        }
}

static inline bool io_sqe_needs_user(const struct io_uring_sqe *sqe)
{
        u8 opcode = READ_ONCE(sqe->opcode);

        return !(opcode == IORING_OP_READ_FIXED ||
                 opcode == IORING_OP_WRITE_FIXED);
}

static void io_sq_wq_submit_work(struct work_struct *work)
{
        struct io_kiocb *req = container_of(work, struct io_kiocb, work);
        struct io_ring_ctx *ctx = req->ctx;
        struct mm_struct *cur_mm = NULL;
        struct async_list *async_list;
        LIST_HEAD(req_list);
        mm_segment_t old_fs;
        int ret;

        async_list = io_async_list_from_sqe(ctx, req->submit.sqe);
restart:
        do {
                struct sqe_submit *s = &req->submit;
                const struct io_uring_sqe *sqe = s->sqe;

                /* Ensure we clear previously set forced non-block flag */
                req->flags &= ~REQ_F_FORCE_NONBLOCK;
                req->rw.ki_flags &= ~IOCB_NOWAIT;

                ret = 0;
                if (io_sqe_needs_user(sqe) && !cur_mm) {
                        if (!mmget_not_zero(ctx->sqo_mm)) {
                                ret = -EFAULT;
                        } else {
                                cur_mm = ctx->sqo_mm;
                                use_mm(cur_mm);
                                old_fs = get_fs();
                                set_fs(USER_DS);
                        }
                }

                if (!ret) {
                        s->has_user = cur_mm != NULL;
                        s->needs_lock = true;
                        do {
                                ret = __io_submit_sqe(ctx, req, s, false, NULL);
                                /*
                                 * We can get EAGAIN for polled IO even though
                                 * we're forcing a sync submission from here,
                                 * since we can't wait for request slots on the
                                 * block side.
                                 */
                                if (ret != -EAGAIN)
                                        break;
                                cond_resched();
                        } while (1);

                        /* drop submission reference */
                        io_put_req(req);
                }
                if (ret) {
                        io_cqring_add_event(ctx, sqe->user_data, ret, 0);
                        io_put_req(req);
                }

                /* async context always uses a copy of the sqe */
                kfree(sqe);

                if (!async_list)
                        break;
                if (!list_empty(&req_list)) {
                        req = list_first_entry(&req_list, struct io_kiocb,
                                                list);
                        list_del(&req->list);
                        continue;
                }
                if (list_empty(&async_list->list))
                        break;

                req = NULL;
                spin_lock(&async_list->lock);
                if (list_empty(&async_list->list)) {
                        spin_unlock(&async_list->lock);
                        break;
                }
                list_splice_init(&async_list->list, &req_list);
                spin_unlock(&async_list->lock);

                req = list_first_entry(&req_list, struct io_kiocb, list);
                list_del(&req->list);
        } while (req);

        /*
         * Rare case of racing with a submitter. If we find the count has
         * dropped to zero AND we have pending work items, then restart
         * the processing. This is a tiny race window.
         */
        if (async_list) {
                ret = atomic_dec_return(&async_list->cnt);
                while (!ret && !list_empty(&async_list->list)) {
                        spin_lock(&async_list->lock);
                        atomic_inc(&async_list->cnt);
                        list_splice_init(&async_list->list, &req_list);
                        spin_unlock(&async_list->lock);

                        if (!list_empty(&req_list)) {
                                req = list_first_entry(&req_list,
                                                        struct io_kiocb, list);
                                list_del(&req->list);
                                goto restart;
                        }
                        ret = atomic_dec_return(&async_list->cnt);
                }
        }

        if (cur_mm) {
                set_fs(old_fs);
                unuse_mm(cur_mm);
                mmput(cur_mm);
        }
}
|
|
|
|
2019-01-19 05:56:34 +00:00
|
|
|
/*
|
|
|
|
* See if we can piggy back onto previously submitted work, that is still
|
|
|
|
* running. We currently only allow this if the new request is sequential
|
|
|
|
* to the previous one we punted.
|
|
|
|
*/
|
|
|
|
static bool io_add_to_prev_work(struct async_list *list, struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
bool ret = false;
|
|
|
|
|
|
|
|
if (!list)
|
|
|
|
return false;
|
|
|
|
if (!(req->flags & REQ_F_SEQ_PREV))
|
|
|
|
return false;
|
|
|
|
if (!atomic_read(&list->cnt))
|
|
|
|
return false;
|
|
|
|
|
|
|
|
ret = true;
|
|
|
|
spin_lock(&list->lock);
|
|
|
|
list_add_tail(&req->list, &list->list);
|
|
|
|
if (!atomic_read(&list->cnt)) {
|
|
|
|
list_del_init(&req->list);
|
|
|
|
ret = false;
|
|
|
|
}
|
|
|
|
spin_unlock(&list->lock);
|
|
|
|
return ret;
|
}

static int io_submit_sqe(struct io_ring_ctx *ctx, struct sqe_submit *s,
                         struct io_submit_state *state)
{
        struct io_kiocb *req;
        ssize_t ret;

        /* enforce forwards compatibility on users */
        if (unlikely(s->sqe->flags & ~IOSQE_FIXED_FILE))
                return -EINVAL;

        req = io_get_req(ctx, state);
        if (unlikely(!req))
                return -EAGAIN;

        req->rw.ki_filp = NULL;

        ret = __io_submit_sqe(ctx, req, s, true, state);
        if (ret == -EAGAIN) {
                struct io_uring_sqe *sqe_copy;

                sqe_copy = kmalloc(sizeof(*sqe_copy), GFP_KERNEL);
                if (sqe_copy) {
                        struct async_list *list;

                        memcpy(sqe_copy, s->sqe, sizeof(*sqe_copy));
                        s->sqe = sqe_copy;

                        memcpy(&req->submit, s, sizeof(*s));
                        list = io_async_list_from_sqe(ctx, s->sqe);
                        if (!io_add_to_prev_work(list, req)) {
                                if (list)
                                        atomic_inc(&list->cnt);
                                INIT_WORK(&req->work, io_sq_wq_submit_work);
                                queue_work(ctx->sqo_wq, &req->work);
                        }

                        /*
                         * Queued up for async execution, worker will release
                         * submit reference when the iocb is actually
                         * submitted.
                         */
                        return 0;
                }
        }

        /* drop submission reference */
        io_put_req(req);

        /* and drop final reference, if we failed */
        if (ret)
                io_put_req(req);

        return ret;
}

/*
 * Batched submission is done, ensure local IO is flushed out.
 */
static void io_submit_state_end(struct io_submit_state *state)
{
        blk_finish_plug(&state->plug);
        io_file_put(state, NULL);
        if (state->free_reqs)
                kmem_cache_free_bulk(req_cachep, state->free_reqs,
                                        &state->reqs[state->cur_req]);
}

/*
 * Start submission side cache.
 */
static void io_submit_state_start(struct io_submit_state *state,
                                  struct io_ring_ctx *ctx, unsigned max_ios)
{
        blk_start_plug(&state->plug);
        state->free_reqs = 0;
        state->file = NULL;
        state->ios_left = max_ios;
}

static void io_commit_sqring(struct io_ring_ctx *ctx)
{
        struct io_sq_ring *ring = ctx->sq_ring;

        if (ctx->cached_sq_head != READ_ONCE(ring->r.head)) {
                /*
                 * Ensure any loads from the SQEs are done at this point,
                 * since once we write the new head, the application could
                 * write new data to them.
                 */
                smp_store_release(&ring->r.head, ctx->cached_sq_head);

                /*
                 * write side barrier of head update, app has read side. See
                 * comment at the top of this file
                 */
                smp_wmb();
        }
}

/*
 * Undo last io_get_sqring()
 */
static void io_drop_sqring(struct io_ring_ctx *ctx)
{
        ctx->cached_sq_head--;
}

/*
 * Fetch an sqe, if one is available. Note that s->sqe will point to memory
 * that is mapped by userspace. This means that care needs to be taken to
 * ensure that reads are stable, as we cannot rely on userspace always
 * being a good citizen. If members of the sqe are validated and then later
 * used, it's important that those reads are done through READ_ONCE() to
 * prevent a re-load down the line.
 */
static bool io_get_sqring(struct io_ring_ctx *ctx, struct sqe_submit *s)
{
        struct io_sq_ring *ring = ctx->sq_ring;
        unsigned head;

        /*
         * The cached sq head (or cq tail) serves two purposes:
         *
         * 1) allows us to batch the cost of updating the user visible
         *    head updates.
         * 2) allows the kernel side to track the head on its own, even
         *    though the application is the one updating it.
         */
        head = ctx->cached_sq_head;
        /* See comment at the top of this file */
        smp_rmb();
        if (head == READ_ONCE(ring->r.tail))
                return false;

        head = READ_ONCE(ring->array[head & ctx->sq_mask]);
        if (head < ctx->sq_entries) {
                s->index = head;
                s->sqe = &ctx->sq_sqes[head];
                ctx->cached_sq_head++;
                return true;
        }

        /* drop invalid entries */
        ctx->cached_sq_head++;
        ring->dropped++;
        /* See comment at the top of this file */
        smp_wmb();
        return false;
}
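
For reference, the userspace producer that io_get_sqring() and io_commit_sqring() consume looks roughly like the hypothetical sketch below (not part of this file): the application fills an sqe, stores its index into the SQ array slot, and only then publishes the new tail. The sqes, sq_array, sq_tail and sq_mask values are assumed to have been derived from the application's mmap of the ring fd; struct io_uring_sqe and IORING_OP_NOP come from <linux/io_uring.h>, memset() from <string.h>.

/*
 * Userspace sketch (hypothetical pointer names): queue one NOP at the
 * current tail position and make it visible to the kernel.
 */
static void queue_nop(struct io_uring_sqe *sqes, unsigned int *sq_array,
                      unsigned int *sq_tail, unsigned int sq_mask,
                      unsigned long long user_data)
{
        unsigned int tail = *sq_tail;
        unsigned int index = tail & sq_mask;
        struct io_uring_sqe *sqe = &sqes[index];

        memset(sqe, 0, sizeof(*sqe));
        sqe->opcode = IORING_OP_NOP;
        sqe->user_data = user_data;

        sq_array[index] = index;

        /* order the sqe and array stores before the tail store */
        __atomic_store_n(sq_tail, tail + 1, __ATOMIC_RELEASE);
}
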
io_uring: add submission polling
This enables an application to do IO without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard its
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
        io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 18:22:30 +00:00
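
To make the guard above concrete, a minimal userspace sketch follows (not part of this kernel source). It assumes sq_flags points at the mmap'ed SQ ring flags word, and io_uring_enter() stands for the application's own wrapper around the io_uring_enter(2) system call; IORING_SQ_NEED_WAKEUP and IORING_ENTER_SQ_WAKEUP come from <linux/io_uring.h>.

/*
 * Userspace sketch (hypothetical helper names): only enter the kernel
 * when the SQ poll thread has flagged that it went idle.
 */
static void kick_sq_thread_if_needed(int ring_fd, unsigned int *sq_flags)
{
        /* read barrier, pairs with the kernel's update of the flags word */
        __atomic_thread_fence(__ATOMIC_ACQUIRE);

        if (*(volatile unsigned int *)sq_flags & IORING_SQ_NEED_WAKEUP)
                io_uring_enter(ring_fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
}
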
static int io_submit_sqes(struct io_ring_ctx *ctx, struct sqe_submit *sqes,
                          unsigned int nr, bool has_user, bool mm_fault)
{
        struct io_submit_state state, *statep = NULL;
        int ret, i, submitted = 0;

        if (nr > IO_PLUG_THRESHOLD) {
                io_submit_state_start(&state, ctx, nr);
                statep = &state;
        }

        for (i = 0; i < nr; i++) {
                if (unlikely(mm_fault)) {
                        ret = -EFAULT;
                } else {
                        sqes[i].has_user = has_user;
                        sqes[i].needs_lock = true;
                        sqes[i].needs_fixed_file = true;
                        ret = io_submit_sqe(ctx, &sqes[i], statep);
                }
                if (!ret) {
                        submitted++;
                        continue;
                }

                io_cqring_add_event(ctx, sqes[i].sqe->user_data, ret, 0);
        }

        if (statep)
                io_submit_state_end(&state);

        return submitted;
}

static int io_sq_thread(void *data)
{
        struct sqe_submit sqes[IO_IOPOLL_BATCH];
        struct io_ring_ctx *ctx = data;
        struct mm_struct *cur_mm = NULL;
        mm_segment_t old_fs;
        DEFINE_WAIT(wait);
        unsigned inflight;
        unsigned long timeout;

        old_fs = get_fs();
        set_fs(USER_DS);

        timeout = inflight = 0;
        while (!kthread_should_stop() && !ctx->sqo_stop) {
                bool all_fixed, mm_fault = false;
                int i;

                if (inflight) {
                        unsigned nr_events = 0;

                        if (ctx->flags & IORING_SETUP_IOPOLL) {
                                /*
                                 * We disallow the app entering submit/complete
                                 * with polling, but we still need to lock the
                                 * ring to prevent racing with polled issue
                                 * that got punted to a workqueue.
                                 */
                                mutex_lock(&ctx->uring_lock);
                                io_iopoll_check(ctx, &nr_events, 0);
                                mutex_unlock(&ctx->uring_lock);
                        } else {
                                /*
                                 * Normal IO, just pretend everything completed.
                                 * We don't have to poll completions for that.
                                 */
                                nr_events = inflight;
                        }

                        inflight -= nr_events;
                        if (!inflight)
                                timeout = jiffies + ctx->sq_thread_idle;
                }

                if (!io_get_sqring(ctx, &sqes[0])) {
                        /*
                         * We're polling. If we're within the defined idle
                         * period, then let us spin without work before going
                         * to sleep.
                         */
                        if (inflight || !time_after(jiffies, timeout)) {
                                cpu_relax();
                                continue;
                        }

                        /*
                         * Drop cur_mm before scheduling, we can't hold it for
                         * long periods (or over schedule()). Do this before
                         * adding ourselves to the waitqueue, as the unuse/drop
                         * may sleep.
                         */
                        if (cur_mm) {
                                unuse_mm(cur_mm);
                                mmput(cur_mm);
                                cur_mm = NULL;
                        }

                        prepare_to_wait(&ctx->sqo_wait, &wait,
                                                TASK_INTERRUPTIBLE);

                        /* Tell userspace we may need a wakeup call */
                        ctx->sq_ring->flags |= IORING_SQ_NEED_WAKEUP;
                        smp_wmb();

                        if (!io_get_sqring(ctx, &sqes[0])) {
                                if (kthread_should_stop()) {
                                        finish_wait(&ctx->sqo_wait, &wait);
                                        break;
                                }
                                if (signal_pending(current))
                                        flush_signals(current);
                                schedule();
                                finish_wait(&ctx->sqo_wait, &wait);

                                ctx->sq_ring->flags &= ~IORING_SQ_NEED_WAKEUP;
                                smp_wmb();
                                continue;
                        }
                        finish_wait(&ctx->sqo_wait, &wait);

                        ctx->sq_ring->flags &= ~IORING_SQ_NEED_WAKEUP;
                        smp_wmb();
                }

                i = 0;
                all_fixed = true;
                do {
                        if (all_fixed && io_sqe_needs_user(sqes[i].sqe))
                                all_fixed = false;

                        i++;
                        if (i == ARRAY_SIZE(sqes))
                                break;
                } while (io_get_sqring(ctx, &sqes[i]));

                /* Unless all new commands are FIXED regions, grab mm */
                if (!all_fixed && !cur_mm) {
                        mm_fault = !mmget_not_zero(ctx->sqo_mm);
                        if (!mm_fault) {
                                use_mm(ctx->sqo_mm);
                                cur_mm = ctx->sqo_mm;
                        }
                }

                inflight += io_submit_sqes(ctx, sqes, i, cur_mm != NULL,
                                                mm_fault);

                /* Commit SQ ring head once we've consumed all SQEs */
                io_commit_sqring(ctx);
        }

        set_fs(old_fs);
        if (cur_mm) {
                unuse_mm(cur_mm);
                mmput(cur_mm);
        }
        return 0;
}

static int io_ring_submit(struct io_ring_ctx *ctx, unsigned int to_submit)
{
        struct io_submit_state state, *statep = NULL;
        int i, ret = 0, submit = 0;

        if (to_submit > IO_PLUG_THRESHOLD) {
                io_submit_state_start(&state, ctx, to_submit);
                statep = &state;
        }

        for (i = 0; i < to_submit; i++) {
                struct sqe_submit s;

                if (!io_get_sqring(ctx, &s))
                        break;

                s.has_user = true;
                s.needs_lock = false;
                s.needs_fixed_file = false;

                ret = io_submit_sqe(ctx, &s, statep);
                if (ret) {
                        io_drop_sqring(ctx);
                        break;
                }

                submit++;
        }
        io_commit_sqring(ctx);

        if (statep)
                io_submit_state_end(statep);

        return submit ? submit : ret;
}

static unsigned io_cqring_events(struct io_cq_ring *ring)
{
        return READ_ONCE(ring->r.tail) - READ_ONCE(ring->r.head);
}

/*
 * Wait until events become available, if we don't already have some. The
 * application must reap them itself, as they reside on the shared cq ring.
 */
static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
                          const sigset_t __user *sig, size_t sigsz)
{
        struct io_cq_ring *ring = ctx->cq_ring;
        sigset_t ksigmask, sigsaved;
        DEFINE_WAIT(wait);
        int ret;

        /* See comment at the top of this file */
        smp_rmb();
        if (io_cqring_events(ring) >= min_events)
                return 0;

        if (sig) {
                ret = set_user_sigmask(sig, &ksigmask, &sigsaved, sigsz);
                if (ret)
                        return ret;
        }

        do {
                prepare_to_wait(&ctx->wait, &wait, TASK_INTERRUPTIBLE);

                ret = 0;
                /* See comment at the top of this file */
                smp_rmb();
                if (io_cqring_events(ring) >= min_events)
                        break;

                schedule();

                ret = -EINTR;
                if (signal_pending(current))
                        break;
        } while (1);

        finish_wait(&ctx->wait, &wait);

        if (sig)
                restore_user_sigmask(sig, &sigsaved);

        return READ_ONCE(ring->r.head) == READ_ONCE(ring->r.tail) ? ret : 0;
}
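
The userspace counterpart of io_cqring_wait()/io_cqring_events() can drain completions without entering the kernel, along the lines of this hypothetical sketch (not part of this file): cq_head, cq_tail, cq_mask and cqes are assumed to come from the application's mmap of the CQ ring, and handle_cqe() is a placeholder for application logic.

/*
 * Userspace sketch (hypothetical names): consume all currently posted
 * completions and publish the new head back to the kernel.
 */
static unsigned int reap_completions(unsigned int *cq_head, unsigned int *cq_tail,
                                     unsigned int cq_mask, struct io_uring_cqe *cqes)
{
        unsigned int head = *cq_head;
        /* acquire pairs with the kernel's barrier after it writes the tail */
        unsigned int tail = __atomic_load_n(cq_tail, __ATOMIC_ACQUIRE);
        unsigned int seen = 0;

        while (head != tail) {
                handle_cqe(&cqes[head & cq_mask]);      /* placeholder */
                head++;
                seen++;
        }

        /* let the kernel reuse the consumed CQ entries */
        __atomic_store_n(cq_head, head, __ATOMIC_RELEASE);
        return seen;
}
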
static void __io_sqe_files_unregister(struct io_ring_ctx *ctx)
{
#if defined(CONFIG_UNIX)
        if (ctx->ring_sock) {
                struct sock *sock = ctx->ring_sock->sk;
                struct sk_buff *skb;

                while ((skb = skb_dequeue(&sock->sk_receive_queue)) != NULL)
                        kfree_skb(skb);
        }
#else
        int i;

        for (i = 0; i < ctx->nr_user_files; i++)
                fput(ctx->user_files[i]);
#endif
}

static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
{
        if (!ctx->user_files)
                return -ENXIO;

        __io_sqe_files_unregister(ctx);
        kfree(ctx->user_files);
        ctx->user_files = NULL;
        ctx->nr_user_files = 0;
        return 0;
}

static void io_sq_thread_stop(struct io_ring_ctx *ctx)
{
        if (ctx->sqo_thread) {
                ctx->sqo_stop = 1;
                mb();
                kthread_stop(ctx->sqo_thread);
                ctx->sqo_thread = NULL;
        }
}

static void io_finish_async(struct io_ring_ctx *ctx)
{
|
|
|
io_sq_thread_stop(ctx);
|
|
|
|
|
2019-01-11 05:13:58 +00:00
|
|
|
if (ctx->sqo_wq) {
|
|
|
|
destroy_workqueue(ctx->sqo_wq);
|
|
|
|
ctx->sqo_wq = NULL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
#if defined(CONFIG_UNIX)
|
|
|
|
static void io_destruct_skb(struct sk_buff *skb)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = skb->sk->sk_user_data;
|
|
|
|
|
|
|
|
io_finish_async(ctx);
|
|
|
|
unix_destruct_scm(skb);
|
|
|
|
}
|
|
|
|
|
|
|
|

/*
 * Ensure the UNIX gc is aware of our file set, so we are certain that
 * the io_uring can be safely unregistered on process exit, even if we have
 * loops in the file referencing.
 */
static int __io_sqe_files_scm(struct io_ring_ctx *ctx, int nr, int offset)
{
	struct sock *sk = ctx->ring_sock->sk;
	struct scm_fp_list *fpl;
	struct sk_buff *skb;
	int i;

	if (!capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN)) {
		unsigned long inflight = ctx->user->unix_inflight + nr;

		if (inflight > task_rlimit(current, RLIMIT_NOFILE))
			return -EMFILE;
	}

	fpl = kzalloc(sizeof(*fpl), GFP_KERNEL);
	if (!fpl)
		return -ENOMEM;

	skb = alloc_skb(0, GFP_KERNEL);
	if (!skb) {
		kfree(fpl);
		return -ENOMEM;
	}

	skb->sk = sk;
	skb->destructor = io_destruct_skb;

	fpl->user = get_uid(ctx->user);
	for (i = 0; i < nr; i++) {
		fpl->fp[i] = get_file(ctx->user_files[i + offset]);
		unix_inflight(fpl->user, fpl->fp[i]);
	}

	fpl->max = fpl->count = nr;
	UNIXCB(skb).fp = fpl;
	refcount_add(skb->truesize, &sk->sk_wmem_alloc);
	skb_queue_head(&sk->sk_receive_queue, skb);

	for (i = 0; i < nr; i++)
		fput(fpl->fp[i]);

	return 0;
}

/*
 * If UNIX sockets are enabled, fd passing can cause a reference cycle which
 * causes regular reference counting to break down. We rely on the UNIX
 * garbage collection to take care of this problem for us.
 */
static int io_sqe_files_scm(struct io_ring_ctx *ctx)
{
	unsigned left, total;
	int ret = 0;

	total = 0;
	left = ctx->nr_user_files;
	while (left) {
		unsigned this_files = min_t(unsigned, left, SCM_MAX_FD);

		ret = __io_sqe_files_scm(ctx, this_files, total);
		if (ret)
			break;
		left -= this_files;
		total += this_files;
	}

	if (!ret)
		return 0;

	while (total < ctx->nr_user_files) {
		fput(ctx->user_files[total]);
		total++;
	}

	return ret;
}
#else
static int io_sqe_files_scm(struct io_ring_ctx *ctx)
{
	return 0;
}
#endif

static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
				 unsigned nr_args)
{
	__s32 __user *fds = (__s32 __user *) arg;
	int fd, ret = 0;
	unsigned i;

	if (ctx->user_files)
		return -EBUSY;
	if (!nr_args)
		return -EINVAL;
	if (nr_args > IORING_MAX_FIXED_FILES)
		return -EMFILE;

	ctx->user_files = kcalloc(nr_args, sizeof(struct file *), GFP_KERNEL);
	if (!ctx->user_files)
		return -ENOMEM;

	for (i = 0; i < nr_args; i++) {
		ret = -EFAULT;
		if (copy_from_user(&fd, &fds[i], sizeof(fd)))
			break;

		ctx->user_files[i] = fget(fd);

		ret = -EBADF;
		if (!ctx->user_files[i])
			break;
		/*
		 * Don't allow io_uring instances to be registered. If UNIX
		 * isn't enabled, then this causes a reference cycle and this
		 * instance can never get freed. If UNIX is enabled we'll
		 * handle it just fine, but there's still no point in allowing
		 * a ring fd as it doesn't support regular read/write anyway.
		 */
		if (ctx->user_files[i]->f_op == &io_uring_fops) {
			fput(ctx->user_files[i]);
			break;
		}
		ctx->nr_user_files++;
		ret = 0;
	}

	if (ret) {
		for (i = 0; i < ctx->nr_user_files; i++)
			fput(ctx->user_files[i]);

		kfree(ctx->user_files);
		ctx->nr_user_files = 0;
		return ret;
	}

	ret = io_sqe_files_scm(ctx);
	if (ret)
		io_sqe_files_unregister(ctx);

	return ret;
}
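
For reference, registering a fixed file set from userspace might look roughly like the sketch below; IOs against the set then put the array index in sqe->fd and mark the sqe with IOSQE_FIXED_FILE. This is a hypothetical illustration that assumes <linux/io_uring.h> and an __NR_io_uring_register definition from the system headers; it is not part of the kernel code.

/* Hypothetical userspace sketch, not part of this patch. */
#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>
#include <string.h>

static int register_files(int ring_fd, const int *fds, unsigned nr_fds)
{
	/* arg points at an array of __s32 file descriptors */
	return syscall(__NR_io_uring_register, ring_fd, IORING_REGISTER_FILES,
		       fds, nr_fds);
}

static void prep_fixed_readv(struct io_uring_sqe *sqe, unsigned file_index,
			     const struct iovec *iov, unsigned nr_iov,
			     off_t offset)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_READV;	/* vectored read */
	sqe->flags = IOSQE_FIXED_FILE;	/* sqe->fd is an index into the set */
	sqe->fd = file_index;
	sqe->addr = (unsigned long) iov;
	sqe->len = nr_iov;
	sqe->off = offset;
}
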

static int io_sq_offload_start(struct io_ring_ctx *ctx,
			       struct io_uring_params *p)
{
	int ret;

	init_waitqueue_head(&ctx->sqo_wait);
	mmgrab(current->mm);
	ctx->sqo_mm = current->mm;

	ctx->sq_thread_idle = msecs_to_jiffies(p->sq_thread_idle);
	if (!ctx->sq_thread_idle)
		ctx->sq_thread_idle = HZ;

	ret = -EINVAL;
	if (!cpu_possible(p->sq_thread_cpu))
		goto err;

	if (ctx->flags & IORING_SETUP_SQPOLL) {
		if (p->flags & IORING_SETUP_SQ_AFF) {
			int cpu;

			cpu = array_index_nospec(p->sq_thread_cpu, NR_CPUS);
			ctx->sqo_thread = kthread_create_on_cpu(io_sq_thread,
							ctx, cpu,
							"io_uring-sq");
		} else {
			ctx->sqo_thread = kthread_create(io_sq_thread, ctx,
							"io_uring-sq");
		}
		if (IS_ERR(ctx->sqo_thread)) {
			ret = PTR_ERR(ctx->sqo_thread);
			ctx->sqo_thread = NULL;
			goto err;
		}
		wake_up_process(ctx->sqo_thread);
	} else if (p->flags & IORING_SETUP_SQ_AFF) {
		/* Can't have SQ_AFF without SQPOLL */
		ret = -EINVAL;
		goto err;
	}

	/* Do QD, or 2 * CPUS, whatever is smallest */
	ctx->sqo_wq = alloc_workqueue("io_ring-wq", WQ_UNBOUND | WQ_FREEZABLE,
			min(ctx->sq_entries - 1, 2 * num_online_cpus()));
	if (!ctx->sqo_wq) {
		ret = -ENOMEM;
		goto err;
	}

	return 0;
err:
	io_sq_thread_stop(ctx);
	mmdrop(ctx->sqo_mm);
	ctx->sqo_mm = NULL;
	return ret;
}
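
To tie the setup path above together, creating a ring with a pinned submission thread from userspace could look like the sketch below. The io_uring_params field names and IORING_SETUP_* flags come from the uapi header; the syscall wrapper, the idle value, and the __NR_io_uring_setup definition are illustrative assumptions rather than part of this patch.

/* Hypothetical userspace sketch, not part of this patch. */
#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>

static int setup_sqpoll_ring(unsigned entries, unsigned cpu)
{
	struct io_uring_params p;

	memset(&p, 0, sizeof(p));
	p.flags = IORING_SETUP_SQPOLL | IORING_SETUP_SQ_AFF;
	p.sq_thread_cpu = cpu;		/* pin io_uring-sq to this CPU */
	p.sq_thread_idle = 2000;	/* ms of idle spinning before NEED_WAKEUP */

	/* returns the ring fd on success, -1 with errno set on failure */
	return syscall(__NR_io_uring_setup, entries, &p);
}
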

static void io_unaccount_mem(struct user_struct *user, unsigned long nr_pages)
{
	atomic_long_sub(nr_pages, &user->locked_vm);
}

static int io_account_mem(struct user_struct *user, unsigned long nr_pages)
{
	unsigned long page_limit, cur_pages, new_pages;

	/* Don't allow more pages than we can safely lock */
	page_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;

	do {
		cur_pages = atomic_long_read(&user->locked_vm);
		new_pages = cur_pages + nr_pages;
		if (new_pages > page_limit)
			return -ENOMEM;
	} while (atomic_long_cmpxchg(&user->locked_vm, cur_pages,
					new_pages) != cur_pages);

	return 0;
}

static void io_mem_free(void *ptr)
{
	struct page *page = virt_to_head_page(ptr);

	if (put_page_testzero(page))
		free_compound_page(page);
}

static void *io_mem_alloc(size_t size)
{
	gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO | __GFP_NOWARN | __GFP_COMP |
				__GFP_NORETRY;

	return (void *) __get_free_pages(gfp_flags, get_order(size));
}

static unsigned long ring_pages(unsigned sq_entries, unsigned cq_entries)
{
	struct io_sq_ring *sq_ring;
	struct io_cq_ring *cq_ring;
	size_t bytes;

	bytes = struct_size(sq_ring, array, sq_entries);
	bytes += array_size(sizeof(struct io_uring_sqe), sq_entries);
	bytes += struct_size(cq_ring, cqes, cq_entries);

	return (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
}

io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
set up the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having set up an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to set up a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per-buffer size limit is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-09 16:16:05 +00:00
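
A hypothetical userspace counterpart to the registration described above might look like this sketch. The IORING_* constants and sqe field names are taken from the uapi header; the wrapper functions and the __NR_io_uring_register definition are assumptions for illustration only.

/* Hypothetical userspace sketch, not part of this patch. */
#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>
#include <string.h>

static int register_buffers(int ring_fd, const struct iovec *iovs,
			    unsigned nr_iovs)
{
	/* pin the iovec-described regions until they are unregistered */
	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_BUFFERS, iovs, nr_iovs);
}

static void prep_read_fixed(struct io_uring_sqe *sqe, int fd, void *buf,
			    unsigned nbytes, off_t offset, unsigned buf_index)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_READ_FIXED;
	sqe->fd = fd;
	sqe->addr = (unsigned long) buf;	/* must lie inside buffer buf_index */
	sqe->len = nbytes;
	sqe->off = offset;
	sqe->buf_index = buf_index;
}
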

static int io_sqe_buffer_unregister(struct io_ring_ctx *ctx)
{
	int i, j;

	if (!ctx->user_bufs)
		return -ENXIO;

	for (i = 0; i < ctx->nr_user_bufs; i++) {
		struct io_mapped_ubuf *imu = &ctx->user_bufs[i];

		for (j = 0; j < imu->nr_bvecs; j++)
			put_page(imu->bvec[j].bv_page);

		if (ctx->account_mem)
			io_unaccount_mem(ctx->user, imu->nr_bvecs);
		kfree(imu->bvec);
		imu->nr_bvecs = 0;
	}

	kfree(ctx->user_bufs);
	ctx->user_bufs = NULL;
	ctx->nr_user_bufs = 0;
	return 0;
}

static int io_copy_iov(struct io_ring_ctx *ctx, struct iovec *dst,
		       void __user *arg, unsigned index)
{
	struct iovec __user *src;

#ifdef CONFIG_COMPAT
	if (ctx->compat) {
		struct compat_iovec __user *ciovs;
		struct compat_iovec ciov;

		ciovs = (struct compat_iovec __user *) arg;
		if (copy_from_user(&ciov, &ciovs[index], sizeof(ciov)))
			return -EFAULT;

		dst->iov_base = (void __user *) (unsigned long) ciov.iov_base;
		dst->iov_len = ciov.iov_len;
		return 0;
	}
#endif
	src = (struct iovec __user *) arg;
	if (copy_from_user(dst, &src[index], sizeof(*dst)))
		return -EFAULT;
	return 0;
}

static int io_sqe_buffer_register(struct io_ring_ctx *ctx, void __user *arg,
				  unsigned nr_args)
{
	struct vm_area_struct **vmas = NULL;
	struct page **pages = NULL;
	int i, j, got_pages = 0;
	int ret = -EINVAL;

	if (ctx->user_bufs)
		return -EBUSY;
	if (!nr_args || nr_args > UIO_MAXIOV)
		return -EINVAL;

	ctx->user_bufs = kcalloc(nr_args, sizeof(struct io_mapped_ubuf),
					GFP_KERNEL);
	if (!ctx->user_bufs)
		return -ENOMEM;

	for (i = 0; i < nr_args; i++) {
		struct io_mapped_ubuf *imu = &ctx->user_bufs[i];
		unsigned long off, start, end, ubuf;
		int pret, nr_pages;
		struct iovec iov;
		size_t size;

		ret = io_copy_iov(ctx, &iov, arg, i);
		if (ret)
			break;

		/*
		 * Don't impose further limits on the size and buffer
		 * constraints here, we'll -EINVAL later when IO is
		 * submitted if they are wrong.
		 */
		ret = -EFAULT;
		if (!iov.iov_base || !iov.iov_len)
			goto err;

		/* arbitrary limit, but we need something */
		if (iov.iov_len > SZ_1G)
			goto err;

		ubuf = (unsigned long) iov.iov_base;
		end = (ubuf + iov.iov_len + PAGE_SIZE - 1) >> PAGE_SHIFT;
		start = ubuf >> PAGE_SHIFT;
		nr_pages = end - start;

		if (ctx->account_mem) {
			ret = io_account_mem(ctx->user, nr_pages);
			if (ret)
				goto err;
		}

		ret = 0;
		if (!pages || nr_pages > got_pages) {
			kfree(vmas);
			kfree(pages);
			pages = kmalloc_array(nr_pages, sizeof(struct page *),
						GFP_KERNEL);
			vmas = kmalloc_array(nr_pages,
					sizeof(struct vm_area_struct *),
					GFP_KERNEL);
			if (!pages || !vmas) {
				ret = -ENOMEM;
				if (ctx->account_mem)
					io_unaccount_mem(ctx->user, nr_pages);
				goto err;
			}
			got_pages = nr_pages;
		}

		imu->bvec = kmalloc_array(nr_pages, sizeof(struct bio_vec),
						GFP_KERNEL);
		ret = -ENOMEM;
		if (!imu->bvec) {
			if (ctx->account_mem)
				io_unaccount_mem(ctx->user, nr_pages);
			goto err;
		}

		ret = 0;
		down_read(&current->mm->mmap_sem);
		pret = get_user_pages_longterm(ubuf, nr_pages, FOLL_WRITE,
						pages, vmas);
		if (pret == nr_pages) {
			/* don't support file backed memory */
			for (j = 0; j < nr_pages; j++) {
				struct vm_area_struct *vma = vmas[j];

				if (vma->vm_file &&
				    !is_file_hugepages(vma->vm_file)) {
					ret = -EOPNOTSUPP;
					break;
				}
			}
		} else {
			ret = pret < 0 ? pret : -EFAULT;
		}
		up_read(&current->mm->mmap_sem);
		if (ret) {
			/*
			 * if we did partial map, or found file backed vmas,
			 * release any pages we did get
			 */
			if (pret > 0) {
				for (j = 0; j < pret; j++)
					put_page(pages[j]);
			}
			if (ctx->account_mem)
				io_unaccount_mem(ctx->user, nr_pages);
			goto err;
		}

		off = ubuf & ~PAGE_MASK;
		size = iov.iov_len;
		for (j = 0; j < nr_pages; j++) {
			size_t vec_len;

			vec_len = min_t(size_t, size, PAGE_SIZE - off);
			imu->bvec[j].bv_page = pages[j];
			imu->bvec[j].bv_len = vec_len;
			imu->bvec[j].bv_offset = off;
			off = 0;
			size -= vec_len;
		}
		/* store original address for later verification */
		imu->ubuf = ubuf;
		imu->len = iov.iov_len;
		imu->nr_bvecs = nr_pages;

		ctx->nr_user_bufs++;
	}
	kfree(pages);
	kfree(vmas);
	return 0;
err:
	kfree(pages);
	kfree(vmas);
	io_sqe_buffer_unregister(ctx);
	return ret;
}

static void io_ring_ctx_free(struct io_ring_ctx *ctx)
{
	io_finish_async(ctx);
	if (ctx->sqo_mm)
		mmdrop(ctx->sqo_mm);

	io_iopoll_reap_events(ctx);
	io_sqe_buffer_unregister(ctx);
	io_sqe_files_unregister(ctx);
#if defined(CONFIG_UNIX)
	if (ctx->ring_sock)
		sock_release(ctx->ring_sock);
#endif

	io_mem_free(ctx->sq_ring);
	io_mem_free(ctx->sq_sqes);
	io_mem_free(ctx->cq_ring);

	percpu_ref_exit(&ctx->refs);
	if (ctx->account_mem)
		io_unaccount_mem(ctx->user,
				ring_pages(ctx->sq_entries, ctx->cq_entries));
	free_uid(ctx->user);
	kfree(ctx);
}

static __poll_t io_uring_poll(struct file *file, poll_table *wait)
{
	struct io_ring_ctx *ctx = file->private_data;
	__poll_t mask = 0;

	poll_wait(file, &ctx->cq_wait, wait);
	/* See comment at the top of this file */
	smp_rmb();
	if (READ_ONCE(ctx->sq_ring->r.tail) + 1 != ctx->cached_sq_head)
		mask |= EPOLLOUT | EPOLLWRNORM;
	if (READ_ONCE(ctx->cq_ring->r.head) != ctx->cached_cq_tail)
		mask |= EPOLLIN | EPOLLRDNORM;

	return mask;
}

static int io_uring_fasync(int fd, struct file *file, int on)
{
	struct io_ring_ctx *ctx = file->private_data;

	return fasync_helper(fd, file, on, &ctx->cq_fasync);
}

static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
{
	mutex_lock(&ctx->uring_lock);
	percpu_ref_kill(&ctx->refs);
	mutex_unlock(&ctx->uring_lock);

	io_poll_remove_all(ctx);
	io_iopoll_reap_events(ctx);
	wait_for_completion(&ctx->ctx_done);
	io_ring_ctx_free(ctx);
}

static int io_uring_release(struct inode *inode, struct file *file)
{
	struct io_ring_ctx *ctx = file->private_data;

	file->private_data = NULL;
	io_ring_ctx_wait_and_kill(ctx);
	return 0;
}

static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
{
	loff_t offset = (loff_t) vma->vm_pgoff << PAGE_SHIFT;
	unsigned long sz = vma->vm_end - vma->vm_start;
	struct io_ring_ctx *ctx = file->private_data;
	unsigned long pfn;
	struct page *page;
	void *ptr;

	switch (offset) {
	case IORING_OFF_SQ_RING:
		ptr = ctx->sq_ring;
		break;
	case IORING_OFF_SQES:
		ptr = ctx->sq_sqes;
		break;
	case IORING_OFF_CQ_RING:
		ptr = ctx->cq_ring;
		break;
	default:
		return -EINVAL;
	}

	page = virt_to_head_page(ptr);
	if (sz > (PAGE_SIZE << compound_order(page)))
		return -EINVAL;

	pfn = virt_to_phys(ptr) >> PAGE_SHIFT;
	return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
}
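
For completeness, the userspace side of these three mappings might look roughly like the sketch below. The mapping lengths would normally be computed from the sq_off/cq_off data returned by io_uring_setup, so the size parameters here are assumptions for illustration, as is the helper itself.

/* Hypothetical userspace sketch, not part of this patch. */
#include <linux/io_uring.h>
#include <sys/mman.h>

struct ring_maps {
	void *sq_ring;	/* SQ ring head/tail/flags plus the index array */
	void *sqes;	/* array of struct io_uring_sqe */
	void *cq_ring;	/* CQ ring head/tail plus the cqe array */
};

static int map_rings(int ring_fd, size_t sq_sz, size_t sqes_sz, size_t cq_sz,
		     struct ring_maps *m)
{
	m->sq_ring = mmap(NULL, sq_sz, PROT_READ | PROT_WRITE,
			  MAP_SHARED | MAP_POPULATE, ring_fd, IORING_OFF_SQ_RING);
	m->sqes = mmap(NULL, sqes_sz, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_POPULATE, ring_fd, IORING_OFF_SQES);
	m->cq_ring = mmap(NULL, cq_sz, PROT_READ | PROT_WRITE,
			  MAP_SHARED | MAP_POPULATE, ring_fd, IORING_OFF_CQ_RING);
	if (m->sq_ring == MAP_FAILED || m->sqes == MAP_FAILED ||
	    m->cq_ring == MAP_FAILED)
		return -1;
	return 0;
}
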

SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
		u32, min_complete, u32, flags, const sigset_t __user *, sig,
		size_t, sigsz)
{
	struct io_ring_ctx *ctx;
	long ret = -EBADF;
	int submitted = 0;
	struct fd f;

	if (flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP))
		return -EINVAL;

	f = fdget(fd);
	if (!f.file)
		return -EBADF;

	ret = -EOPNOTSUPP;
	if (f.file->f_op != &io_uring_fops)
		goto out_fput;

	ret = -ENXIO;
	ctx = f.file->private_data;
	if (!percpu_ref_tryget(&ctx->refs))
		goto out_fput;

	/*
	 * For SQ polling, the thread will do all submissions and completions.
	 * Just return the requested submit count, and wake the thread if
	 * we were asked to.
	 */
	if (ctx->flags & IORING_SETUP_SQPOLL) {
		if (flags & IORING_ENTER_SQ_WAKEUP)
			wake_up(&ctx->sqo_wait);
		submitted = to_submit;
		goto out_ctx;
	}

	ret = 0;
	if (to_submit) {
		to_submit = min(to_submit, ctx->sq_entries);

		mutex_lock(&ctx->uring_lock);
		submitted = io_ring_submit(ctx, to_submit);
		mutex_unlock(&ctx->uring_lock);

		if (submitted < 0)
			goto out_ctx;
	}
	if (flags & IORING_ENTER_GETEVENTS) {
		unsigned nr_events = 0;

		min_complete = min(min_complete, ctx->cq_entries);

		/*
		 * The application could have included the 'to_submit' count
		 * in how many events it wanted to wait for. If we failed to
		 * submit the desired count, we may need to adjust the number
		 * of events to poll/wait for.
		 */
		if (submitted < to_submit)
			min_complete = min_t(unsigned, submitted, min_complete);

		if (ctx->flags & IORING_SETUP_IOPOLL) {
			mutex_lock(&ctx->uring_lock);
			ret = io_iopoll_check(ctx, &nr_events, min_complete);
			mutex_unlock(&ctx->uring_lock);
		} else {
			ret = io_cqring_wait(ctx, min_complete, sig, sigsz);
		}
	}

out_ctx:
	io_ring_drop_ctx_refs(ctx, 1);
out_fput:
	fdput(f);
	return submitted ? submitted : ret;
}

static const struct file_operations io_uring_fops = {
	.release	= io_uring_release,
	.mmap		= io_uring_mmap,
	.poll		= io_uring_poll,
	.fasync		= io_uring_fasync,
};

static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
				  struct io_uring_params *p)
{
	struct io_sq_ring *sq_ring;
	struct io_cq_ring *cq_ring;
	size_t size;

	sq_ring = io_mem_alloc(struct_size(sq_ring, array, p->sq_entries));
	if (!sq_ring)
		return -ENOMEM;

	ctx->sq_ring = sq_ring;
	sq_ring->ring_mask = p->sq_entries - 1;
	sq_ring->ring_entries = p->sq_entries;
	ctx->sq_mask = sq_ring->ring_mask;
	ctx->sq_entries = sq_ring->ring_entries;

	size = array_size(sizeof(struct io_uring_sqe), p->sq_entries);
	if (size == SIZE_MAX)
		return -EOVERFLOW;

	ctx->sq_sqes = io_mem_alloc(size);
	if (!ctx->sq_sqes) {
		io_mem_free(ctx->sq_ring);
		return -ENOMEM;
	}

	cq_ring = io_mem_alloc(struct_size(cq_ring, cqes, p->cq_entries));
	if (!cq_ring) {
		io_mem_free(ctx->sq_ring);
		io_mem_free(ctx->sq_sqes);
		return -ENOMEM;
	}

	ctx->cq_ring = cq_ring;
	cq_ring->ring_mask = p->cq_entries - 1;
	cq_ring->ring_entries = p->cq_entries;
	ctx->cq_mask = cq_ring->ring_mask;
	ctx->cq_entries = cq_ring->ring_entries;
	return 0;
}

/*
 * Allocate an anonymous fd, this is what constitutes the application
 * visible backing of an io_uring instance. The application mmaps this
 * fd to gain access to the SQ/CQ ring details. If UNIX sockets are enabled,
 * we have to tie this fd to a socket for file garbage collection purposes.
 */
static int io_uring_get_fd(struct io_ring_ctx *ctx)
{
	struct file *file;
	int ret;

#if defined(CONFIG_UNIX)
	ret = sock_create_kern(&init_net, PF_UNIX, SOCK_RAW, IPPROTO_IP,
				&ctx->ring_sock);
	if (ret)
		return ret;
#endif

	ret = get_unused_fd_flags(O_RDWR | O_CLOEXEC);
	if (ret < 0)
		goto err;

	file = anon_inode_getfile("[io_uring]", &io_uring_fops, ctx,
					O_RDWR | O_CLOEXEC);
	if (IS_ERR(file)) {
		put_unused_fd(ret);
		ret = PTR_ERR(file);
		goto err;
	}

#if defined(CONFIG_UNIX)
	ctx->ring_sock->file = file;
	ctx->ring_sock->sk->sk_user_data = ctx;
#endif
	fd_install(ret, file);
	return ret;
err:
#if defined(CONFIG_UNIX)
	sock_release(ctx->ring_sock);
	ctx->ring_sock = NULL;
#endif
	return ret;
}

static int io_uring_create(unsigned entries, struct io_uring_params *p)
{
	struct user_struct *user = NULL;
	struct io_ring_ctx *ctx;
	bool account_mem;
	int ret;

	if (!entries || entries > IORING_MAX_ENTRIES)
		return -EINVAL;

	/*
	 * Use twice as many entries for the CQ ring. It's possible for the
	 * application to drive a higher depth than the size of the SQ ring,
	 * since the sqes are only used at submission time. This allows for
	 * some flexibility in overcommitting a bit.
	 */
	p->sq_entries = roundup_pow_of_two(entries);
	p->cq_entries = 2 * p->sq_entries;

	user = get_uid(current_user());
	account_mem = !capable(CAP_IPC_LOCK);

	if (account_mem) {
		ret = io_account_mem(user,
				ring_pages(p->sq_entries, p->cq_entries));
		if (ret) {
			free_uid(user);
			return ret;
		}
	}

	ctx = io_ring_ctx_alloc(p);
	if (!ctx) {
		if (account_mem)
			io_unaccount_mem(user, ring_pages(p->sq_entries,
								p->cq_entries));
		free_uid(user);
		return -ENOMEM;
	}
	ctx->compat = in_compat_syscall();
	ctx->account_mem = account_mem;
	ctx->user = user;

	ret = io_allocate_scq_urings(ctx, p);
	if (ret)
		goto err;
io_uring: add submission polling
This enables an application to do IO, without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard its
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 18:22:30 +00:00
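As a userspace sketch of that guard (an illustration, not code from the patch: 'sq_flags' is assumed to point at the mmap'ed SQ ring flags word, and the syscall number is the x86_64 value):

#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/io_uring.h>

#ifndef __NR_io_uring_enter
#define __NR_io_uring_enter	426	/* x86_64 value; adjust per architecture */
#endif

/* Only enter the kernel if the SQ poll thread has flagged itself idle. */
static void kick_sq_thread_if_idle(int ring_fd, const volatile unsigned *sq_flags)
{
	/* acquire fence stands in for the read_barrier() above */
	atomic_thread_fence(memory_order_acquire);
	if (*sq_flags & IORING_SQ_NEED_WAKEUP)
		syscall(__NR_io_uring_enter, ring_fd, 0, 0,
			IORING_ENTER_SQ_WAKEUP, NULL, 0);
}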
	ret = io_sq_offload_start(ctx, p);
	if (ret)
		goto err;

	ret = io_uring_get_fd(ctx);
	if (ret < 0)
		goto err;

	memset(&p->sq_off, 0, sizeof(p->sq_off));
	p->sq_off.head = offsetof(struct io_sq_ring, r.head);
	p->sq_off.tail = offsetof(struct io_sq_ring, r.tail);
	p->sq_off.ring_mask = offsetof(struct io_sq_ring, ring_mask);
	p->sq_off.ring_entries = offsetof(struct io_sq_ring, ring_entries);
	p->sq_off.flags = offsetof(struct io_sq_ring, flags);
	p->sq_off.dropped = offsetof(struct io_sq_ring, dropped);
	p->sq_off.array = offsetof(struct io_sq_ring, array);

	memset(&p->cq_off, 0, sizeof(p->cq_off));
	p->cq_off.head = offsetof(struct io_cq_ring, r.head);
	p->cq_off.tail = offsetof(struct io_cq_ring, r.tail);
	p->cq_off.ring_mask = offsetof(struct io_cq_ring, ring_mask);
	p->cq_off.ring_entries = offsetof(struct io_cq_ring, ring_entries);
	p->cq_off.overflow = offsetof(struct io_cq_ring, overflow);
	p->cq_off.cqes = offsetof(struct io_cq_ring, cqes);
	return ret;
err:
	io_ring_ctx_wait_and_kill(ctx);
	return ret;
}

/*
 * Sets up an aio uring context, and returns the fd. Applications ask for a
 * ring size, we return the actual sq/cq ring sizes (among other things) in the
 * params structure passed in.
 */
static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
{
	struct io_uring_params p;
	long ret;
	int i;

	if (copy_from_user(&p, params, sizeof(p)))
		return -EFAULT;
	for (i = 0; i < ARRAY_SIZE(p.resv); i++) {
		if (p.resv[i])
			return -EINVAL;
	}

	if (p.flags & ~(IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL |
			IORING_SETUP_SQ_AFF))
		return -EINVAL;

	ret = io_uring_create(entries, &p);
	if (ret < 0)
		return ret;

	if (copy_to_user(params, &p, sizeof(p)))
		return -EFAULT;

	return ret;
}

SYSCALL_DEFINE2(io_uring_setup, u32, entries,
		struct io_uring_params __user *, params)
{
	return io_uring_setup(entries, params);
}
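To make the contract of io_uring_setup(2) concrete from the caller's side, here is a hedged userspace sketch: the helper name is invented, linux/io_uring.h is assumed to provide the mmap offsets and struct definitions, and the syscall number is the x86_64 value.

#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/io_uring.h>

#ifndef __NR_io_uring_setup
#define __NR_io_uring_setup	425	/* x86_64 value; adjust per architecture */
#endif

/* Create a ring and map the SQ ring using the offsets returned in *p. */
static int setup_and_map_sq_ring(unsigned entries, struct io_uring_params *p,
				 void **sq_ring)
{
	size_t sq_size;
	int fd;

	memset(p, 0, sizeof(*p));
	fd = syscall(__NR_io_uring_setup, entries, p);
	if (fd < 0)
		return -1;

	/* the SQ ring mapping ends after the sqe index array */
	sq_size = p->sq_off.array + p->sq_entries * sizeof(unsigned);
	*sq_ring = mmap(NULL, sq_size, PROT_READ | PROT_WRITE, MAP_SHARED,
			fd, IORING_OFF_SQ_RING);
	if (*sq_ring == MAP_FAILED) {
		close(fd);
		return -1;
	}
	return fd;
}

The CQ ring and the sqe array get mapped the same way, at IORING_OFF_CQ_RING and IORING_OFF_SQES respectively.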
io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
setup the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having setup an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to set up a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per buffer size is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-09 16:16:05 +00:00
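A hedged userspace sketch of that flow follows. The helper names are invented for illustration, 'sqe' is assumed to point at a free slot in the mapped sqe array, linux/io_uring.h is assumed to be installed (its header names the index field buf_index), and the syscall number is the x86_64 value.

#include <string.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>
#include <linux/io_uring.h>

#ifndef __NR_io_uring_register
#define __NR_io_uring_register	427	/* x86_64 value; adjust per architecture */
#endif

/* Pin one buffer up front, so no get_user_pages() is needed per IO. */
static int register_fixed_buffer(int ring_fd, void *buf, size_t len)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };

	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_BUFFERS, &iov, 1);
}

/* Describe a fixed-buffer read into part of that single registered buffer. */
static void prep_read_fixed(struct io_uring_sqe *sqe, int file_fd,
			    void *buf, unsigned len, __u64 file_off)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_READ_FIXED;
	sqe->fd = file_fd;
	sqe->addr = (unsigned long) buf;	/* must lie inside the registered buffer */
	sqe->len = len;
	sqe->off = file_off;
	sqe->buf_index = 0;			/* index into the registered iovec array */
}

A write works the same way with IORING_OP_WRITE_FIXED.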
static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
			       void __user *arg, unsigned nr_args)
{
	int ret;

	/*
	 * Kill the percpu ref and wait for existing users of the ctx to drop
	 * their references, so the (un)register operations below run with the
	 * ring quiesced.
	 */
	percpu_ref_kill(&ctx->refs);
	wait_for_completion(&ctx->ctx_done);

	switch (opcode) {
	case IORING_REGISTER_BUFFERS:
		ret = io_sqe_buffer_register(ctx, arg, nr_args);
		break;
	case IORING_UNREGISTER_BUFFERS:
		ret = -EINVAL;
		if (arg || nr_args)
			break;
		ret = io_sqe_buffer_unregister(ctx);
		break;
	case IORING_REGISTER_FILES:
		ret = io_sqe_files_register(ctx, arg, nr_args);
		break;
	case IORING_UNREGISTER_FILES:
		ret = -EINVAL;
		if (arg || nr_args)
			break;
		ret = io_sqe_files_unregister(ctx);
		break;
	default:
		ret = -EINVAL;
		break;
	}

	/* bring the ctx back to life */
	reinit_completion(&ctx->ctx_done);
	percpu_ref_reinit(&ctx->refs);
	return ret;
}

SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
		void __user *, arg, unsigned int, nr_args)
{
	struct io_ring_ctx *ctx;
	long ret = -EBADF;
	struct fd f;

	f = fdget(fd);
	if (!f.file)
		return -EBADF;

	ret = -EOPNOTSUPP;
	if (f.file->f_op != &io_uring_fops)
		goto out_fput;

	ctx = f.file->private_data;

	mutex_lock(&ctx->uring_lock);
	ret = __io_uring_register(ctx, opcode, arg, nr_args);
	mutex_unlock(&ctx->uring_lock);
out_fput:
	fdput(f);
	return ret;
}
static int __init io_uring_init(void)
{
	req_cachep = KMEM_CACHE(io_kiocb, SLAB_HWCACHE_ALIGN | SLAB_PANIC);
	return 0;
};
__initcall(io_uring_init);