Pavel Begunkov
6d63416dc5
io_uring: optimise plugging
...
Plugging is only needed for requests that also need a file, so hide
plugging under a ->needs_file check. Also, place the ->needs_file and
->plug bits into the same byte of io_op_defs; it may matter for
compilers, e.g. only with this change did a tested one decide to
optimise two memory testb instructions into a mov with two register
testb.
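A minimal userspace sketch of the byte-packing idea (hypothetical
struct and names, not the kernel's io_op_defs): once both one-bit flags
live in the same byte, the compiler can fetch them with a single load
and test them in registers.

#include <stdio.h>

/* Hypothetical stand-in for io_op_defs: keeping both flag bits in the
 * same byte lets the compiler combine the memory accesses when the two
 * are tested together. */
struct op_def {
    unsigned needs_file : 1;
    unsigned plug       : 1;    /* same byte as needs_file */
};

static void submit(const struct op_def *def)
{
    if (def->needs_file && def->plug)   /* one load, register tests */
        puts("plug for this request");
}

int main(void)
{
    struct op_def def = { .needs_file = 1, .plug = 1 };

    submit(&def);
    return 0;
}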
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/1600d1287bb7d16451d4ef3343252787a5314927.1633532552.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:55 -06:00
Pavel Begunkov
54daa9b2d8
io_uring: correct fill events helpers types
...
A CQE result is a 32-bit integer, so the functions generating CQEs
should accept an int rather than a long. Convert io_cqring_fill_event()
and the other helpers.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/7ca6f15255e9117eae28adcac272744cae29b113.1633373302.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:55 -06:00
Pavel Begunkov
eb6e6f0690
io_uring: inline io_poll_complete
...
Inline io_poll_complete(); it's simple and serves no particular purpose
as a separate function.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/933d7ee3e4450749a2d892235462c8f18d030293.1633373302.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:55 -06:00
Pavel Begunkov
867f8fa5ae
io_uring: inline io_req_needs_clean()
...
There is only a single user of io_req_needs_clean(), so inline it.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/6111d0221ef4b439cad401e135dd6a5f990a0501.1633373302.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
d17e56eb49
io_uring: remove struct io_completion
...
We keep struct io_completion only as temporary storage for cflags.
Place that field in io_kiocb directly; it's cleaner, removes extra bits
and might even be used for future optimisations.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/5299bd5c223204065464bd87a515d0e405316086.1633373302.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
d886e185a1
io_uring: control ->async_data with a REQ_F flag
...
->async_data is a slow path, so it won't matter much if we do the clean
up inside io_clean_op(). Moreover, in many cases it's allocated together
with setting one or more of the IO_REQ_CLEAN_FLAGS flags, so it'd go
through io_clean_op() anyway.
Control ->async_data allocation with a new flag, REQ_F_ASYNC_DATA, so we
can do all the maintenance under the io_req_needs_clean() fast check.
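A minimal sketch of the flag-gating pattern (simplified types;
REQ_F_ASYNC_DATA is the flag from this patch, the rest is illustrative):
the pointer is only touched when its flag says it is live, so the fast
path tests a single flags word instead of the pointer itself.

#include <stdlib.h>

#define REQ_F_ASYNC_DATA (1U << 0)  /* flag from this patch; value illustrative */

struct req {
    unsigned flags;
    void *async_data;
};

/* Cleanup only dereferences ->async_data when the flag is set. */
static void clean_op(struct req *req)
{
    if (req->flags & REQ_F_ASYNC_DATA) {
        free(req->async_data);
        req->flags &= ~REQ_F_ASYNC_DATA;
    }
}

int main(void)
{
    struct req req = { .flags = REQ_F_ASYNC_DATA,
                       .async_data = malloc(64) };

    clean_op(&req);
    return 0;
}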
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/6892cf5883c459f36bda26f30ceb16742b20b84b.1633373302.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
c1e53a6988
io_uring: optimise io_free_batch_list()
...
Delay reading the next node in io_free_batch_list(); it allows the
compiler to load the value a bit later, reducing register spilling in
some cases. With gcc 11.1 it helped move the @task_refs variable from
the stack to a register and optimised out a couple of per-request
instructions.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/cc9fdfb6f72a4e8bc9918a5e9f2d97869a263ae4.1633373302.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
c072481ded
io_uring: mark cold functions
...
Attribute cold functions so compilers can optimise them for size. It
shrinks the binary by 2.5-3%.
before vs after
text data bss dec hex filename
90670 14002 8 104680 198e8 ./fs/io_uring.o
88053 14002 8 102063 18eaf ./fs/io_uring.o
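A minimal userspace sketch of the technique using GCC's cold attribute
(the kernel spells it __cold); the function name is illustrative:

#include <stdio.h>

/* Marking a rarely-executed function cold makes the compiler optimise
 * it for size and place it in .text.unlikely, away from the hot text. */
__attribute__((cold, noinline))
static void report_failure(const char *msg)
{
    fprintf(stderr, "failure: %s\n", msg);
}

int main(int argc, char **argv)
{
    if (argc > 1)       /* rare path in this toy example */
        report_failure(argv[1]);
    return 0;
}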
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/b53d385f91dca45170b67d7f11c7abd787e821f6.1633373302.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
37f0e767e1
io_uring: optimise ctx referencing by requests
...
Currently, we allocate one ctx reference per request at submission time
and put it when the request is freed. It's batched and not so expensive,
but it still bloats the kernel, adds two function calls for RCU and adds
some overhead for request counting in io_free_batch_list().
Always keep one reference with a request, even when it's freed and
sitting in the io_uring request caches. There is extra work at the ring
exit / quiesce paths, which now need to put all cached requests.
io_ring_exit_work() is already looping, so it's not a problem. Add
hybrid busy-waiting to io_ctx_quiesce() as well for now.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/99613fbe396e80777228cde39bbda1aa8938554e.1633373302.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
d60aa65ba2
io_uring: merge CQ and poll waitqueues
...
->cq_wait and ->poll_wait are woken up in the same manner; use a single
waitqueue for both of them. CQ waiters are queued exclusively, so a
wake-up will first go over all pollers, which is what we need.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/00fe603e50000365774cf8435ef5fe03f049c1c9.1633373302.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
aede728aae
io_uring: don't wake sqpoll in io_cqring_ev_posted
...
io_cqring_ev_posted() doesn't need to wake SQPOLL; it's driven either by
userspace or by task_work, and no action is required on request
completion. Rip out the bits waking it up in io_cqring_ev_posted().
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/b49dab27b64cf11f4c50f2f90dcaac123430e05d.1633373302.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
a33ae9ce16
io_uring: optimise request allocation
...
Even after fully inlining io_alloc_req(), my compiler does a NULL check
on the path of successful allocation, and no hacks like an empty
dereference help it. Restructure io_alloc_req() by splitting out the
refilling part, so the compiler generates a slightly better binary.
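A minimal sketch of the restructuring (hypothetical names, simplified
cache): keep the fast path trivially inlinable and push refilling into
a separate, non-inlined helper that guarantees the cache is non-empty
on success.

#include <stdbool.h>
#include <stdlib.h>

struct cache { void *items[64]; unsigned nr; };

/* Slow path, kept out of line: refill the cache; returns false only
 * if no item could be made available. */
__attribute__((noinline))
static bool cache_refill(struct cache *c)
{
    while (c->nr < 8) {
        void *item = malloc(256);
        if (!item)
            return c->nr != 0;
        c->items[c->nr++] = item;
    }
    return true;
}

/* Fast path: after a successful refill the cache is known non-empty,
 * so popping needs no NULL check of its own. */
static inline void *cache_alloc(struct cache *c)
{
    if (c->nr == 0 && !cache_refill(c))
        return NULL;
    return c->items[--c->nr];
}

int main(void)
{
    struct cache c = { .nr = 0 };
    void *req = cache_alloc(&c);

    free(req);
    while (c.nr)
        free(c.items[--c.nr]);
    return 0;
}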
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/eda17571bdc7248d8e617b23e7132a5416e4680b.1633373302.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
fff4e40e30
io_uring: delay req queueing into compl-batch list
...
io_req_complete_state() is inlined and used in lots of places, so we
want to keep it concise. Move adding a request into a completion batch
list from io_req_complete_state() into the consumer, i.e.
__io_queue_sqe().
before vs after
text data bss dec hex filename
91894 14002 8 105904 19db0 ./fs/io_uring.o
91046 14002 8 105056 19a60 ./fs/io_uring.o
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/4afca4e11abfd4cc8e99777fdcaf4d34cf4d022d.1633373302.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
51d48dab62
io_uring: add more likely/unlikely() annotations
...
Add two extra unlikely() annotations in io_submit_sqes() and one around
io_req_needs_clean() to help the compiler avoid extra jumps in hot
paths.
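A minimal userspace sketch of the annotation (the kernel's unlikely()
expands to __builtin_expect, as reproduced here):

#include <stdio.h>

/* Same definition the kernel uses for branch-prediction hints. */
#define unlikely(x) __builtin_expect(!!(x), 0)

static int process(int fd)
{
    if (unlikely(fd < 0)) {     /* error path moved off the hot path */
        fprintf(stderr, "bad fd\n");
        return -1;
    }
    return 0;
}

int main(void)
{
    return process(0);
}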
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/88e087afe657e7660194353aada9b00f11d480f9.1633373302.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
7e3709d576
io_uring: optimise kiocb layout
...
We want ->comp_list in the second cacheline, which is hotter than the
third. Swap the field with ->link, which is not as hot and is guarded
by flags, so it isn't accessed unless there is a link. While at it, add
a couple of comments for io_kiocb fields.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/9d9dde31f8f62279a5f48c575bbc27b8290edc0c.1633373302.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
6224590d24
io_uring: add flag to not fail link after timeout
...
For some reason a non-off IORING_OP_TIMEOUT always fails links. That's
pretty inconvenient and needlessly limits chaining after it to hard
linking, which is far from ideal, e.g. it doesn't pair well with timeout
cancellation. Add a flag forcing it to not fail links on -ETIME.
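A minimal liburing-style usage sketch, assuming the flag added here is
IORING_TIMEOUT_ETIME_SUCCESS (check your uapi header); build with
-luring:

#include <liburing.h>

static int queue_timeout_then_nop(struct io_uring *ring)
{
    struct __kernel_timespec ts = { .tv_sec = 1, .tv_nsec = 0 };
    struct io_uring_sqe *sqe;

    sqe = io_uring_get_sqe(ring);
    /* Without the flag, the timeout completing with -ETIME would fail
     * the linked request below. */
    io_uring_prep_timeout(sqe, &ts, 0, IORING_TIMEOUT_ETIME_SUCCESS);
    sqe->flags |= IOSQE_IO_LINK;

    sqe = io_uring_get_sqe(ring);
    io_uring_prep_nop(sqe);     /* runs even when the timeout expires */

    return io_uring_submit(ring);
}

int main(void)
{
    struct io_uring ring;

    if (io_uring_queue_init(8, &ring, 0))
        return 1;
    queue_timeout_then_nop(&ring);
    io_uring_queue_exit(&ring);
    return 0;
}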
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/17c7ec0fb7a6113cc6be8cdaedcada0ba836ac0e.1633199723.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
30d51dd4ad
io_uring: clean up buffer select
...
Hiding a pointer to a struct io_buffer in rw.addr is error-prone. We
have some space in io_kiocb, so keep kbufs in a separate field without
aliasing and the risk of misuse.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/3e63a6a953b04cad81d9ea827b12344dd57b37b4.1633107393.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
fc0ae0244b
io_uring: init opcode in io_init_req()
...
Move the io_req_prep() call inside io_init_req(); it simplifies error
handling for the callers a bit.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/a0f59291fd52da4672c323542fd56fd899e23f8f.1633107393.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
e0eb71dcfc
io_uring: don't return from io_drain_req()
...
Never return from io_drain_req(); if we got there on a false positive
and shouldn't actually drain, punt the request to tw instead.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/93583cee51b8783706b76c73196c155b28d9e762.1633107393.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
22b2ca310a
io_uring: extract a helper for drain init
...
Add a helper, io_init_req_drain(), for initialising requests with
IOSQE_IO_DRAIN set. Also move some bits from the preamble of
io_drain_req() in there, because we already modify all the needed bits
inside the helper.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/dcb412825b35b1cb8891245a387d7d69f8d14cef.1633107393.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
5e371265ea
io_uring: disable draining earlier
...
Clear ->drain_active in two more cases where we check for the need to
drain. It's not a bug, but it may lead to some extra requests being
punted to io-wq, which is not desirable.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/d20b265f77bb4e8860b15b9987252c7c711dfcba.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
a1cdbb4cb5
io_uring: comment why inline complete calls io_clean_op()
...
io_req_complete_state() calls io_clean_op(), and that may not be
entirely obvious; leave a comment.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/21806f862151e223fdf439e5e8ed7178a8d66979.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
ef05d9ebcc
io_uring: kill off ->inflight_entry field
...
->inflight_entry is not used anymore after converting everything to
singly linked lists; remove it. Also adjust the io_kiocb layout, so
that all hot bits are in the first 3 cachelines.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/fd8d68087ede26c4e1707ce6b175aa1eb2381f2b.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
6962980947
io_uring: restructure submit sqes to_submit checks
...
Put in an explicit check for the number of requests to submit. First,
we can turn the while into a do-while, which generates better code;
second, that if can be cheaper, e.g. by using CPU flags after the sub
in io_sqring_entries().
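A minimal sketch of the loop restructuring (hypothetical names):
hoisting the emptiness check out of the loop lets the body be a
do-while, so the compiler emits one test per iteration instead of two.

#include <stddef.h>

static size_t pending(size_t head, size_t tail) { return tail - head; }
static int submit_one(void) { return 0; }

/* Check once up front; the loop itself can then assume work exists. */
static int submit_all(size_t head, size_t tail)
{
    size_t nr = pending(head, tail);
    int done = 0;

    if (!nr)
        return 0;
    do {
        if (submit_one())
            break;
        done++;
    } while (--nr);
    return done;
}

int main(void)
{
    return submit_all(0, 4) == 4 ? 0 : 1;
}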
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/5926baadd20c28feab7a5e1725fedf32e4553ff7.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
d9f9d2842c
io_uring: reshuffle queue_sqe completion handling
...
If a request completed inline, the result can only be zero; it's a
grave error otherwise. So, when we see REQ_F_COMPLETE_INLINE it's not
even necessary to check the return code, and the flag check can be
moved earlier.
It's one "if" less for inline completions, and the same two checks for
requests completing normally (ret == 0). Those are the two cases we
care about most.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/ebd4e397a9c26d96c99b24447acc309741041a83.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:54 -06:00
Pavel Begunkov
d475a9a622
io_uring: inline hot path of __io_queue_sqe()
...
Extract the slow paths from __io_queue_sqe() into a separate function
and inline the hot path. With that, everything on the submission path
is completely inlined up until io_issue_sqe():
-> io_submit_sqes()
-> io_submit_sqe() (inlined)
-> io_queue_sqe() (inlined)
-> __io_queue_sqe() (inlined)
-> io_issue_sqe()
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/f1606864d95d7f26dc28c7eec3dc6ed6ec32618a.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
4652fe3f10
io_uring: split slow path from io_queue_sqe
...
We don't want the slow path of io_queue_sqe() to be inlined, so extract
a function from it.
before vs after
text data bss dec hex filename
91950 13986 8 105944 19dd8 ./fs/io_uring.o
91758 13986 8 105752 19d18 ./fs/io_uring.o
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/fb01253911f8fb374268f65b1ba939b54ca6583f.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
2a56a9bd64
io_uring: remove drain_active check from hot path
...
req->ctx->drain_active is a bit too expensive, partially because of the
two dereferences. Do a trick: if we see it set in io_init_req(), set
REQ_F_FORCE_ASYNC, and the request automatically goes through a slower
path where we can catch it. It's nearly free to do in io_init_req()
because there is already a ->restricted check there, and the bit is in
the same byte of the bitmask.
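A minimal sketch of the trick (hypothetical names and flag values):
test the rare condition once at init time and convert it into a
per-request flag that the dispatch path already looks at.

#include <stdbool.h>
#include <stdio.h>

#define REQ_F_FORCE_ASYNC (1U << 0) /* illustrative value */

struct ctx { bool restricted; bool drain_active; };
struct req { unsigned flags; };

/* Init-time: the ctx bits are already being read for ->restricted, so
 * also folding ->drain_active into req->flags is nearly free. */
static void init_req(struct req *req, const struct ctx *ctx)
{
    req->flags = 0;
    if (ctx->drain_active)
        req->flags |= REQ_F_FORCE_ASYNC;
}

/* Hot path: a single flags test, no ctx dereferences. */
static void queue_req(const struct req *req)
{
    if (req->flags & REQ_F_FORCE_ASYNC)
        puts("slow path: handle draining there");
    else
        puts("fast path");
}

int main(void)
{
    struct ctx ctx = { .restricted = false, .drain_active = true };
    struct req req;

    init_req(&req, &ctx);
    queue_req(&req);
    return 0;
}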
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/d7e7ddc63c15e8a300833132abb3eb8fd3918aef.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
f15a343177
io_uring: deduplicate io_queue_sqe() call sites
...
There are two call sites of io_queue_sqe() in io_submit_sqe(); combine
them into one, because io_queue_sqe() is inline, we don't want to bloat
the binary, and it will only grow bigger.
before vs after
text data bss dec hex filename
92126 13986 8 106120 19e88 ./fs/io_uring.o
91966 13986 8 105960 19de8 ./fs/io_uring.o
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/506124b8e767f0a4576f7a459f6aea3d13fb4dda.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
553deffd09
io_uring: don't pass state to io_submit_state_end
...
Submission state and ctx are coupled together, so there is no need to
pass the state separately.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/e22d77a5786ef77e0c49b933ad74bae55cfb6ca6.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
1cce17aca6
io_uring: don't pass tail into io_free_batch_list
...
io_free_batch_list() iterates over all requests in the passed-in list,
so we don't really need to know the tail; we can keep iterating until
we meet NULL. Just passing in the first node is enough.
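A minimal sketch of the interface change (hypothetical types): a
NULL-terminated walk only needs the head, so the tail parameter can be
dropped.

#include <stdlib.h>

struct node { struct node *next; };

/* Iterate until NULL instead of until a caller-supplied tail: the
 * head pointer alone fully describes the batch. */
static void free_batch_list(struct node *node)
{
    while (node) {
        struct node *next = node->next;

        free(node);
        node = next;
    }
}

int main(void)
{
    struct node *head = calloc(1, sizeof(*head));

    if (head)
        head->next = calloc(1, sizeof(*head));
    free_batch_list(head);
    return 0;
}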
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/4a12c84b6d887d980e05f417ba4172d04c64acae.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
d4b7a5ef2b
io_uring: inline completion batching helpers
...
We now have a single function for batched putting of requests; inline
struct req_batch and all related helpers into it.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/595a2917f80dd94288cd7203052c7934f5446580.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
f5ed3bcd5b
io_uring: optimise batch completion
...
First, convert the rest of the iopoll bits to singly linked lists, and
also replace the per-request list_add_tail() with splicing a part of
the slist. With that, use io_free_batch_list() to put/free requests.
The main advantage is that it becomes the only user of struct req_batch
and friends, so they can be inlined. The main overhead there was the
per-request call to the not-inlined io_req_free_batch(), which is
expensive enough.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/b37fc6d5954b241e025eead7ab92c6f44a42f229.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
b3fa03fd1b
io_uring: convert iopoll_completed to store_release
...
Convert the explicit barrier around iopoll_completed to
smp_load_acquire() and smp_store_release(). It's similar on the
callback side, but it replaces a single smp_rmb() with a per-request
smp_load_acquire(); neither implies any extra CPU ordering on x86. Use
READ_ONCE() as usual where it doesn't matter.
Use it to move filling CQEs by iopoll earlier; that will be necessary
to avoid traversing the list one extra time in the future.
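A minimal userspace analogue using C11 atomics (illustrative fields,
not the kernel structs); build with -pthread. The release store
publishes the result, and the acquire load on the reader side pairs
with it.

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

struct req {
    int result;                 /* written before the flag is released */
    atomic_int completed;
};

static struct req req;

static void *completer(void *arg)
{
    (void)arg;
    req.result = 42;
    /* Release: everything written above is visible to an acquire load
     * that observes completed == 1. */
    atomic_store_explicit(&req.completed, 1, memory_order_release);
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, completer, NULL);
    /* Acquire: once we see the flag, req.result is guaranteed valid. */
    while (!atomic_load_explicit(&req.completed, memory_order_acquire))
        ;
    printf("result=%d\n", req.result);
    pthread_join(t, NULL);
    return 0;
}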
Suggested-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/8bd663cb15efdc72d6247c38ee810964e744a450.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
3aa83bfb6e
io_uring: add a helper for batch free
...
Add a helper, io_free_batch_list(), which takes a singly linked list
and puts/frees all requests from it in an efficient manner. It will be
reused later.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/4fc8306b542c6b1dd1d08e8021ef3bdb0ad15010.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
5eef4e87eb
io_uring: use single linked list for iopoll
...
Use singly linked lists for keeping iopoll requests; they take less
space and may be faster, but mostly this will benefit further patches.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/314033676b100cd485518c3bc55e1b95a0dcd71f.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
e3f721e6f6
io_uring: split iopoll loop
...
The main loop of io_do_iopoll() iterates and does ->iopoll() until it
meets the first completed request, then it continues from that position
and splices requests off to pass them through io_iopoll_complete().
Split the loop in two for clarity: iopolling, and reaping completed
requests from the list.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/a7f6fd27a94845e5dc925a47a4a9765a92e514fb.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
c2b6c6bc4e
io_uring: replace list with stack for req caches
...
Replace the struct list_head free_list serving as a cache of requests
with a singly linked stack, which is faster.
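A minimal sketch of the replacement structure (generic names): an
intrusive singly linked stack needs one pointer per node, and push/pop
touch a single head pointer.

#include <stddef.h>

struct snode { struct snode *next; };
struct stack { struct snode *head; };

/* Push/pop on a singly linked stack: one pointer per cached request
 * instead of list_head's two, and no tail bookkeeping. */
static void stack_push(struct stack *s, struct snode *n)
{
    n->next = s->head;
    s->head = n;
}

static struct snode *stack_pop(struct stack *s)
{
    struct snode *n = s->head;

    if (n)
        s->head = n->next;
    return n;
}

int main(void)
{
    struct stack cache = { .head = NULL };
    struct snode a, b;

    stack_push(&cache, &a);
    stack_push(&cache, &b);
    return stack_pop(&cache) == &b ? 0 : 1;     /* LIFO order */
}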
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/1bc942b82422fb2624b8353bd93aca183a022846.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
3ab665b74e
io_uring: remove allocation cache array
...
We have several request allocation layers; remove the last one, the
submit->reqs array, and always use submit->free_reqs instead.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/8547095c35f7a87bab14f6447ecd30a273ed7500.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
6f33b0bc4e
io_uring: use slist for completion batching
...
Currently we collect requests for completion batching in an array.
Replace it with a singly linked list; it's as fast as an array but
doesn't take as much space in ctx, and will be used in future patches.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/a666826f2854d17e9fb9417fb302edfeb750f425.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
5ba3c874eb
io_uring: make io_do_iopoll return number of reqs
...
Don't pass an nr_events pointer around; return the number directly,
it's less expensive than pointer increments.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/f771a8153a86f16f12ff4272524e9e549c5de40b.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
87a115fb71
io_uring: force_nonspin
...
We don't really need to pass the number of requests to complete into
io_do_iopoll(); a flag saying whether to enforce non-spin mode is
enough. It should be straightforward, except maybe for
io_iopoll_check(). We pass !min there because we never enter with the
number of already-reaped requests larger than the specified @min, apart
from the first iteration, where nr_events is 0 and so the final check
is identical.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/782b39d1d8ec584eae15bca0a1feb6f0571fe5b8.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
6878b40e7b
io_uring: mark having different creds unlikely
...
Hint to the compiler that it's not likely for a request to have creds
different from those of current. The current code generation is far
from ideal; hopefully it can help some compilers remove duplicated jump
tables and the like.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/e7815251ac4bf5a4a23d298c752f029ae19f3837.1632516769.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Hao Xu
8d4af6857c
io_uring: return boolean value for io_alloc_async_data
...
A boolean value is good enough for io_alloc_async_data().
Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Link: https://lore.kernel.org/r/20210922101522.9179-1-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
68fe256aad
io_uring: optimise io_req_init() sqe flags checks
...
IOSQE_IO_DRAIN is quite marginal and we don't care too much about
IOSQE_BUFFER_SELECT. Save two ifs and hide both of them under the
SQE_VALID_FLAGS check. Now we first check whether the request uses a
"safe" subset, i.e. without DRAIN and BUFFER_SELECT, and only if it
doesn't do we test the rest of the flags.
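A minimal sketch of the two-level check (illustrative flag values and
mask names, not the uapi encoding): test the common safe subset with
one mask first and only fall back to per-flag checks for the rare ones.

#include <stdio.h>

#define IOSQE_IO_DRAIN      (1U << 0)   /* illustrative values */
#define IOSQE_BUFFER_SELECT (1U << 1)
#define IOSQE_ASYNC         (1U << 2)

#define SQE_VALID_FLAGS (IOSQE_IO_DRAIN | IOSQE_BUFFER_SELECT | IOSQE_ASYNC)
#define SQE_RARE_FLAGS  (IOSQE_IO_DRAIN | IOSQE_BUFFER_SELECT)

static int init_req(unsigned flags)
{
    if (flags & ~SQE_VALID_FLAGS)
        return -1;                      /* invalid flags */
    /* Fast path falls straight through; only the rare flags need the
     * individual checks below. */
    if (flags & SQE_RARE_FLAGS) {
        if (flags & IOSQE_IO_DRAIN)
            puts("set up draining");
        if (flags & IOSQE_BUFFER_SELECT)
            puts("validate buffer select");
    }
    return 0;
}

int main(void)
{
    return init_req(IOSQE_ASYNC);
}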
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/dccfb9ab2ab0969a2d8dc59af88fa0ce44eeb1d5.1631703764.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Pavel Begunkov
a3f349071e
io_uring: remove ctx referencing from complete_post
...
Now that completions are done from task context, it's either the task
itself, task_work or an io-wq worker. In all those cases the ctx is
kept alive by mutexing, explicit referencing or the req references held
by io-wq. Remove the extra ctx pinning from io_req_complete_post().
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/60a0e96434c16ab4fe587651448290d61ec9a113.1631703756.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:53 -06:00
Hao Xu
83f84356bc
io_uring: add more uring info to fdinfo for debug
...
Developers may need some uring info to help them debug and address
issues in production. This includes sqring/cqring head/tail and
detailed sqe/cqe info, which is very useful when an application is hung
on a ring.
Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Link: https://lore.kernel.org/r/20210913130854.38542-1-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:52 -06:00
Pavel Begunkov
d97ec6239a
io_uring: kill extra wake_up_process in tw add
...
TWA_SIGNAL already wakes the thread, so there is no need for a
wake_up_process() after it.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/7e90cf643f633e857443e0c9e72471b221735c50.1631115443.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:52 -06:00
Pavel Begunkov
c450178d9b
io_uring: dedup CQE flushing non-empty checks
...
We don't do io_submit_flush_completions() when there are no requests
enqueued, and every single caller checks for that first. Hide the check
inside the function, without forgetting about inlining; that will make
it much easier to change the empty-check condition in the future.
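A minimal sketch of the pattern (hypothetical names): an inline wrapper
keeps the cheap emptiness test at every call site while centralising it
in one place.

#include <stddef.h>

struct node { struct node *next; };
struct ctx { struct node *compl_list; };

static void __flush_completions(struct ctx *ctx)
{
    /* ... heavy flushing work, kept out of line ... */
    ctx->compl_list = NULL;
}

/* Callers just call this; the empty check lives in exactly one place
 * but, being inline, still costs only a test at each call site. */
static inline void flush_completions(struct ctx *ctx)
{
    if (ctx->compl_list)
        __flush_completions(ctx);
}

int main(void)
{
    struct ctx ctx = { .compl_list = NULL };

    flush_completions(&ctx);    /* cheap no-op when empty */
    return 0;
}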
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/d7ff8cef5da1b38e8ea648f5aad9a315ddfc7b57.1631115443.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:52 -06:00
Pavel Begunkov
d81499bfcd
io_uring: inline linked part of io_req_find_next
...
Inline the part of __io_req_find_next() that returns a request but
doesn't need io_disarm_next(). It's just two places, but it makes links
a bit faster.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/4126d13f23d0e91b39b3558e16bd86cafa7fcef2.1631115443.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-19 05:49:52 -06:00