// SPDX-License-Identifier: GPL-2.0-only
/*
 * linux/fs/nfs/pagelist.c
 *
 * A set of helper functions for managing NFS read and write requests.
 * The main purpose of these routines is to provide support for the
 * coalescing of several requests into a single RPC call.
 *
 * Copyright 2000, 2001 (c) Trond Myklebust <trond.myklebust@fys.uio.no>
 *
 */

#include <linux/slab.h>
#include <linux/file.h>
#include <linux/sched.h>
#include <linux/sunrpc/clnt.h>
#include <linux/nfs.h>
#include <linux/nfs3.h>
#include <linux/nfs4.h>
#include <linux/nfs_fs.h>
#include <linux/nfs_page.h>
#include <linux/nfs_mount.h>
#include <linux/export.h>
#include <linux/filelock.h>

#include "internal.h"
#include "pnfs.h"
#include "nfstrace.h"
#include "fscache.h"

#define NFSDBG_FACILITY		NFSDBG_PAGECACHE

static struct kmem_cache *nfs_page_cachep;
static const struct rpc_call_ops nfs_pgio_common_ops;

struct nfs_page_iter_page {
	const struct nfs_page *req;
	size_t count;
};

static void nfs_page_iter_page_init(struct nfs_page_iter_page *i,
				    const struct nfs_page *req)
{
	i->req = req;
	i->count = 0;
}

static void nfs_page_iter_page_advance(struct nfs_page_iter_page *i, size_t sz)
{
	const struct nfs_page *req = i->req;
	size_t tmp = i->count + sz;

	i->count = (tmp < req->wb_bytes) ? tmp : req->wb_bytes;
}

static struct page *nfs_page_iter_page_get(struct nfs_page_iter_page *i)
{
	const struct nfs_page *req = i->req;
	struct page *page;

	if (i->count != req->wb_bytes) {
		size_t base = i->count + req->wb_pgbase;
		size_t len = PAGE_SIZE - offset_in_page(base);

		page = nfs_page_to_page(req, base);
		nfs_page_iter_page_advance(i, len);
		return page;
	}
	return NULL;
}

static struct nfs_pgio_mirror *
nfs_pgio_get_mirror(struct nfs_pageio_descriptor *desc, u32 idx)
{
	if (desc->pg_ops->pg_get_mirror)
		return desc->pg_ops->pg_get_mirror(desc, idx);
	return &desc->pg_mirrors[0];
}

struct nfs_pgio_mirror *
nfs_pgio_current_mirror(struct nfs_pageio_descriptor *desc)
{
	return nfs_pgio_get_mirror(desc, desc->pg_mirror_idx);
}
EXPORT_SYMBOL_GPL(nfs_pgio_current_mirror);

static u32
nfs_pgio_set_current_mirror(struct nfs_pageio_descriptor *desc, u32 idx)
{
	if (desc->pg_ops->pg_set_mirror)
		return desc->pg_ops->pg_set_mirror(desc, idx);
	return desc->pg_mirror_idx;
}

void nfs_pgheader_init(struct nfs_pageio_descriptor *desc,
		       struct nfs_pgio_header *hdr,
		       void (*release)(struct nfs_pgio_header *hdr))
{
	struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);

	hdr->req = nfs_list_entry(mirror->pg_list.next);
	hdr->inode = desc->pg_inode;
	hdr->cred = nfs_req_openctx(hdr->req)->cred;
	hdr->io_start = req_offset(hdr->req);
	hdr->good_bytes = mirror->pg_count;
	hdr->io_completion = desc->pg_io_completion;
	hdr->dreq = desc->pg_dreq;
	nfs_netfs_set_pgio_header(hdr, desc);
	hdr->release = release;
	hdr->completion_ops = desc->pg_completion_ops;
	if (hdr->completion_ops->init_hdr)
		hdr->completion_ops->init_hdr(hdr);

	hdr->pgio_mirror_idx = desc->pg_mirror_idx;
}
EXPORT_SYMBOL_GPL(nfs_pgheader_init);

void nfs_set_pgio_error(struct nfs_pgio_header *hdr, int error, loff_t pos)
{
	unsigned int new = pos - hdr->io_start;

	trace_nfs_pgio_error(hdr, error, pos);
	if (hdr->good_bytes > new) {
		hdr->good_bytes = new;
		clear_bit(NFS_IOHDR_EOF, &hdr->flags);
		if (!test_and_set_bit(NFS_IOHDR_ERROR, &hdr->flags))
			hdr->error = error;
	}
}

static inline struct nfs_page *nfs_page_alloc(void)
{
	struct nfs_page *p =
		kmem_cache_zalloc(nfs_page_cachep, nfs_io_gfp_mask());
	if (p)
		INIT_LIST_HEAD(&p->wb_list);
	return p;
}

static inline void
nfs_page_free(struct nfs_page *p)
{
	kmem_cache_free(nfs_page_cachep, p);
}

/**
 * nfs_iocounter_wait - wait for i/o to complete
 * @l_ctx: nfs_lock_context with io_counter to use
 *
 * returns -ERESTARTSYS if interrupted by a fatal signal.
 * Otherwise returns 0 once the io_count hits 0.
 */
int
nfs_iocounter_wait(struct nfs_lock_context *l_ctx)
{
	return wait_var_event_killable(&l_ctx->io_count,
				       !atomic_read(&l_ctx->io_count));
}

/**
 * nfs_async_iocounter_wait - wait on a rpc_waitqueue for I/O
 * to complete
 * @task: the rpc_task that should wait
 * @l_ctx: nfs_lock_context with io_counter to check
 *
 * Returns true if there is outstanding I/O to wait on and the
 * task has been put to sleep.
 */
bool
nfs_async_iocounter_wait(struct rpc_task *task, struct nfs_lock_context *l_ctx)
{
	struct inode *inode = d_inode(l_ctx->open_context->dentry);
	bool ret = false;

	if (atomic_read(&l_ctx->io_count) > 0) {
		rpc_sleep_on(&NFS_SERVER(inode)->uoc_rpcwaitq, task, NULL);
		ret = true;
	}

	if (atomic_read(&l_ctx->io_count) == 0) {
		rpc_wake_up_queued_task(&NFS_SERVER(inode)->uoc_rpcwaitq, task);
		ret = false;
	}

	return ret;
}
EXPORT_SYMBOL_GPL(nfs_async_iocounter_wait);

/*
 * nfs_page_group_lock_head - page lock the head of the page group
 * @req: any member of the page group
 */
struct nfs_page *
nfs_page_group_lock_head(struct nfs_page *req)
{
	struct nfs_page *head = req->wb_head;

	while (!nfs_lock_request(head)) {
		int ret = nfs_wait_on_request(head);
		if (ret < 0)
			return ERR_PTR(ret);
	}
	if (head != req)
		kref_get(&head->wb_kref);
	return head;
}

/*
 * nfs_unroll_locks - unlock all newly locked reqs and wait on @req
 * @head: head request of page group, must be holding head lock
 * @req: request that couldn't lock and needs to wait on the req bit lock
 *
 * This is a helper function for nfs_lock_and_join_requests
 */
static void
nfs_unroll_locks(struct nfs_page *head, struct nfs_page *req)
{
	struct nfs_page *tmp;

	/* relinquish all the locks successfully grabbed this run */
	for (tmp = head->wb_this_page; tmp != req; tmp = tmp->wb_this_page) {
		if (!kref_read(&tmp->wb_kref))
			continue;
		nfs_unlock_and_release_request(tmp);
	}
}

/*
 * nfs_page_group_lock_subreq - try to lock a subrequest
 * @head: head request of page group
 * @subreq: request to lock
 *
 * This is a helper function for nfs_lock_and_join_requests which
 * must be called with the head request and page group both locked.
 * On error, it returns with the page group unlocked.
 */
static int
nfs_page_group_lock_subreq(struct nfs_page *head, struct nfs_page *subreq)
{
	int ret;

	if (!kref_get_unless_zero(&subreq->wb_kref))
		return 0;
	while (!nfs_lock_request(subreq)) {
		nfs_page_group_unlock(head);
		ret = nfs_wait_on_request(subreq);
		if (!ret)
			ret = nfs_page_group_lock(head);
		if (ret < 0) {
			nfs_unroll_locks(head, subreq);
			nfs_release_request(subreq);
			return ret;
		}
	}
	return 0;
}

/*
 * nfs_page_group_lock_subrequests - try to lock the subrequests
 * @head: head request of page group
 *
 * This is a helper function for nfs_lock_and_join_requests which
 * must be called with the head request locked.
 */
int nfs_page_group_lock_subrequests(struct nfs_page *head)
{
	struct nfs_page *subreq;
	int ret;

	ret = nfs_page_group_lock(head);
	if (ret < 0)
		return ret;
	/* lock each request in the page group */
	for (subreq = head->wb_this_page; subreq != head;
	     subreq = subreq->wb_this_page) {
		ret = nfs_page_group_lock_subreq(head, subreq);
		if (ret < 0)
			return ret;
	}
	nfs_page_group_unlock(head);
	return 0;
}

/*
 * nfs_page_set_headlock - set the request PG_HEADLOCK
 * @req: request that is to be locked
 *
 * this lock must be held when modifying req->wb_head
 *
 * return 0 on success, < 0 on error
 */
int
nfs_page_set_headlock(struct nfs_page *req)
{
	if (!test_and_set_bit(PG_HEADLOCK, &req->wb_flags))
		return 0;

	set_bit(PG_CONTENDED1, &req->wb_flags);
	smp_mb__after_atomic();
	return wait_on_bit_lock(&req->wb_flags, PG_HEADLOCK,
				TASK_UNINTERRUPTIBLE);
}

/*
 * nfs_page_clear_headlock - clear the request PG_HEADLOCK
 * @req: request that is to be unlocked
 */
void
nfs_page_clear_headlock(struct nfs_page *req)
{
	clear_bit_unlock(PG_HEADLOCK, &req->wb_flags);
	smp_mb__after_atomic();
	if (!test_bit(PG_CONTENDED1, &req->wb_flags))
		return;
	wake_up_bit(&req->wb_flags, PG_HEADLOCK);
}

/*
 * nfs_page_group_lock - lock the head of the page group
 * @req: request in group that is to be locked
 *
 * this lock must be held when traversing or modifying the page
 * group list
 *
 * return 0 on success, < 0 on error
 */
int
nfs_page_group_lock(struct nfs_page *req)
{
	int ret;

	ret = nfs_page_set_headlock(req);
	if (ret || req->wb_head == req)
		return ret;
	return nfs_page_set_headlock(req->wb_head);
}

/*
 * nfs_page_group_unlock - unlock the head of the page group
 * @req: request in group that is to be unlocked
 */
void
nfs_page_group_unlock(struct nfs_page *req)
{
	if (req != req->wb_head)
		nfs_page_clear_headlock(req->wb_head);
	nfs_page_clear_headlock(req);
}

/*
 * nfs_page_group_sync_on_bit_locked
 *
 * must be called with page group lock held
 */
static bool
nfs_page_group_sync_on_bit_locked(struct nfs_page *req, unsigned int bit)
{
	struct nfs_page *head = req->wb_head;
	struct nfs_page *tmp;

	WARN_ON_ONCE(!test_bit(PG_HEADLOCK, &head->wb_flags));
	WARN_ON_ONCE(test_and_set_bit(bit, &req->wb_flags));

	tmp = req->wb_this_page;
	while (tmp != req) {
		if (!test_bit(bit, &tmp->wb_flags))
			return false;
		tmp = tmp->wb_this_page;
	}

	/* true! reset all bits */
	tmp = req;
	do {
		clear_bit(bit, &tmp->wb_flags);
		tmp = tmp->wb_this_page;
	} while (tmp != req);

	return true;
}

/*
 * nfs_page_group_sync_on_bit - set bit on current request, but only
 *   return true if the bit is set for all requests in page group
 * @req - request in page group
 * @bit - PG_* bit that is used to sync page group
 */
bool nfs_page_group_sync_on_bit(struct nfs_page *req, unsigned int bit)
{
	bool ret;

	nfs_page_group_lock(req);
	ret = nfs_page_group_sync_on_bit_locked(req, bit);
	nfs_page_group_unlock(req);

	return ret;
}

/*
 * nfs_page_group_init - Initialize the page group linkage for @req
 * @req - a new nfs request
 * @prev - the previous request in page group, or NULL if @req is the first
 *	   or only request in the group (the head).
 */
static inline void
nfs_page_group_init(struct nfs_page *req, struct nfs_page *prev)
{
	struct inode *inode;
	WARN_ON_ONCE(prev == req);

	if (!prev) {
		/* a head request */
		req->wb_head = req;
		req->wb_this_page = req;
	} else {
		/* a subrequest */
		WARN_ON_ONCE(prev->wb_this_page != prev->wb_head);
		WARN_ON_ONCE(!test_bit(PG_HEADLOCK, &prev->wb_head->wb_flags));
		req->wb_head = prev->wb_head;
		req->wb_this_page = prev->wb_this_page;
		prev->wb_this_page = req;

		/* All subrequests take a ref on the head request until
		 * nfs_page_group_destroy is called */
		kref_get(&req->wb_head->wb_kref);

		/* grab extra ref and bump the request count if head request
		 * has extra ref from the write/commit path to handle handoff
		 * between write and commit lists. */
		if (test_bit(PG_INODE_REF, &prev->wb_head->wb_flags)) {
			inode = nfs_page_to_inode(req);
			set_bit(PG_INODE_REF, &req->wb_flags);
			kref_get(&req->wb_kref);
			atomic_long_inc(&NFS_I(inode)->nrequests);
		}
	}
}

/*
 * nfs_page_group_destroy - sync the destruction of page groups
 * @req - request that no longer needs the page group
 *
 * releases the page group reference from each member once all
 * members have called this function.
 */
static void
nfs_page_group_destroy(struct kref *kref)
{
	struct nfs_page *req = container_of(kref, struct nfs_page, wb_kref);
	struct nfs_page *head = req->wb_head;
	struct nfs_page *tmp, *next;

	if (!nfs_page_group_sync_on_bit(req, PG_TEARDOWN))
		goto out;

	tmp = req;
	do {
		next = tmp->wb_this_page;
		/* unlink and free */
		tmp->wb_this_page = tmp;
		tmp->wb_head = tmp;
		nfs_free_request(tmp);
		tmp = next;
	} while (tmp != req);
out:
	/* subrequests must release the ref on the head request */
	if (head != req)
		nfs_release_request(head);
}

static struct nfs_page *nfs_page_create(struct nfs_lock_context *l_ctx,
					unsigned int pgbase, pgoff_t index,
					unsigned int offset, unsigned int count)
{
	struct nfs_page *req;
	struct nfs_open_context *ctx = l_ctx->open_context;

	if (test_bit(NFS_CONTEXT_BAD, &ctx->flags))
		return ERR_PTR(-EBADF);
	/* try to allocate the request struct */
	req = nfs_page_alloc();
	if (req == NULL)
		return ERR_PTR(-ENOMEM);

	req->wb_lock_context = l_ctx;
	refcount_inc(&l_ctx->count);
	atomic_inc(&l_ctx->io_count);

	/* Initialize the request struct. Initially, we assume a
	 * long write-back delay. This will be adjusted in
	 * update_nfs_request below if the region is not locked. */
	req->wb_pgbase = pgbase;
	req->wb_index = index;
	req->wb_offset = offset;
	req->wb_bytes = count;
	kref_init(&req->wb_kref);
	req->wb_nio = 0;
	return req;
}

static void nfs_page_assign_folio(struct nfs_page *req, struct folio *folio)
{
	if (folio != NULL) {
		req->wb_folio = folio;
		folio_get(folio);
		set_bit(PG_FOLIO, &req->wb_flags);
	}
}

static void nfs_page_assign_page(struct nfs_page *req, struct page *page)
{
	if (page != NULL) {
		req->wb_page = page;
		get_page(page);
	}
}

/**
 * nfs_page_create_from_page - Create an NFS read/write request.
 * @ctx: open context to use
 * @page: page to write
 * @pgbase: starting offset within the page for the write
 * @offset: file offset for the write
 * @count: number of bytes to read/write
 *
 * The page must be locked by the caller. This makes sure we never
 * create two different requests for the same page.
 * User should ensure it is safe to sleep in this function.
 */
struct nfs_page *nfs_page_create_from_page(struct nfs_open_context *ctx,
					   struct page *page,
					   unsigned int pgbase, loff_t offset,
					   unsigned int count)
{
	struct nfs_lock_context *l_ctx = nfs_get_lock_context(ctx);
	struct nfs_page *ret;

	if (IS_ERR(l_ctx))
		return ERR_CAST(l_ctx);
	ret = nfs_page_create(l_ctx, pgbase, offset >> PAGE_SHIFT,
			      offset_in_page(offset), count);
	if (!IS_ERR(ret)) {
		nfs_page_assign_page(ret, page);
		nfs_page_group_init(ret, NULL);
	}
	nfs_put_lock_context(l_ctx);
	return ret;
}

/**
 * nfs_page_create_from_folio - Create an NFS read/write request.
 * @ctx: open context to use
 * @folio: folio to write
 * @offset: starting offset within the folio for the write
 * @count: number of bytes to read/write
 *
 * The folio must be locked by the caller. This makes sure we never
 * create two different requests for the same folio.
 * User should ensure it is safe to sleep in this function.
 */
struct nfs_page *nfs_page_create_from_folio(struct nfs_open_context *ctx,
					    struct folio *folio,
					    unsigned int offset,
					    unsigned int count)
{
	struct nfs_lock_context *l_ctx = nfs_get_lock_context(ctx);
	struct nfs_page *ret;

	if (IS_ERR(l_ctx))
		return ERR_CAST(l_ctx);
	ret = nfs_page_create(l_ctx, offset, folio_index(folio), offset, count);
	if (!IS_ERR(ret)) {
		nfs_page_assign_folio(ret, folio);
		nfs_page_group_init(ret, NULL);
	}
	nfs_put_lock_context(l_ctx);
	return ret;
}

static struct nfs_page *
nfs_create_subreq(struct nfs_page *req,
		  unsigned int pgbase,
		  unsigned int offset,
		  unsigned int count)
{
	struct nfs_page *last;
	struct nfs_page *ret;
	struct folio *folio = nfs_page_to_folio(req);
	struct page *page = nfs_page_to_page(req, pgbase);

	ret = nfs_page_create(req->wb_lock_context, pgbase, req->wb_index,
			      offset, count);
	if (!IS_ERR(ret)) {
		if (folio)
			nfs_page_assign_folio(ret, folio);
		else
			nfs_page_assign_page(ret, page);
		/* find the last request */
		for (last = req->wb_head;
		     last->wb_this_page != req->wb_head;
		     last = last->wb_this_page)
			;

		nfs_lock_request(ret);
		nfs_page_group_init(ret, last);
		ret->wb_nio = req->wb_nio;
	}
	return ret;
}

/**
 * nfs_unlock_request - Unlock request and wake up sleepers.
 * @req: pointer to request
 */
void nfs_unlock_request(struct nfs_page *req)
{
	clear_bit_unlock(PG_BUSY, &req->wb_flags);
	smp_mb__after_atomic();
	if (!test_bit(PG_CONTENDED2, &req->wb_flags))
		return;
	wake_up_bit(&req->wb_flags, PG_BUSY);
}
|
|
|
|
|
|
|
|
/**
|
2012-05-09 18:04:55 +00:00
|
|
|
* nfs_unlock_and_release_request - Unlock request and release the nfs_page
|
2019-02-18 18:32:38 +00:00
|
|
|
* @req: pointer to request
|
2012-05-09 18:30:35 +00:00
|
|
|
*/
|
2012-05-09 18:04:55 +00:00
|
|
|
void nfs_unlock_and_release_request(struct nfs_page *req)
|
2012-05-09 18:30:35 +00:00
|
|
|
{
|
2012-05-09 18:04:55 +00:00
|
|
|
nfs_unlock_request(req);
|
2005-04-16 22:20:36 +00:00
|
|
|
nfs_release_request(req);
|
|
|
|
}
|
|
|
|
|
2011-03-25 18:15:11 +00:00
|
|
|
/*
|
2005-04-16 22:20:36 +00:00
|
|
|
* nfs_clear_request - Free up all resources allocated to the request
|
|
|
|
* @req:
|
|
|
|
*
|
2010-03-11 14:19:35 +00:00
|
|
|
* Release page and open context resources associated with a read/write
|
|
|
|
* request after it has completed.
|
2005-04-16 22:20:36 +00:00
|
|
|
*/
|
2011-03-25 18:15:11 +00:00
|
|
|
static void nfs_clear_request(struct nfs_page *req)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2023-01-19 21:33:39 +00:00
|
|
|
struct folio *folio = nfs_page_to_folio(req);
|
2006-03-20 18:44:04 +00:00
|
|
|
struct page *page = req->wb_page;
|
2010-06-25 20:35:53 +00:00
|
|
|
struct nfs_lock_context *l_ctx = req->wb_lock_context;
|
2019-04-07 17:59:12 +00:00
|
|
|
struct nfs_open_context *ctx;
|
2010-03-11 14:19:35 +00:00
|
|
|
|
2023-01-19 21:33:39 +00:00
|
|
|
if (folio != NULL) {
|
|
|
|
folio_put(folio);
|
|
|
|
req->wb_folio = NULL;
|
|
|
|
clear_bit(PG_FOLIO, &req->wb_flags);
|
|
|
|
} else if (page != NULL) {
|
mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced *long* time
ago with promise that one day it will be possible to implement page
cache with bigger chunks than PAGE_SIZE.
This promise never materialized. And unlikely will.
We have many places where PAGE_CACHE_SIZE assumed to be equal to
PAGE_SIZE. And it's constant source of confusion on whether
PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
especially on the border between fs and mm.
Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause to much
breakage to be doable.
Let's stop pretending that pages in page cache are special. They are
not.
The changes are pretty straight-forward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using
script below. For some reason, coccinelle doesn't patch header files.
I've called spatch for them manually.
The only adjustment after coccinelle is revert of changes to
PAGE_CAHCE_ALIGN definition: we are going to drop it later.
There are few places in the code where coccinelle didn't reach. I'll
fix them manually in a separate patch. Comments and documentation also
will be addressed with the separate patch.
virtual patch
@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E
@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT
@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE
@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK
@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)
@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)
@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-01 12:29:47 +00:00
|
|
|
put_page(page);
|
2005-04-16 22:20:36 +00:00
|
|
|
req->wb_page = NULL;
|
|
|
|
}
|
2010-06-25 20:35:53 +00:00
|
|
|
if (l_ctx != NULL) {
|
2017-04-11 16:50:10 +00:00
|
|
|
if (atomic_dec_and_test(&l_ctx->io_count)) {
|
2018-03-15 10:44:34 +00:00
|
|
|
wake_up_var(&l_ctx->io_count);
|
2019-04-07 17:59:12 +00:00
|
|
|
ctx = l_ctx->open_context;
|
2017-04-11 16:50:10 +00:00
|
|
|
if (test_bit(NFS_CONTEXT_UNLOCK, &ctx->flags))
|
|
|
|
rpc_wake_up(&NFS_SERVER(d_inode(ctx->dentry))->uoc_rpcwaitq);
|
|
|
|
}
|
2010-06-25 20:35:53 +00:00
|
|
|
nfs_put_lock_context(l_ctx);
|
|
|
|
req->wb_lock_context = NULL;
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
 * nfs_free_request - Release the count on an NFS read/write request
 * @req: request to release
 *
 * Note: Should never be called with the spinlock held!
 */
void nfs_free_request(struct nfs_page *req)
{
	WARN_ON_ONCE(req->wb_this_page != req);

	/* extra debug: make sure no sync bits are still set */
	WARN_ON_ONCE(test_bit(PG_TEARDOWN, &req->wb_flags));
	WARN_ON_ONCE(test_bit(PG_UNLOCKPAGE, &req->wb_flags));
	WARN_ON_ONCE(test_bit(PG_UPTODATE, &req->wb_flags));
	WARN_ON_ONCE(test_bit(PG_WB_END, &req->wb_flags));
	WARN_ON_ONCE(test_bit(PG_REMOVE, &req->wb_flags));

	/* Release struct file and open context */
	nfs_clear_request(req);
	nfs_page_free(req);
}

void nfs_release_request(struct nfs_page *req)
{
	kref_put(&req->wb_kref, nfs_page_group_destroy);
}
EXPORT_SYMBOL_GPL(nfs_release_request);

/**
 * nfs_wait_on_request - Wait for a request to complete.
 * @req: request to wait upon.
 *
 * Interruptible by fatal signals only.
 * The user is responsible for holding a count on the request.
 */
int
nfs_wait_on_request(struct nfs_page *req)
{
	if (!test_bit(PG_BUSY, &req->wb_flags))
		return 0;
	set_bit(PG_CONTENDED2, &req->wb_flags);
	smp_mb__after_atomic();
	return wait_on_bit_io(&req->wb_flags, PG_BUSY,
			      TASK_UNINTERRUPTIBLE);
}
EXPORT_SYMBOL_GPL(nfs_wait_on_request);

/*
 * nfs_generic_pg_test - determine if requests can be coalesced
 * @desc: pointer to descriptor
 * @prev: previous request in desc, or NULL
 * @req: this request
 *
 * Returns zero if @req cannot be coalesced into @desc, otherwise it returns
 * the size of the request.
 */
size_t nfs_generic_pg_test(struct nfs_pageio_descriptor *desc,
			   struct nfs_page *prev, struct nfs_page *req)
{
	struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);

	if (mirror->pg_count > mirror->pg_bsize) {
		/* should never happen */
		WARN_ON_ONCE(1);
		return 0;
	}

	/*
	 * Limit the request size so that we can still allocate a page array
	 * for it without upsetting the slab allocator.
	 */
	if (((mirror->pg_count + req->wb_bytes) >> PAGE_SHIFT) *
			sizeof(struct page *) > PAGE_SIZE)
		return 0;

	return min(mirror->pg_bsize - mirror->pg_count, (size_t)req->wb_bytes);
}
EXPORT_SYMBOL_GPL(nfs_generic_pg_test);

struct nfs_pgio_header *nfs_pgio_header_alloc(const struct nfs_rw_ops *ops)
{
	struct nfs_pgio_header *hdr = ops->rw_alloc_header();

	if (hdr) {
		INIT_LIST_HEAD(&hdr->pages);
		hdr->rw_ops = ops;
	}
	return hdr;
}
EXPORT_SYMBOL_GPL(nfs_pgio_header_alloc);

/**
 * nfs_pgio_data_destroy - make @hdr suitable for reuse
 *
 * Frees memory and releases refs from nfs_generic_pgio, so that it may
 * be called again.
 *
 * @hdr: A header that has had nfs_generic_pgio called
 */
static void nfs_pgio_data_destroy(struct nfs_pgio_header *hdr)
{
	if (hdr->args.context)
		put_nfs_open_context(hdr->args.context);
	if (hdr->page_array.pagevec != hdr->page_array.page_array)
		kfree(hdr->page_array.pagevec);
}

/*
 * nfs_pgio_header_free - Free a read or write header
 * @hdr: The header to free
 */
void nfs_pgio_header_free(struct nfs_pgio_header *hdr)
{
	nfs_pgio_data_destroy(hdr);
	hdr->rw_ops->rw_free_header(hdr);
}
EXPORT_SYMBOL_GPL(nfs_pgio_header_free);

/**
 * nfs_pgio_rpcsetup - Set up arguments for a pageio call
 * @hdr: The pageio hdr
 * @pgbase: base offset of the request data within its first page
 * @count: Number of bytes to read
 * @how: How to commit data (writes only)
 * @cinfo: Commit information for the call (writes only)
 */
static void nfs_pgio_rpcsetup(struct nfs_pgio_header *hdr, unsigned int pgbase,
			      unsigned int count, int how,
			      struct nfs_commit_info *cinfo)
{
	struct nfs_page *req = hdr->req;

	/* Set up the RPC argument and reply structs
	 * NB: take care not to mess about with hdr->commit et al. */

	hdr->args.fh = NFS_FH(hdr->inode);
	hdr->args.offset = req_offset(req);
	/* pnfs_set_layoutcommit needs this */
	hdr->mds_offset = hdr->args.offset;
	hdr->args.pgbase = pgbase;
	hdr->args.pages = hdr->page_array.pagevec;
	hdr->args.count = count;
	hdr->args.context = get_nfs_open_context(nfs_req_openctx(req));
	hdr->args.lock_context = req->wb_lock_context;
	hdr->args.stable = NFS_UNSTABLE;
	switch (how & (FLUSH_STABLE | FLUSH_COND_STABLE)) {
	case 0:
		break;
	case FLUSH_COND_STABLE:
		if (nfs_reqs_to_commit(cinfo))
			break;
		fallthrough;
	default:
		hdr->args.stable = NFS_FILE_SYNC;
	}

	hdr->res.fattr = &hdr->fattr;
	hdr->res.count = 0;
	hdr->res.eof = 0;
	hdr->res.verf = &hdr->verf;
	nfs_fattr_init(&hdr->fattr);
}

/**
 * nfs_pgio_prepare - Prepare pageio hdr to go over the wire
 * @task: The current task
 * @calldata: pageio header to prepare
 */
static void nfs_pgio_prepare(struct rpc_task *task, void *calldata)
{
	struct nfs_pgio_header *hdr = calldata;
	int err;
	err = NFS_PROTO(hdr->inode)->pgio_rpc_prepare(task, hdr);
	if (err)
		rpc_exit(task, err);
}

int nfs_initiate_pgio(struct rpc_clnt *clnt, struct nfs_pgio_header *hdr,
		      const struct cred *cred, const struct nfs_rpc_ops *rpc_ops,
		      const struct rpc_call_ops *call_ops, int how, int flags)
{
	struct rpc_task *task;
	struct rpc_message msg = {
		.rpc_argp = &hdr->args,
		.rpc_resp = &hdr->res,
		.rpc_cred = cred,
	};
	struct rpc_task_setup task_setup_data = {
		.rpc_client = clnt,
		.task = &hdr->task,
		.rpc_message = &msg,
		.callback_ops = call_ops,
		.callback_data = hdr,
		.workqueue = nfsiod_workqueue,
		.flags = RPC_TASK_ASYNC | flags,
	};

	if (nfs_server_capable(hdr->inode, NFS_CAP_MOVEABLE))
		task_setup_data.flags |= RPC_TASK_MOVEABLE;

	hdr->rw_ops->rw_initiate(hdr, &msg, rpc_ops, &task_setup_data, how);

	dprintk("NFS: initiated pgio call "
		"(req %s/%llu, %u bytes @ offset %llu)\n",
		hdr->inode->i_sb->s_id,
		(unsigned long long)NFS_FILEID(hdr->inode),
		hdr->args.count,
		(unsigned long long)hdr->args.offset);

	task = rpc_run_task(&task_setup_data);
	if (IS_ERR(task))
		return PTR_ERR(task);
	rpc_put_task(task);
	return 0;
}
EXPORT_SYMBOL_GPL(nfs_initiate_pgio);

/**
 * nfs_pgio_error - Clean up from a pageio error
 * @hdr: pageio header
 */
static void nfs_pgio_error(struct nfs_pgio_header *hdr)
{
	set_bit(NFS_IOHDR_REDO, &hdr->flags);
	hdr->completion_ops->completion(hdr);
}

/**
 * nfs_pgio_release - Release pageio data
 * @calldata: The pageio header to release
 */
static void nfs_pgio_release(void *calldata)
{
	struct nfs_pgio_header *hdr = calldata;
	hdr->completion_ops->completion(hdr);
}

static void nfs_pageio_mirror_init(struct nfs_pgio_mirror *mirror,
				   unsigned int bsize)
{
	INIT_LIST_HEAD(&mirror->pg_list);
	mirror->pg_bytes_written = 0;
	mirror->pg_count = 0;
	mirror->pg_bsize = bsize;
	mirror->pg_base = 0;
	mirror->pg_recoalesce = 0;
}

/**
 * nfs_pageio_init - initialise a page io descriptor
 * @desc: pointer to descriptor
 * @inode: pointer to inode
 * @pg_ops: pointer to pageio operations
 * @compl_ops: pointer to pageio completion operations
 * @rw_ops: pointer to nfs read/write operations
 * @bsize: io block size
 * @io_flags: extra parameters for the io function
 */
void nfs_pageio_init(struct nfs_pageio_descriptor *desc,
		     struct inode *inode,
		     const struct nfs_pageio_ops *pg_ops,
		     const struct nfs_pgio_completion_ops *compl_ops,
		     const struct nfs_rw_ops *rw_ops,
		     size_t bsize,
		     int io_flags)
{
	desc->pg_moreio = 0;
	desc->pg_inode = inode;
	desc->pg_ops = pg_ops;
	desc->pg_completion_ops = compl_ops;
	desc->pg_rw_ops = rw_ops;
	desc->pg_ioflags = io_flags;
	desc->pg_error = 0;
	desc->pg_lseg = NULL;
	desc->pg_io_completion = NULL;
	desc->pg_dreq = NULL;
	nfs_netfs_reset_pageio_descriptor(desc);
	desc->pg_bsize = bsize;

	desc->pg_mirror_count = 1;
	desc->pg_mirror_idx = 0;

	desc->pg_mirrors_dynamic = NULL;
	desc->pg_mirrors = desc->pg_mirrors_static;
	nfs_pageio_mirror_init(&desc->pg_mirrors[0], bsize);
	desc->pg_maxretrans = 0;
}

/**
 * nfs_pgio_result - Basic pageio error handling
 * @task: The task that ran
 * @calldata: Pageio header to check
 */
static void nfs_pgio_result(struct rpc_task *task, void *calldata)
{
	struct nfs_pgio_header *hdr = calldata;
	struct inode *inode = hdr->inode;

	if (hdr->rw_ops->rw_done(task, hdr, inode) != 0)
		return;
	if (task->tk_status < 0)
		nfs_set_pgio_error(hdr, task->tk_status, hdr->args.offset);
	else
		hdr->rw_ops->rw_result(task, hdr);
}

/*
 * Create an RPC task for the given read or write request and kick it.
 * The page must have been locked by the caller.
 *
 * It may happen that the page we're passed is not marked dirty.
 * This is the case if nfs_updatepage detects a conflicting request
 * that has been written but not committed.
 */
int nfs_generic_pgio(struct nfs_pageio_descriptor *desc,
		     struct nfs_pgio_header *hdr)
{
	struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);

	struct nfs_page *req;
	struct page **pages,
		    *last_page;
	struct list_head *head = &mirror->pg_list;
	struct nfs_commit_info cinfo;
	struct nfs_page_array *pg_array = &hdr->page_array;
	unsigned int pagecount, pageused;
	unsigned int pg_base = offset_in_page(mirror->pg_base);
	gfp_t gfp_flags = nfs_io_gfp_mask();

	pagecount = nfs_page_array_len(pg_base, mirror->pg_count);
	pg_array->npages = pagecount;

	if (pagecount <= ARRAY_SIZE(pg_array->page_array))
		pg_array->pagevec = pg_array->page_array;
	else {
		pg_array->pagevec = kcalloc(pagecount, sizeof(struct page *), gfp_flags);
		if (!pg_array->pagevec) {
			pg_array->npages = 0;
			nfs_pgio_error(hdr);
			desc->pg_error = -ENOMEM;
			return desc->pg_error;
		}
	}

	nfs_init_cinfo(&cinfo, desc->pg_inode, desc->pg_dreq);
	pages = hdr->page_array.pagevec;
	last_page = NULL;
	pageused = 0;
	while (!list_empty(head)) {
		struct nfs_page_iter_page i;
		struct page *page;

		req = nfs_list_entry(head->next);
		nfs_list_move_request(req, &hdr->pages);

		if (req->wb_pgbase == 0)
			last_page = NULL;

		nfs_page_iter_page_init(&i, req);
		while ((page = nfs_page_iter_page_get(&i)) != NULL) {
			if (last_page != page) {
				pageused++;
				if (pageused > pagecount)
					goto full;
				*pages++ = last_page = page;
			}
		}
	}
full:
	if (WARN_ON_ONCE(pageused != pagecount)) {
		nfs_pgio_error(hdr);
		desc->pg_error = -EINVAL;
		return desc->pg_error;
	}

	if ((desc->pg_ioflags & FLUSH_COND_STABLE) &&
	    (desc->pg_moreio || nfs_reqs_to_commit(&cinfo)))
		desc->pg_ioflags &= ~FLUSH_COND_STABLE;

	/* Set up the argument struct */
	nfs_pgio_rpcsetup(hdr, pg_base, mirror->pg_count, desc->pg_ioflags,
			  &cinfo);
	desc->pg_rpc_callops = &nfs_pgio_common_ops;
	return 0;
}
EXPORT_SYMBOL_GPL(nfs_generic_pgio);

static int nfs_generic_pg_pgios(struct nfs_pageio_descriptor *desc)
{
	struct nfs_pgio_header *hdr;
	int ret;
	unsigned short task_flags = 0;

	hdr = nfs_pgio_header_alloc(desc->pg_rw_ops);
	if (!hdr) {
		desc->pg_error = -ENOMEM;
		return desc->pg_error;
	}
	nfs_pgheader_init(desc, hdr, nfs_pgio_header_free);
	ret = nfs_generic_pgio(desc, hdr);
	if (ret == 0) {
		if (NFS_SERVER(hdr->inode)->nfs_client->cl_minorversion)
			task_flags = RPC_TASK_MOVEABLE;
		ret = nfs_initiate_pgio(NFS_CLIENT(hdr->inode),
					hdr,
					hdr->cred,
					NFS_PROTO(hdr->inode),
					desc->pg_rpc_callops,
					desc->pg_ioflags,
					RPC_TASK_CRED_NOREF | task_flags);
	}
	return ret;
}

2017-08-19 14:10:34 +00:00
|
|
|
static struct nfs_pgio_mirror *
|
|
|
|
nfs_pageio_alloc_mirrors(struct nfs_pageio_descriptor *desc,
|
|
|
|
unsigned int mirror_count)
|
|
|
|
{
|
|
|
|
struct nfs_pgio_mirror *ret;
|
|
|
|
unsigned int i;
|
|
|
|
|
|
|
|
kfree(desc->pg_mirrors_dynamic);
|
|
|
|
desc->pg_mirrors_dynamic = NULL;
|
|
|
|
if (mirror_count == 1)
|
|
|
|
return desc->pg_mirrors_static;
|
2022-03-21 17:48:36 +00:00
|
|
|
ret = kmalloc_array(mirror_count, sizeof(*ret), nfs_io_gfp_mask());
|
2017-08-19 14:10:34 +00:00
|
|
|
if (ret != NULL) {
|
|
|
|
for (i = 0; i < mirror_count; i++)
|
|
|
|
nfs_pageio_mirror_init(&ret[i], desc->pg_bsize);
|
|
|
|
desc->pg_mirrors_dynamic = ret;
|
|
|
|
}
|
|
|
|
return ret;
|
|
|
|
}

/*
 * nfs_pageio_setup_mirroring - determine if mirroring is to be used
 *				by calling the pg_get_mirror_count op
 */
static void nfs_pageio_setup_mirroring(struct nfs_pageio_descriptor *pgio,
				       struct nfs_page *req)
{
	unsigned int mirror_count = 1;

	if (pgio->pg_ops->pg_get_mirror_count)
		mirror_count = pgio->pg_ops->pg_get_mirror_count(pgio, req);
	if (mirror_count == pgio->pg_mirror_count || pgio->pg_error < 0)
		return;

	if (!mirror_count || mirror_count > NFS_PAGEIO_DESCRIPTOR_MIRROR_MAX) {
		pgio->pg_error = -EINVAL;
		return;
	}

	pgio->pg_mirrors = nfs_pageio_alloc_mirrors(pgio, mirror_count);
	if (pgio->pg_mirrors == NULL) {
		pgio->pg_error = -ENOMEM;
		pgio->pg_mirrors = pgio->pg_mirrors_static;
		mirror_count = 1;
	}
	pgio->pg_mirror_count = mirror_count;
}

static void nfs_pageio_cleanup_mirroring(struct nfs_pageio_descriptor *pgio)
{
	pgio->pg_mirror_count = 1;
	pgio->pg_mirror_idx = 0;
	pgio->pg_mirrors = pgio->pg_mirrors_static;
	kfree(pgio->pg_mirrors_dynamic);
	pgio->pg_mirrors_dynamic = NULL;
}

static bool nfs_match_lock_context(const struct nfs_lock_context *l1,
		const struct nfs_lock_context *l2)
{
	return l1->lockowner == l2->lockowner;
}

static bool nfs_page_is_contiguous(const struct nfs_page *prev,
				   const struct nfs_page *req)
{
	size_t prev_end = prev->wb_pgbase + prev->wb_bytes;

	if (req_offset(req) != req_offset(prev) + prev->wb_bytes)
		return false;
	if (req->wb_pgbase == 0)
		return prev_end == nfs_page_max_length(prev);
	if (req->wb_pgbase == prev_end) {
		struct folio *folio = nfs_page_to_folio(req);

		if (folio)
			return folio == nfs_page_to_folio(prev);
		return req->wb_page == prev->wb_page;
	}
	return false;
}
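The contiguity rules checked by nfs_page_is_contiguous() can be modeled in userspace C. This is a sketch only: `struct req` and its fields are hypothetical stand-ins for the `wb_*` fields, page identity is reduced to an integer, and `page_size` stands in for `nfs_page_max_length()`.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical userspace stand-in for struct nfs_page. */
struct req {
	size_t file_offset;	/* wb_offset: position in the file */
	size_t page_base;	/* wb_pgbase: offset within the page */
	size_t bytes;		/* wb_bytes: length of the request */
	unsigned long page;	/* identifies the backing page/folio */
	size_t page_size;	/* nfs_page_max_length() stand-in */
};

/* Mirrors the checks above: the file ranges must abut, and either req
 * starts a fresh page while prev filled its page to the end, or req
 * continues in the same page exactly where prev stopped. */
static bool req_is_contiguous(const struct req *prev, const struct req *req)
{
	size_t prev_end = prev->page_base + prev->bytes;

	if (req->file_offset != prev->file_offset + prev->bytes)
		return false;
	if (req->page_base == 0)
		return prev_end == prev->page_size;
	if (req->page_base == prev_end)
		return req->page == prev->page;
	return false;
}
```

Both conditions matter: two requests can be file-contiguous yet land in the wrong place within their pages, which would make a single RPC describe a non-contiguous buffer.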

/**
 * nfs_coalesce_size - test two requests for compatibility
 * @prev: pointer to nfs_page
 * @req: pointer to nfs_page
 * @pgio: pointer to nfs_pageio_descriptor
 *
 * The nfs_page structures 'prev' and 'req' are compared to ensure that the
 * page data area they describe is contiguous, and that their RPC
 * credentials, NFSv4 open state, and lockowners are the same.
 *
 * Returns size of the request that can be coalesced
 */
static unsigned int nfs_coalesce_size(struct nfs_page *prev,
				      struct nfs_page *req,
				      struct nfs_pageio_descriptor *pgio)
{
	struct file_lock_context *flctx;

	if (prev) {
		if (!nfs_match_open_context(nfs_req_openctx(req), nfs_req_openctx(prev)))
			return 0;
		flctx = locks_inode_context(d_inode(nfs_req_openctx(req)->dentry));
		if (flctx != NULL &&
		    !(list_empty_careful(&flctx->flc_posix) &&
		      list_empty_careful(&flctx->flc_flock)) &&
		    !nfs_match_lock_context(req->wb_lock_context,
					    prev->wb_lock_context))
			return 0;
		if (!nfs_page_is_contiguous(prev, req))
			return 0;
	}
	return pgio->pg_ops->pg_test(pgio, prev, req);
}

/**
 * nfs_pageio_do_add_request - Attempt to coalesce a request into a page list.
 * @desc: destination io descriptor
 * @req: request
 *
 * If the request 'req' was successfully coalesced into the existing list
 * of pages 'desc', it returns the size of req.
 */
static unsigned int
nfs_pageio_do_add_request(struct nfs_pageio_descriptor *desc,
		struct nfs_page *req)
{
	struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);
	struct nfs_page *prev = NULL;
	unsigned int size;

	if (list_empty(&mirror->pg_list)) {
		if (desc->pg_ops->pg_init)
			desc->pg_ops->pg_init(desc, req);
		if (desc->pg_error < 0)
			return 0;
		mirror->pg_base = req->wb_pgbase;
		mirror->pg_count = 0;
		mirror->pg_recoalesce = 0;
	} else
		prev = nfs_list_entry(mirror->pg_list.prev);

	if (desc->pg_maxretrans && req->wb_nio > desc->pg_maxretrans) {
		if (NFS_SERVER(desc->pg_inode)->flags & NFS_MOUNT_SOFTERR)
			desc->pg_error = -ETIMEDOUT;
		else
			desc->pg_error = -EIO;
		return 0;
	}

	size = nfs_coalesce_size(prev, req, desc);
	if (size < req->wb_bytes)
		return size;
	nfs_list_move_request(req, &mirror->pg_list);
	mirror->pg_count += req->wb_bytes;
	return req->wb_bytes;
}

/*
 * Helper for nfs_pageio_add_request and nfs_pageio_complete
 */
static void nfs_pageio_doio(struct nfs_pageio_descriptor *desc)
{
	struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);

	if (!list_empty(&mirror->pg_list)) {
		int error = desc->pg_ops->pg_doio(desc);
		if (error < 0)
			desc->pg_error = error;
		if (list_empty(&mirror->pg_list))
			mirror->pg_bytes_written += mirror->pg_count;
	}
}

static void
nfs_pageio_cleanup_request(struct nfs_pageio_descriptor *desc,
		struct nfs_page *req)
{
	LIST_HEAD(head);

	nfs_list_move_request(req, &head);
	desc->pg_completion_ops->error_cleanup(&head, desc->pg_error);
}

/**
 * __nfs_pageio_add_request - Attempt to coalesce a request into a page list.
 * @desc: destination io descriptor
 * @req: request
 *
 * This may split a request into subrequests which are all part of the
 * same page group. If so, it will submit @req as the last one, to ensure
 * the pointer to @req is still valid in case of failure.
 *
 * Returns true if the request 'req' was successfully coalesced into the
 * existing list of pages 'desc'.
 */
static int __nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
			   struct nfs_page *req)
{
	struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);
	struct nfs_page *subreq;
	unsigned int size, subreq_size;

	nfs_page_group_lock(req);

	subreq = req;
	subreq_size = subreq->wb_bytes;
	for (;;) {
		size = nfs_pageio_do_add_request(desc, subreq);
		if (size == subreq_size) {
			/* We successfully submitted a request */
			if (subreq == req)
				break;
			req->wb_pgbase += size;
			req->wb_bytes -= size;
			req->wb_offset += size;
			subreq_size = req->wb_bytes;
			subreq = req;
			continue;
		}
		if (WARN_ON_ONCE(subreq != req)) {
			nfs_page_group_unlock(req);
			nfs_pageio_cleanup_request(desc, subreq);
			subreq = req;
			subreq_size = req->wb_bytes;
			nfs_page_group_lock(req);
		}
		if (!size) {
			/* Can't coalesce any more, so do I/O */
			nfs_page_group_unlock(req);
			desc->pg_moreio = 1;
			nfs_pageio_doio(desc);
			if (desc->pg_error < 0 || mirror->pg_recoalesce)
				return 0;
			/* retry add_request for this subreq */
			nfs_page_group_lock(req);
			continue;
		}
		subreq = nfs_create_subreq(req, req->wb_pgbase,
				req->wb_offset, size);
		if (IS_ERR(subreq))
			goto err_ptr;
		subreq_size = size;
	}

	nfs_page_group_unlock(req);
	return 1;
err_ptr:
	desc->pg_error = PTR_ERR(subreq);
	nfs_page_group_unlock(req);
	return 0;
}
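The submit-or-split loop in __nfs_pageio_add_request() can be illustrated with a userspace model. This is a sketch under stated assumptions: the hypothetical `max_accept()` plays the role of nfs_pageio_do_add_request() reporting how many bytes the descriptor will take (here, whatever fits before a fixed I/O-size boundary, modeling a pg_test()-style limit), and a "subrequest" is reduced to advancing offset and length counters.

```c
#include <assert.h>
#include <stddef.h>

#define IO_MAX 4096	/* hypothetical per-RPC size limit */

/* How many of the next `bytes` bytes can be coalesced starting at
 * position `pos`: everything up to the next IO_MAX boundary. */
static size_t max_accept(size_t pos, size_t bytes)
{
	size_t room = IO_MAX - (pos % IO_MAX);

	return bytes < room ? bytes : room;
}

/* Models the splitting loop: submit the request in chunks, splitting
 * whenever less than the remainder is accepted. Returns the number of
 * chunks ("subrequests") generated. */
static int add_request(size_t pos, size_t bytes)
{
	int chunks = 0;

	while (bytes) {
		size_t size = max_accept(pos, bytes);

		pos += size;
		bytes -= size;
		chunks++;
	}
	return chunks;
}
```

The real function additionally handles the case where nothing at all can be coalesced (flush the descriptor with nfs_pageio_doio() and retry) and keeps @req itself as the final subrequest so the caller's pointer stays valid on failure; the model above only captures the chunking arithmetic.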

static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc)
{
	struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);
	LIST_HEAD(head);

	do {
		list_splice_init(&mirror->pg_list, &head);
		mirror->pg_recoalesce = 0;

		while (!list_empty(&head)) {
			struct nfs_page *req;

			req = list_first_entry(&head, struct nfs_page, wb_list);
			if (__nfs_pageio_add_request(desc, req))
				continue;
			if (desc->pg_error < 0) {
				list_splice_tail(&head, &mirror->pg_list);
				mirror->pg_recoalesce = 1;
				return 0;
			}
			break;
		}
	} while (mirror->pg_recoalesce);
	return 1;
}

static int nfs_pageio_add_request_mirror(struct nfs_pageio_descriptor *desc,
		struct nfs_page *req)
{
	int ret;

	do {
		ret = __nfs_pageio_add_request(desc, req);
		if (ret)
			break;
		if (desc->pg_error < 0)
			break;
		ret = nfs_do_recoalesce(desc);
	} while (ret);

	return ret;
}

static void nfs_pageio_error_cleanup(struct nfs_pageio_descriptor *desc)
{
	u32 midx;
	struct nfs_pgio_mirror *mirror;

	if (!desc->pg_error)
		return;

	for (midx = 0; midx < desc->pg_mirror_count; midx++) {
		mirror = nfs_pgio_get_mirror(desc, midx);
		desc->pg_completion_ops->error_cleanup(&mirror->pg_list,
				desc->pg_error);
	}
}

int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
			   struct nfs_page *req)
{
	u32 midx;
	unsigned int pgbase, offset, bytes;
	struct nfs_page *dupreq;

	pgbase = req->wb_pgbase;
	offset = req->wb_offset;
	bytes = req->wb_bytes;

	nfs_pageio_setup_mirroring(desc, req);
	if (desc->pg_error < 0)
		goto out_failed;

	/* Create the mirror instances first, and fire them off */
	for (midx = 1; midx < desc->pg_mirror_count; midx++) {
		nfs_page_group_lock(req);

		dupreq = nfs_create_subreq(req,
				pgbase, offset, bytes);

		nfs_page_group_unlock(req);
		if (IS_ERR(dupreq)) {
			desc->pg_error = PTR_ERR(dupreq);
			goto out_failed;
		}

		nfs_pgio_set_current_mirror(desc, midx);
		if (!nfs_pageio_add_request_mirror(desc, dupreq))
			goto out_cleanup_subreq;
	}

	nfs_pgio_set_current_mirror(desc, 0);
	if (!nfs_pageio_add_request_mirror(desc, req))
		goto out_failed;

	return 1;

out_cleanup_subreq:
	nfs_pageio_cleanup_request(desc, dupreq);
out_failed:
	nfs_pageio_error_cleanup(desc);
	return 0;
}

/*
 * nfs_pageio_complete_mirror - Complete I/O on the current mirror of an
 *				nfs_pageio_descriptor
 * @desc: pointer to io descriptor
 * @mirror_idx: index of the mirror to complete
 */
static void nfs_pageio_complete_mirror(struct nfs_pageio_descriptor *desc,
				       u32 mirror_idx)
{
	struct nfs_pgio_mirror *mirror;
	u32 restore_idx;

	restore_idx = nfs_pgio_set_current_mirror(desc, mirror_idx);
	mirror = nfs_pgio_current_mirror(desc);

	for (;;) {
		nfs_pageio_doio(desc);
		if (desc->pg_error < 0 || !mirror->pg_recoalesce)
			break;
		if (!nfs_do_recoalesce(desc))
			break;
	}
	nfs_pgio_set_current_mirror(desc, restore_idx);
}

/*
 * nfs_pageio_resend - Transfer requests to new descriptor and resend
 * @hdr - the pgio header to move request from
 * @desc - the pageio descriptor to add requests to
 *
 * Try to move each request (nfs_page) from @hdr to @desc then attempt
 * to send them.
 *
 * Returns 0 on success and < 0 on error.
 */
int nfs_pageio_resend(struct nfs_pageio_descriptor *desc,
		      struct nfs_pgio_header *hdr)
{
	LIST_HEAD(pages);

	desc->pg_io_completion = hdr->io_completion;
	desc->pg_dreq = hdr->dreq;
	nfs_netfs_set_pageio_descriptor(desc, hdr);
	list_splice_init(&hdr->pages, &pages);
	while (!list_empty(&pages)) {
		struct nfs_page *req = nfs_list_entry(pages.next);

		if (!nfs_pageio_add_request(desc, req))
			break;
	}
	nfs_pageio_complete(desc);
	if (!list_empty(&pages)) {
		int err = desc->pg_error < 0 ? desc->pg_error : -EIO;
		hdr->completion_ops->error_cleanup(&pages, err);
		nfs_set_pgio_error(hdr, err, hdr->io_start);
		return err;
	}
	return 0;
}
EXPORT_SYMBOL_GPL(nfs_pageio_resend);

/**
 * nfs_pageio_complete - Complete I/O then cleanup an nfs_pageio_descriptor
 * @desc: pointer to io descriptor
 */
void nfs_pageio_complete(struct nfs_pageio_descriptor *desc)
{
	u32 midx;

	for (midx = 0; midx < desc->pg_mirror_count; midx++)
		nfs_pageio_complete_mirror(desc, midx);

	if (desc->pg_error < 0)
		nfs_pageio_error_cleanup(desc);
	if (desc->pg_ops->pg_cleanup)
		desc->pg_ops->pg_cleanup(desc);
	nfs_pageio_cleanup_mirroring(desc);
}

/**
 * nfs_pageio_cond_complete - Conditional I/O completion
 * @desc: pointer to io descriptor
 * @index: page index
 *
 * It is important to ensure that processes don't try to take locks
 * on non-contiguous ranges of pages as that might deadlock. This
 * function should be called before attempting to wait on a locked
 * nfs_page. It will complete the I/O if the page index 'index'
 * is not contiguous with the existing list of pages in 'desc'.
 */
void nfs_pageio_cond_complete(struct nfs_pageio_descriptor *desc, pgoff_t index)
{
	struct nfs_pgio_mirror *mirror;
	struct nfs_page *prev;
	struct folio *folio;
	u32 midx;

	for (midx = 0; midx < desc->pg_mirror_count; midx++) {
		mirror = nfs_pgio_get_mirror(desc, midx);
		if (!list_empty(&mirror->pg_list)) {
			prev = nfs_list_entry(mirror->pg_list.prev);
			folio = nfs_page_to_folio(prev);
			if (folio) {
				if (index == folio_next_index(folio))
					continue;
			} else if (index == prev->wb_index + 1)
				continue;
			nfs_pageio_complete(desc);
			break;
		}
	}
}

/*
 * nfs_pageio_stop_mirroring - stop using mirroring (set mirror count to 1)
 */
void nfs_pageio_stop_mirroring(struct nfs_pageio_descriptor *pgio)
{
	nfs_pageio_complete(pgio);
}

int __init nfs_init_nfspagecache(void)
{
	nfs_page_cachep = kmem_cache_create("nfs_page",
					    sizeof(struct nfs_page),
					    0, SLAB_HWCACHE_ALIGN,
					    NULL);
	if (nfs_page_cachep == NULL)
		return -ENOMEM;

	return 0;
}

void nfs_destroy_nfspagecache(void)
{
	kmem_cache_destroy(nfs_page_cachep);
}

static const struct rpc_call_ops nfs_pgio_common_ops = {
	.rpc_call_prepare = nfs_pgio_prepare,
	.rpc_call_done = nfs_pgio_result,
	.rpc_release = nfs_pgio_release,
};

const struct nfs_pageio_ops nfs_pgio_rw_ops = {
	.pg_test = nfs_generic_pg_test,
	.pg_doio = nfs_generic_pg_pgios,
};