CacheFiles: A cache that backs onto a mounted filesystem
Add an FS-Cache cache-backend that permits a mounted filesystem to be used as a
backing store for the cache.
CacheFiles uses a userspace daemon to do some of the cache management - such as
reaping stale nodes and culling. This is called cachefilesd and lives in
/sbin. The source for the daemon can be downloaded from:
http://people.redhat.com/~dhowells/cachefs/cachefilesd.c
And an example configuration from:
http://people.redhat.com/~dhowells/cachefs/cachefilesd.conf
The filesystem and data integrity of the cache are only as good as those of the
filesystem providing the backing services. Note that CacheFiles does not
attempt to journal anything since the journalling interfaces of the various
filesystems are very specific in nature.
CacheFiles creates a misc character device - "/dev/cachefiles" - that is used
to communicate with the daemon. Only one thing may have this open at once,
and whilst it is open, a cache is at least partially in existence. The daemon
opens this and sends commands down it to control the cache.
CacheFiles is currently limited to a single cache.
CacheFiles attempts to maintain at least a certain percentage of free space on
the filesystem, shrinking the cache by culling the objects it contains to make
space if necessary - see the "Cache Culling" section. This means it can be
placed on the same medium as a live set of data, and will expand to make use of
spare space and automatically contract when the set of data requires more
space.
============
REQUIREMENTS
============
The use of CacheFiles and its daemon requires the following features to be
available in the system and in the cache filesystem:
- dnotify.
- extended attributes (xattrs).
- openat() and friends.
- bmap() support on files in the filesystem (FIBMAP ioctl).
- The use of bmap() to detect a partial page at the end of the file.
It is strongly recommended that the "dir_index" option is enabled on Ext3
filesystems being used as a cache.
=============
CONFIGURATION
=============
The cache is configured by a script in /etc/cachefilesd.conf. These commands
set up the cache ready for use. The following script commands are available:
(*) brun <N>%
(*) bcull <N>%
(*) bstop <N>%
(*) frun <N>%
(*) fcull <N>%
(*) fstop <N>%
Configure the culling limits. Optional. See the "Cache Culling" section below.
The defaults are 7% (run), 5% (cull) and 1% (stop) respectively.
The commands beginning with a 'b' are file space (block) limits, those
beginning with an 'f' are file count limits.
(*) dir <path>
Specify the directory containing the root of the cache. Mandatory.
(*) tag <name>
Specify a tag to FS-Cache to use in distinguishing multiple caches.
Optional. The default is "CacheFiles".
(*) debug <mask>
Specify a numeric bitmask to control debugging in the kernel module.
Optional. The default is zero (all off). The following values can be
OR'd into the mask to collect various information:
1 Turn on trace of function entry (_enter() macros)
2 Turn on trace of function exit (_leave() macros)
4 Turn on trace of internal debug points (_debug())
This mask can also be set through sysfs, eg:
echo 5 >/sys/module/cachefiles/parameters/debug
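Putting the above commands together, a minimal illustrative configuration might
look like this (the directory is the default cache location; the tag and the
percentage values are arbitrary examples chosen to satisfy the ordering
constraints described under "Cache Culling", not recommendations):
	dir /var/fscache
	tag mycache
	brun 10%
	bcull 7%
	bstop 3%
	frun 10%
	fcull 7%
	fstop 3%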
==================
STARTING THE CACHE
==================
The cache is started by running the daemon. The daemon opens the cache device,
configures the cache and tells it to begin caching. At that point the cache
binds to fscache and the cache becomes live.
The daemon is run as follows:
/sbin/cachefilesd [-d]* [-s] [-n] [-f <configfile>]
The flags are:
(*) -d
Increase the debugging level. This can be specified multiple times and
is cumulative with itself.
(*) -s
Send messages to stderr instead of syslog.
(*) -n
Don't daemonise into the background; stay in the foreground.
(*) -f <configfile>
Use an alternative configuration file rather than the default one.
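For example, to run the daemon in the foreground while testing, with messages
sent to stderr and a non-default configuration file (the path here is purely
illustrative), one might invoke:
	/sbin/cachefilesd -n -s -f /etc/cachefilesd-test.conf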
===============
THINGS TO AVOID
===============
Do not mount other things within the cache as this will cause problems. The
kernel module contains its own very cut-down path walking facility that ignores
mountpoints, but the daemon can't avoid them.
Do not create, rename or unlink files and directories in the cache whilst the
cache is active, as this may cause the state to become uncertain.
Renaming files in the cache might make objects appear to be other objects (the
filename is part of the lookup key).
Do not change or remove the extended attributes attached to cache files by the
cache as this will cause the cache state management to get confused.
Do not create files or directories in the cache, lest the cache get confused or
serve incorrect data.
Do not chmod files in the cache. The module creates things with minimal
permissions to prevent random users being able to access them directly.
=============
CACHE CULLING
=============
The cache may need culling occasionally to make space. This involves
discarding objects from the cache that have been used less recently than
anything else. Culling is based on the access time of data objects. Empty
directories are culled if not in use.
Cache culling is done on the basis of the percentage of blocks and the
percentage of files available in the underlying filesystem. There are six
"limits":
(*) brun
(*) frun
If the amount of free space and the number of available files in the cache
rises above both these limits, then culling is turned off.
(*) bcull
(*) fcull
If the amount of available space or the number of available files in the
cache falls below either of these limits, then culling is started.
(*) bstop
(*) fstop
If the amount of available space or the number of available files in the
cache falls below either of these limits, then no further allocation of
disk space or files is permitted until culling has raised things above
these limits again.
These must be configured thusly:
0 <= bstop < bcull < brun < 100
0 <= fstop < fcull < frun < 100
Note that these are percentages of available space and available files, and do
_not_ appear as 100 minus the percentage displayed by the "df" program.
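For example, with the default limits, culling starts when either the free space
or the free file count in the backing filesystem falls below 5% (bcull/fcull),
allocation of new cache files and blocks stops below 1% (bstop/fstop), and
culling is only turned off again once both have risen back above 7%
(brun/frun).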
The userspace daemon scans the cache to build up a table of cullable objects.
These are then culled in least recently used order. A new scan of the cache is
started as soon as space is made in the table. Objects will be skipped if
their atimes have changed or if the kernel module says it is still using them.
===============
CACHE STRUCTURE
===============
The CacheFiles module will create two directories in the directory it was
given:
(*) cache/
(*) graveyard/
The active cache objects all reside in the first directory. The CacheFiles
kernel module moves any retired or culled objects that it can't simply unlink
to the graveyard from which the daemon will actually delete them.
The daemon uses dnotify to monitor the graveyard directory, and will delete
anything that appears therein.
The module represents index objects as directories with the filename "I..." or
"J...". Note that the "cache/" directory is itself a special index.
Data objects are represented as files if they have no children, or directories
if they do. Their filenames all begin "D..." or "E...". If represented as a
directory, data objects will have a file in the directory called "data" that
actually holds the data.
Special objects are similar to data objects, except their filenames begin
"S..." or "T...".
If an object has children, then it will be represented as a directory.
Immediately in the representative directory are a collection of directories
named for hash values of the child object keys with an '@' prepended. Into
this directory, if possible, will be placed the representations of the child
objects:
INDEX INDEX INDEX DATA FILES
========= ========== ================================= ================
cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400
cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...DB1ry
cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...N22ry
cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...FP1ry
If the key is so long that it exceeds NAME_MAX with the decorations added on to
it, then it will be cut into pieces, the first few of which will be used to
make a nest of directories, and the last one of which will be the objects
inside the last directory. The names of the intermediate directories will have
'+' prepended:
J1223/@23/+xy...z/+kl...m/Epqr
Note that keys are raw data, and not only may they exceed NAME_MAX in size,
they may also contain things like '/' and NUL characters, and so they may not
be suitable for turning directly into a filename.
To handle this, CacheFiles will use a suitably printable filename directly and
"base-64" encode ones that aren't directly suitable. The two versions of
object filenames indicate the encoding:
OBJECT TYPE PRINTABLE ENCODED
=============== =============== ===============
Index "I..." "J..."
Data "D..." "E..."
Special "S..." "T..."
Intermediate directories are always "@" or "+" as appropriate.
Each object in the cache has an extended attribute label that holds the object
type ID (required to distinguish special objects) and the auxiliary data from
the netfs. The latter is used to detect stale objects in the cache and update
or retire them.
Note that CacheFiles will erase from the cache any file it doesn't recognise or
any file of an incorrect type (such as a FIFO file or a device file).
==========================
SECURITY MODEL AND SELINUX
==========================
CacheFiles is implemented to deal properly with the LSM security features of
the Linux kernel and the SELinux facility.
One of the problems that CacheFiles faces is that it is generally acting on
behalf of a process, and running in that process's context, and that includes a
security context that is not appropriate for accessing the cache - either
because the files in the cache are inaccessible to that process, or because if
the process creates a file in the cache, that file may be inaccessible to other
processes.
The way CacheFiles works is to temporarily change the security context (fsuid,
fsgid and actor security label) that the process acts as - without changing the
security context of the process when it is the target of an operation performed by
some other process (so signalling and suchlike still work correctly).
When the CacheFiles module is asked to bind to its cache, it:
(1) Finds the security label attached to the root cache directory and uses
that as the security label with which it will create files. By default,
this is:
cachefiles_var_t
(2) Finds the security label of the process which issued the bind request
(presumed to be the cachefilesd daemon), which by default will be:
cachefilesd_t
and asks LSM to supply a security ID as which it should act given the
daemon's label. By default, this will be:
cachefiles_kernel_t
SELinux transitions the daemon's security ID to the module's security ID
based on a rule of this form in the policy.
type_transition <daemon's-ID> kernel_t : process <module's-ID>;
For instance:
type_transition cachefilesd_t kernel_t : process cachefiles_kernel_t;
The module's security ID gives it permission to create, move and remove files
and directories in the cache, to find and access directories and files in the
cache, to set and access extended attributes on cache objects, and to read and
write files in the cache.
The daemon's security ID gives it only a very restricted set of permissions: it
may scan directories, stat files and erase files and directories. It may
not read or write files in the cache, and so it is precluded from accessing the
data cached therein; nor is it permitted to create new files in the cache.
There are policy source files available in:
http://people.redhat.com/~dhowells/fscache/cachefilesd-0.8.tar.bz2
and later versions. In that tarball, see the files:
cachefilesd.te
cachefilesd.fc
cachefilesd.if
They are built and installed directly by the RPM.
If a non-RPM based system is being used, then copy the above files to their own
directory and run:
make -f /usr/share/selinux/devel/Makefile
semodule -i cachefilesd.pp
You will need checkpolicy and selinux-policy-devel installed prior to the
build.
By default, the cache is located in /var/fscache, but if it is desirable that
it should be elsewhere, then either the above policy files must be altered, or
an auxiliary policy must be installed to label the alternate location of the
cache.
For instructions on how to add an auxiliary policy to enable the cache to be
located elsewhere when SELinux is in enforcing mode, please see:
/usr/share/doc/cachefilesd-*/move-cache.txt
when the cachefilesd rpm is installed; alternatively, the document can be found
in the sources.
==================
A NOTE ON SECURITY
==================
CacheFiles makes use of the split security in the task_struct. It allocates
its own task_security structure, and redirects current->act_as to point to it
when it acts on behalf of another process, in that process's context.
The reason it does this is that it calls vfs_mkdir() and suchlike rather than
bypassing security and calling inode ops directly. Therefore the VFS and LSM
may deny CacheFiles access to the cache data because under some
circumstances the caching code is running in the security context of whatever
process issued the original syscall on the netfs.
Furthermore, should CacheFiles create a file or directory, the security
parameters with which that object is created (UID, GID, security label) would be
derived from the process that issued the system call, thus potentially
preventing other processes from accessing the cache - including CacheFiles's
cache management daemon (cachefilesd).
What is required is to temporarily override the security of the process that
issued the system call. We can't, however, just do an in-place change of the
security data as that affects the process as an object, not just as a subject.
This means it may lose signals or ptrace events for example, and affects what
the process looks like in /proc.
So CacheFiles makes use of a logical split in the security between the
objective security (task->sec) and the subjective security (task->act_as). The
objective security holds the intrinsic security properties of a process and is
never overridden. This is what appears in /proc, and is what is used when a
process is the target of an operation by some other process (SIGKILL for
example).
The subjective security holds the active security properties of a process, and
may be overridden. This is not seen externally, and is used when a process
acts upon another object, for example SIGKILLing another process or opening a
file.
LSM hooks exist that allow SELinux (or Smack or whatever) to reject a request
for CacheFiles to run in a context of a specific security label, or to create
files and directories with another security label.
This documentation is added by the patch to:
Documentation/filesystems/caching/cachefiles.txt
Signed-Off-By: David Howells <dhowells@redhat.com>
Acked-by: Steve Dickson <steved@redhat.com>
Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
/* Storage object read/write
 *
 * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
 * Written by David Howells (dhowells@redhat.com)
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public Licence
 * as published by the Free Software Foundation; either version
 * 2 of the Licence, or (at your option) any later version.
 */

#include <linux/mount.h>
#include <linux/file.h>
#include "internal.h"

/*
 * detect wake up events generated by the unlocking of pages in which we're
 * interested
 * - we use this to detect read completion of backing pages
 * - the caller holds the waitqueue lock
 */
static int cachefiles_read_waiter(wait_queue_t *wait, unsigned mode,
				  int sync, void *_key)
{
	struct cachefiles_one_read *monitor =
		container_of(wait, struct cachefiles_one_read, monitor);
	struct cachefiles_object *object;
	struct wait_bit_key *key = _key;
	struct page *page = wait->private;

	ASSERT(key);

	_enter("{%lu},%u,%d,{%p,%u}",
	       monitor->netfs_page->index, mode, sync,
	       key->flags, key->bit_nr);

	if (key->flags != &page->flags ||
	    key->bit_nr != PG_locked)
		return 0;

	_debug("--- monitor %p %lx ---", page, page->flags);

	if (!PageUptodate(page) && !PageError(page))
		dump_stack();

	/* remove from the waitqueue */
	list_del(&wait->task_list);

	/* move onto the action list and queue for FS-Cache thread pool */
	ASSERT(monitor->op);

	object = container_of(monitor->op->op.object,
			      struct cachefiles_object, fscache);

	spin_lock(&object->work_lock);
	list_add_tail(&monitor->op_link, &monitor->op->to_do);
	spin_unlock(&object->work_lock);

	fscache_enqueue_retrieval(monitor->op);
	return 0;
}

/*
 * copy data from backing pages to netfs pages to complete a read operation
 * - driven by FS-Cache's thread pool
 */
static void cachefiles_read_copier(struct fscache_operation *_op)
{
	struct cachefiles_one_read *monitor;
	struct cachefiles_object *object;
	struct fscache_retrieval *op;
	struct pagevec pagevec;
	int error, max;

	op = container_of(_op, struct fscache_retrieval, op);
	object = container_of(op->op.object,
			      struct cachefiles_object, fscache);

	_enter("{ino=%lu}", object->backer->d_inode->i_ino);

	pagevec_init(&pagevec, 0);

	max = 8;
	spin_lock_irq(&object->work_lock);

	while (!list_empty(&op->to_do)) {
		monitor = list_entry(op->to_do.next,
				     struct cachefiles_one_read, op_link);
		list_del(&monitor->op_link);

		spin_unlock_irq(&object->work_lock);

		_debug("- copy {%lu}", monitor->back_page->index);

		error = -EIO;
		if (PageUptodate(monitor->back_page)) {
			copy_highpage(monitor->netfs_page, monitor->back_page);

			pagevec_add(&pagevec, monitor->netfs_page);
			fscache_mark_pages_cached(monitor->op, &pagevec);
			error = 0;
		}

		if (error)
			cachefiles_io_error_obj(
				object,
				"Readpage failed on backing file %lx",
				(unsigned long) monitor->back_page->flags);

		page_cache_release(monitor->back_page);

		fscache_end_io(op, monitor->netfs_page, error);
		page_cache_release(monitor->netfs_page);
		fscache_put_retrieval(op);
		kfree(monitor);

		/* let the thread pool have some air occasionally */
		max--;
		if (max < 0 || need_resched()) {
			if (!list_empty(&op->to_do))
				fscache_enqueue_retrieval(op);
			_leave(" [maxed out]");
			return;
		}

		spin_lock_irq(&object->work_lock);
	}

	spin_unlock_irq(&object->work_lock);
	_leave("");
}

/*
 * read the corresponding page to the given set from the backing file
 * - an uncertain page is simply discarded, to be tried again another time
 */
static int cachefiles_read_backing_file_one(struct cachefiles_object *object,
					    struct fscache_retrieval *op,
					    struct page *netpage,
					    struct pagevec *pagevec)
{
	struct cachefiles_one_read *monitor;
	struct address_space *bmapping;
	struct page *newpage, *backpage;
	int ret;

	_enter("");

	pagevec_reinit(pagevec);

	_debug("read back %p{%lu,%d}",
	       netpage, netpage->index, page_count(netpage));

	monitor = kzalloc(sizeof(*monitor), GFP_KERNEL);
	if (!monitor)
		goto nomem;

	monitor->netfs_page = netpage;
	monitor->op = fscache_get_retrieval(op);

	init_waitqueue_func_entry(&monitor->monitor, cachefiles_read_waiter);

	/* attempt to get hold of the backing page */
	bmapping = object->backer->d_inode->i_mapping;
	newpage = NULL;

	for (;;) {
		backpage = find_get_page(bmapping, netpage->index);
		if (backpage)
			goto backing_page_already_present;

		if (!newpage) {
			newpage = page_cache_alloc_cold(bmapping);
			if (!newpage)
				goto nomem_monitor;
		}

		ret = add_to_page_cache(newpage, bmapping,
					netpage->index, GFP_KERNEL);
		if (ret == 0)
			goto installed_new_backing_page;
		if (ret != -EEXIST)
			goto nomem_page;
	}

	/* we've installed a new backing page, so now we need to add it
	 * to the LRU list and start it reading */
installed_new_backing_page:
	_debug("- new %p", newpage);

	backpage = newpage;
	newpage = NULL;

	page_cache_get(backpage);
	pagevec_add(pagevec, backpage);
	__pagevec_lru_add_file(pagevec);

read_backing_page:
	ret = bmapping->a_ops->readpage(NULL, backpage);
	if (ret < 0)
		goto read_error;

	/* set the monitor to transfer the data across */
monitor_backing_page:
	_debug("- monitor add");

	/* install the monitor */
	page_cache_get(monitor->netfs_page);
	page_cache_get(backpage);
	monitor->back_page = backpage;
	monitor->monitor.private = backpage;
	add_page_wait_queue(backpage, &monitor->monitor);
	monitor = NULL;

	/* but the page may have been read before the monitor was installed, so
	 * the monitor may miss the event - so we have to ensure that we do get
	 * one in such a case */
	if (trylock_page(backpage)) {
		_debug("jumpstart %p {%lx}", backpage, backpage->flags);
		unlock_page(backpage);
	}
	goto success;

	/* if the backing page is already present, it can be in one of
	 * three states: read in progress, read failed or read okay */
backing_page_already_present:
	_debug("- present");

	if (newpage) {
		page_cache_release(newpage);
		newpage = NULL;
	}

	if (PageError(backpage))
		goto io_error;

	if (PageUptodate(backpage))
		goto backing_page_already_uptodate;

	if (!trylock_page(backpage))
		goto monitor_backing_page;
	_debug("read %p {%lx}", backpage, backpage->flags);
	goto read_backing_page;

	/* the backing page is already up to date, attach the netfs
	 * page to the pagecache and LRU and copy the data across */
backing_page_already_uptodate:
	_debug("- uptodate");

	pagevec_add(pagevec, netpage);
	fscache_mark_pages_cached(op, pagevec);

	copy_highpage(netpage, backpage);
	fscache_end_io(op, netpage, 0);

success:
	_debug("success");
	ret = 0;

out:
	if (backpage)
		page_cache_release(backpage);
	if (monitor) {
		fscache_put_retrieval(monitor->op);
		kfree(monitor);
	}
	_leave(" = %d", ret);
	return ret;

read_error:
	_debug("read error %d", ret);
	if (ret == -ENOMEM)
		goto out;
io_error:
	cachefiles_io_error_obj(object, "Page read error on backing file");
	ret = -ENOBUFS;
	goto out;

nomem_page:
	page_cache_release(newpage);
nomem_monitor:
	fscache_put_retrieval(monitor->op);
	kfree(monitor);
nomem:
	_leave(" = -ENOMEM");
	return -ENOMEM;
}

/*
 * read a page from the cache or allocate a block in which to store it
 * - cache withdrawal is prevented by the caller
 * - returns -EINTR if interrupted
 * - returns -ENOMEM if ran out of memory
 * - returns -ENOBUFS if no buffers can be made available
 * - returns -ENOBUFS if page is beyond EOF
 * - if the page is backed by a block in the cache:
 *   - a read will be started which will call the callback on completion
 *   - 0 will be returned
 * - else if the page is unbacked:
 *   - the metadata will be retained
 *   - -ENODATA will be returned
 */
int cachefiles_read_or_alloc_page(struct fscache_retrieval *op,
				  struct page *page,
				  gfp_t gfp)
{
	struct cachefiles_object *object;
	struct cachefiles_cache *cache;
	struct pagevec pagevec;
	struct inode *inode;
	sector_t block0, block;
	unsigned shift;
	int ret;

	object = container_of(op->op.object,
			      struct cachefiles_object, fscache);
	cache = container_of(object->fscache.cache,
			     struct cachefiles_cache, cache);

	_enter("{%p},{%lx},,,", object, page->index);

	if (!object->backer)
		return -ENOBUFS;

	inode = object->backer->d_inode;
	ASSERT(S_ISREG(inode->i_mode));
	ASSERT(inode->i_mapping->a_ops->bmap);
	ASSERT(inode->i_mapping->a_ops->readpages);

	/* calculate the shift required to use bmap */
	if (inode->i_sb->s_blocksize > PAGE_SIZE)
		return -ENOBUFS;

	shift = PAGE_SHIFT - inode->i_sb->s_blocksize_bits;

	op->op.flags &= FSCACHE_OP_KEEP_FLAGS;
	op->op.flags |= FSCACHE_OP_FAST;
	op->op.processor = cachefiles_read_copier;

	pagevec_init(&pagevec, 0);

	/* we assume the absence or presence of the first block is a good
	 * enough indication for the page as a whole
	 * - TODO: don't use bmap() for this as it is _not_ actually good
	 *   enough for this as it doesn't indicate errors, but it's all we've
	 *   got for the moment
	 */
	block0 = page->index;
	block0 <<= shift;

	block = inode->i_mapping->a_ops->bmap(inode->i_mapping, block0);
	_debug("%llx -> %llx",
	       (unsigned long long) block0,
	       (unsigned long long) block);

	if (block) {
		/* submit the apparently valid page to the backing fs to be
		 * read from disk */
		ret = cachefiles_read_backing_file_one(object, op, page,
						       &pagevec);
	} else if (cachefiles_has_space(cache, 0, 1) == 0) {
		/* there's space in the cache we can use */
		pagevec_add(&pagevec, page);
		fscache_mark_pages_cached(op, &pagevec);
		ret = -ENODATA;
	} else {
		ret = -ENOBUFS;
	}

	_leave(" = %d", ret);
	return ret;
}

/*
 * read the corresponding pages to the given set from the backing file
 * - any uncertain pages are simply discarded, to be tried again another time
 */
static int cachefiles_read_backing_file(struct cachefiles_object *object,
					struct fscache_retrieval *op,
					struct list_head *list,
					struct pagevec *mark_pvec)
{
	struct cachefiles_one_read *monitor = NULL;
	struct address_space *bmapping = object->backer->d_inode->i_mapping;
	struct pagevec lru_pvec;
	struct page *newpage = NULL, *netpage, *_n, *backpage = NULL;
	int ret = 0;

	_enter("");

	pagevec_init(&lru_pvec, 0);

	list_for_each_entry_safe(netpage, _n, list, lru) {
		list_del(&netpage->lru);

		_debug("read back %p{%lu,%d}",
		       netpage, netpage->index, page_count(netpage));

		if (!monitor) {
			monitor = kzalloc(sizeof(*monitor), GFP_KERNEL);
			if (!monitor)
				goto nomem;

			monitor->op = fscache_get_retrieval(op);
			init_waitqueue_func_entry(&monitor->monitor,
						  cachefiles_read_waiter);
		}

		for (;;) {
			backpage = find_get_page(bmapping, netpage->index);
			if (backpage)
				goto backing_page_already_present;

			if (!newpage) {
				newpage = page_cache_alloc_cold(bmapping);
				if (!newpage)
					goto nomem;
			}

			ret = add_to_page_cache(newpage, bmapping,
						netpage->index, GFP_KERNEL);
			if (ret == 0)
				goto installed_new_backing_page;
			if (ret != -EEXIST)
				goto nomem;
		}

		/* we've installed a new backing page, so now we need to add it
		 * to the LRU list and start it reading */
	installed_new_backing_page:
		_debug("- new %p", newpage);

		backpage = newpage;
		newpage = NULL;

		page_cache_get(backpage);
		if (!pagevec_add(&lru_pvec, backpage))
			__pagevec_lru_add_file(&lru_pvec);

	reread_backing_page:
		ret = bmapping->a_ops->readpage(NULL, backpage);
		if (ret < 0)
			goto read_error;

		/* add the netfs page to the pagecache and LRU, and set the
		 * monitor to transfer the data across */
	monitor_backing_page:
		_debug("- monitor add");

		ret = add_to_page_cache(netpage, op->mapping, netpage->index,
					GFP_KERNEL);
		if (ret < 0) {
			if (ret == -EEXIST) {
				page_cache_release(netpage);
				continue;
			}
			goto nomem;
		}

		page_cache_get(netpage);
		if (!pagevec_add(&lru_pvec, netpage))
			__pagevec_lru_add_file(&lru_pvec);

		/* install a monitor */
		page_cache_get(netpage);
		monitor->netfs_page = netpage;

		page_cache_get(backpage);
		monitor->back_page = backpage;
		monitor->monitor.private = backpage;
		add_page_wait_queue(backpage, &monitor->monitor);
		monitor = NULL;

		/* but the page may have been read before the monitor was
		 * installed, so the monitor may miss the event - so we have to
		 * ensure that we do get one in such a case */
		if (trylock_page(backpage)) {
			_debug("2unlock %p {%lx}", backpage, backpage->flags);
			unlock_page(backpage);
		}

		page_cache_release(backpage);
		backpage = NULL;

		page_cache_release(netpage);
		netpage = NULL;
		continue;

		/* if the backing page is already present, it can be in one of
		 * three states: read in progress, read failed or read okay */
	backing_page_already_present:
		_debug("- present %p", backpage);

		if (PageError(backpage))
			goto io_error;

		if (PageUptodate(backpage))
			goto backing_page_already_uptodate;

		_debug("- not ready %p{%lx}", backpage, backpage->flags);

		if (!trylock_page(backpage))
			goto monitor_backing_page;

		if (PageError(backpage)) {
			_debug("error %lx", backpage->flags);
			unlock_page(backpage);
			goto io_error;
		}

		if (PageUptodate(backpage))
			goto backing_page_already_uptodate_unlock;

		/* we've locked a page that's neither up to date nor erroneous,
		 * so we need to attempt to read it again */
		goto reread_backing_page;

		/* the backing page is already up to date, attach the netfs
		 * page to the pagecache and LRU and copy the data across */
	backing_page_already_uptodate_unlock:
		_debug("uptodate %lx", backpage->flags);
		unlock_page(backpage);
	backing_page_already_uptodate:
		_debug("- uptodate");

		ret = add_to_page_cache(netpage, op->mapping, netpage->index,
					GFP_KERNEL);
		if (ret < 0) {
			if (ret == -EEXIST) {
				page_cache_release(netpage);
				continue;
			}
			goto nomem;
		}

		copy_highpage(netpage, backpage);

		page_cache_release(backpage);
		backpage = NULL;

		if (!pagevec_add(mark_pvec, netpage))
			fscache_mark_pages_cached(op, mark_pvec);

		page_cache_get(netpage);
		if (!pagevec_add(&lru_pvec, netpage))
			__pagevec_lru_add_file(&lru_pvec);

		fscache_end_io(op, netpage, 0);
		page_cache_release(netpage);
		netpage = NULL;
		continue;
	}

	netpage = NULL;

	_debug("out");

out:
	/* tidy up */
	pagevec_lru_add_file(&lru_pvec);

	if (newpage)
		page_cache_release(newpage);
	if (netpage)
		page_cache_release(netpage);
	if (backpage)
		page_cache_release(backpage);
	if (monitor) {
		fscache_put_retrieval(op);
		kfree(monitor);
	}

	list_for_each_entry_safe(netpage, _n, list, lru) {
		list_del(&netpage->lru);
		page_cache_release(netpage);
	}

	_leave(" = %d", ret);
	return ret;

nomem:
	_debug("nomem");
	ret = -ENOMEM;
	goto out;

read_error:
	_debug("read error %d", ret);
	if (ret == -ENOMEM)
		goto out;
io_error:
	cachefiles_io_error_obj(object, "Page read error on backing file");
	ret = -ENOBUFS;
	goto out;
}

/*
 * read a list of pages from the cache or allocate blocks in which to store
 * them
 */
int cachefiles_read_or_alloc_pages(struct fscache_retrieval *op,
				   struct list_head *pages,
				   unsigned *nr_pages,
				   gfp_t gfp)
{
	struct cachefiles_object *object;
	struct cachefiles_cache *cache;
	struct list_head backpages;
	struct pagevec pagevec;
	struct inode *inode;
	struct page *page, *_n;
	unsigned shift, nrbackpages;
	int ret, ret2, space;

	object = container_of(op->op.object,
			      struct cachefiles_object, fscache);
	cache = container_of(object->fscache.cache,
			     struct cachefiles_cache, cache);

	_enter("{OBJ%x,%d},,%d,,",
	       object->fscache.debug_id, atomic_read(&op->op.usage),
	       *nr_pages);

	if (!object->backer)
		return -ENOBUFS;

	space = 1;
	if (cachefiles_has_space(cache, 0, *nr_pages) < 0)
		space = 0;

	inode = object->backer->d_inode;
	ASSERT(S_ISREG(inode->i_mode));
	ASSERT(inode->i_mapping->a_ops->bmap);
	ASSERT(inode->i_mapping->a_ops->readpages);

	/* calculate the shift required to use bmap */
	if (inode->i_sb->s_blocksize > PAGE_SIZE)
		return -ENOBUFS;

	shift = PAGE_SHIFT - inode->i_sb->s_blocksize_bits;

	pagevec_init(&pagevec, 0);

	op->op.flags &= FSCACHE_OP_KEEP_FLAGS;
	op->op.flags |= FSCACHE_OP_FAST;
CacheFiles: A cache that backs onto a mounted filesystem
Add an FS-Cache cache-backend that permits a mounted filesystem to be used as a
backing store for the cache.
CacheFiles uses a userspace daemon to do some of the cache management - such as
reaping stale nodes and culling. This is called cachefilesd and lives in
/sbin. The source for the daemon can be downloaded from:
http://people.redhat.com/~dhowells/cachefs/cachefilesd.c
And an example configuration from:
http://people.redhat.com/~dhowells/cachefs/cachefilesd.conf
The filesystem and data integrity of the cache are only as good as those of the
filesystem providing the backing services. Note that CacheFiles does not
attempt to journal anything since the journalling interfaces of the various
filesystems are very specific in nature.
CacheFiles creates a misc character device - "/dev/cachefiles" - that is used
to communication with the daemon. Only one thing may have this open at once,
and whilst it is open, a cache is at least partially in existence. The daemon
opens this and sends commands down it to control the cache.
CacheFiles is currently limited to a single cache.
CacheFiles attempts to maintain at least a certain percentage of free space on
the filesystem, shrinking the cache by culling the objects it contains to make
space if necessary - see the "Cache Culling" section. This means it can be
placed on the same medium as a live set of data, and will expand to make use of
spare space and automatically contract when the set of data requires more
space.
============
REQUIREMENTS
============
The use of CacheFiles and its daemon requires the following features to be
available in the system and in the cache filesystem:
- dnotify.
- extended attributes (xattrs).
- openat() and friends.
- bmap() support on files in the filesystem (FIBMAP ioctl).
- The use of bmap() to detect a partial page at the end of the file.
It is strongly recommended that the "dir_index" option is enabled on Ext3
filesystems being used as a cache.
=============
CONFIGURATION
=============
The cache is configured by a script in /etc/cachefilesd.conf. These commands
set up cache ready for use. The following script commands are available:
(*) brun <N>%
(*) bcull <N>%
(*) bstop <N>%
(*) frun <N>%
(*) fcull <N>%
(*) fstop <N>%
Configure the culling limits. Optional. See the section on culling
The defaults are 7% (run), 5% (cull) and 1% (stop) respectively.
The commands beginning with a 'b' are file space (block) limits, those
beginning with an 'f' are file count limits.
(*) dir <path>
Specify the directory containing the root of the cache. Mandatory.
(*) tag <name>
Specify a tag to FS-Cache to use in distinguishing multiple caches.
Optional. The default is "CacheFiles".
(*) debug <mask>
Specify a numeric bitmask to control debugging in the kernel module.
Optional. The default is zero (all off). The following values can be
OR'd into the mask to collect various information:
1 Turn on trace of function entry (_enter() macros)
2 Turn on trace of function exit (_leave() macros)
4 Turn on trace of internal debug points (_debug())
This mask can also be set through sysfs, eg:
echo 5 >/sys/modules/cachefiles/parameters/debug
==================
STARTING THE CACHE
==================
The cache is started by running the daemon. The daemon opens the cache device,
configures the cache and tells it to begin caching. At that point the cache
binds to fscache and the cache becomes live.
The daemon is run as follows:
/sbin/cachefilesd [-d]* [-s] [-n] [-f <configfile>]
The flags are:
(*) -d
Increase the debugging level. This can be specified multiple times and
is cumulative with itself.
(*) -s
Send messages to stderr instead of syslog.
(*) -n
Don't daemonise and go into background.
(*) -f <configfile>
Use an alternative configuration file rather than the default one.
===============
THINGS TO AVOID
===============
Do not mount other things within the cache as this will cause problems. The
kernel module contains its own very cut-down path walking facility that ignores
mountpoints, but the daemon can't avoid them.
Do not create, rename or unlink files and directories in the cache whilst the
cache is active, as this may cause the state to become uncertain.
Renaming files in the cache might make objects appear to be other objects (the
filename is part of the lookup key).
Do not change or remove the extended attributes attached to cache files by the
cache as this will cause the cache state management to get confused.
Do not create files or directories in the cache, lest the cache get confused or
serve incorrect data.
Do not chmod files in the cache. The module creates things with minimal
permissions to prevent random users being able to access them directly.
=============
CACHE CULLING
=============
The cache may need culling occasionally to make space. This involves
discarding objects from the cache that have been used less recently than
anything else. Culling is based on the access time of data objects. Empty
directories are culled if not in use.
Cache culling is done on the basis of the percentage of blocks and the
percentage of files available in the underlying filesystem. There are six
"limits":
(*) brun
(*) frun
If the amount of free space and the number of available files in the cache
rises above both these limits, then culling is turned off.
(*) bcull
(*) fcull
If the amount of available space or the number of available files in the
cache falls below either of these limits, then culling is started.
(*) bstop
(*) fstop
If the amount of available space or the number of available files in the
cache falls below either of these limits, then no further allocation of
disk space or files is permitted until culling has raised things above
these limits again.
These must be configured thusly:
0 <= bstop < bcull < brun < 100
0 <= fstop < fcull < frun < 100
Note that these are percentages of available space and available files, and do
_not_ appear as 100 minus the percentage displayed by the "df" program.
The userspace daemon scans the cache to build up a table of cullable objects.
These are then culled in least recently used order. A new scan of the cache is
started as soon as space is made in the table. Objects will be skipped if
their atimes have changed or if the kernel module says it is still using them.
===============
CACHE STRUCTURE
===============
The CacheFiles module will create two directories in the directory it was
given:
(*) cache/
(*) graveyard/
The active cache objects all reside in the first directory. The CacheFiles
kernel module moves any retired or culled objects that it can't simply unlink
to the graveyard from which the daemon will actually delete them.
The daemon uses dnotify to monitor the graveyard directory, and will delete
anything that appears therein.
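To picture the reaping side, here is a minimal userspace sketch - not
cachefilesd itself - of watching a graveyard directory with dnotify and
deleting whatever shows up; the path is assumed, error handling is simplistic
and recursive removal of non-empty directories is omitted:

	#define _GNU_SOURCE
	#include <dirent.h>
	#include <fcntl.h>
	#include <signal.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	static volatile sig_atomic_t graveyard_poked;

	static void poked(int sig)
	{
		(void)sig;
		graveyard_poked = 1;
	}

	/* delete every entry currently sitting in the graveyard */
	static void reap(const char *path)
	{
		char name[4096];
		struct dirent *de;
		DIR *dir = opendir(path);

		if (!dir)
			return;
		while ((de = readdir(dir)) != NULL) {
			if (strcmp(de->d_name, ".") == 0 ||
			    strcmp(de->d_name, "..") == 0)
				continue;
			snprintf(name, sizeof(name), "%s/%s", path, de->d_name);
			if (unlink(name) == -1 && rmdir(name) == -1)
				perror(name); /* non-empty dirs need recursion */
		}
		closedir(dir);
	}

	int main(void)
	{
		const char *graveyard = "/var/fscache/graveyard"; /* assumed */
		int fd = open(graveyard, O_RDONLY | O_DIRECTORY);

		if (fd == -1) {
			perror(graveyard);
			return 1;
		}

		signal(SIGIO, poked);
		/* get SIGIO whenever something is created in or moved into
		 * the watched directory */
		fcntl(fd, F_NOTIFY, DN_CREATE | DN_RENAME | DN_MULTISHOT);

		reap(graveyard);	/* clear anything already present */
		for (;;) {
			pause();	/* woken by SIGIO */
			if (graveyard_poked) {
				graveyard_poked = 0;
				reap(graveyard);
			}
		}
	}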
The module represents index objects as directories with the filename "I..." or
"J...". Note that the "cache/" directory is itself a special index.
Data objects are represented as files if they have no children, or directories
if they do. Their filenames all begin "D..." or "E...". If represented as a
directory, data objects will have a file in the directory called "data" that
actually holds the data.
Special objects are similar to data objects, except their filenames begin
"S..." or "T...".
If an object has children, then it will be represented as a directory.
Immediately in the representative directory is a collection of directories
named for hash values of the child object keys with an '@' prepended. Into
this directory, if possible, will be placed the representations of the child
objects:
INDEX     INDEX      INDEX                             DATA FILES
========= ========== ================================= ================
cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400
cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...DB1ry
cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...N22ry
cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...FP1ry
If the key is so long that it exceeds NAME_MAX with the decorations added on to
it, then it will be cut into pieces, the first few of which will be used to
make a nest of directories, and the last one of which will be the objects
inside the last directory. The names of the intermediate directories will have
'+' prepended:
J1223/@23/+xy...z/+kl...m/Epqr
Note that keys are raw data, and not only may they exceed NAME_MAX in size,
they may also contain things like '/' and NUL characters, and so they may not
be suitable for turning directly into a filename.
To handle this, CacheFiles will use a suitably printable filename directly and
"base-64" encode ones that aren't directly suitable. The two versions of
object filenames indicate the encoding:
OBJECT TYPE     PRINTABLE       ENCODED
=============== =============== ===============
Index           "I..."          "J..."
Data            "D..."          "E..."
Special         "S..."          "T..."
Intermediate directories are always "@" or "+" as appropriate.
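As a rough sketch of how that decision could be made - this is not the
module's actual encoder, and the base-64 step itself is omitted - a key might
be classified like this:

	#include <ctype.h>
	#include <stdbool.h>
	#include <stddef.h>

	/* Can this raw key be used as a filename as-is?  Keys containing '/',
	 * NUL or other unprintable bytes cannot, and would have to be base-64
	 * encoded instead (the encoder itself is not shown). */
	static bool key_is_printable(const unsigned char *key, size_t len)
	{
		size_t i;

		for (i = 0; i < len; i++)
			if (key[i] == '/' || !isprint(key[i]))
				return false;
		return true;
	}

	/* Pick the type letter that starts the filename of a data object:
	 * "D..." when the key is used directly, "E..." when it was encoded. */
	static char data_object_prefix(const unsigned char *key, size_t len)
	{
		return key_is_printable(key, len) ? 'D' : 'E';
	}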
Each object in the cache has an extended attribute label that holds the object
type ID (required to distinguish special objects) and the auxiliary data from
the netfs. The latter is used to detect stale objects in the cache and update
or retire them.
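The label can be read (never written) from userspace with the ordinary xattr
calls if inspection is ever needed. The attribute name below is an assumption
drawn from the CacheFiles sources rather than something stated here, and the
assumption that the type ID is the first byte is likewise unverified:

	#include <stdio.h>
	#include <sys/types.h>
	#include <sys/xattr.h>

	int main(int argc, char *argv[])
	{
		/* attribute name taken from the CacheFiles sources - an
		 * assumption, not something stated in this document */
		static const char xattr_name[] = "user.CacheFiles.cache";
		char buf[512];
		ssize_t n;

		if (argc != 2) {
			fprintf(stderr, "usage: %s <cache-object>\n", argv[0]);
			return 2;
		}

		n = getxattr(argv[1], xattr_name, buf, sizeof(buf));
		if (n < 0) {
			perror("getxattr");
			return 1;
		}

		/* assume the type ID is the first byte, followed by the
		 * netfs's auxiliary data */
		printf("label: %zd bytes, type ID 0x%02x\n",
		       n, (unsigned char)buf[0]);
		return 0;
	}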
Note that CacheFiles will erase from the cache any file it doesn't recognise or
any file of an incorrect type (such as a FIFO file or a device file).
==========================
SECURITY MODEL AND SELINUX
==========================
CacheFiles is implemented to deal properly with the LSM security features of
the Linux kernel and the SELinux facility.
One of the problems that CacheFiles faces is that it is generally acting on
behalf of a process, and running in that process's context, and that includes a
security context that is not appropriate for accessing the cache - either
because the files in the cache are inaccessible to that process, or because if
the process creates a file in the cache, that file may be inaccessible to other
processes.
The way CacheFiles works is to temporarily change the security context (fsuid,
fsgid and actor security label) that the process acts as - without changing the
security context of the process when it is the target of an operation performed
by some other process (so signalling and suchlike still work correctly).
When the CacheFiles module is asked to bind to its cache, it:
(1) Finds the security label attached to the root cache directory and uses
that as the security label with which it will create files. By default,
this is:
cachefiles_var_t
(2) Finds the security label of the process which issued the bind request
(presumed to be the cachefilesd daemon), which by default will be:
cachefilesd_t
and asks LSM to supply a security ID as which it should act given the
daemon's label. By default, this will be:
cachefiles_kernel_t
SELinux transitions the daemon's security ID to the module's security ID
based on a rule of this form in the policy.
type_transition <daemon's-ID> kernel_t : process <module's-ID>;
For instance:
type_transition cachefilesd_t kernel_t : process cachefiles_kernel_t;
The module's security ID gives it permission to create, move and remove files
and directories in the cache, to find and access directories and files in the
cache, to set and access extended attributes on cache objects, and to read and
write files in the cache.
The daemon's security ID gives it only a very restricted set of permissions: it
may scan directories, stat files and erase files and directories. It may
not read or write files in the cache, and so it is precluded from accessing the
data cached therein; nor is it permitted to create new files in the cache.
There are policy source files available in:
http://people.redhat.com/~dhowells/fscache/cachefilesd-0.8.tar.bz2
and later versions. In that tarball, see the files:
cachefilesd.te
cachefilesd.fc
cachefilesd.if
They are built and installed directly by the RPM.
If a non-RPM based system is being used, then copy the above files to their own
directory and run:
make -f /usr/share/selinux/devel/Makefile
semodule -i cachefilesd.pp
You will need checkpolicy and selinux-policy-devel installed prior to the
build.
By default, the cache is located in /var/fscache, but if it is desirable that
it should be elsewhere, then either the above policy files must be altered, or
an auxiliary policy must be installed to label the alternate location of the
cache.
For instructions on how to add an auxiliary policy to enable the cache to be
located elsewhere when SELinux is in enforcing mode, please see:
/usr/share/doc/cachefilesd-*/move-cache.txt
which is installed when the cachefilesd rpm is installed; alternatively, the
document can be found in the sources.
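move-cache.txt is the authoritative reference, but broadly such an auxiliary
policy boils down to giving the alternate location (a hypothetical /mycache
below) the label expected by the policy, for instance with the SELinux
management tools:

	semanage fcontext -a -t cachefiles_var_t "/mycache(/.*)?"
	restorecon -R /mycache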
==================
A NOTE ON SECURITY
==================
CacheFiles makes use of the split security in the task_struct. It allocates
its own task_security structure, and redirects current->act_as to point to it
when it acts on behalf of another process, in that process's context.
The reason it does this is that it calls vfs_mkdir() and suchlike rather than
bypassing security and calling inode ops directly. Therefore the VFS and LSM
may deny CacheFiles access to the cache data because under some
circumstances the caching code is running in the security context of whatever
process issued the original syscall on the netfs.
Furthermore, should CacheFiles create a file or directory, the security
parameters with which that object is created (UID, GID, security label) would
be derived from the process that issued the system call, thus potentially
preventing other processes from accessing the cache - including CacheFiles's
cache management daemon (cachefilesd).
What is required is to temporarily override the security of the process that
issued the system call. We can't, however, just do an in-place change of the
security data as that affects the process as an object, not just as a subject.
This means it may lose signals or ptrace events for example, and affects what
the process looks like in /proc.
So CacheFiles makes use of a logical split in the security between the
objective security (task->sec) and the subjective security (task->act_as). The
objective security holds the intrinsic security properties of a process and is
never overridden. This is what appears in /proc, and is what is used when a
process is the target of an operation by some other process (SIGKILL for
example).
The subjective security holds the active security properties of a process, and
may be overridden. This is not seen externally, and is used when a process
acts upon another object, for example SIGKILLing another process or opening a
file.
LSM hooks exist that allow SELinux (or Smack or whatever) to reject a request
for CacheFiles to run in a context of a specific security label, or to create
files and directories with another security label.
This documentation is added by the patch to:
Documentation/filesystems/caching/cachefiles.txt
Signed-Off-By: David Howells <dhowells@redhat.com>
Acked-by: Steve Dickson <steved@redhat.com>
Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
2009-04-03 15:42:41 +00:00
	op->op.processor = cachefiles_read_copier;

	INIT_LIST_HEAD(&backpages);
	nrbackpages = 0;

	ret = space ? -ENODATA : -ENOBUFS;
	list_for_each_entry_safe(page, _n, pages, lru) {
		sector_t block0, block;

		/* we assume the absence or presence of the first block is a
		 * good enough indication for the page as a whole
		 * - TODO: don't use bmap() for this as it is _not_ actually
		 *   good enough for this as it doesn't indicate errors, but
		 *   it's all we've got for the moment
		 */
		block0 = page->index;
		block0 <<= shift;

		block = inode->i_mapping->a_ops->bmap(inode->i_mapping,
						      block0);
		_debug("%llx -> %llx",
		       (unsigned long long) block0,
		       (unsigned long long) block);

		if (block) {
			/* we have data - add it to the list to give to the
			 * backing fs */
			list_move(&page->lru, &backpages);
			(*nr_pages)--;
			nrbackpages++;
		} else if (space && pagevec_add(&pagevec, page) == 0) {
			fscache_mark_pages_cached(op, &pagevec);
			ret = -ENODATA;
		}
	}

	if (pagevec_count(&pagevec) > 0)
		fscache_mark_pages_cached(op, &pagevec);

	if (list_empty(pages))
		ret = 0;

	/* submit the apparently valid pages to the backing fs to be read from
	 * disk */
	if (nrbackpages > 0) {
		ret2 = cachefiles_read_backing_file(object, op, &backpages,
						    &pagevec);
		if (ret2 == -ENOMEM || ret2 == -EINTR)
			ret = ret2;
	}

	if (pagevec_count(&pagevec) > 0)
		fscache_mark_pages_cached(op, &pagevec);

	_leave(" = %d [nr=%u%s]",
	       ret, *nr_pages, list_empty(pages) ? " empty" : "");
	return ret;
}

/*
 * allocate a block in the cache in which to store a page
 * - cache withdrawal is prevented by the caller
 * - returns -EINTR if interrupted
 * - returns -ENOMEM if ran out of memory
 * - returns -ENOBUFS if no buffers can be made available
 * - returns -ENOBUFS if page is beyond EOF
 * - otherwise:
 *   - the metadata will be retained
 *   - 0 will be returned
 */
int cachefiles_allocate_page(struct fscache_retrieval *op,
			     struct page *page,
			     gfp_t gfp)
{
	struct cachefiles_object *object;
	struct cachefiles_cache *cache;
	struct pagevec pagevec;
	int ret;

	object = container_of(op->op.object,
			      struct cachefiles_object, fscache);
	cache = container_of(object->fscache.cache,
			     struct cachefiles_cache, cache);

	_enter("%p,{%lx},", object, page->index);

	ret = cachefiles_has_space(cache, 0, 1);
	if (ret == 0) {
		pagevec_init(&pagevec, 0);
		pagevec_add(&pagevec, page);
		fscache_mark_pages_cached(op, &pagevec);
	} else {
		ret = -ENOBUFS;
	}

	_leave(" = %d", ret);
	return ret;
}

/*
 * allocate blocks in the cache in which to store a set of pages
 * - cache withdrawal is prevented by the caller
 * - returns -EINTR if interrupted
 * - returns -ENOMEM if ran out of memory
 * - returns -ENOBUFS if some buffers couldn't be made available
 * - returns -ENOBUFS if some pages are beyond EOF
 * - otherwise:
 *   - -ENODATA will be returned
 *   - metadata will be retained for any page marked
 */
int cachefiles_allocate_pages(struct fscache_retrieval *op,
			      struct list_head *pages,
			      unsigned *nr_pages,
			      gfp_t gfp)
{
	struct cachefiles_object *object;
	struct cachefiles_cache *cache;
	struct pagevec pagevec;
	struct page *page;
	int ret;

	object = container_of(op->op.object,
			      struct cachefiles_object, fscache);
	cache = container_of(object->fscache.cache,
			     struct cachefiles_cache, cache);

	_enter("%p,,,%d,", object, *nr_pages);

	ret = cachefiles_has_space(cache, 0, *nr_pages);
	if (ret == 0) {
		pagevec_init(&pagevec, 0);

		list_for_each_entry(page, pages, lru) {
			if (pagevec_add(&pagevec, page) == 0)
				fscache_mark_pages_cached(op, &pagevec);
		}

		if (pagevec_count(&pagevec) > 0)
			fscache_mark_pages_cached(op, &pagevec);
		ret = -ENODATA;
	} else {
		ret = -ENOBUFS;
	}

	_leave(" = %d", ret);
	return ret;
}

/*
 * request a page be stored in the cache
 * - cache withdrawal is prevented by the caller
 * - this request may be ignored if there's no cache block available, in which
 *   case -ENOBUFS will be returned
 * - if the op is in progress, 0 will be returned
 */
int cachefiles_write_page(struct fscache_storage *op, struct page *page)
{
	struct cachefiles_object *object;
	struct cachefiles_cache *cache;
	mm_segment_t old_fs;
	struct file *file;
	loff_t pos;
	void *data;
	int ret;

	ASSERT(op != NULL);
	ASSERT(page != NULL);

	object = container_of(op->op.object,
			      struct cachefiles_object, fscache);

	_enter("%p,%p{%lx},,,", object, page, page->index);

	if (!object->backer) {
		_leave(" = -ENOBUFS");
		return -ENOBUFS;
	}

	ASSERT(S_ISREG(object->backer->d_inode->i_mode));

	cache = container_of(object->fscache.cache,
			     struct cachefiles_cache, cache);

	/* write the page to the backing filesystem and let it store it in its
	 * own time */
	dget(object->backer);
	mntget(cache->mnt);
	file = dentry_open(object->backer, cache->mnt, O_RDWR,
			   cache->cache_cred);
	if (IS_ERR(file)) {
		ret = PTR_ERR(file);
	} else {
		ret = -EIO;
		if (file->f_op->write) {
			pos = (loff_t) page->index << PAGE_SHIFT;
			data = kmap(page);
			old_fs = get_fs();
			set_fs(KERNEL_DS);
			ret = file->f_op->write(
				file, (const void __user *) data, PAGE_SIZE,
				&pos);
			set_fs(old_fs);
			kunmap(page);
			if (ret != PAGE_SIZE)
				ret = -EIO;
		}
		fput(file);
	}

	if (ret < 0) {
		if (ret == -EIO)
			cachefiles_io_error_obj(
				object, "Write page to backing file failed");
		ret = -ENOBUFS;
	}

	_leave(" = %d", ret);
	return ret;
}

/*
 * detach a backing block from a page
 * - cache withdrawal is prevented by the caller
 */
void cachefiles_uncache_page(struct fscache_object *_object, struct page *page)
{
	struct cachefiles_object *object;
	struct cachefiles_cache *cache;

	object = container_of(_object, struct cachefiles_object, fscache);
	cache = container_of(object->fscache.cache,
			     struct cachefiles_cache, cache);

	_enter("%p,{%lu}", object, page->index);

	spin_unlock(&object->fscache.cookie->lock);
}