linux/fs/squashfs/decompressor.h


/* SPDX-License-Identifier: GPL-2.0-or-later */
#ifndef DECOMPRESSOR_H
#define DECOMPRESSOR_H
/*
 * Squashfs - a compressed read only filesystem for Linux
 *
 * Copyright (c) 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009
 * Phillip Lougher <phillip@squashfs.org.uk>
 *
 * decompressor.h
 */
#include <linux/bio.h>
struct squashfs_decompressor {
	void	*(*init)(struct squashfs_sb_info *, void *);
	void	*(*comp_opts)(struct squashfs_sb_info *, void *, int);
	void	(*free)(void *);
	int	(*decompress)(struct squashfs_sb_info *, void *,
		struct bio *, int, int, struct squashfs_page_actor *);
	int	id;
	char	*name;
	int	alloc_buffer;
	int	supported;
};
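/*
 * Illustrative sketch only: a compressor backend is expected to populate
 * one of these operation tables and export it through the extern
 * declarations further down.  The names used here (example_init,
 * example_uncompress, EXAMPLE_COMPRESSION, ...) are hypothetical
 * placeholders, not symbols defined by Squashfs.
 *
 *	static const struct squashfs_decompressor example_comp_ops = {
 *		.init		= example_init,
 *		.comp_opts	= NULL,
 *		.free		= example_free,
 *		.decompress	= example_uncompress,
 *		.id		= EXAMPLE_COMPRESSION,
 *		.name		= "example",
 *		.alloc_buffer	= 1,
 *		.supported	= 1
 *	};
 */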
static inline void *squashfs_comp_opts(struct squashfs_sb_info *msblk,
							void *buff, int length)
{
	return msblk->decompressor->comp_opts ?
		msblk->decompressor->comp_opts(msblk, buff, length) : NULL;
}
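/*
 * Hedged usage sketch: the mount path would typically hand the on-disk
 * compression options block to the active decompressor through this
 * helper.  The buffer/length variables below are placeholders for
 * however the caller obtained that block; backends implementing
 * comp_opts are assumed to return an ERR_PTR on failure.
 *
 *	void *comp_opts = squashfs_comp_opts(msblk, buffer, length);
 *
 *	if (IS_ERR(comp_opts))
 *		return PTR_ERR(comp_opts);
 *
 * A backend with no comp_opts hook simply yields NULL here.
 */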
#ifdef CONFIG_SQUASHFS_XZ
extern const struct squashfs_decompressor squashfs_xz_comp_ops;
#endif
#ifdef CONFIG_SQUASHFS_LZ4
extern const struct squashfs_decompressor squashfs_lz4_comp_ops;
#endif
#ifdef CONFIG_SQUASHFS_LZO
extern const struct squashfs_decompressor squashfs_lzo_comp_ops;
#endif
#ifdef CONFIG_SQUASHFS_ZLIB
extern const struct squashfs_decompressor squashfs_zlib_comp_ops;
#endif
#ifdef CONFIG_SQUASHFS_ZSTD
extern const struct squashfs_decompressor squashfs_zstd_comp_ops;
#endif
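/*
 * Hedged sketch of how these per-algorithm tables might be consumed: the
 * decompressor lookup code (decompressor.c in this directory) can gather
 * the declarations above into an array and match on the id field recorded
 * in the superblock.  The array and lookup below are illustrative, not
 * the actual kernel table.
 *
 *	static const struct squashfs_decompressor *decompressor[] = {
 *	#ifdef CONFIG_SQUASHFS_ZLIB
 *		&squashfs_zlib_comp_ops,
 *	#endif
 *	#ifdef CONFIG_SQUASHFS_ZSTD
 *		&squashfs_zstd_comp_ops,
 *	#endif
 *	};
 *
 *	static const struct squashfs_decompressor *lookup(int id)
 *	{
 *		int i;
 *
 *		for (i = 0; i < ARRAY_SIZE(decompressor); i++)
 *			if (decompressor[i]->id == id)
 *				return decompressor[i];
 *		return NULL;
 *	}
 */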
#endif