On our backend we can do something similar to LIO, where we can enable and
disable UNMAP support on the fly. In the SCSI/block layer we can detect
this by just doing a rescan. However, LIO cannot detect this change because
we only check for UNMAP support during the initial configuration. This patch
allows UNMAP detection to also happen when the user tries to turn it on.
Link: https://lore.kernel.org/r/20220628200230.15052-6-michael.christie@oracle.com
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Add an acls/{ACL}/attrib/authentication attribute that controls authentication
for a particular ACL. By default, this attribute inherits the value of the
authentication attribute of the target port group to keep backward
compatibility.
The authentication attribute has three states:
"0" - authentication is turned off for this ACL
"1" - authentication is required for this ACL
"-1" - authentication is inherited from TPG
Link: https://lore.kernel.org/r/20220523095905.26070-4-d.bogdanov@yadro.com
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Konstantin Shelekhin <k.shelekhin@yadro.com>
Reviewed-by: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Dmitry Bogdanov <d.bogdanov@yadro.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Pull SCSI updates from James Bottomley:
"This consists of a small set of driver updates (lpfc, ufs, mpt3sas
mpi3mr, iscsi target). Apart from that this is mostly small fixes with
very few core changes (the biggest one being VPD caching)"
* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (177 commits)
scsi: target: tcmu: Avoid holding XArray lock when calling lock_page
scsi: elx: efct: Remove NULL check after calling container_of()
scsi: dpt_i2o: Drop redundant spinlock initialization
scsi: qedf: Remove redundant variable op
scsi: hisi_sas: Fix memory ordering in hisi_sas_task_deliver()
scsi: fnic: Replace DMA mask of 64 bits with 47 bits
scsi: mpi3mr: Add target device related sysfs attributes
scsi: mpi3mr: Add shost related sysfs attributes
scsi: elx: efct: Remove redundant memset() statement
scsi: megaraid_sas: Remove redundant memset() statement
scsi: mpi3mr: Return error if dma_alloc_coherent() fails
scsi: hisi_sas: Fix rescan after deleting a disk
scsi: hisi_sas: Use sas_ata_wait_after_reset() in IT nexus reset
scsi: libsas: Refactor sas_ata_hard_reset()
scsi: mpt3sas: Update driver version to 42.100.00.00
scsi: mpt3sas: Fix junk chars displayed while printing ChipName
scsi: ipr: Use kobj_to_dev()
scsi: mpi3mr: Fix a NULL vs IS_ERR() bug in mpi3mr_bsg_init()
scsi: bnx2fc: Avoid using get_cpu() in bnx2fc_cmd_alloc()
scsi: libfc: Remove get_cpu() semantics in fc_exch_em_alloc()
...
Pull block updates from Jens Axboe:
"Here are the core block changes for 5.19. This contains:
- blk-throttle accounting fix (Laibin)
- Series removing redundant assignments (Michal)
- Expose bio cache via the bio_set, so that DM can use it (Mike)
- Finish off the bio allocation interface cleanups by dealing with
the weirdest member of the family. bio_kmalloc combines a kmalloc
for the bio and bio_vecs with a hidden bio_init call and magic
cleanup semantics (Christoph)
- Clean up the block layer API so that APIs consumed by file systems
are (almost) only struct block_device based, so that file systems
don't have to poke into block layer internals like the
request_queue (Christoph)
- Clean up the blk_execute_rq* API (Christoph)
- Clean up various loose ends in the blk-cgroup code to make it easier
to follow in preparation of reworking the blkcg assignment for bios
(Christoph)
- Fix use-after-free issues in BFQ when processes with merged queues
get moved to different cgroups (Jan)
- BFQ fixes (Jan)
- Various fixes and cleanups (Bart, Chengming, Fanjun, Julia, Ming,
Wolfgang, me)"
* tag 'for-5.19/block-2022-05-22' of git://git.kernel.dk/linux-block: (83 commits)
blk-mq: fix typo in comment
bfq: Remove bfq_requeue_request_body()
bfq: Remove superfluous conversion from RQ_BIC()
bfq: Allow current waker to defend against a tentative one
bfq: Relax waker detection for shared queues
blk-cgroup: delete rcu_read_lock_held() WARN_ON_ONCE()
blk-throttle: Set BIO_THROTTLED when bio has been throttled
blk-cgroup: Remove unnecessary rcu_read_lock/unlock()
blk-cgroup: always terminate io.stat lines
block, bfq: make bfq_has_work() more accurate
block, bfq: protect 'bfqd->queued' by 'bfqd->lock'
block: cleanup the VM accounting in submit_bio
block: Fix the bio.bi_opf comment
block: reorder the REQ_ flags
blk-iocost: combine local_stat and desc_stat to stat
block: improve the error message from bio_check_eod
block: allow passing a NULL bdev to bio_alloc_clone/bio_init_clone
block: remove superfluous calls to blkcg_bio_issue_init
kthread: unexport kthread_blkcg
blk-cgroup: cleanup blkcg_maybe_throttle_current
...
In tcmu_blocks_release(), lock_page() is called to prevent a race causing
possible data corruption. Since lock_page() might sleep, calling it while
holding the XArray lock is a bug.
To fix this, replace the xas_for_each() call with xa_for_each_range().
Since the latter does its own handling of XArray locking, the xas_lock()
and xas_unlock() calls around the original loop are no longer necessary.
The switch to xa_for_each_range() slows down the loop slightly. This is
acceptable since tcmu_blocks_release() is not relevant for performance.
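A minimal sketch of the resulting loop (field and variable names follow the
driver but details are simplified); xa_for_each_range() takes and drops the
XArray lock internally around each lookup, so the body is free to sleep in
lock_page():

struct page *page;
unsigned long dpi;

xa_for_each_range(&udev->data_pages, dpi, page, first, last) {
        xa_erase(&udev->data_pages, dpi);
        /* No XArray lock is held here, so sleeping is allowed. */
        lock_page(page);
        unlock_page(page);
        __free_page(page);
}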
Link: https://lore.kernel.org/r/20220517192913.21405-1-bostroesser@gmail.com
Fixes: bb9b9eb0ae ("scsi: target: tcmu: Fix possible data corruption")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Bodo Stroesser <bostroesser@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
In commit d72d827f2f, I used 'cpumask_t' incorrectly:
void iscsit_thread_get_cpumask(struct iscsi_conn *conn)
{
        int ord, cpu;
        cpumask_t conn_allowed_cpumask;
        ...
}

static ssize_t lio_target_wwn_cpus_allowed_list_store(
        struct config_item *item, const char *page, size_t count)
{
        int ret;
        char *orig;
        cpumask_t new_allowed_cpumask;
        ...
}
The correct pattern, which avoids placing a potentially large cpumask on the
stack, is as follows:

cpumask_var_t mask;

if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
        return -ENOMEM;

/* ... use 'mask' here ... */

free_cpumask_var(mask);
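Applied to the store handler above, a sketch of the fixed version could look
roughly like this (illustrative only; the parsing details and the code that
applies the new mask are omitted):

static ssize_t lio_target_wwn_cpus_allowed_list_store(
        struct config_item *item, const char *page, size_t count)
{
        cpumask_var_t new_allowed_cpumask;
        int ret;

        if (!zalloc_cpumask_var(&new_allowed_cpumask, GFP_KERNEL))
                return -ENOMEM;

        ret = cpulist_parse(page, new_allowed_cpumask);
        /* ... apply new_allowed_cpumask to the target here ... */

        free_cpumask_var(new_allowed_cpumask);
        return ret ? ret : count;
}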
Link: https://lore.kernel.org/r/20220516054721.1548-1-mingzhe.zou@easystack.cn
Fixes: d72d827f2f ("scsi: target: Add iscsi/cpus_allowed_list in configfs")
Reported-by: Test Bot <zgrieee@gmail.com>
Reviewed-by: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Mingzhe Zou <mingzhe.zou@easystack.cn>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
When tcmu_vma_fault() gets a page successfully, find_free_blocks() may run
and call unmap_mapping_range() to unmap the page before the current context
completes the page fault procedure. Assume find_free_blocks() completes
first and the interrupted page fault procedure then resumes and completes:
a truncated page has now been mapped to userspace. Note that
tcmu_vma_fault() has taken a refcount on the page, so no other subsystem
will be able to use the page unless the userspace address is unmapped later.
If another command subsequently runs and needs to extend dbi_thresh, it may
reuse the corresponding slot for the previous page in data_bitmap. Then,
although we allocate a new page for this slot in the data_area, no page
fault will happen because the old mapping is still valid, and the real
request's data will be lost.
Filesystem implementations also run into this issue, but they usually lock
the page when vm_operations_struct->fault gets a page and unlock it after
finish_fault() completes. For truncate, filesystems lock pages in
truncate_inode_pages() to protect against racing page faults.
To fix this possible data corruption scenario we can apply a method similar
to the filesystems: tcmu_blocks_release() locks and unlocks pages that are
to be freed, and tcmu_vma_fault() also locks the found page under
cmdr_lock. At the same time, since tcmu_vma_fault() takes an extra page
refcount, tcmu_blocks_release() won't free pages that are still in the page
fault procedure, which means it is safe to call tcmu_blocks_release() before
unmap_mapping_range().
With these changes tcmu_blocks_release() will wait for all page faults to
be completed before calling unmap_mapping_range(). And later, if
unmap_mapping_range() is called, it will ensure stale mappings are removed.
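A simplified sketch of the fault path with the page lock held across the
fault (the page lookup and structure members are abbreviated, not the exact
driver code):

static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
{
        struct tcmu_dev *udev = vmf->vma->vm_private_data;
        struct page *page;

        mutex_lock(&udev->cmdr_lock);
        page = xa_load(&udev->data_pages, vmf->pgoff);  /* simplified lookup */
        if (page) {
                get_page(page);
                lock_page(page);        /* new: lock the page under cmdr_lock */
        }
        mutex_unlock(&udev->cmdr_lock);

        if (!page)
                return VM_FAULT_SIGBUS;

        vmf->page = page;
        return VM_FAULT_LOCKED;         /* mm core unlocks after finish_fault() */
}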
Link: https://lore.kernel.org/r/20220421023735.9018-1-xiaoguang.wang@linux.alibaba.com
Reviewed-by: Bodo Stroesser <bostroesser@gmail.com>
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
With tape devices, the SCF_TREAT_READ_AS_NORMAL flag is used by the target
subsystem to mark commands which have both data to return as well as sense
data. But with pscsi, SCF_TREAT_READ_AS_NORMAL can be set even if there is
no data to return. The SCF_TREAT_READ_AS_NORMAL flag causes the target core
to call iscsit data-in callbacks even if there is no data, which iscsit
does not support. This results in iscsit going into an error state
requiring recovery and being unable to complete the command to the
initiator.
This issue can be resolved by fixing pscsi to only set
SCF_TREAT_READ_AS_NORMAL if there is valid data to return alongside the
sense data.
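A sketch of the intended condition (hedged; the exact pscsi variables differ,
this only illustrates the "sense data plus actual read data" rule, where cmd
is the struct se_cmd and sense_valid is an assumed local):

/* Only mark the command if it really returns data alongside the sense. */
if (sense_valid && cmd->data_length &&
    cmd->data_direction == DMA_FROM_DEVICE)
        cmd->se_cmd_flags |= SCF_TREAT_READ_AS_NORMAL;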
Link: https://lore.kernel.org/r/20220427183250.291881-1-djeffery@redhat.com
Fixes: bd81372065 ("scsi: target: transport should handle st FM/EOM/ILI reads")
Reported-by: Scott Hamilton <scott.hamilton@atos.net>
Tested-by: Laurence Oberman <loberman@redhat.com>
Reviewed-by: Laurence Oberman <loberman@redhat.com>
Signed-off-by: David Jeffery <djeffery@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Pull in 5.18 fixes branch which contains a bunch of fixes required for
the lpfc driver update.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
For block devices, the SCSI target driver implements UNMAP as calls to
blkdev_issue_discard(), which does not guarantee zeroing just because
Write Zeroes is supported.
Note that this does not affect the file-backed path, which uses
fallocate to punch holes.
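As a hedged illustration of the distinction (not part of the patch): an
operation that must leave zeroes behind has to go through
blkdev_issue_zeroout(), which falls back to explicitly writing zeroes,
whereas blkdev_issue_discard() only deallocates and may leave stale data
readable:

static int zero_block_range(struct block_device *bdev, sector_t sector,
                            sector_t nr_sects)
{
        /* Guaranteed zeroing, independent of what discard does. */
        return blkdev_issue_zeroout(bdev, sector, nr_sects, GFP_KERNEL, 0);
}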
Fixes: 2237498f0b ("target/iblock: Convert WRITE_SAME to blkdev_issue_zeroout")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20220415045258.199825-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
The target driver prevents the users from changing the database root
directory if a target module like ib_srpt has been registered. This makes
it difficult for users to set their preferred database directory if the
module gets loaded during the system boot.
Let the users modify dbroot if there are no registered devices.
Link: https://lore.kernel.org/r/20220328103940.19977-1-mlombard@redhat.com
Reviewed-by: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
The RX/TX threads for an iSCSI connection can be scheduled to any online CPU
and will not be rescheduled.
When other heavily loaded threads are bound to the same CPU as an iSCSI
connection's RX/TX threads, iSCSI performance suffers.
Add iscsi/cpus_allowed_list in configfs. The available CPU set of the iSCSI
connection RX/TX threads is allowed_cpus & online_cpus. If it is modified,
all RX/TX threads will be rescheduled.
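A minimal sketch of the effective-mask computation (the helper name and the
conn fields are illustrative, not necessarily the patch's exact layout):

static void iscsit_bind_conn_threads(struct iscsi_conn *conn)
{
        cpumask_var_t effective;

        if (!zalloc_cpumask_var(&effective, GFP_KERNEL))
                return;

        /* Effective CPU set = allowed_cpus & online_cpus. */
        cpumask_and(effective, conn->allowed_cpumask, cpu_online_mask);
        if (!cpumask_empty(effective)) {
                set_cpus_allowed_ptr(conn->rx_thread, effective);
                set_cpus_allowed_ptr(conn->tx_thread, effective);
        }
        free_cpumask_var(effective);
}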
Link: https://lore.kernel.org/r/20220301075500.14266-1-mingzhe.zou@easystack.cn
Reviewed-by: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Mingzhe Zou <mingzhe.zou@easystack.cn>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Now that each scsi_request is backed by a scsi_cmnd, there is no need to
indirect the CDB storage. Change all submitters of SCSI passthrough
requests to store the CDB information directly in the scsi_cmnd, and while
doing so allocate the full 32 bytes that cover all Linux supported SCSI
hosts instead of requiring dynamic allocation for > 16 byte CDBs. On
64-bit systems this does not change the size of the scsi_cmnd at all, while
on 32-bit systems it slightly increases it for now, but that increase will
be made up by the removal of the remaining scsi_request fields.
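As an illustration of the new convention for a passthrough submitter (a
simplified sketch; rq and len are assumed to come from the caller):

struct scsi_cmnd *scmd = blk_mq_rq_to_pdu(rq);

/* The CDB now lives directly in the scsi_cmnd; no separate allocation. */
memset(scmd->cmnd, 0, sizeof(scmd->cmnd));
scmd->cmd_len = COMMAND_SIZE(INQUIRY);
scmd->cmnd[0] = INQUIRY;
scmd->cmnd[4] = len;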
Link: https://lore.kernel.org/r/20220224175552.988286-4-hch@lst.de
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: John Garry <john.garry@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Make cmd_ring_size changeable, similar to the way it is done for
max_data_area_mb. The reason is that our tcmu client will create thousands
of tcmu instances, and this consumes a lot of memory with the default 8MB
cmd ring size for every backstore.
One can change the value by typing:

  echo "cmd_ring_size_mb=N" > control

"N" is an integer between 1 and 8. If it is set to 1, the cmd ring can hold
at least about 6k commands (a tcmu_cmd_entry is about 176 bytes).
The value is printed when doing:

  cat info

In addition, a new read-only attribute 'cmd_ring_size_mb' returns the value
on read.
Link: https://lore.kernel.org/r/1644978109-14885-1-git-send-email-kanie@linux.alibaba.com
Reviewed-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Reviewed-by: Bodo Stroesser <bostroesser@gmail.com>
Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Pass the block_device and operation that we plan to use this bio for to
bio_alloc to optimize the assignment. NULL/0 can be passed, both for the
passthrough case on a raw request_queue and to temporarily avoid
refactoring some nasty code.
Also move the gfp_mask argument after the nr_vecs argument for a much
more logical calling convention matching what most of the kernel does.
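With the new calling convention a typical allocation looks like this (a
generic sketch; bdev, nr_vecs and sector are assumed to be provided by the
caller):

struct bio *bio;

bio = bio_alloc(bdev, nr_vecs, REQ_OP_WRITE, GFP_NOIO);
bio->bi_iter.bi_sector = sector;
/* bi_bdev and bi_opf are already set up by bio_alloc(). */
submit_bio(bio);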
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20220124091107.642561-18-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Pass the block_device and operation that we plan to use this bio for to
bio_alloc_bioset to optimize the assignment. NULL/0 can be passed, both
for the passthrough case on a raw request_queue and to temporarily avoid
refactoring some nasty code.
Also move the gfp_mask argument after the nr_vecs argument for a much
more logical calling convention matching what most of the kernel does.
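The bio_set-based variant follows the same convention (again a generic
sketch; bs is the caller's bio_set and the other locals come from the
caller):

struct bio *bio;

bio = bio_alloc_bioset(bdev, nr_vecs, REQ_OP_READ, GFP_NOIO, bs);
bio->bi_iter.bi_sector = sector;
submit_bio(bio);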
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20220124091107.642561-16-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>