Merge tag 'vfio-v6.0-rc1' of https://github.com/awilliam/linux-vfio

Pull VFIO updates from Alex Williamson:

 - Cleanup use of extern in function prototypes (Alex Williamson)

 - Simplify bus_type usage and convert to device IOMMU interfaces (Robin
   Murphy)

 - Check missed return value and fix comment typos (Bo Liu)

 - Split migration ops from device ops and fix races in mlx5 migration
   support (Yishai Hadas)

 - Fix missed return value check in noiommu support (Liam Ni)

 - Hardening to clear buffer pointer to avoid use-after-free (Schspa
   Shi)

 - Remove requirement that only the same mm can unmap a previously
   mapped range (Li Zhe)

 - Adjust semaphore release vs device open counter (Yi Liu)

 - Remove unused arg from SPAPR support code (Deming Wang)

 - Rework vfio-ccw driver to better fit new mdev framework (Eric Farman,
   Michael Kawano)

 - Replace DMA unmap notifier with callbacks (Jason Gunthorpe)

 - Clarify SPAPR support comment relative to iommu_ops (Alexey
   Kardashevskiy)

 - Revise page pinning API towards compatibility with future iommufd
   support (Nicolin Chen); the new calling convention is sketched just
   after this list

 - Resolve issues in vfio-ccw, including use of DMA unmap callback (Eric
   Farman)
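
For orientation, the page-pinning change condenses to the following
before/after signatures (taken from the documentation diff further down):

    /* Before: caller passes an array of user PFNs, receives host PFNs. */
    int vfio_pin_pages(struct vfio_device *device, unsigned long *user_pfn,
                       int npage, int prot, unsigned long *phys_pfn);

    /* After: caller passes a starting IOVA, receives struct page pointers.
     * vfio_unpin_pages() also becomes void ("vfio: Make vfio_unpin_pages()
     * return void" in the shortlog below). */
    int vfio_pin_pages(struct vfio_device *device, dma_addr_t iova,
                       int npage, int prot, struct page **pages);
    void vfio_unpin_pages(struct vfio_device *device, dma_addr_t iova,
                          int npage);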

* tag 'vfio-v6.0-rc1' of https://github.com/awilliam/linux-vfio: (40 commits)
  vfio/pci: fix the wrong word
  vfio/ccw: Check return code from subchannel quiesce
  vfio/ccw: Remove FSM Close from remove handlers
  vfio/ccw: Add length to DMA_UNMAP checks
  vfio: Replace phys_pfn with pages for vfio_pin_pages()
  vfio/ccw: Add kmap_local_page() for memcpy
  vfio: Rename user_iova of vfio_dma_rw()
  vfio/ccw: Change pa_pfn list to pa_iova list
  vfio/ap: Change saved_pfn to saved_iova
  vfio: Pass in starting IOVA to vfio_pin/unpin_pages API
  vfio/ccw: Only pass in contiguous pages
  vfio/ap: Pass in physical address of ind to ap_aqic()
  drm/i915/gvt: Replace roundup with DIV_ROUND_UP
  vfio: Make vfio_unpin_pages() return void
  vfio/spapr_tce: Fix the comment
  vfio: Replace the iommu notifier with a device list
  vfio: Replace the DMA unmapping notifier with a callback
  vfio/ccw: Move FSM open/close to MDEV open/close
  vfio/ccw: Refactor vfio_ccw_mdev_reset
  vfio/ccw: Create a CLOSE FSM event
  ...
Linus Torvalds 2022-08-06 08:59:35 -07:00
commit a9cf69d0e7
29 changed files with 659 additions and 768 deletions

Documentation/driver-api/vfio-mediated-device.rst

@ -112,11 +112,11 @@ to register and unregister itself with the core driver:
* Register::
extern int mdev_register_driver(struct mdev_driver *drv);
int mdev_register_driver(struct mdev_driver *drv);
* Unregister::
extern void mdev_unregister_driver(struct mdev_driver *drv);
void mdev_unregister_driver(struct mdev_driver *drv);
The mediated bus driver's probe function should create a vfio_device on top of
the mdev_device and connect it to an appropriate implementation of
@ -125,8 +125,8 @@ vfio_device_ops.
When a driver wants to add the GUID creation sysfs to an existing device it has
probe'd to then it should call::
extern int mdev_register_device(struct device *dev,
struct mdev_driver *mdev_driver);
int mdev_register_device(struct device *dev,
struct mdev_driver *mdev_driver);
This will provide the 'mdev_supported_types/XX/create' files which can then be
used to trigger the creation of a mdev_device. The created mdev_device will be
@ -134,7 +134,7 @@ attached to the specified driver.
When the driver needs to remove itself it calls::
extern void mdev_unregister_device(struct device *dev);
void mdev_unregister_device(struct device *dev);
Which will unbind and destroy all the created mdevs and remove the sysfs files.
@ -260,10 +260,10 @@ Translation APIs for Mediated Devices
The following APIs are provided for translating user pfn to host pfn in a VFIO
driver::
int vfio_pin_pages(struct vfio_device *device, unsigned long *user_pfn,
int npage, int prot, unsigned long *phys_pfn);
int vfio_pin_pages(struct vfio_device *device, dma_addr_t iova,
int npage, int prot, struct page **pages);
int vfio_unpin_pages(struct vfio_device *device, unsigned long *user_pfn,
void vfio_unpin_pages(struct vfio_device *device, dma_addr_t iova,
int npage);
These functions call back into the back-end IOMMU module by using the pin_pages
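
As a usage illustration only (this helper is hypothetical, not part of the
series), pinning and reading one guest page under the new interface could
look like:

    /* Pin the page backing a guest IOVA, read a byte through a temporary
     * kernel mapping, then unpin.  Error handling abbreviated. */
    static int demo_read_guest_byte(struct vfio_device *vdev,
                                    dma_addr_t iova, u8 *val)
    {
        struct page *page;
        void *va;
        int ret;

        ret = vfio_pin_pages(vdev, iova & PAGE_MASK, 1, IOMMU_READ, &page);
        if (ret != 1)
            return ret < 0 ? ret : -EINVAL;

        va = kmap_local_page(page);
        *val = *((u8 *)va + (iova & ~PAGE_MASK));
        kunmap_local(va);

        vfio_unpin_pages(vdev, iova & PAGE_MASK, 1);
        return 0;
    }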

arch/s390/include/asm/ap.h

@ -227,13 +227,13 @@ struct ap_qirq_ctrl {
* ap_aqic(): Control interruption for a specific AP.
* @qid: The AP queue number
* @qirqctrl: struct ap_qirq_ctrl (64 bit value)
* @ind: The notification indicator byte
* @pa_ind: Physical address of the notification indicator byte
*
* Returns AP queue status.
*/
static inline struct ap_queue_status ap_aqic(ap_qid_t qid,
struct ap_qirq_ctrl qirqctrl,
void *ind)
phys_addr_t pa_ind)
{
unsigned long reg0 = qid | (3UL << 24); /* fc 3UL is AQIC */
union {
@ -241,7 +241,7 @@ static inline struct ap_queue_status ap_aqic(ap_qid_t qid,
struct ap_qirq_ctrl qirqctrl;
struct ap_queue_status status;
} reg1;
unsigned long reg2 = virt_to_phys(ind);
unsigned long reg2 = pa_ind;
reg1.qirqctrl = qirqctrl;
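
The matching caller-side conversion appears in ap_queue.c further down:
the virtual-to-physical translation moves out of ap_aqic() and into its
callers, i.e.:

    /* Before: ap_aqic() translated the NIB pointer internally. */
    status = ap_aqic(aq->qid, qirqctrl, ind);                  /* void *ind */

    /* After: the caller hands over a physical address. */
    status = ap_aqic(aq->qid, qirqctrl, virt_to_phys(ind));    /* phys_addr_t */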

drivers/gpu/drm/i915/gvt/gvt.h

@ -226,7 +226,6 @@ struct intel_vgpu {
unsigned long nr_cache_entries;
struct mutex cache_lock;
struct notifier_block iommu_notifier;
atomic_t released;
struct kvm_page_track_notifier_node track_node;

drivers/gpu/drm/i915/gvt/kvmgt.c

@ -231,57 +231,38 @@ static void intel_gvt_cleanup_vgpu_type_groups(struct intel_gvt *gvt)
static void gvt_unpin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
unsigned long size)
{
struct drm_i915_private *i915 = vgpu->gvt->gt->i915;
int total_pages;
int npage;
int ret;
total_pages = roundup(size, PAGE_SIZE) / PAGE_SIZE;
for (npage = 0; npage < total_pages; npage++) {
unsigned long cur_gfn = gfn + npage;
ret = vfio_unpin_pages(&vgpu->vfio_device, &cur_gfn, 1);
drm_WARN_ON(&i915->drm, ret != 1);
}
vfio_unpin_pages(&vgpu->vfio_device, gfn << PAGE_SHIFT,
DIV_ROUND_UP(size, PAGE_SIZE));
}
/* Pin a normal or compound guest page for dma. */
static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
unsigned long size, struct page **page)
{
unsigned long base_pfn = 0;
int total_pages;
int total_pages = DIV_ROUND_UP(size, PAGE_SIZE);
struct page *base_page = NULL;
int npage;
int ret;
total_pages = roundup(size, PAGE_SIZE) / PAGE_SIZE;
/*
* We pin the pages one-by-one to avoid allocating a big arrary
* on stack to hold pfns.
*/
for (npage = 0; npage < total_pages; npage++) {
unsigned long cur_gfn = gfn + npage;
unsigned long pfn;
dma_addr_t cur_iova = (gfn + npage) << PAGE_SHIFT;
struct page *cur_page;
ret = vfio_pin_pages(&vgpu->vfio_device, &cur_gfn, 1,
IOMMU_READ | IOMMU_WRITE, &pfn);
ret = vfio_pin_pages(&vgpu->vfio_device, cur_iova, 1,
IOMMU_READ | IOMMU_WRITE, &cur_page);
if (ret != 1) {
gvt_vgpu_err("vfio_pin_pages failed for gfn 0x%lx, ret %d\n",
cur_gfn, ret);
goto err;
}
if (!pfn_valid(pfn)) {
gvt_vgpu_err("pfn 0x%lx is not mem backed\n", pfn);
npage++;
ret = -EFAULT;
gvt_vgpu_err("vfio_pin_pages failed for iova %pad, ret %d\n",
&cur_iova, ret);
goto err;
}
if (npage == 0)
base_pfn = pfn;
else if (base_pfn + npage != pfn) {
base_page = cur_page;
else if (base_page + npage != cur_page) {
gvt_vgpu_err("The pages are not continuous\n");
ret = -EINVAL;
npage++;
@ -289,7 +270,7 @@ static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
}
}
*page = pfn_to_page(base_pfn);
*page = base_page;
return 0;
err:
gvt_unpin_guest_page(vgpu, gfn, npage * PAGE_SIZE);
@ -729,34 +710,25 @@ int intel_gvt_set_edid(struct intel_vgpu *vgpu, int port_num)
return ret;
}
static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
unsigned long action, void *data)
static void intel_vgpu_dma_unmap(struct vfio_device *vfio_dev, u64 iova,
u64 length)
{
struct intel_vgpu *vgpu =
container_of(nb, struct intel_vgpu, iommu_notifier);
struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
struct gvt_dma *entry;
u64 iov_pfn = iova >> PAGE_SHIFT;
u64 end_iov_pfn = iov_pfn + length / PAGE_SIZE;
if (action == VFIO_IOMMU_NOTIFY_DMA_UNMAP) {
struct vfio_iommu_type1_dma_unmap *unmap = data;
struct gvt_dma *entry;
unsigned long iov_pfn, end_iov_pfn;
mutex_lock(&vgpu->cache_lock);
for (; iov_pfn < end_iov_pfn; iov_pfn++) {
entry = __gvt_cache_find_gfn(vgpu, iov_pfn);
if (!entry)
continue;
iov_pfn = unmap->iova >> PAGE_SHIFT;
end_iov_pfn = iov_pfn + unmap->size / PAGE_SIZE;
mutex_lock(&vgpu->cache_lock);
for (; iov_pfn < end_iov_pfn; iov_pfn++) {
entry = __gvt_cache_find_gfn(vgpu, iov_pfn);
if (!entry)
continue;
gvt_dma_unmap_page(vgpu, entry->gfn, entry->dma_addr,
entry->size);
__gvt_cache_remove_entry(vgpu, entry);
}
mutex_unlock(&vgpu->cache_lock);
gvt_dma_unmap_page(vgpu, entry->gfn, entry->dma_addr,
entry->size);
__gvt_cache_remove_entry(vgpu, entry);
}
return NOTIFY_OK;
mutex_unlock(&vgpu->cache_lock);
}
static bool __kvmgt_vgpu_exist(struct intel_vgpu *vgpu)
@ -783,36 +755,20 @@ out:
static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
{
struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
unsigned long events;
int ret;
vgpu->iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
ret = vfio_register_notifier(vfio_dev, VFIO_IOMMU_NOTIFY, &events,
&vgpu->iommu_notifier);
if (ret != 0) {
gvt_vgpu_err("vfio_register_notifier for iommu failed: %d\n",
ret);
goto out;
}
ret = -EEXIST;
if (vgpu->attached)
goto undo_iommu;
return -EEXIST;
ret = -ESRCH;
if (!vgpu->vfio_device.kvm ||
vgpu->vfio_device.kvm->mm != current->mm) {
gvt_vgpu_err("KVM is required to use Intel vGPU\n");
goto undo_iommu;
return -ESRCH;
}
kvm_get_kvm(vgpu->vfio_device.kvm);
ret = -EEXIST;
if (__kvmgt_vgpu_exist(vgpu))
goto undo_iommu;
return -EEXIST;
vgpu->attached = true;
@ -831,12 +787,6 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
atomic_set(&vgpu->released, 0);
return 0;
undo_iommu:
vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
&vgpu->iommu_notifier);
out:
return ret;
}
static void intel_vgpu_release_msi_eventfd_ctx(struct intel_vgpu *vgpu)
@ -853,8 +803,6 @@ static void intel_vgpu_release_msi_eventfd_ctx(struct intel_vgpu *vgpu)
static void intel_vgpu_close_device(struct vfio_device *vfio_dev)
{
struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
struct drm_i915_private *i915 = vgpu->gvt->gt->i915;
int ret;
if (!vgpu->attached)
return;
@ -864,11 +812,6 @@ static void intel_vgpu_close_device(struct vfio_device *vfio_dev)
intel_gvt_release_vgpu(vgpu);
ret = vfio_unregister_notifier(&vgpu->vfio_device, VFIO_IOMMU_NOTIFY,
&vgpu->iommu_notifier);
drm_WARN(&i915->drm, ret,
"vfio_unregister_notifier for iommu failed: %d\n", ret);
debugfs_remove(debugfs_lookup(KVMGT_DEBUGFS_FILENAME, vgpu->debugfs));
kvm_page_track_unregister_notifier(vgpu->vfio_device.kvm,
@ -1610,6 +1553,7 @@ static const struct vfio_device_ops intel_vgpu_dev_ops = {
.write = intel_vgpu_write,
.mmap = intel_vgpu_mmap,
.ioctl = intel_vgpu_ioctl,
.dma_unmap = intel_vgpu_dma_unmap,
};
static int intel_vgpu_probe(struct mdev_device *mdev)
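
The notifier-to-callback conversion above repeats in vfio-ccw and vfio-ap
below.  Reduced to its skeleton (my_device and my_unpin_range are
placeholder names, not from this series), the new pattern is:

    static void my_dma_unmap(struct vfio_device *vdev, u64 iova, u64 length)
    {
        struct my_device *md = container_of(vdev, struct my_device, vdev);

        /* Drivers MUST unpin anything pinned inside [iova, iova + length). */
        my_unpin_range(md, iova, length);
    }

    static const struct vfio_device_ops my_dev_ops = {
        /* ...open_device, close_device, read, write, ioctl... */
        .dma_unmap = my_dma_unmap,
    };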

drivers/s390/cio/vfio_ccw_async.c

@ -8,7 +8,6 @@
*/
#include <linux/vfio.h>
#include <linux/mdev.h>
#include "vfio_ccw_private.h"

drivers/s390/cio/vfio_ccw_cp.c

@ -11,6 +11,7 @@
#include <linux/ratelimit.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/highmem.h>
#include <linux/iommu.h>
#include <linux/vfio.h>
#include <asm/idals.h>
@ -18,13 +19,11 @@
#include "vfio_ccw_cp.h"
#include "vfio_ccw_private.h"
struct pfn_array {
/* Starting guest physical I/O address. */
unsigned long pa_iova;
/* Array that stores PFNs of the pages need to pin. */
unsigned long *pa_iova_pfn;
/* Array that receives PFNs of the pages pinned. */
unsigned long *pa_pfn;
struct page_array {
/* Array that stores pages need to pin. */
dma_addr_t *pa_iova;
/* Array that receives the pinned pages. */
struct page **pa_page;
/* Number of pages pinned from @pa_iova. */
int pa_nr;
};
@ -37,116 +36,158 @@ struct ccwchain {
/* Count of the valid ccws in chain. */
int ch_len;
/* Pinned PAGEs for the original data. */
struct pfn_array *ch_pa;
struct page_array *ch_pa;
};
/*
* pfn_array_alloc() - alloc memory for PFNs
* @pa: pfn_array on which to perform the operation
* page_array_alloc() - alloc memory for page array
* @pa: page_array on which to perform the operation
* @iova: target guest physical address
* @len: number of bytes that should be pinned from @iova
*
* Attempt to allocate memory for PFNs.
* Attempt to allocate memory for page array.
*
* Usage of pfn_array:
* We expect (pa_nr == 0) and (pa_iova_pfn == NULL), any field in
* Usage of page_array:
* We expect (pa_nr == 0) and (pa_iova == NULL), any field in
* this structure will be filled in by this function.
*
* Returns:
* 0 if PFNs are allocated
* -EINVAL if pa->pa_nr is not initially zero, or pa->pa_iova_pfn is not NULL
* 0 if page array is allocated
* -EINVAL if pa->pa_nr is not initially zero, or pa->pa_iova is not NULL
* -ENOMEM if alloc failed
*/
static int pfn_array_alloc(struct pfn_array *pa, u64 iova, unsigned int len)
static int page_array_alloc(struct page_array *pa, u64 iova, unsigned int len)
{
int i;
if (pa->pa_nr || pa->pa_iova_pfn)
if (pa->pa_nr || pa->pa_iova)
return -EINVAL;
pa->pa_iova = iova;
pa->pa_nr = ((iova & ~PAGE_MASK) + len + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
if (!pa->pa_nr)
return -EINVAL;
pa->pa_iova_pfn = kcalloc(pa->pa_nr,
sizeof(*pa->pa_iova_pfn) +
sizeof(*pa->pa_pfn),
GFP_KERNEL);
if (unlikely(!pa->pa_iova_pfn)) {
pa->pa_iova = kcalloc(pa->pa_nr,
sizeof(*pa->pa_iova) + sizeof(*pa->pa_page),
GFP_KERNEL);
if (unlikely(!pa->pa_iova)) {
pa->pa_nr = 0;
return -ENOMEM;
}
pa->pa_pfn = pa->pa_iova_pfn + pa->pa_nr;
pa->pa_page = (struct page **)&pa->pa_iova[pa->pa_nr];
pa->pa_iova_pfn[0] = pa->pa_iova >> PAGE_SHIFT;
pa->pa_pfn[0] = -1ULL;
pa->pa_iova[0] = iova;
pa->pa_page[0] = NULL;
for (i = 1; i < pa->pa_nr; i++) {
pa->pa_iova_pfn[i] = pa->pa_iova_pfn[i - 1] + 1;
pa->pa_pfn[i] = -1ULL;
pa->pa_iova[i] = pa->pa_iova[i - 1] + PAGE_SIZE;
pa->pa_page[i] = NULL;
}
return 0;
}
/*
* pfn_array_pin() - Pin user pages in memory
* @pa: pfn_array on which to perform the operation
* page_array_unpin() - Unpin user pages in memory
* @pa: page_array on which to perform the operation
* @vdev: the vfio device to perform the operation
* @pa_nr: number of user pages to unpin
*
* Only unpin if any pages were pinned to begin with, i.e. pa_nr > 0,
* otherwise only clear pa->pa_nr
*/
static void page_array_unpin(struct page_array *pa,
struct vfio_device *vdev, int pa_nr)
{
int unpinned = 0, npage = 1;
while (unpinned < pa_nr) {
dma_addr_t *first = &pa->pa_iova[unpinned];
dma_addr_t *last = &first[npage];
if (unpinned + npage < pa_nr &&
*first + npage * PAGE_SIZE == *last) {
npage++;
continue;
}
vfio_unpin_pages(vdev, *first, npage);
unpinned += npage;
npage = 1;
}
pa->pa_nr = 0;
}
/*
* page_array_pin() - Pin user pages in memory
* @pa: page_array on which to perform the operation
* @mdev: the mediated device to perform pin operations
*
* Returns number of pages pinned upon success.
* If the pin request partially succeeds, or fails completely,
* all pages are left unpinned and a negative error value is returned.
*/
static int pfn_array_pin(struct pfn_array *pa, struct vfio_device *vdev)
static int page_array_pin(struct page_array *pa, struct vfio_device *vdev)
{
int pinned = 0, npage = 1;
int ret = 0;
ret = vfio_pin_pages(vdev, pa->pa_iova_pfn, pa->pa_nr,
IOMMU_READ | IOMMU_WRITE, pa->pa_pfn);
while (pinned < pa->pa_nr) {
dma_addr_t *first = &pa->pa_iova[pinned];
dma_addr_t *last = &first[npage];
if (ret < 0) {
goto err_out;
} else if (ret > 0 && ret != pa->pa_nr) {
vfio_unpin_pages(vdev, pa->pa_iova_pfn, ret);
ret = -EINVAL;
goto err_out;
if (pinned + npage < pa->pa_nr &&
*first + npage * PAGE_SIZE == *last) {
npage++;
continue;
}
ret = vfio_pin_pages(vdev, *first, npage,
IOMMU_READ | IOMMU_WRITE,
&pa->pa_page[pinned]);
if (ret < 0) {
goto err_out;
} else if (ret > 0 && ret != npage) {
pinned += ret;
ret = -EINVAL;
goto err_out;
}
pinned += npage;
npage = 1;
}
return ret;
err_out:
pa->pa_nr = 0;
page_array_unpin(pa, vdev, pinned);
return ret;
}
/* Unpin the pages before releasing the memory. */
static void pfn_array_unpin_free(struct pfn_array *pa, struct vfio_device *vdev)
static void page_array_unpin_free(struct page_array *pa, struct vfio_device *vdev)
{
/* Only unpin if any pages were pinned to begin with */
if (pa->pa_nr)
vfio_unpin_pages(vdev, pa->pa_iova_pfn, pa->pa_nr);
pa->pa_nr = 0;
kfree(pa->pa_iova_pfn);
page_array_unpin(pa, vdev, pa->pa_nr);
kfree(pa->pa_iova);
}
static bool pfn_array_iova_pinned(struct pfn_array *pa, unsigned long iova)
static bool page_array_iova_pinned(struct page_array *pa, u64 iova, u64 length)
{
unsigned long iova_pfn = iova >> PAGE_SHIFT;
u64 iova_pfn_start = iova >> PAGE_SHIFT;
u64 iova_pfn_end = (iova + length - 1) >> PAGE_SHIFT;
u64 pfn;
int i;
for (i = 0; i < pa->pa_nr; i++)
if (pa->pa_iova_pfn[i] == iova_pfn)
for (i = 0; i < pa->pa_nr; i++) {
pfn = pa->pa_iova[i] >> PAGE_SHIFT;
if (pfn >= iova_pfn_start && pfn <= iova_pfn_end)
return true;
}
return false;
}
/* Create the list of IDAL words for a pfn_array. */
static inline void pfn_array_idal_create_words(
struct pfn_array *pa,
unsigned long *idaws)
/* Create the list of IDAL words for a page_array. */
static inline void page_array_idal_create_words(struct page_array *pa,
unsigned long *idaws)
{
int i;
@ -159,10 +200,10 @@ static inline void pfn_array_idal_create_words(
*/
for (i = 0; i < pa->pa_nr; i++)
idaws[i] = pa->pa_pfn[i] << PAGE_SHIFT;
idaws[i] = page_to_phys(pa->pa_page[i]);
/* Adjust the first IDAW, since it may not start on a page boundary */
idaws[0] += pa->pa_iova & (PAGE_SIZE - 1);
idaws[0] += pa->pa_iova[0] & (PAGE_SIZE - 1);
}
static void convert_ccw0_to_ccw1(struct ccw1 *source, unsigned long len)
@ -194,24 +235,24 @@ static void convert_ccw0_to_ccw1(struct ccw1 *source, unsigned long len)
static long copy_from_iova(struct vfio_device *vdev, void *to, u64 iova,
unsigned long n)
{
struct pfn_array pa = {0};
u64 from;
struct page_array pa = {0};
int i, ret;
unsigned long l, m;
ret = pfn_array_alloc(&pa, iova, n);
ret = page_array_alloc(&pa, iova, n);
if (ret < 0)
return ret;
ret = pfn_array_pin(&pa, vdev);
ret = page_array_pin(&pa, vdev);
if (ret < 0) {
pfn_array_unpin_free(&pa, vdev);
page_array_unpin_free(&pa, vdev);
return ret;
}
l = n;
for (i = 0; i < pa.pa_nr; i++) {
from = pa.pa_pfn[i] << PAGE_SHIFT;
void *from = kmap_local_page(pa.pa_page[i]);
m = PAGE_SIZE;
if (i == 0) {
from += iova & (PAGE_SIZE - 1);
@ -219,14 +260,15 @@ static long copy_from_iova(struct vfio_device *vdev, void *to, u64 iova,
}
m = min(l, m);
memcpy(to + (n - l), (void *)from, m);
memcpy(to + (n - l), from, m);
kunmap_local(from);
l -= m;
if (l == 0)
break;
}
pfn_array_unpin_free(&pa, vdev);
page_array_unpin_free(&pa, vdev);
return l;
}
@ -329,7 +371,7 @@ static struct ccwchain *ccwchain_alloc(struct channel_program *cp, int len)
chain->ch_ccw = (struct ccw1 *)data;
data = (u8 *)(chain->ch_ccw) + sizeof(*chain->ch_ccw) * len;
chain->ch_pa = (struct pfn_array *)data;
chain->ch_pa = (struct page_array *)data;
chain->ch_len = len;
@ -513,7 +555,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
struct vfio_device *vdev =
&container_of(cp, struct vfio_ccw_private, cp)->vdev;
struct ccw1 *ccw;
struct pfn_array *pa;
struct page_array *pa;
u64 iova;
unsigned long *idaws;
int ret;
@ -547,13 +589,13 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
}
/*
* Allocate an array of pfn's for pages to pin/translate.
* Allocate an array of pages to pin/translate.
* The number of pages is actually the count of the idaws
* required for the data transfer, since we only only support
* 4K IDAWs today.
*/
pa = chain->ch_pa + idx;
ret = pfn_array_alloc(pa, iova, bytes);
ret = page_array_alloc(pa, iova, bytes);
if (ret < 0)
goto out_free_idaws;
@ -564,21 +606,21 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
goto out_unpin;
/*
* Copy guest IDAWs into pfn_array, in case the memory they
* Copy guest IDAWs into page_array, in case the memory they
* occupy is not contiguous.
*/
for (i = 0; i < idaw_nr; i++)
pa->pa_iova_pfn[i] = idaws[i] >> PAGE_SHIFT;
pa->pa_iova[i] = idaws[i];
} else {
/*
* No action is required here; the iova addresses in pfn_array
* were initialized sequentially in pfn_array_alloc() beginning
* No action is required here; the iova addresses in page_array
* were initialized sequentially in page_array_alloc() beginning
* with the contents of ccw->cda.
*/
}
if (ccw_does_data_transfer(ccw)) {
ret = pfn_array_pin(pa, vdev);
ret = page_array_pin(pa, vdev);
if (ret < 0)
goto out_unpin;
} else {
@ -588,13 +630,13 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
ccw->cda = (__u32) virt_to_phys(idaws);
ccw->flags |= CCW_FLAG_IDA;
/* Populate the IDAL with pinned/translated addresses from pfn */
pfn_array_idal_create_words(pa, idaws);
/* Populate the IDAL with pinned/translated addresses from page */
page_array_idal_create_words(pa, idaws);
return 0;
out_unpin:
pfn_array_unpin_free(pa, vdev);
page_array_unpin_free(pa, vdev);
out_free_idaws:
kfree(idaws);
out_init:
@ -700,7 +742,7 @@ void cp_free(struct channel_program *cp)
cp->initialized = false;
list_for_each_entry_safe(chain, temp, &cp->ccwchain_list, next) {
for (i = 0; i < chain->ch_len; i++) {
pfn_array_unpin_free(chain->ch_pa + i, vdev);
page_array_unpin_free(chain->ch_pa + i, vdev);
ccwchain_cda_free(chain, i);
}
ccwchain_free(chain);
@ -862,11 +904,12 @@ void cp_update_scsw(struct channel_program *cp, union scsw *scsw)
* cp_iova_pinned() - check if an iova is pinned for a ccw chain.
* @cp: channel_program on which to perform the operation
* @iova: the iova to check
* @length: the length to check from @iova
*
* If the @iova is currently pinned for the ccw chain, return true;
* else return false.
*/
bool cp_iova_pinned(struct channel_program *cp, u64 iova)
bool cp_iova_pinned(struct channel_program *cp, u64 iova, u64 length)
{
struct ccwchain *chain;
int i;
@ -876,7 +919,7 @@ bool cp_iova_pinned(struct channel_program *cp, u64 iova)
list_for_each_entry(chain, &cp->ccwchain_list, next) {
for (i = 0; i < chain->ch_len; i++)
if (pfn_array_iova_pinned(chain->ch_pa + i, iova))
if (page_array_iova_pinned(chain->ch_pa + i, iova, length))
return true;
}
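
A detail worth calling out in this file: page_array_pin() and
page_array_unpin() now batch IOVA-contiguous entries into single
vfio_pin_pages()/vfio_unpin_pages() calls.  The run detection, isolated
into a hypothetical helper for clarity:

    /* Count how many entries of iova[], starting at 'start', advance in
     * PAGE_SIZE steps and can therefore be (un)pinned in one call. */
    static int contiguous_run_len(const dma_addr_t *iova, int start, int total)
    {
        int npage = 1;

        while (start + npage < total &&
               iova[start] + npage * PAGE_SIZE == iova[start + npage])
            npage++;

        return npage;
    }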

drivers/s390/cio/vfio_ccw_cp.h

@ -41,11 +41,11 @@ struct channel_program {
struct ccw1 *guest_cp;
};
extern int cp_init(struct channel_program *cp, union orb *orb);
extern void cp_free(struct channel_program *cp);
extern int cp_prefetch(struct channel_program *cp);
extern union orb *cp_get_orb(struct channel_program *cp, u32 intparm, u8 lpm);
extern void cp_update_scsw(struct channel_program *cp, union scsw *scsw);
extern bool cp_iova_pinned(struct channel_program *cp, u64 iova);
int cp_init(struct channel_program *cp, union orb *orb);
void cp_free(struct channel_program *cp);
int cp_prefetch(struct channel_program *cp);
union orb *cp_get_orb(struct channel_program *cp, u32 intparm, u8 lpm);
void cp_update_scsw(struct channel_program *cp, union scsw *scsw);
bool cp_iova_pinned(struct channel_program *cp, u64 iova, u64 length);
#endif

drivers/s390/cio/vfio_ccw_drv.c

@ -14,7 +14,6 @@
#include <linux/init.h>
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/uuid.h>
#include <linux/mdev.h>
#include <asm/isc.h>
@ -42,13 +41,6 @@ int vfio_ccw_sch_quiesce(struct subchannel *sch)
DECLARE_COMPLETION_ONSTACK(completion);
int iretry, ret = 0;
spin_lock_irq(sch->lock);
if (!sch->schib.pmcw.ena)
goto out_unlock;
ret = cio_disable_subchannel(sch);
if (ret != -EBUSY)
goto out_unlock;
iretry = 255;
do {
@ -75,9 +67,7 @@ int vfio_ccw_sch_quiesce(struct subchannel *sch)
spin_lock_irq(sch->lock);
ret = cio_disable_subchannel(sch);
} while (ret == -EBUSY);
out_unlock:
private->state = VFIO_CCW_STATE_NOT_OPER;
spin_unlock_irq(sch->lock);
return ret;
}
@ -107,9 +97,10 @@ static void vfio_ccw_sch_io_todo(struct work_struct *work)
/*
* Reset to IDLE only if processing of a channel program
* has finished. Do not overwrite a possible processing
* state if the final interrupt was for HSCH or CSCH.
* state if the interrupt was unsolicited, or if the final
* interrupt was for HSCH or CSCH.
*/
if (private->mdev && cp_is_finished)
if (cp_is_finished)
private->state = VFIO_CCW_STATE_IDLE;
if (private->io_trigger)
@ -147,7 +138,7 @@ static struct vfio_ccw_private *vfio_ccw_alloc_private(struct subchannel *sch)
private->sch = sch;
mutex_init(&private->io_mutex);
private->state = VFIO_CCW_STATE_NOT_OPER;
private->state = VFIO_CCW_STATE_STANDBY;
INIT_LIST_HEAD(&private->crw);
INIT_WORK(&private->io_work, vfio_ccw_sch_io_todo);
INIT_WORK(&private->crw_work, vfio_ccw_crw_todo);
@ -231,26 +222,15 @@ static int vfio_ccw_sch_probe(struct subchannel *sch)
dev_set_drvdata(&sch->dev, private);
spin_lock_irq(sch->lock);
sch->isc = VFIO_CCW_ISC;
ret = cio_enable_subchannel(sch, (u32)(unsigned long)sch);
spin_unlock_irq(sch->lock);
ret = mdev_register_device(&sch->dev, &vfio_ccw_mdev_driver);
if (ret)
goto out_free;
private->state = VFIO_CCW_STATE_STANDBY;
ret = vfio_ccw_mdev_reg(sch);
if (ret)
goto out_disable;
VFIO_CCW_MSG_EVENT(4, "bound to subchannel %x.%x.%04x\n",
sch->schid.cssid, sch->schid.ssid,
sch->schid.sch_no);
return 0;
out_disable:
cio_disable_subchannel(sch);
out_free:
dev_set_drvdata(&sch->dev, NULL);
vfio_ccw_free_private(private);
@ -261,8 +241,7 @@ static void vfio_ccw_sch_remove(struct subchannel *sch)
{
struct vfio_ccw_private *private = dev_get_drvdata(&sch->dev);
vfio_ccw_sch_quiesce(sch);
vfio_ccw_mdev_unreg(sch);
mdev_unregister_device(&sch->dev);
dev_set_drvdata(&sch->dev, NULL);
@ -275,7 +254,10 @@ static void vfio_ccw_sch_remove(struct subchannel *sch)
static void vfio_ccw_sch_shutdown(struct subchannel *sch)
{
vfio_ccw_sch_quiesce(sch);
struct vfio_ccw_private *private = dev_get_drvdata(&sch->dev);
vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_CLOSE);
vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_NOT_OPER);
}
/**
@ -301,19 +283,11 @@ static int vfio_ccw_sch_event(struct subchannel *sch, int process)
if (work_pending(&sch->todo_work))
goto out_unlock;
if (cio_update_schib(sch)) {
vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_NOT_OPER);
rc = 0;
goto out_unlock;
}
private = dev_get_drvdata(&sch->dev);
if (private->state == VFIO_CCW_STATE_NOT_OPER) {
private->state = private->mdev ? VFIO_CCW_STATE_IDLE :
VFIO_CCW_STATE_STANDBY;
}
rc = 0;
if (cio_update_schib(sch))
vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_NOT_OPER);
out_unlock:
spin_unlock_irqrestore(sch->lock, flags);
@ -358,8 +332,8 @@ static int vfio_ccw_chp_event(struct subchannel *sch,
return 0;
trace_vfio_ccw_chp_event(private->sch->schid, mask, event);
VFIO_CCW_MSG_EVENT(2, "%pUl (%x.%x.%04x): mask=0x%x event=%d\n",
mdev_uuid(private->mdev), sch->schid.cssid,
VFIO_CCW_MSG_EVENT(2, "sch %x.%x.%04x: mask=0x%x event=%d\n",
sch->schid.cssid,
sch->schid.ssid, sch->schid.sch_no,
mask, event);

drivers/s390/cio/vfio_ccw_fsm.c

@ -10,7 +10,8 @@
*/
#include <linux/vfio.h>
#include <linux/mdev.h>
#include <asm/isc.h>
#include "ioasm.h"
#include "vfio_ccw_private.h"
@ -161,8 +162,12 @@ static void fsm_notoper(struct vfio_ccw_private *private,
{
struct subchannel *sch = private->sch;
VFIO_CCW_TRACE_EVENT(2, "notoper");
VFIO_CCW_TRACE_EVENT(2, dev_name(&sch->dev));
VFIO_CCW_MSG_EVENT(2, "sch %x.%x.%04x: notoper event %x state %x\n",
sch->schid.cssid,
sch->schid.ssid,
sch->schid.sch_no,
event,
private->state);
/*
* TODO:
@ -170,6 +175,9 @@ static void fsm_notoper(struct vfio_ccw_private *private,
*/
css_sched_sch_todo(sch, SCH_TODO_UNREG);
private->state = VFIO_CCW_STATE_NOT_OPER;
/* This is usually handled during CLOSE event */
cp_free(&private->cp);
}
/*
@ -242,7 +250,6 @@ static void fsm_io_request(struct vfio_ccw_private *private,
union orb *orb;
union scsw *scsw = &private->scsw;
struct ccw_io_region *io_region = private->io_region;
struct mdev_device *mdev = private->mdev;
char *errstr = "request";
struct subchannel_id schid = get_schid(private);
@ -256,8 +263,8 @@ static void fsm_io_request(struct vfio_ccw_private *private,
if (orb->tm.b) {
io_region->ret_code = -EOPNOTSUPP;
VFIO_CCW_MSG_EVENT(2,
"%pUl (%x.%x.%04x): transport mode\n",
mdev_uuid(mdev), schid.cssid,
"sch %x.%x.%04x: transport mode\n",
schid.cssid,
schid.ssid, schid.sch_no);
errstr = "transport mode";
goto err_out;
@ -265,8 +272,8 @@ static void fsm_io_request(struct vfio_ccw_private *private,
io_region->ret_code = cp_init(&private->cp, orb);
if (io_region->ret_code) {
VFIO_CCW_MSG_EVENT(2,
"%pUl (%x.%x.%04x): cp_init=%d\n",
mdev_uuid(mdev), schid.cssid,
"sch %x.%x.%04x: cp_init=%d\n",
schid.cssid,
schid.ssid, schid.sch_no,
io_region->ret_code);
errstr = "cp init";
@ -276,8 +283,8 @@ static void fsm_io_request(struct vfio_ccw_private *private,
io_region->ret_code = cp_prefetch(&private->cp);
if (io_region->ret_code) {
VFIO_CCW_MSG_EVENT(2,
"%pUl (%x.%x.%04x): cp_prefetch=%d\n",
mdev_uuid(mdev), schid.cssid,
"sch %x.%x.%04x: cp_prefetch=%d\n",
schid.cssid,
schid.ssid, schid.sch_no,
io_region->ret_code);
errstr = "cp prefetch";
@ -289,8 +296,8 @@ static void fsm_io_request(struct vfio_ccw_private *private,
io_region->ret_code = fsm_io_helper(private);
if (io_region->ret_code) {
VFIO_CCW_MSG_EVENT(2,
"%pUl (%x.%x.%04x): fsm_io_helper=%d\n",
mdev_uuid(mdev), schid.cssid,
"sch %x.%x.%04x: fsm_io_helper=%d\n",
schid.cssid,
schid.ssid, schid.sch_no,
io_region->ret_code);
errstr = "cp fsm_io_helper";
@ -300,16 +307,16 @@ static void fsm_io_request(struct vfio_ccw_private *private,
return;
} else if (scsw->cmd.fctl & SCSW_FCTL_HALT_FUNC) {
VFIO_CCW_MSG_EVENT(2,
"%pUl (%x.%x.%04x): halt on io_region\n",
mdev_uuid(mdev), schid.cssid,
"sch %x.%x.%04x: halt on io_region\n",
schid.cssid,
schid.ssid, schid.sch_no);
/* halt is handled via the async cmd region */
io_region->ret_code = -EOPNOTSUPP;
goto err_out;
} else if (scsw->cmd.fctl & SCSW_FCTL_CLEAR_FUNC) {
VFIO_CCW_MSG_EVENT(2,
"%pUl (%x.%x.%04x): clear on io_region\n",
mdev_uuid(mdev), schid.cssid,
"sch %x.%x.%04x: clear on io_region\n",
schid.cssid,
schid.ssid, schid.sch_no);
/* clear is handled via the async cmd region */
io_region->ret_code = -EOPNOTSUPP;
@ -366,6 +373,54 @@ static void fsm_irq(struct vfio_ccw_private *private,
complete(private->completion);
}
static void fsm_open(struct vfio_ccw_private *private,
enum vfio_ccw_event event)
{
struct subchannel *sch = private->sch;
int ret;
spin_lock_irq(sch->lock);
sch->isc = VFIO_CCW_ISC;
ret = cio_enable_subchannel(sch, (u32)(unsigned long)sch);
if (ret)
goto err_unlock;
private->state = VFIO_CCW_STATE_IDLE;
spin_unlock_irq(sch->lock);
return;
err_unlock:
spin_unlock_irq(sch->lock);
vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_NOT_OPER);
}
static void fsm_close(struct vfio_ccw_private *private,
enum vfio_ccw_event event)
{
struct subchannel *sch = private->sch;
int ret;
spin_lock_irq(sch->lock);
if (!sch->schib.pmcw.ena)
goto err_unlock;
ret = cio_disable_subchannel(sch);
if (ret == -EBUSY)
ret = vfio_ccw_sch_quiesce(sch);
if (ret)
goto err_unlock;
private->state = VFIO_CCW_STATE_STANDBY;
spin_unlock_irq(sch->lock);
cp_free(&private->cp);
return;
err_unlock:
spin_unlock_irq(sch->lock);
vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_NOT_OPER);
}
/*
* Device statemachine
*/
@ -375,29 +430,39 @@ fsm_func_t *vfio_ccw_jumptable[NR_VFIO_CCW_STATES][NR_VFIO_CCW_EVENTS] = {
[VFIO_CCW_EVENT_IO_REQ] = fsm_io_error,
[VFIO_CCW_EVENT_ASYNC_REQ] = fsm_async_error,
[VFIO_CCW_EVENT_INTERRUPT] = fsm_disabled_irq,
[VFIO_CCW_EVENT_OPEN] = fsm_nop,
[VFIO_CCW_EVENT_CLOSE] = fsm_nop,
},
[VFIO_CCW_STATE_STANDBY] = {
[VFIO_CCW_EVENT_NOT_OPER] = fsm_notoper,
[VFIO_CCW_EVENT_IO_REQ] = fsm_io_error,
[VFIO_CCW_EVENT_ASYNC_REQ] = fsm_async_error,
[VFIO_CCW_EVENT_INTERRUPT] = fsm_irq,
[VFIO_CCW_EVENT_INTERRUPT] = fsm_disabled_irq,
[VFIO_CCW_EVENT_OPEN] = fsm_open,
[VFIO_CCW_EVENT_CLOSE] = fsm_notoper,
},
[VFIO_CCW_STATE_IDLE] = {
[VFIO_CCW_EVENT_NOT_OPER] = fsm_notoper,
[VFIO_CCW_EVENT_IO_REQ] = fsm_io_request,
[VFIO_CCW_EVENT_ASYNC_REQ] = fsm_async_request,
[VFIO_CCW_EVENT_INTERRUPT] = fsm_irq,
[VFIO_CCW_EVENT_OPEN] = fsm_notoper,
[VFIO_CCW_EVENT_CLOSE] = fsm_close,
},
[VFIO_CCW_STATE_CP_PROCESSING] = {
[VFIO_CCW_EVENT_NOT_OPER] = fsm_notoper,
[VFIO_CCW_EVENT_IO_REQ] = fsm_io_retry,
[VFIO_CCW_EVENT_ASYNC_REQ] = fsm_async_retry,
[VFIO_CCW_EVENT_INTERRUPT] = fsm_irq,
[VFIO_CCW_EVENT_OPEN] = fsm_notoper,
[VFIO_CCW_EVENT_CLOSE] = fsm_close,
},
[VFIO_CCW_STATE_CP_PENDING] = {
[VFIO_CCW_EVENT_NOT_OPER] = fsm_notoper,
[VFIO_CCW_EVENT_IO_REQ] = fsm_io_busy,
[VFIO_CCW_EVENT_ASYNC_REQ] = fsm_async_request,
[VFIO_CCW_EVENT_INTERRUPT] = fsm_irq,
[VFIO_CCW_EVENT_OPEN] = fsm_notoper,
[VFIO_CCW_EVENT_CLOSE] = fsm_close,
},
};
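
With the OPEN/CLOSE columns in place, the device lifecycle walks
STANDBY -> IDLE on open and back to STANDBY on close, and a reset becomes
a close/open pair.  That is exactly how vfio_ccw_mdev_reset() in the next
file uses it:

    vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_CLOSE);
    vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_OPEN);
    if (private->state == VFIO_CCW_STATE_NOT_OPER)
        return -EINVAL;    /* close/open failed to revive the device */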

drivers/s390/cio/vfio_ccw_ops.c

@ -21,54 +21,28 @@ static const struct vfio_device_ops vfio_ccw_dev_ops;
static int vfio_ccw_mdev_reset(struct vfio_ccw_private *private)
{
struct subchannel *sch;
int ret;
sch = private->sch;
/*
* TODO:
* In the cureent stage, some things like "no I/O running" and "no
* interrupt pending" are clear, but we are not sure what other state
* we need to care about.
* There are still a lot more instructions need to be handled. We
* should come back here later.
* If the FSM state is seen as Not Operational after closing
* and re-opening the mdev, return an error.
*/
ret = vfio_ccw_sch_quiesce(sch);
if (ret)
return ret;
vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_CLOSE);
vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_OPEN);
if (private->state == VFIO_CCW_STATE_NOT_OPER)
return -EINVAL;
ret = cio_enable_subchannel(sch, (u32)(unsigned long)sch);
if (!ret)
private->state = VFIO_CCW_STATE_IDLE;
return ret;
return 0;
}
static int vfio_ccw_mdev_notifier(struct notifier_block *nb,
unsigned long action,
void *data)
static void vfio_ccw_dma_unmap(struct vfio_device *vdev, u64 iova, u64 length)
{
struct vfio_ccw_private *private =
container_of(nb, struct vfio_ccw_private, nb);
container_of(vdev, struct vfio_ccw_private, vdev);
/*
* Vendor drivers MUST unpin pages in response to an
* invalidation.
*/
if (action == VFIO_IOMMU_NOTIFY_DMA_UNMAP) {
struct vfio_iommu_type1_dma_unmap *unmap = data;
/* Drivers MUST unpin pages in response to an invalidation. */
if (!cp_iova_pinned(&private->cp, iova, length))
return;
if (!cp_iova_pinned(&private->cp, unmap->iova))
return NOTIFY_OK;
if (vfio_ccw_mdev_reset(private))
return NOTIFY_BAD;
cp_free(&private->cp);
return NOTIFY_OK;
}
return NOTIFY_DONE;
vfio_ccw_mdev_reset(private);
}
static ssize_t name_show(struct mdev_type *mtype,
@ -128,11 +102,8 @@ static int vfio_ccw_mdev_probe(struct mdev_device *mdev)
vfio_init_group_dev(&private->vdev, &mdev->dev,
&vfio_ccw_dev_ops);
private->mdev = mdev;
private->state = VFIO_CCW_STATE_IDLE;
VFIO_CCW_MSG_EVENT(2, "mdev %pUl, sch %x.%x.%04x: create\n",
mdev_uuid(mdev), private->sch->schid.cssid,
VFIO_CCW_MSG_EVENT(2, "sch %x.%x.%04x: create\n",
private->sch->schid.cssid,
private->sch->schid.ssid,
private->sch->schid.sch_no);
@ -145,8 +116,6 @@ static int vfio_ccw_mdev_probe(struct mdev_device *mdev)
err_atomic:
vfio_uninit_group_dev(&private->vdev);
atomic_inc(&private->avail);
private->mdev = NULL;
private->state = VFIO_CCW_STATE_IDLE;
return ret;
}
@ -154,23 +123,14 @@ static void vfio_ccw_mdev_remove(struct mdev_device *mdev)
{
struct vfio_ccw_private *private = dev_get_drvdata(mdev->dev.parent);
VFIO_CCW_MSG_EVENT(2, "mdev %pUl, sch %x.%x.%04x: remove\n",
mdev_uuid(mdev), private->sch->schid.cssid,
VFIO_CCW_MSG_EVENT(2, "sch %x.%x.%04x: remove\n",
private->sch->schid.cssid,
private->sch->schid.ssid,
private->sch->schid.sch_no);
vfio_unregister_group_dev(&private->vdev);
if ((private->state != VFIO_CCW_STATE_NOT_OPER) &&
(private->state != VFIO_CCW_STATE_STANDBY)) {
if (!vfio_ccw_sch_quiesce(private->sch))
private->state = VFIO_CCW_STATE_STANDBY;
/* The state will be NOT_OPER on error. */
}
vfio_uninit_group_dev(&private->vdev);
cp_free(&private->cp);
private->mdev = NULL;
atomic_inc(&private->avail);
}
@ -178,19 +138,15 @@ static int vfio_ccw_mdev_open_device(struct vfio_device *vdev)
{
struct vfio_ccw_private *private =
container_of(vdev, struct vfio_ccw_private, vdev);
unsigned long events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
int ret;
private->nb.notifier_call = vfio_ccw_mdev_notifier;
ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY,
&events, &private->nb);
if (ret)
return ret;
/* Device cannot simply be opened again from this state */
if (private->state == VFIO_CCW_STATE_NOT_OPER)
return -EINVAL;
ret = vfio_ccw_register_async_dev_regions(private);
if (ret)
goto out_unregister;
return ret;
ret = vfio_ccw_register_schib_dev_regions(private);
if (ret)
@ -200,11 +156,16 @@ static int vfio_ccw_mdev_open_device(struct vfio_device *vdev)
if (ret)
goto out_unregister;
vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_OPEN);
if (private->state == VFIO_CCW_STATE_NOT_OPER) {
ret = -EINVAL;
goto out_unregister;
}
return ret;
out_unregister:
vfio_ccw_unregister_dev_regions(private);
vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private->nb);
return ret;
}
@ -213,16 +174,8 @@ static void vfio_ccw_mdev_close_device(struct vfio_device *vdev)
struct vfio_ccw_private *private =
container_of(vdev, struct vfio_ccw_private, vdev);
if ((private->state != VFIO_CCW_STATE_NOT_OPER) &&
(private->state != VFIO_CCW_STATE_STANDBY)) {
if (!vfio_ccw_mdev_reset(private))
private->state = VFIO_CCW_STATE_STANDBY;
/* The state will be NOT_OPER on error. */
}
cp_free(&private->cp);
vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_CLOSE);
vfio_ccw_unregister_dev_regions(private);
vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private->nb);
}
static ssize_t vfio_ccw_mdev_read_io_region(struct vfio_ccw_private *private,
@ -645,6 +598,7 @@ static const struct vfio_device_ops vfio_ccw_dev_ops = {
.write = vfio_ccw_mdev_write,
.ioctl = vfio_ccw_mdev_ioctl,
.request = vfio_ccw_mdev_request,
.dma_unmap = vfio_ccw_dma_unmap,
};
struct mdev_driver vfio_ccw_mdev_driver = {
@ -657,13 +611,3 @@ struct mdev_driver vfio_ccw_mdev_driver = {
.remove = vfio_ccw_mdev_remove,
.supported_type_groups = mdev_type_groups,
};
int vfio_ccw_mdev_reg(struct subchannel *sch)
{
return mdev_register_device(&sch->dev, &vfio_ccw_mdev_driver);
}
void vfio_ccw_mdev_unreg(struct subchannel *sch)
{
mdev_unregister_device(&sch->dev);
}

drivers/s390/cio/vfio_ccw_private.h

@ -73,8 +73,6 @@ struct vfio_ccw_crw {
* @state: internal state of the device
* @completion: synchronization helper of the I/O completion
* @avail: available for creating a mediated device
* @mdev: pointer to the mediated device
* @nb: notifier for vfio events
* @io_region: MMIO region to input/output I/O arguments/results
* @io_mutex: protect against concurrent update of I/O regions
* @region: additional regions for other subchannel operations
@ -97,8 +95,6 @@ struct vfio_ccw_private {
int state;
struct completion *completion;
atomic_t avail;
struct mdev_device *mdev;
struct notifier_block nb;
struct ccw_io_region *io_region;
struct mutex io_mutex;
struct vfio_ccw_region *region;
@ -119,10 +115,7 @@ struct vfio_ccw_private {
struct work_struct crw_work;
} __aligned(8);
extern int vfio_ccw_mdev_reg(struct subchannel *sch);
extern void vfio_ccw_mdev_unreg(struct subchannel *sch);
extern int vfio_ccw_sch_quiesce(struct subchannel *sch);
int vfio_ccw_sch_quiesce(struct subchannel *sch);
extern struct mdev_driver vfio_ccw_mdev_driver;
@ -147,6 +140,8 @@ enum vfio_ccw_event {
VFIO_CCW_EVENT_IO_REQ,
VFIO_CCW_EVENT_INTERRUPT,
VFIO_CCW_EVENT_ASYNC_REQ,
VFIO_CCW_EVENT_OPEN,
VFIO_CCW_EVENT_CLOSE,
/* last element! */
NR_VFIO_CCW_EVENTS
};
@ -158,7 +153,7 @@ typedef void (fsm_func_t)(struct vfio_ccw_private *, enum vfio_ccw_event);
extern fsm_func_t *vfio_ccw_jumptable[NR_VFIO_CCW_STATES][NR_VFIO_CCW_EVENTS];
static inline void vfio_ccw_fsm_event(struct vfio_ccw_private *private,
int event)
enum vfio_ccw_event event)
{
trace_vfio_ccw_fsm_event(private->sch->schid, private->state, event);
vfio_ccw_jumptable[private->state][event](private, event);
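
Typing the event argument as enum vfio_ccw_event documents the valid
range of jumptable indices at every call site.  For example, the shutdown
path in vfio_ccw_drv.c above now dispatches two events in sequence:

    vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_CLOSE);
    vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_NOT_OPER);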

drivers/s390/crypto/ap_queue.c

@ -34,7 +34,7 @@ static int ap_queue_enable_irq(struct ap_queue *aq, void *ind)
qirqctrl.ir = 1;
qirqctrl.isc = AP_ISC;
status = ap_aqic(aq->qid, qirqctrl, ind);
status = ap_aqic(aq->qid, qirqctrl, virt_to_phys(ind));
switch (status.response_code) {
case AP_RESPONSE_NORMAL:
case AP_RESPONSE_OTHERWISE_CHANGED:

View File

@ -112,7 +112,7 @@ static void vfio_ap_wait_for_irqclear(int apqn)
*
* Unregisters the ISC in the GIB when the saved ISC not invalid.
* Unpins the guest's page holding the NIB when it exists.
* Resets the saved_pfn and saved_isc to invalid values.
* Resets the saved_iova and saved_isc to invalid values.
*/
static void vfio_ap_free_aqic_resources(struct vfio_ap_queue *q)
{
@ -123,9 +123,9 @@ static void vfio_ap_free_aqic_resources(struct vfio_ap_queue *q)
kvm_s390_gisc_unregister(q->matrix_mdev->kvm, q->saved_isc);
q->saved_isc = VFIO_AP_ISC_INVALID;
}
if (q->saved_pfn && !WARN_ON(!q->matrix_mdev)) {
vfio_unpin_pages(&q->matrix_mdev->vdev, &q->saved_pfn, 1);
q->saved_pfn = 0;
if (q->saved_iova && !WARN_ON(!q->matrix_mdev)) {
vfio_unpin_pages(&q->matrix_mdev->vdev, q->saved_iova, 1);
q->saved_iova = 0;
}
}
@ -154,7 +154,7 @@ static struct ap_queue_status vfio_ap_irq_disable(struct vfio_ap_queue *q)
int retries = 5;
do {
status = ap_aqic(q->apqn, aqic_gisa, NULL);
status = ap_aqic(q->apqn, aqic_gisa, 0);
switch (status.response_code) {
case AP_RESPONSE_OTHERWISE_CHANGED:
case AP_RESPONSE_NORMAL:
@ -189,27 +189,19 @@ end_free:
*
* @vcpu: the object representing the vcpu executing the PQAP(AQIC) instruction.
* @nib: the location for storing the nib address.
* @g_pfn: the location for storing the page frame number of the page containing
* the nib.
*
* When the PQAP(AQIC) instruction is executed, general register 2 contains the
* address of the notification indicator byte (nib) used for IRQ notification.
* This function parses the nib from gr2 and calculates the page frame
* number for the guest of the page containing the nib. The values are
* stored in @nib and @g_pfn respectively.
*
* The g_pfn of the nib is then validated to ensure the nib address is valid.
* This function parses and validates the nib from gr2.
*
* Return: returns zero if the nib address is a valid; otherwise, returns
* -EINVAL.
*/
static int vfio_ap_validate_nib(struct kvm_vcpu *vcpu, unsigned long *nib,
unsigned long *g_pfn)
static int vfio_ap_validate_nib(struct kvm_vcpu *vcpu, dma_addr_t *nib)
{
*nib = vcpu->run->s.regs.gprs[2];
*g_pfn = *nib >> PAGE_SHIFT;
if (kvm_is_error_hva(gfn_to_hva(vcpu->kvm, *g_pfn)))
if (kvm_is_error_hva(gfn_to_hva(vcpu->kvm, *nib >> PAGE_SHIFT)))
return -EINVAL;
return 0;
@ -239,33 +231,34 @@ static struct ap_queue_status vfio_ap_irq_enable(struct vfio_ap_queue *q,
int isc,
struct kvm_vcpu *vcpu)
{
unsigned long nib;
struct ap_qirq_ctrl aqic_gisa = {};
struct ap_queue_status status = {};
struct kvm_s390_gisa *gisa;
struct page *h_page;
int nisc;
struct kvm *kvm;
unsigned long h_nib, g_pfn, h_pfn;
phys_addr_t h_nib;
dma_addr_t nib;
int ret;
/* Verify that the notification indicator byte address is valid */
if (vfio_ap_validate_nib(vcpu, &nib, &g_pfn)) {
VFIO_AP_DBF_WARN("%s: invalid NIB address: nib=%#lx, g_pfn=%#lx, apqn=%#04x\n",
__func__, nib, g_pfn, q->apqn);
if (vfio_ap_validate_nib(vcpu, &nib)) {
VFIO_AP_DBF_WARN("%s: invalid NIB address: nib=%pad, apqn=%#04x\n",
__func__, &nib, q->apqn);
status.response_code = AP_RESPONSE_INVALID_ADDRESS;
return status;
}
ret = vfio_pin_pages(&q->matrix_mdev->vdev, &g_pfn, 1,
IOMMU_READ | IOMMU_WRITE, &h_pfn);
ret = vfio_pin_pages(&q->matrix_mdev->vdev, nib, 1,
IOMMU_READ | IOMMU_WRITE, &h_page);
switch (ret) {
case 1:
break;
default:
VFIO_AP_DBF_WARN("%s: vfio_pin_pages failed: rc=%d,"
"nib=%#lx, g_pfn=%#lx, apqn=%#04x\n",
__func__, ret, nib, g_pfn, q->apqn);
"nib=%pad, apqn=%#04x\n",
__func__, ret, &nib, q->apqn);
status.response_code = AP_RESPONSE_INVALID_ADDRESS;
return status;
@ -274,7 +267,7 @@ static struct ap_queue_status vfio_ap_irq_enable(struct vfio_ap_queue *q,
kvm = q->matrix_mdev->kvm;
gisa = kvm->arch.gisa_int.origin;
h_nib = (h_pfn << PAGE_SHIFT) | (nib & ~PAGE_MASK);
h_nib = page_to_phys(h_page) | (nib & ~PAGE_MASK);
aqic_gisa.gisc = isc;
nisc = kvm_s390_gisc_register(kvm, isc);
@ -290,17 +283,17 @@ static struct ap_queue_status vfio_ap_irq_enable(struct vfio_ap_queue *q,
aqic_gisa.ir = 1;
aqic_gisa.gisa = (uint64_t)gisa >> 4;
status = ap_aqic(q->apqn, aqic_gisa, (void *)h_nib);
status = ap_aqic(q->apqn, aqic_gisa, h_nib);
switch (status.response_code) {
case AP_RESPONSE_NORMAL:
/* See if we did clear older IRQ configuration */
vfio_ap_free_aqic_resources(q);
q->saved_pfn = g_pfn;
q->saved_iova = nib;
q->saved_isc = isc;
break;
case AP_RESPONSE_OTHERWISE_CHANGED:
/* We could not modify IRQ setings: clear new configuration */
vfio_unpin_pages(&q->matrix_mdev->vdev, &g_pfn, 1);
vfio_unpin_pages(&q->matrix_mdev->vdev, nib, 1);
kvm_s390_gisc_unregister(kvm, isc);
break;
default:
@ -1226,34 +1219,13 @@ static int vfio_ap_mdev_set_kvm(struct ap_matrix_mdev *matrix_mdev,
return 0;
}
/**
* vfio_ap_mdev_iommu_notifier - IOMMU notifier callback
*
* @nb: The notifier block
* @action: Action to be taken
* @data: data associated with the request
*
* For an UNMAP request, unpin the guest IOVA (the NIB guest address we
* pinned before). Other requests are ignored.
*
* Return: for an UNMAP request, NOFITY_OK; otherwise NOTIFY_DONE.
*/
static int vfio_ap_mdev_iommu_notifier(struct notifier_block *nb,
unsigned long action, void *data)
static void vfio_ap_mdev_dma_unmap(struct vfio_device *vdev, u64 iova,
u64 length)
{
struct ap_matrix_mdev *matrix_mdev;
struct ap_matrix_mdev *matrix_mdev =
container_of(vdev, struct ap_matrix_mdev, vdev);
matrix_mdev = container_of(nb, struct ap_matrix_mdev, iommu_notifier);
if (action == VFIO_IOMMU_NOTIFY_DMA_UNMAP) {
struct vfio_iommu_type1_dma_unmap *unmap = data;
unsigned long g_pfn = unmap->iova >> PAGE_SHIFT;
vfio_unpin_pages(&matrix_mdev->vdev, &g_pfn, 1);
return NOTIFY_OK;
}
return NOTIFY_DONE;
vfio_unpin_pages(&matrix_mdev->vdev, iova, 1);
}
/**
@ -1380,27 +1352,11 @@ static int vfio_ap_mdev_open_device(struct vfio_device *vdev)
{
struct ap_matrix_mdev *matrix_mdev =
container_of(vdev, struct ap_matrix_mdev, vdev);
unsigned long events;
int ret;
if (!vdev->kvm)
return -EINVAL;
ret = vfio_ap_mdev_set_kvm(matrix_mdev, vdev->kvm);
if (ret)
return ret;
matrix_mdev->iommu_notifier.notifier_call = vfio_ap_mdev_iommu_notifier;
events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY, &events,
&matrix_mdev->iommu_notifier);
if (ret)
goto err_kvm;
return 0;
err_kvm:
vfio_ap_mdev_unset_kvm(matrix_mdev);
return ret;
return vfio_ap_mdev_set_kvm(matrix_mdev, vdev->kvm);
}
static void vfio_ap_mdev_close_device(struct vfio_device *vdev)
@ -1408,8 +1364,6 @@ static void vfio_ap_mdev_close_device(struct vfio_device *vdev)
struct ap_matrix_mdev *matrix_mdev =
container_of(vdev, struct ap_matrix_mdev, vdev);
vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY,
&matrix_mdev->iommu_notifier);
vfio_ap_mdev_unset_kvm(matrix_mdev);
}
@ -1461,6 +1415,7 @@ static const struct vfio_device_ops vfio_ap_matrix_dev_ops = {
.open_device = vfio_ap_mdev_open_device,
.close_device = vfio_ap_mdev_close_device,
.ioctl = vfio_ap_mdev_ioctl,
.dma_unmap = vfio_ap_mdev_dma_unmap,
};
static struct mdev_driver vfio_ap_matrix_driver = {
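
Condensing the vfio-ap interrupt-enable path above: validate the guest
NIB address, pin it to obtain a struct page, then pass the physical
address to ap_aqic().  A simplified excerpt (declarations and error paths
trimmed):

    dma_addr_t nib = vcpu->run->s.regs.gprs[2];    /* guest NIB address */

    if (vfio_pin_pages(&q->matrix_mdev->vdev, nib, 1,
                       IOMMU_READ | IOMMU_WRITE, &h_page) != 1)
        return status;    /* AP_RESPONSE_INVALID_ADDRESS in the real code */

    h_nib = page_to_phys(h_page) | (nib & ~PAGE_MASK);
    status = ap_aqic(q->apqn, aqic_gisa, h_nib);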

drivers/s390/crypto/vfio_ap_private.h

@ -81,8 +81,6 @@ struct ap_matrix {
* @node: allows the ap_matrix_mdev struct to be added to a list
* @matrix: the adapters, usage domains and control domains assigned to the
* mediated matrix device.
* @iommu_notifier: notifier block used for specifying callback function for
* handling the VFIO_IOMMU_NOTIFY_DMA_UNMAP even
* @kvm: the struct holding guest's state
* @pqap_hook: the function pointer to the interception handler for the
* PQAP(AQIC) instruction.
@ -92,7 +90,6 @@ struct ap_matrix_mdev {
struct vfio_device vdev;
struct list_head node;
struct ap_matrix matrix;
struct notifier_block iommu_notifier;
struct kvm *kvm;
crypto_hook pqap_hook;
struct mdev_device *mdev;
@ -102,13 +99,13 @@ struct ap_matrix_mdev {
* struct vfio_ap_queue - contains the data associated with a queue bound to the
* vfio_ap device driver
* @matrix_mdev: the matrix mediated device
* @saved_pfn: the guest PFN pinned for the guest
* @saved_iova: the notification indicator byte (nib) address
* @apqn: the APQN of the AP queue device
* @saved_isc: the guest ISC registered with the GIB interface
*/
struct vfio_ap_queue {
struct ap_matrix_mdev *matrix_mdev;
unsigned long saved_pfn;
dma_addr_t saved_iova;
int apqn;
#define VFIO_AP_ISC_INVALID 0xff
unsigned char saved_isc;

drivers/vfio/fsl-mc/vfio_fsl_mc_private.h

@ -39,7 +39,7 @@ struct vfio_fsl_mc_device {
struct vfio_fsl_mc_irq *mc_irqs;
};
extern int vfio_fsl_mc_set_irqs_ioctl(struct vfio_fsl_mc_device *vdev,
int vfio_fsl_mc_set_irqs_ioctl(struct vfio_fsl_mc_device *vdev,
u32 flags, unsigned int index,
unsigned int start, unsigned int count,
void *data);
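
The extern cleanups in these headers are purely cosmetic: for functions,
extern is implicit on a declaration, so the two forms below are identical
to the compiler and the shorter one wins on readability:

    extern int foo(int x);    /* 'extern' adds nothing here... */
    int foo(int x);           /* ...this declares exactly the same thing */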

drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c

@ -1185,7 +1185,7 @@ static int hisi_acc_vfio_pci_open_device(struct vfio_device *core_vdev)
if (ret)
return ret;
if (core_vdev->ops->migration_set_state) {
if (core_vdev->mig_ops) {
ret = hisi_acc_vf_qm_init(hisi_acc_vdev);
if (ret) {
vfio_pci_core_disable(vdev);
@ -1208,6 +1208,11 @@ static void hisi_acc_vfio_pci_close_device(struct vfio_device *core_vdev)
vfio_pci_core_close_device(core_vdev);
}
static const struct vfio_migration_ops hisi_acc_vfio_pci_migrn_state_ops = {
.migration_set_state = hisi_acc_vfio_pci_set_device_state,
.migration_get_state = hisi_acc_vfio_pci_get_device_state,
};
static const struct vfio_device_ops hisi_acc_vfio_pci_migrn_ops = {
.name = "hisi-acc-vfio-pci-migration",
.open_device = hisi_acc_vfio_pci_open_device,
@ -1219,8 +1224,6 @@ static const struct vfio_device_ops hisi_acc_vfio_pci_migrn_ops = {
.mmap = hisi_acc_vfio_pci_mmap,
.request = vfio_pci_core_request,
.match = vfio_pci_core_match,
.migration_set_state = hisi_acc_vfio_pci_set_device_state,
.migration_get_state = hisi_acc_vfio_pci_get_device_state,
};
static const struct vfio_device_ops hisi_acc_vfio_pci_ops = {
@ -1272,6 +1275,8 @@ static int hisi_acc_vfio_pci_probe(struct pci_dev *pdev, const struct pci_device
if (!ret) {
vfio_pci_core_init_device(&hisi_acc_vdev->core_device, pdev,
&hisi_acc_vfio_pci_migrn_ops);
hisi_acc_vdev->core_device.vdev.mig_ops =
&hisi_acc_vfio_pci_migrn_state_ops;
} else {
pci_warn(pdev, "migration support failed, continue with generic interface\n");
vfio_pci_core_init_device(&hisi_acc_vdev->core_device, pdev,
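
Both migration-capable drivers follow the same recipe after the ops
split: vfio_device_ops stays free of migration callbacks, and a separate
vfio_migration_ops is attached at probe time.  Generic shape (the my_*
names are placeholders):

    static const struct vfio_migration_ops my_mig_ops = {
        .migration_set_state = my_set_device_state,
        .migration_get_state = my_get_device_state,
    };

    /* at probe time, after vfio_pci_core_init_device(): */
    my_vdev->core_device.vdev.mig_ops = &my_mig_ops;
    my_vdev->core_device.vdev.migration_flags = VFIO_MIGRATION_STOP_COPY;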

View File

@ -88,6 +88,16 @@ static int mlx5fv_vf_event(struct notifier_block *nb,
return 0;
}
void mlx5vf_cmd_close_migratable(struct mlx5vf_pci_core_device *mvdev)
{
if (!mvdev->migrate_cap)
return;
mutex_lock(&mvdev->state_mutex);
mlx5vf_disable_fds(mvdev);
mlx5vf_state_mutex_unlock(mvdev);
}
void mlx5vf_cmd_remove_migratable(struct mlx5vf_pci_core_device *mvdev)
{
if (!mvdev->migrate_cap)
@ -98,7 +108,8 @@ void mlx5vf_cmd_remove_migratable(struct mlx5vf_pci_core_device *mvdev)
destroy_workqueue(mvdev->cb_wq);
}
void mlx5vf_cmd_set_migratable(struct mlx5vf_pci_core_device *mvdev)
void mlx5vf_cmd_set_migratable(struct mlx5vf_pci_core_device *mvdev,
const struct vfio_migration_ops *mig_ops)
{
struct pci_dev *pdev = mvdev->core_device.pdev;
int ret;
@ -139,6 +150,7 @@ void mlx5vf_cmd_set_migratable(struct mlx5vf_pci_core_device *mvdev)
mvdev->core_device.vdev.migration_flags =
VFIO_MIGRATION_STOP_COPY |
VFIO_MIGRATION_P2P;
mvdev->core_device.vdev.mig_ops = mig_ops;
end:
mlx5_vf_put_core_dev(mvdev->mdev);

drivers/vfio/pci/mlx5/cmd.h

@ -62,8 +62,10 @@ int mlx5vf_cmd_suspend_vhca(struct mlx5vf_pci_core_device *mvdev, u16 op_mod);
int mlx5vf_cmd_resume_vhca(struct mlx5vf_pci_core_device *mvdev, u16 op_mod);
int mlx5vf_cmd_query_vhca_migration_state(struct mlx5vf_pci_core_device *mvdev,
size_t *state_size);
void mlx5vf_cmd_set_migratable(struct mlx5vf_pci_core_device *mvdev);
void mlx5vf_cmd_set_migratable(struct mlx5vf_pci_core_device *mvdev,
const struct vfio_migration_ops *mig_ops);
void mlx5vf_cmd_remove_migratable(struct mlx5vf_pci_core_device *mvdev);
void mlx5vf_cmd_close_migratable(struct mlx5vf_pci_core_device *mvdev);
int mlx5vf_cmd_save_vhca_state(struct mlx5vf_pci_core_device *mvdev,
struct mlx5_vf_migration_file *migf);
int mlx5vf_cmd_load_vhca_state(struct mlx5vf_pci_core_device *mvdev,

drivers/vfio/pci/mlx5/main.c

@ -570,10 +570,15 @@ static void mlx5vf_pci_close_device(struct vfio_device *core_vdev)
struct mlx5vf_pci_core_device *mvdev = container_of(
core_vdev, struct mlx5vf_pci_core_device, core_device.vdev);
mlx5vf_disable_fds(mvdev);
mlx5vf_cmd_close_migratable(mvdev);
vfio_pci_core_close_device(core_vdev);
}
static const struct vfio_migration_ops mlx5vf_pci_mig_ops = {
.migration_set_state = mlx5vf_pci_set_device_state,
.migration_get_state = mlx5vf_pci_get_device_state,
};
static const struct vfio_device_ops mlx5vf_pci_ops = {
.name = "mlx5-vfio-pci",
.open_device = mlx5vf_pci_open_device,
@ -585,8 +590,6 @@ static const struct vfio_device_ops mlx5vf_pci_ops = {
.mmap = vfio_pci_core_mmap,
.request = vfio_pci_core_request,
.match = vfio_pci_core_match,
.migration_set_state = mlx5vf_pci_set_device_state,
.migration_get_state = mlx5vf_pci_get_device_state,
};
static int mlx5vf_pci_probe(struct pci_dev *pdev,
@ -599,7 +602,7 @@ static int mlx5vf_pci_probe(struct pci_dev *pdev,
if (!mvdev)
return -ENOMEM;
vfio_pci_core_init_device(&mvdev->core_device, pdev, &mlx5vf_pci_ops);
mlx5vf_cmd_set_migratable(mvdev);
mlx5vf_cmd_set_migratable(mvdev, &mlx5vf_pci_mig_ops);
dev_set_drvdata(&pdev->dev, &mvdev->core_device);
ret = vfio_pci_core_register_device(&mvdev->core_device);
if (ret)

drivers/vfio/pci/vfio_pci_config.c

@ -222,7 +222,7 @@ static int vfio_default_config_write(struct vfio_pci_core_device *vdev, int pos,
memcpy(vdev->vconfig + pos, &virt_val, count);
}
/* Non-virtualzed and writable bits go to hardware */
/* Non-virtualized and writable bits go to hardware */
if (write & ~virt) {
struct pci_dev *pdev = vdev->pdev;
__le32 phys_val = 0;
@ -1728,7 +1728,7 @@ int vfio_config_init(struct vfio_pci_core_device *vdev)
/*
* Config space, caps and ecaps are all dword aligned, so we could
* use one byte per dword to record the type. However, there are
* no requiremenst on the length of a capability, so the gap between
* no requirements on the length of a capability, so the gap between
* capabilities needs byte granularity.
*/
map = kmalloc(pdev->cfg_size, GFP_KERNEL);

drivers/vfio/pci/vfio_pci_core.c

@ -1868,6 +1868,13 @@ int vfio_pci_core_register_device(struct vfio_pci_core_device *vdev)
if (pdev->hdr_type != PCI_HEADER_TYPE_NORMAL)
return -EINVAL;
if (vdev->vdev.mig_ops) {
if (!(vdev->vdev.mig_ops->migration_get_state &&
vdev->vdev.mig_ops->migration_set_state) ||
!(vdev->vdev.migration_flags & VFIO_MIGRATION_STOP_COPY))
return -EINVAL;
}
/*
* Prevent binding to PFs with VFs enabled, the VFs might be in use
* by the host or other users. We cannot capture the VFs if they

drivers/vfio/platform/vfio_platform_private.h

@ -78,21 +78,20 @@ struct vfio_platform_reset_node {
vfio_platform_reset_fn_t of_reset;
};
extern int vfio_platform_probe_common(struct vfio_platform_device *vdev,
struct device *dev);
int vfio_platform_probe_common(struct vfio_platform_device *vdev,
struct device *dev);
void vfio_platform_remove_common(struct vfio_platform_device *vdev);
extern int vfio_platform_irq_init(struct vfio_platform_device *vdev);
extern void vfio_platform_irq_cleanup(struct vfio_platform_device *vdev);
int vfio_platform_irq_init(struct vfio_platform_device *vdev);
void vfio_platform_irq_cleanup(struct vfio_platform_device *vdev);
extern int vfio_platform_set_irqs_ioctl(struct vfio_platform_device *vdev,
uint32_t flags, unsigned index,
unsigned start, unsigned count,
void *data);
int vfio_platform_set_irqs_ioctl(struct vfio_platform_device *vdev,
uint32_t flags, unsigned index,
unsigned start, unsigned count, void *data);
extern void __vfio_platform_register_reset(struct vfio_platform_reset_node *n);
extern void vfio_platform_unregister_reset(const char *compat,
vfio_platform_reset_fn_t fn);
void __vfio_platform_register_reset(struct vfio_platform_reset_node *n);
void vfio_platform_unregister_reset(const char *compat,
vfio_platform_reset_fn_t fn);
#define vfio_platform_register_reset(__compat, __reset) \
static struct vfio_platform_reset_node __reset ## _node = { \
.owner = THIS_MODULE, \


@@ -231,6 +231,9 @@ int vfio_register_iommu_driver(const struct vfio_iommu_driver_ops *ops)
{
struct vfio_iommu_driver *driver, *tmp;
if (WARN_ON(!ops->register_device != !ops->unregister_device))
return -EINVAL;
driver = kzalloc(sizeof(*driver), GFP_KERNEL);
if (!driver)
return -ENOMEM;
@@ -504,7 +507,9 @@ static struct vfio_group *vfio_noiommu_group_alloc(struct device *dev,
if (IS_ERR(iommu_group))
return ERR_CAST(iommu_group);
iommu_group_set_name(iommu_group, "vfio-noiommu");
ret = iommu_group_set_name(iommu_group, "vfio-noiommu");
if (ret)
goto out_put_group;
ret = iommu_group_add_device(iommu_group, dev);
if (ret)
goto out_put_group;
@@ -554,7 +559,7 @@ static struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
* restore cache coherency. It has to be checked here because it is only
* valid for cases where we are using iommu groups.
*/
if (!iommu_capable(dev->bus, IOMMU_CAP_CACHE_COHERENCY)) {
if (!device_iommu_capable(dev, IOMMU_CAP_CACHE_COHERENCY)) {
iommu_group_put(iommu_group);
return ERR_PTR(-EINVAL);
}
@@ -1082,6 +1087,7 @@ static void vfio_device_unassign_container(struct vfio_device *device)
static struct file *vfio_device_open(struct vfio_device *device)
{
struct vfio_iommu_driver *iommu_driver;
struct file *filep;
int ret;
@@ -1112,6 +1118,12 @@ static struct file *vfio_device_open(struct vfio_device *device)
if (ret)
goto err_undo_count;
}
iommu_driver = device->group->container->iommu_driver;
if (iommu_driver && iommu_driver->ops->register_device)
iommu_driver->ops->register_device(
device->group->container->iommu_data, device);
up_read(&device->group->group_rwsem);
}
mutex_unlock(&device->dev_set->lock);
@@ -1146,13 +1158,19 @@ static struct file *vfio_device_open(struct vfio_device *device)
err_close_device:
mutex_lock(&device->dev_set->lock);
down_read(&device->group->group_rwsem);
if (device->open_count == 1 && device->ops->close_device)
if (device->open_count == 1 && device->ops->close_device) {
device->ops->close_device(device);
iommu_driver = device->group->container->iommu_driver;
if (iommu_driver && iommu_driver->ops->unregister_device)
iommu_driver->ops->unregister_device(
device->group->container->iommu_data, device);
}
err_undo_count:
up_read(&device->group->group_rwsem);
device->open_count--;
if (device->open_count == 0 && device->kvm)
device->kvm = NULL;
up_read(&device->group->group_rwsem);
mutex_unlock(&device->dev_set->lock);
module_put(device->dev->driver->owner);
err_unassign_container:
@@ -1342,12 +1360,18 @@ static const struct file_operations vfio_group_fops = {
static int vfio_device_fops_release(struct inode *inode, struct file *filep)
{
struct vfio_device *device = filep->private_data;
struct vfio_iommu_driver *iommu_driver;
mutex_lock(&device->dev_set->lock);
vfio_assert_device_open(device);
down_read(&device->group->group_rwsem);
if (device->open_count == 1 && device->ops->close_device)
device->ops->close_device(device);
iommu_driver = device->group->container->iommu_driver;
if (iommu_driver && iommu_driver->ops->unregister_device)
iommu_driver->ops->unregister_device(
device->group->container->iommu_data, device);
up_read(&device->group->group_rwsem);
device->open_count--;
if (device->open_count == 0)
@@ -1544,8 +1568,7 @@ vfio_ioctl_device_feature_mig_device_state(struct vfio_device *device,
struct file *filp = NULL;
int ret;
if (!device->ops->migration_set_state ||
!device->ops->migration_get_state)
if (!device->mig_ops)
return -ENOTTY;
ret = vfio_check_feature(flags, argsz,
@@ -1561,7 +1584,8 @@ vfio_ioctl_device_feature_mig_device_state(struct vfio_device *device,
if (flags & VFIO_DEVICE_FEATURE_GET) {
enum vfio_device_mig_state curr_state;
ret = device->ops->migration_get_state(device, &curr_state);
ret = device->mig_ops->migration_get_state(device,
&curr_state);
if (ret)
return ret;
mig.device_state = curr_state;
@@ -1569,7 +1593,7 @@ vfio_ioctl_device_feature_mig_device_state(struct vfio_device *device,
}
/* Handle the VFIO_DEVICE_FEATURE_SET */
filp = device->ops->migration_set_state(device, mig.device_state);
filp = device->mig_ops->migration_set_state(device, mig.device_state);
if (IS_ERR(filp) || !filp)
goto out_copy;
@@ -1592,8 +1616,7 @@ static int vfio_ioctl_device_feature_migration(struct vfio_device *device,
};
int ret;
if (!device->ops->migration_set_state ||
!device->ops->migration_get_state)
if (!device->mig_ops)
return -ENOTTY;
ret = vfio_check_feature(flags, argsz, VFIO_DEVICE_FEATURE_GET,
@@ -1815,6 +1838,7 @@ struct vfio_info_cap_header *vfio_info_cap_add(struct vfio_info_cap *caps,
buf = krealloc(caps->buf, caps->size + size, GFP_KERNEL);
if (!buf) {
kfree(caps->buf);
caps->buf = NULL;
caps->size = 0;
return ERR_PTR(-ENOMEM);
}
@@ -1913,26 +1937,25 @@ int vfio_set_irqs_validate_and_prepare(struct vfio_irq_set *hdr, int num_irqs,
EXPORT_SYMBOL(vfio_set_irqs_validate_and_prepare);
/*
* Pin a set of guest PFNs and return their associated host PFNs for local
* Pin contiguous user pages and return their associated host pages for local
* domain only.
* @device [in] : device
* @user_pfn [in]: array of user/guest PFNs to be pinned.
* @npage [in] : count of elements in user_pfn array. This count should not
* be greater VFIO_PIN_PAGES_MAX_ENTRIES.
* @iova [in] : starting IOVA of user pages to be pinned.
* @npage [in] : count of pages to be pinned. This count should not
* be greater than VFIO_PIN_PAGES_MAX_ENTRIES.
* @prot [in] : protection flags
* @phys_pfn[out]: array of host PFNs
* @pages[out] : array of host pages
* Return error or number of pages pinned.
*/
int vfio_pin_pages(struct vfio_device *device, unsigned long *user_pfn,
int npage, int prot, unsigned long *phys_pfn)
int vfio_pin_pages(struct vfio_device *device, dma_addr_t iova,
int npage, int prot, struct page **pages)
{
struct vfio_container *container;
struct vfio_group *group = device->group;
struct vfio_iommu_driver *driver;
int ret;
if (!user_pfn || !phys_pfn || !npage ||
!vfio_assert_device_open(device))
if (!pages || !npage || !vfio_assert_device_open(device))
return -EINVAL;
if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
@@ -1946,8 +1969,8 @@ int vfio_pin_pages(struct vfio_device *device, unsigned long *user_pfn,
driver = container->iommu_driver;
if (likely(driver && driver->ops->pin_pages))
ret = driver->ops->pin_pages(container->iommu_data,
group->iommu_group, user_pfn,
npage, prot, phys_pfn);
group->iommu_group, iova,
npage, prot, pages);
else
ret = -ENOTTY;
@@ -1956,37 +1979,28 @@ int vfio_pin_pages(struct vfio_device *device, unsigned long *user_pfn,
EXPORT_SYMBOL(vfio_pin_pages);
/*
* Unpin set of host PFNs for local domain only.
* Unpin contiguous host pages for local domain only.
* @device [in] : device
* @user_pfn [in]: array of user/guest PFNs to be unpinned. Number of user/guest
* PFNs should not be greater than VFIO_PIN_PAGES_MAX_ENTRIES.
* @npage [in] : count of elements in user_pfn array. This count should not
* @iova [in] : starting address of user pages to be unpinned.
* @npage [in] : count of pages to be unpinned. This count should not
* be greater than VFIO_PIN_PAGES_MAX_ENTRIES.
* Return error or number of pages unpinned.
*/
int vfio_unpin_pages(struct vfio_device *device, unsigned long *user_pfn,
int npage)
void vfio_unpin_pages(struct vfio_device *device, dma_addr_t iova, int npage)
{
struct vfio_container *container;
struct vfio_iommu_driver *driver;
int ret;
if (!user_pfn || !npage || !vfio_assert_device_open(device))
return -EINVAL;
if (WARN_ON(npage <= 0 || npage > VFIO_PIN_PAGES_MAX_ENTRIES))
return;
if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
return -E2BIG;
if (WARN_ON(!vfio_assert_device_open(device)))
return;
/* group->container cannot change while a vfio device is open */
container = device->group->container;
driver = container->iommu_driver;
if (likely(driver && driver->ops->unpin_pages))
ret = driver->ops->unpin_pages(container->iommu_data, user_pfn,
npage);
else
ret = -ENOTTY;
return ret;
driver->ops->unpin_pages(container->iommu_data, iova, npage);
}
EXPORT_SYMBOL(vfio_unpin_pages);
@@ -2001,13 +2015,13 @@ EXPORT_SYMBOL(vfio_unpin_pages);
* not a real device DMA, it is not necessary to pin the user space memory.
*
* @device [in] : VFIO device
* @user_iova [in] : base IOVA of a user space buffer
* @iova [in] : base IOVA of a user space buffer
* @data [in] : pointer to kernel buffer
* @len [in] : kernel buffer length
* @write : indicate read or write
* Return error code on failure or 0 on success.
*/
int vfio_dma_rw(struct vfio_device *device, dma_addr_t user_iova, void *data,
int vfio_dma_rw(struct vfio_device *device, dma_addr_t iova, void *data,
size_t len, bool write)
{
struct vfio_container *container;
@@ -2023,97 +2037,13 @@ int vfio_dma_rw(struct vfio_device *device, dma_addr_t user_iova, void *data,
if (likely(driver && driver->ops->dma_rw))
ret = driver->ops->dma_rw(container->iommu_data,
user_iova, data, len, write);
iova, data, len, write);
else
ret = -ENOTTY;
return ret;
}
EXPORT_SYMBOL(vfio_dma_rw);
static int vfio_register_iommu_notifier(struct vfio_group *group,
unsigned long *events,
struct notifier_block *nb)
{
struct vfio_container *container;
struct vfio_iommu_driver *driver;
int ret;
lockdep_assert_held_read(&group->group_rwsem);
container = group->container;
driver = container->iommu_driver;
if (likely(driver && driver->ops->register_notifier))
ret = driver->ops->register_notifier(container->iommu_data,
events, nb);
else
ret = -ENOTTY;
return ret;
}
static int vfio_unregister_iommu_notifier(struct vfio_group *group,
struct notifier_block *nb)
{
struct vfio_container *container;
struct vfio_iommu_driver *driver;
int ret;
lockdep_assert_held_read(&group->group_rwsem);
container = group->container;
driver = container->iommu_driver;
if (likely(driver && driver->ops->unregister_notifier))
ret = driver->ops->unregister_notifier(container->iommu_data,
nb);
else
ret = -ENOTTY;
return ret;
}
int vfio_register_notifier(struct vfio_device *device,
enum vfio_notify_type type, unsigned long *events,
struct notifier_block *nb)
{
struct vfio_group *group = device->group;
int ret;
if (!nb || !events || (*events == 0) ||
!vfio_assert_device_open(device))
return -EINVAL;
switch (type) {
case VFIO_IOMMU_NOTIFY:
ret = vfio_register_iommu_notifier(group, events, nb);
break;
default:
ret = -EINVAL;
}
return ret;
}
EXPORT_SYMBOL(vfio_register_notifier);
int vfio_unregister_notifier(struct vfio_device *device,
enum vfio_notify_type type,
struct notifier_block *nb)
{
struct vfio_group *group = device->group;
int ret;
if (!nb || !vfio_assert_device_open(device))
return -EINVAL;
switch (type) {
case VFIO_IOMMU_NOTIFY:
ret = vfio_unregister_iommu_notifier(group, nb);
break;
default:
ret = -EINVAL;
}
return ret;
}
EXPORT_SYMBOL(vfio_unregister_notifier);
/*
* Module/class support
*/
@@ -2159,13 +2089,17 @@ static int __init vfio_init(void)
if (ret)
goto err_alloc_chrdev;
pr_info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
#ifdef CONFIG_VFIO_NOIOMMU
vfio_register_iommu_driver(&vfio_noiommu_ops);
ret = vfio_register_iommu_driver(&vfio_noiommu_ops);
#endif
if (ret)
goto err_driver_register;
pr_info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
return 0;
err_driver_register:
unregister_chrdev_region(vfio.group_devt, MINORMASK + 1);
err_alloc_chrdev:
class_destroy(vfio.class);
vfio.class = NULL;

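On the caller side, the revised pinning API takes a base IOVA and fills an
array of struct page pointers rather than PFN arrays, and unpinning can no
longer fail. A hedged usage sketch for an emulated (mdev-style) driver, where
my_vdev and iova stand in for the driver's vfio_device and a guest address it
is emulating DMA for:

	struct page *page;
	void *va;
	int ret;

	/* pin one page of the user mapping backing iova */
	ret = vfio_pin_pages(my_vdev, iova, 1, IOMMU_READ | IOMMU_WRITE, &page);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;

	va = kmap_local_page(page);
	/* ... access the pinned guest page through va ... */
	kunmap_local(va);

	vfio_unpin_pages(my_vdev, iova, 1);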

@@ -50,16 +50,15 @@ struct vfio_iommu_driver_ops {
struct iommu_group *group);
int (*pin_pages)(void *iommu_data,
struct iommu_group *group,
unsigned long *user_pfn,
dma_addr_t user_iova,
int npage, int prot,
unsigned long *phys_pfn);
int (*unpin_pages)(void *iommu_data,
unsigned long *user_pfn, int npage);
int (*register_notifier)(void *iommu_data,
unsigned long *events,
struct notifier_block *nb);
int (*unregister_notifier)(void *iommu_data,
struct notifier_block *nb);
struct page **pages);
void (*unpin_pages)(void *iommu_data,
dma_addr_t user_iova, int npage);
void (*register_device)(void *iommu_data,
struct vfio_device *vdev);
void (*unregister_device)(void *iommu_data,
struct vfio_device *vdev);
int (*dma_rw)(void *iommu_data, dma_addr_t user_iova,
void *data, size_t count, bool write);
struct iommu_domain *(*group_iommu_domain)(void *iommu_data,


@@ -378,8 +378,7 @@ static void tce_iommu_release(void *iommu_data)
kfree(container);
}
static void tce_iommu_unuse_page(struct tce_container *container,
unsigned long hpa)
static void tce_iommu_unuse_page(unsigned long hpa)
{
struct page *page;
@@ -474,7 +473,7 @@ static int tce_iommu_clear(struct tce_container *container,
continue;
}
tce_iommu_unuse_page(container, oldhpa);
tce_iommu_unuse_page(oldhpa);
}
iommu_tce_kill(tbl, firstentry, pages);
@@ -524,7 +523,7 @@ static long tce_iommu_build(struct tce_container *container,
ret = iommu_tce_xchg_no_kill(container->mm, tbl, entry + i,
&hpa, &dirtmp);
if (ret) {
tce_iommu_unuse_page(container, hpa);
tce_iommu_unuse_page(hpa);
pr_err("iommu_tce: %s failed ioba=%lx, tce=%lx, ret=%ld\n",
__func__, entry << tbl->it_page_shift,
tce, ret);
@@ -532,7 +531,7 @@ }
}
if (dirtmp != DMA_NONE)
tce_iommu_unuse_page(container, hpa);
tce_iommu_unuse_page(hpa);
tce += IOMMU_PAGE_SIZE(tbl);
}
@@ -1266,7 +1265,10 @@ static int tce_iommu_attach_group(void *iommu_data,
goto unlock_exit;
}
/* Check if new group has the same iommu_ops (i.e. compatible) */
/*
* Check if new group has the same iommu_table_group_ops
* (i.e. compatible)
*/
list_for_each_entry(tcegrp, &container->group_list, next) {
struct iommu_table_group *table_group_tmp;


@@ -67,7 +67,8 @@ struct vfio_iommu {
struct list_head iova_list;
struct mutex lock;
struct rb_root dma_list;
struct blocking_notifier_head notifier;
struct list_head device_list;
struct mutex device_list_lock;
unsigned int dma_avail;
unsigned int vaddr_invalid_count;
uint64_t pgsize_bitmap;
@@ -828,9 +829,9 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
static int vfio_iommu_type1_pin_pages(void *iommu_data,
struct iommu_group *iommu_group,
unsigned long *user_pfn,
dma_addr_t user_iova,
int npage, int prot,
unsigned long *phys_pfn)
struct page **pages)
{
struct vfio_iommu *iommu = iommu_data;
struct vfio_iommu_group *group;
@@ -840,7 +841,7 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
bool do_accounting;
dma_addr_t iova;
if (!iommu || !user_pfn || !phys_pfn)
if (!iommu || !pages)
return -EINVAL;
/* Supported for v2 version only */
@@ -856,7 +857,7 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
again:
if (iommu->vaddr_invalid_count) {
for (i = 0; i < npage; i++) {
iova = user_pfn[i] << PAGE_SHIFT;
iova = user_iova + PAGE_SIZE * i;
ret = vfio_find_dma_valid(iommu, iova, PAGE_SIZE, &dma);
if (ret < 0)
goto pin_done;
@@ -865,8 +866,8 @@ again:
}
}
/* Fail if notifier list is empty */
if (!iommu->notifier.head) {
/* Fail if no dma_unmap notifier is registered */
if (list_empty(&iommu->device_list)) {
ret = -EINVAL;
goto pin_done;
}
@@ -879,9 +880,10 @@ again:
do_accounting = list_empty(&iommu->domain_list);
for (i = 0; i < npage; i++) {
unsigned long phys_pfn;
struct vfio_pfn *vpfn;
iova = user_pfn[i] << PAGE_SHIFT;
iova = user_iova + PAGE_SIZE * i;
dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
if (!dma) {
ret = -EINVAL;
@@ -895,23 +897,25 @@ again:
vpfn = vfio_iova_get_vfio_pfn(dma, iova);
if (vpfn) {
phys_pfn[i] = vpfn->pfn;
pages[i] = pfn_to_page(vpfn->pfn);
continue;
}
remote_vaddr = dma->vaddr + (iova - dma->iova);
ret = vfio_pin_page_external(dma, remote_vaddr, &phys_pfn[i],
ret = vfio_pin_page_external(dma, remote_vaddr, &phys_pfn,
do_accounting);
if (ret)
goto pin_unwind;
ret = vfio_add_to_pfn_list(dma, iova, phys_pfn[i]);
ret = vfio_add_to_pfn_list(dma, iova, phys_pfn);
if (ret) {
if (put_pfn(phys_pfn[i], dma->prot) && do_accounting)
if (put_pfn(phys_pfn, dma->prot) && do_accounting)
vfio_lock_acct(dma, -1, true);
goto pin_unwind;
}
pages[i] = pfn_to_page(phys_pfn);
if (iommu->dirty_page_tracking) {
unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
@@ -934,43 +938,38 @@ again:
goto pin_done;
pin_unwind:
phys_pfn[i] = 0;
pages[i] = NULL;
for (j = 0; j < i; j++) {
dma_addr_t iova;
iova = user_pfn[j] << PAGE_SHIFT;
iova = user_iova + PAGE_SIZE * j;
dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
vfio_unpin_page_external(dma, iova, do_accounting);
phys_pfn[j] = 0;
pages[j] = NULL;
}
pin_done:
mutex_unlock(&iommu->lock);
return ret;
}
static int vfio_iommu_type1_unpin_pages(void *iommu_data,
unsigned long *user_pfn,
int npage)
static void vfio_iommu_type1_unpin_pages(void *iommu_data,
dma_addr_t user_iova, int npage)
{
struct vfio_iommu *iommu = iommu_data;
bool do_accounting;
int i;
if (!iommu || !user_pfn || npage <= 0)
return -EINVAL;
/* Supported for v2 version only */
if (!iommu->v2)
return -EACCES;
if (WARN_ON(!iommu->v2))
return;
mutex_lock(&iommu->lock);
do_accounting = list_empty(&iommu->domain_list);
for (i = 0; i < npage; i++) {
dma_addr_t iova = user_iova + PAGE_SIZE * i;
struct vfio_dma *dma;
dma_addr_t iova;
iova = user_pfn[i] << PAGE_SHIFT;
dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
if (!dma)
break;
@@ -979,7 +978,8 @@ static int vfio_iommu_type1_unpin_pages(void *iommu_data,
}
mutex_unlock(&iommu->lock);
return i > 0 ? i : -EINVAL;
WARN_ON(i != npage);
}
static long vfio_sync_unpin(struct vfio_dma *dma, struct vfio_domain *domain,
@@ -1287,6 +1287,35 @@ static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
return 0;
}
/*
* Notify VFIO drivers using vfio_register_emulated_iommu_dev() to invalidate
* and unmap iovas within the range we're about to unmap. Drivers MUST unpin
* pages in response to an invalidation.
*/
static void vfio_notify_dma_unmap(struct vfio_iommu *iommu,
struct vfio_dma *dma)
{
struct vfio_device *device;
if (list_empty(&iommu->device_list))
return;
/*
* The device is expected to call vfio_unpin_pages() for any IOVA it has
* pinned within the range. Since vfio_unpin_pages() will eventually
* call back down to this code and try to obtain the iommu->lock we must
* drop it.
*/
mutex_lock(&iommu->device_list_lock);
mutex_unlock(&iommu->lock);
list_for_each_entry(device, &iommu->device_list, iommu_entry)
device->ops->dma_unmap(device, dma->iova, dma->size);
mutex_unlock(&iommu->device_list_lock);
mutex_lock(&iommu->lock);
}
static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
struct vfio_iommu_type1_dma_unmap *unmap,
struct vfio_bitmap *bitmap)
@@ -1377,12 +1406,6 @@ again:
if (!iommu->v2 && iova > dma->iova)
break;
/*
* Task with same address space who mapped this iova range is
* allowed to unmap the iova range.
*/
if (dma->task->mm != current->mm)
break;
if (invalidate_vaddr) {
if (dma->vaddr_invalid) {
@@ -1406,8 +1429,6 @@ }
}
if (!RB_EMPTY_ROOT(&dma->pfn_list)) {
struct vfio_iommu_type1_dma_unmap nb_unmap;
if (dma_last == dma) {
BUG_ON(++retries > 10);
} else {
@@ -1415,20 +1436,7 @@ again:
retries = 0;
}
nb_unmap.iova = dma->iova;
nb_unmap.size = dma->size;
/*
* Notify anyone (mdev vendor drivers) to invalidate and
* unmap iovas within the range we're about to unmap.
* Vendor drivers MUST unpin pages in response to an
* invalidation.
*/
mutex_unlock(&iommu->lock);
blocking_notifier_call_chain(&iommu->notifier,
VFIO_IOMMU_NOTIFY_DMA_UNMAP,
&nb_unmap);
mutex_lock(&iommu->lock);
vfio_notify_dma_unmap(iommu, dma);
goto again;
}
@@ -1679,18 +1687,6 @@ out_unlock:
return ret;
}
static int vfio_bus_type(struct device *dev, void *data)
{
struct bus_type **bus = data;
if (*bus && *bus != dev->bus)
return -EINVAL;
*bus = dev->bus;
return 0;
}
static int vfio_iommu_replay(struct vfio_iommu *iommu,
struct vfio_domain *domain)
{
@@ -2153,13 +2149,26 @@ static void vfio_iommu_iova_insert_copy(struct vfio_iommu *iommu,
list_splice_tail(iova_copy, iova);
}
/* Redundantly walks non-present capabilities to simplify caller */
static int vfio_iommu_device_capable(struct device *dev, void *data)
{
return device_iommu_capable(dev, (enum iommu_cap)data);
}
static int vfio_iommu_domain_alloc(struct device *dev, void *data)
{
struct iommu_domain **domain = data;
*domain = iommu_domain_alloc(dev->bus);
return 1; /* Don't iterate */
}
static int vfio_iommu_type1_attach_group(void *iommu_data,
struct iommu_group *iommu_group, enum vfio_group_type type)
{
struct vfio_iommu *iommu = iommu_data;
struct vfio_iommu_group *group;
struct vfio_domain *domain, *d;
struct bus_type *bus = NULL;
bool resv_msi, msi_remap;
phys_addr_t resv_msi_base = 0;
struct iommu_domain_geometry *geo;
@@ -2192,18 +2201,19 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
goto out_unlock;
}
/* Determine bus_type in order to allocate a domain */
ret = iommu_group_for_each_dev(iommu_group, &bus, vfio_bus_type);
if (ret)
goto out_free_group;
ret = -ENOMEM;
domain = kzalloc(sizeof(*domain), GFP_KERNEL);
if (!domain)
goto out_free_group;
/*
* Going via the iommu_group iterator avoids races, and trivially gives
* us a representative device for the IOMMU API call. We don't actually
* want to iterate beyond the first device (if any).
*/
ret = -EIO;
domain->domain = iommu_domain_alloc(bus);
iommu_group_for_each_dev(iommu_group, &domain->domain,
vfio_iommu_domain_alloc);
if (!domain->domain)
goto out_free_domain;
@@ -2258,7 +2268,8 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
list_add(&group->next, &domain->group_list);
msi_remap = irq_domain_check_msi_remap() ||
iommu_capable(bus, IOMMU_CAP_INTR_REMAP);
iommu_group_for_each_dev(iommu_group, (void *)IOMMU_CAP_INTR_REMAP,
vfio_iommu_device_capable);
if (!allow_unsafe_interrupts && !msi_remap) {
pr_warn("%s: No interrupt remapping support. Use the module param \"allow_unsafe_interrupts\" to enable VFIO IOMMU support on this platform\n",
@@ -2478,7 +2489,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
if (list_empty(&iommu->emulated_iommu_groups) &&
list_empty(&iommu->domain_list)) {
WARN_ON(iommu->notifier.head);
WARN_ON(!list_empty(&iommu->device_list));
vfio_iommu_unmap_unpin_all(iommu);
}
goto detach_group_done;
@@ -2510,7 +2521,8 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
if (list_empty(&domain->group_list)) {
if (list_is_singular(&iommu->domain_list)) {
if (list_empty(&iommu->emulated_iommu_groups)) {
WARN_ON(iommu->notifier.head);
WARN_ON(!list_empty(
&iommu->device_list));
vfio_iommu_unmap_unpin_all(iommu);
} else {
vfio_iommu_unmap_unpin_reaccount(iommu);
@@ -2571,7 +2583,8 @@ static void *vfio_iommu_type1_open(unsigned long arg)
iommu->dma_avail = dma_entry_limit;
iommu->container_open = true;
mutex_init(&iommu->lock);
BLOCKING_INIT_NOTIFIER_HEAD(&iommu->notifier);
mutex_init(&iommu->device_list_lock);
INIT_LIST_HEAD(&iommu->device_list);
init_waitqueue_head(&iommu->vaddr_wait);
iommu->pgsize_bitmap = PAGE_MASK;
INIT_LIST_HEAD(&iommu->emulated_iommu_groups);
@@ -3008,28 +3021,40 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
}
}
static int vfio_iommu_type1_register_notifier(void *iommu_data,
unsigned long *events,
struct notifier_block *nb)
static void vfio_iommu_type1_register_device(void *iommu_data,
struct vfio_device *vdev)
{
struct vfio_iommu *iommu = iommu_data;
/* clear known events */
*events &= ~VFIO_IOMMU_NOTIFY_DMA_UNMAP;
if (!vdev->ops->dma_unmap)
return;
/* refuse to register if still events remaining */
if (*events)
return -EINVAL;
return blocking_notifier_chain_register(&iommu->notifier, nb);
/*
* list_empty(&iommu->device_list) is tested under the iommu->lock while
* iteration for dma_unmap must be done under the device_list_lock.
* Holding both locks here allows avoiding the device_list_lock in
* several fast paths. See vfio_notify_dma_unmap()
*/
mutex_lock(&iommu->lock);
mutex_lock(&iommu->device_list_lock);
list_add(&vdev->iommu_entry, &iommu->device_list);
mutex_unlock(&iommu->device_list_lock);
mutex_unlock(&iommu->lock);
}
static int vfio_iommu_type1_unregister_notifier(void *iommu_data,
struct notifier_block *nb)
static void vfio_iommu_type1_unregister_device(void *iommu_data,
struct vfio_device *vdev)
{
struct vfio_iommu *iommu = iommu_data;
return blocking_notifier_chain_unregister(&iommu->notifier, nb);
if (!vdev->ops->dma_unmap)
return;
mutex_lock(&iommu->lock);
mutex_lock(&iommu->device_list_lock);
list_del(&vdev->iommu_entry);
mutex_unlock(&iommu->device_list_lock);
mutex_unlock(&iommu->lock);
}
static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
@@ -3163,8 +3188,8 @@ static const struct vfio_iommu_driver_ops vfio_iommu_driver_ops_type1 = {
.detach_group = vfio_iommu_type1_detach_group,
.pin_pages = vfio_iommu_type1_pin_pages,
.unpin_pages = vfio_iommu_type1_unpin_pages,
.register_notifier = vfio_iommu_type1_register_notifier,
.unregister_notifier = vfio_iommu_type1_unregister_notifier,
.register_device = vfio_iommu_type1_register_device,
.unregister_device = vfio_iommu_type1_unregister_device,
.dma_rw = vfio_iommu_type1_dma_rw,
.group_iommu_domain = vfio_iommu_type1_group_iommu_domain,
.notify = vfio_iommu_type1_notify,

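On the device-driver side, the removed VFIO_IOMMU_NOTIFY_DMA_UNMAP notifier
is replaced by an optional dma_unmap callback in vfio_device_ops; type1 calls
it for each mapping being torn down and the driver must drop any pins it
holds in that range. A rough sketch for a hypothetical emulated driver, where
my_invalidate_range() is an assumed helper that finds and unpins the driver's
pinned IOVAs:

	static void my_dma_unmap(struct vfio_device *vdev, u64 iova, u64 length)
	{
		/*
		 * Calling vfio_unpin_pages() from here is safe: type1 drops
		 * iommu->lock around this callback, see vfio_notify_dma_unmap().
		 */
		my_invalidate_range(vdev, iova, length);	/* assumed helper */
	}

	static const struct vfio_device_ops my_ops = {
		.name		= "my-mdev",
		.dma_unmap	= my_dma_unmap,
		/* ... remaining callbacks ... */
	};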

@@ -65,11 +65,6 @@ struct mdev_driver {
struct device_driver driver;
};
static inline const guid_t *mdev_uuid(struct mdev_device *mdev)
{
return &mdev->uuid;
}
extern struct bus_type mdev_bus_type;
int mdev_register_device(struct device *dev, struct mdev_driver *mdev_driver);


@@ -32,6 +32,11 @@ struct vfio_device_set {
struct vfio_device {
struct device *dev;
const struct vfio_device_ops *ops;
/*
* mig_ops is a static property of the vfio_device which must be set
* prior to registering the vfio_device.
*/
const struct vfio_migration_ops *mig_ops;
struct vfio_group *group;
struct vfio_device_set *dev_set;
struct list_head dev_set_list;
@@ -44,6 +49,7 @@ struct vfio_device {
unsigned int open_count;
struct completion comp;
struct list_head group_next;
struct list_head iommu_entry;
};
/**
@@ -60,17 +66,9 @@ struct vfio_device {
* @match: Optional device name match callback (return: 0 for no-match, >0 for
* match, -errno for abort (ex. match with insufficient or incorrect
* additional args)
* @dma_unmap: Called when userspace unmaps IOVA from the container
* this device is attached to.
* @device_feature: Optional, fill in the VFIO_DEVICE_FEATURE ioctl
* @migration_set_state: Optional callback to change the migration state for
* devices that support migration. It's mandatory for
* VFIO_DEVICE_FEATURE_MIGRATION migration support.
* The returned FD is used for data transfer according to the FSM
* definition. The driver is responsible to ensure that FD reaches end
* of stream or error whenever the migration FSM leaves a data transfer
* state or before close_device() returns.
* @migration_get_state: Optional callback to get the migration state for
* devices that support migration. It's mandatory for
* VFIO_DEVICE_FEATURE_MIGRATION migration support.
*/
struct vfio_device_ops {
char *name;
@@ -85,8 +83,24 @@ struct vfio_device_ops {
int (*mmap)(struct vfio_device *vdev, struct vm_area_struct *vma);
void (*request)(struct vfio_device *vdev, unsigned int count);
int (*match)(struct vfio_device *vdev, char *buf);
void (*dma_unmap)(struct vfio_device *vdev, u64 iova, u64 length);
int (*device_feature)(struct vfio_device *device, u32 flags,
void __user *arg, size_t argsz);
};
/**
* @migration_set_state: Optional callback to change the migration state for
* devices that support migration. It's mandatory for
* VFIO_DEVICE_FEATURE_MIGRATION migration support.
* The returned FD is used for data transfer according to the FSM
* definition. The driver is responsible to ensure that FD reaches end
* of stream or error whenever the migration FSM leaves a data transfer
* state or before close_device() returns.
* @migration_get_state: Optional callback to get the migration state for
* devices that support migration. It's mandatory for
* VFIO_DEVICE_FEATURE_MIGRATION migration support.
*/
struct vfio_migration_ops {
struct file *(*migration_set_state)(
struct vfio_device *device,
enum vfio_device_mig_state new_state);
@@ -140,36 +154,18 @@ int vfio_mig_get_next_state(struct vfio_device *device,
/*
* External user API
*/
extern struct iommu_group *vfio_file_iommu_group(struct file *file);
extern bool vfio_file_enforced_coherent(struct file *file);
extern void vfio_file_set_kvm(struct file *file, struct kvm *kvm);
extern bool vfio_file_has_dev(struct file *file, struct vfio_device *device);
struct iommu_group *vfio_file_iommu_group(struct file *file);
bool vfio_file_enforced_coherent(struct file *file);
void vfio_file_set_kvm(struct file *file, struct kvm *kvm);
bool vfio_file_has_dev(struct file *file, struct vfio_device *device);
#define VFIO_PIN_PAGES_MAX_ENTRIES (PAGE_SIZE/sizeof(unsigned long))
extern int vfio_pin_pages(struct vfio_device *device, unsigned long *user_pfn,
int npage, int prot, unsigned long *phys_pfn);
extern int vfio_unpin_pages(struct vfio_device *device, unsigned long *user_pfn,
int npage);
extern int vfio_dma_rw(struct vfio_device *device, dma_addr_t user_iova,
void *data, size_t len, bool write);
/* each type has independent events */
enum vfio_notify_type {
VFIO_IOMMU_NOTIFY = 0,
};
/* events for VFIO_IOMMU_NOTIFY */
#define VFIO_IOMMU_NOTIFY_DMA_UNMAP BIT(0)
extern int vfio_register_notifier(struct vfio_device *device,
enum vfio_notify_type type,
unsigned long *required_events,
struct notifier_block *nb);
extern int vfio_unregister_notifier(struct vfio_device *device,
enum vfio_notify_type type,
struct notifier_block *nb);
int vfio_pin_pages(struct vfio_device *device, dma_addr_t iova,
int npage, int prot, struct page **pages);
void vfio_unpin_pages(struct vfio_device *device, dma_addr_t iova, int npage);
int vfio_dma_rw(struct vfio_device *device, dma_addr_t iova,
void *data, size_t len, bool write);
/*
* Sub-module helpers
@@ -178,25 +174,24 @@ struct vfio_info_cap {
struct vfio_info_cap_header *buf;
size_t size;
};
extern struct vfio_info_cap_header *vfio_info_cap_add(
struct vfio_info_cap *caps, size_t size, u16 id, u16 version);
extern void vfio_info_cap_shift(struct vfio_info_cap *caps, size_t offset);
struct vfio_info_cap_header *vfio_info_cap_add(struct vfio_info_cap *caps,
size_t size, u16 id,
u16 version);
void vfio_info_cap_shift(struct vfio_info_cap *caps, size_t offset);
extern int vfio_info_add_capability(struct vfio_info_cap *caps,
struct vfio_info_cap_header *cap,
size_t size);
int vfio_info_add_capability(struct vfio_info_cap *caps,
struct vfio_info_cap_header *cap, size_t size);
extern int vfio_set_irqs_validate_and_prepare(struct vfio_irq_set *hdr,
int num_irqs, int max_irq_type,
size_t *data_size);
int vfio_set_irqs_validate_and_prepare(struct vfio_irq_set *hdr,
int num_irqs, int max_irq_type,
size_t *data_size);
struct pci_dev;
#if IS_ENABLED(CONFIG_VFIO_SPAPR_EEH)
extern void vfio_spapr_pci_eeh_open(struct pci_dev *pdev);
extern void vfio_spapr_pci_eeh_release(struct pci_dev *pdev);
extern long vfio_spapr_iommu_eeh_ioctl(struct iommu_group *group,
unsigned int cmd,
unsigned long arg);
void vfio_spapr_pci_eeh_open(struct pci_dev *pdev);
void vfio_spapr_pci_eeh_release(struct pci_dev *pdev);
long vfio_spapr_iommu_eeh_ioctl(struct iommu_group *group, unsigned int cmd,
unsigned long arg);
#else
static inline void vfio_spapr_pci_eeh_open(struct pci_dev *pdev)
{
@@ -230,10 +225,9 @@ struct virqfd {
struct virqfd **pvirqfd;
};
extern int vfio_virqfd_enable(void *opaque,
int (*handler)(void *, void *),
void (*thread)(void *, void *),
void *data, struct virqfd **pvirqfd, int fd);
extern void vfio_virqfd_disable(struct virqfd **pvirqfd);
int vfio_virqfd_enable(void *opaque, int (*handler)(void *, void *),
void (*thread)(void *, void *), void *data,
struct virqfd **pvirqfd, int fd);
void vfio_virqfd_disable(struct virqfd **pvirqfd);
#endif /* VFIO_H */

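vfio_dma_rw() is unchanged in behavior; the parameter is merely renamed from
user_iova to iova. A hedged example of reading a guest-written descriptor
into a kernel buffer (struct my_desc and desc_iova are illustrative only):

	struct my_desc desc;
	int ret;

	/* write=false copies from the user mapping at desc_iova into desc */
	ret = vfio_dma_rw(vdev, desc_iova, &desc, sizeof(desc), false);
	if (ret)
		return ret;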

@@ -147,23 +147,23 @@ struct vfio_pci_core_device {
#define is_irq_none(vdev) (!(is_intx(vdev) || is_msi(vdev) || is_msix(vdev)))
#define irq_is(vdev, type) (vdev->irq_type == type)
extern void vfio_pci_intx_mask(struct vfio_pci_core_device *vdev);
extern void vfio_pci_intx_unmask(struct vfio_pci_core_device *vdev);
void vfio_pci_intx_mask(struct vfio_pci_core_device *vdev);
void vfio_pci_intx_unmask(struct vfio_pci_core_device *vdev);
extern int vfio_pci_set_irqs_ioctl(struct vfio_pci_core_device *vdev,
uint32_t flags, unsigned index,
unsigned start, unsigned count, void *data);
int vfio_pci_set_irqs_ioctl(struct vfio_pci_core_device *vdev,
uint32_t flags, unsigned index,
unsigned start, unsigned count, void *data);
extern ssize_t vfio_pci_config_rw(struct vfio_pci_core_device *vdev,
char __user *buf, size_t count,
loff_t *ppos, bool iswrite);
ssize_t vfio_pci_config_rw(struct vfio_pci_core_device *vdev,
char __user *buf, size_t count,
loff_t *ppos, bool iswrite);
extern ssize_t vfio_pci_bar_rw(struct vfio_pci_core_device *vdev, char __user *buf,
size_t count, loff_t *ppos, bool iswrite);
ssize_t vfio_pci_bar_rw(struct vfio_pci_core_device *vdev, char __user *buf,
size_t count, loff_t *ppos, bool iswrite);
#ifdef CONFIG_VFIO_PCI_VGA
extern ssize_t vfio_pci_vga_rw(struct vfio_pci_core_device *vdev, char __user *buf,
size_t count, loff_t *ppos, bool iswrite);
ssize_t vfio_pci_vga_rw(struct vfio_pci_core_device *vdev, char __user *buf,
size_t count, loff_t *ppos, bool iswrite);
#else
static inline ssize_t vfio_pci_vga_rw(struct vfio_pci_core_device *vdev,
char __user *buf, size_t count,
@@ -173,32 +173,31 @@ static inline ssize_t vfio_pci_vga_rw(struct vfio_pci_core_device *vdev,
}
#endif
extern long vfio_pci_ioeventfd(struct vfio_pci_core_device *vdev, loff_t offset,
uint64_t data, int count, int fd);
long vfio_pci_ioeventfd(struct vfio_pci_core_device *vdev, loff_t offset,
uint64_t data, int count, int fd);
extern int vfio_pci_init_perm_bits(void);
extern void vfio_pci_uninit_perm_bits(void);
int vfio_pci_init_perm_bits(void);
void vfio_pci_uninit_perm_bits(void);
extern int vfio_config_init(struct vfio_pci_core_device *vdev);
extern void vfio_config_free(struct vfio_pci_core_device *vdev);
int vfio_config_init(struct vfio_pci_core_device *vdev);
void vfio_config_free(struct vfio_pci_core_device *vdev);
extern int vfio_pci_register_dev_region(struct vfio_pci_core_device *vdev,
unsigned int type, unsigned int subtype,
const struct vfio_pci_regops *ops,
size_t size, u32 flags, void *data);
int vfio_pci_register_dev_region(struct vfio_pci_core_device *vdev,
unsigned int type, unsigned int subtype,
const struct vfio_pci_regops *ops,
size_t size, u32 flags, void *data);
extern int vfio_pci_set_power_state(struct vfio_pci_core_device *vdev,
pci_power_t state);
int vfio_pci_set_power_state(struct vfio_pci_core_device *vdev,
pci_power_t state);
extern bool __vfio_pci_memory_enabled(struct vfio_pci_core_device *vdev);
extern void vfio_pci_zap_and_down_write_memory_lock(struct vfio_pci_core_device
*vdev);
extern u16 vfio_pci_memory_lock_and_enable(struct vfio_pci_core_device *vdev);
extern void vfio_pci_memory_unlock_and_restore(struct vfio_pci_core_device *vdev,
u16 cmd);
bool __vfio_pci_memory_enabled(struct vfio_pci_core_device *vdev);
void vfio_pci_zap_and_down_write_memory_lock(struct vfio_pci_core_device *vdev);
u16 vfio_pci_memory_lock_and_enable(struct vfio_pci_core_device *vdev);
void vfio_pci_memory_unlock_and_restore(struct vfio_pci_core_device *vdev,
u16 cmd);
#ifdef CONFIG_VFIO_PCI_IGD
extern int vfio_pci_igd_init(struct vfio_pci_core_device *vdev);
int vfio_pci_igd_init(struct vfio_pci_core_device *vdev);
#else
static inline int vfio_pci_igd_init(struct vfio_pci_core_device *vdev)
{
@@ -207,8 +206,8 @@ static inline int vfio_pci_igd_init(struct vfio_pci_core_device *vdev)
#endif
#ifdef CONFIG_VFIO_PCI_ZDEV_KVM
extern int vfio_pci_info_zdev_add_caps(struct vfio_pci_core_device *vdev,
struct vfio_info_cap *caps);
int vfio_pci_info_zdev_add_caps(struct vfio_pci_core_device *vdev,
struct vfio_info_cap *caps);
int vfio_pci_zdev_open_device(struct vfio_pci_core_device *vdev);
void vfio_pci_zdev_close_device(struct vfio_pci_core_device *vdev);
#else