[PATCH] powerpc: IOMMU SG paranoia

This addresses two items, which are unlikely to be hit if we
trust drivers.

The first is moving a memory barrier to after the point where the list
is terminated, but before the vmerged SG count is passed back.  If those
updates were reordered, iommu_unmap_sg() could see a list without its
end marker.
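
For context, iommu_unmap_sg() stops its walk at the first entry whose
dma_length is zero.  A simplified sketch of that walk (not the exact
kernel code) looks like:

	while (nelems--) {
		if (sglist->dma_length == 0)
			break;	/* end-of-list marker written by iommu_map_sg() */
		npages = (PAGE_ALIGN(sglist->dma_address + sglist->dma_length)
			  - (sglist->dma_address & PAGE_MASK)) >> PAGE_SHIFT;
		__iommu_free(tbl, sglist->dma_address, npages);
		sglist++;
	}

Both the table updates and that end marker therefore need to be visible
before the caller acts on the returned count.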

The second is making sure we terminate the list on the failure case of
iommu_map_sg().  If a driver does not look at the failure return code,
it could pass an ill-formed SG list to iommu_unmap_sg().
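
The pattern this guards against looks roughly like the following
(hypothetical driver code; dev, sg and nents are illustrative):

	n = dma_map_sg(dev, sg, nents, DMA_TO_DEVICE);	/* return value ignored */
	/* ... I/O is started, then later ... */
	dma_unmap_sg(dev, sg, nents, DMA_TO_DEVICE);

With the failure path now clearing dma_length (and setting dma_address
to DMA_ERROR_CODE) on the entries it unwinds, the walk in
iommu_unmap_sg() stops at the first terminated entry instead of freeing
IOMMU pages a second time.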

Signed-off-by: Jake Moilanen <moilanen@austin.ibm.com>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>

--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -334,9 +334,6 @@ int iommu_map_sg(struct device *dev, struct iommu_table *tbl,
 
 	spin_unlock_irqrestore(&(tbl->it_lock), flags);
 
-	/* Make sure updates are seen by hardware */
-	mb();
-
 	DBG("mapped %d elements:\n", outcount);
 
 	/* For the sake of iommu_unmap_sg, we clear out the length in the
@@ -347,6 +344,10 @@ int iommu_map_sg(struct device *dev, struct iommu_table *tbl,
 		outs->dma_address = DMA_ERROR_CODE;
 		outs->dma_length = 0;
 	}
+
+	/* Make sure updates are seen by hardware */
+	mb();
+
 	return outcount;
 
  failure:
@@ -358,6 +359,8 @@ int iommu_map_sg(struct device *dev, struct iommu_table *tbl,
 			npages = (PAGE_ALIGN(s->dma_address + s->dma_length) - vaddr)
 				>> PAGE_SHIFT;
 			__iommu_free(tbl, vaddr, npages);
+			s->dma_address = DMA_ERROR_CODE;
+			s->dma_length = 0;
 		}
 	}
 	spin_unlock_irqrestore(&(tbl->it_lock), flags);