Set the VM size based on system memory size, between the ASIC-specific
limits given by min_vm_size and max_bits. GFXv9 GPUs keep their default
VM size of 256TB (48 bits). Only older GPUs adjust the VM size depending
on system memory size.
This makes more VM space available for ROCm applications on GFXv8 GPUs
that want to map all available VRAM and system memory in their SVM
address space.
v2:
* Clarify comment
* Round up memory size before >> 30
* Round up automatic vm_size to power of two
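In rough pseudo-kernel-C, the sizing rule then looks like the sketch
below (the scale factor of three and the helper name are assumptions for
illustration, not the actual driver code):

  /* hedged sketch only; the real logic lives in amdgpu_vm_adjust_size() */
  #include <linux/log2.h>  /* roundup_pow_of_two() */

  static uint64_t pick_vm_size_gb(uint64_t ram_bytes,
                                  uint64_t min_vm_size,
                                  unsigned int max_bits)
  {
          /* v2: round up to a full GB before shifting by 30 */
          uint64_t ram_gb = (ram_bytes + (1ULL << 30) - 1) >> 30;
          uint64_t max_gb = 1ULL << (max_bits - 30);
          /* v2: round the automatic size up to a power of two;
           * the factor of 3 is an assumption for illustration */
          uint64_t vm_size = roundup_pow_of_two(ram_gb * 3);

          if (vm_size < min_vm_size)
                  vm_size = min_vm_size;
          if (vm_size > max_gb)
                  vm_size = max_gb;
          return vm_size;
  }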
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Acked-by: Junwei Zhang <Jerry.Zhang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
I continued the work on bulk moving, based on the proposal by Christian.
Background:
The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, then
moves each of them to the end of the LRU list one by one. Moving that many
BOs individually seriously hurts performance.
Christian then provided a workaround that avoids moving PD/PT BOs on the
LRU with the following patch:
Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae ("drm/amdgpu: band aid
validating VM PTs")
However, the proper solution is to bulk move all PD/PT and per-VM BOs on
the LRU instead of moving them one by one.
Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
validated we move all BOs together to the end of the LRU without dropping the
lock for the LRU.
While doing so we note the beginning and end of this block in the LRU list.
Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
we don't move every BO one by one, but instead cut the LRU list into pieces so
that we bulk move everything to the end in just one operation.
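Sketched in simplified form, the fast path then becomes a single splice
instead of per-BO moves (ttm_bo_bulk_move_lru_tail() and the
bulk_moveable/lru_bulk_move names are from this series; the surrounding
logic is illustrative only):

  static void move_vm_to_lru_tail(struct amdgpu_device *adev,
                                  struct amdgpu_vm *vm)
  {
          struct ttm_bo_global *glob = adev->mman.bdev.glob;

          /* nothing recorded yet: the slow path will re-note the
           * beginning and end of the block on its next validation */
          if (!vm->bulk_moveable)
                  return;

          spin_lock(&glob->lru_lock);
          /* one list splice per LRU instead of one move per BO */
          ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
          spin_unlock(&glob->lru_lock);
  }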
Test data:
+--------------+-----------------+-----------+---------------------------------------+
|              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
|              |Principle(Vulkan)|           |                                       |
+--------------+-----------------+-----------+---------------------------------------+
|              |                 |           |0.319 ms(1K) 0.314 ms(2K) 0.308 ms(4K) |
| Original     | 147.7 FPS       | 76.86 us  |0.307 ms(8K) 0.310 ms(16K)             |
+--------------+-----------------+-----------+---------------------------------------+
|Original + WA |                 |           |0.254 ms(1K) 0.241 ms(2K)              |
|(don't move   | 162.1 FPS       | 42.15 us  |0.230 ms(4K) 0.223 ms(8K) 0.204 ms(16K)|
|PT BOs on LRU)|                 |           |                                       |
+--------------+-----------------+-----------+---------------------------------------+
| Bulk move    | 163.1 FPS       | 40.52 us  |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
|              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
+--------------+-----------------+-----------+---------------------------------------+
Testing with the three benchmarks above, covering Vulkan and OpenCL, shows
a visible improvement over the original code, and even better results than
the original with the workaround.
v2: move all BOs, including those on the idle, relocated, and moved lists,
to the end of the LRU together.
v3: remove an unused parameter and use list_for_each_entry instead of the
safe variant.
v4: move amdgpu_vm_move_to_lru_tail() to after command submission; at that
point all BOs are back on the idle list.
v5: remove amdgpu_vm_move_to_lru_tail_by_list(), use bulk_moveable instead
of validated, and also move ttm_bo_bulk_move_lru_tail() into
amdgpu_vm_move_to_lru_tail().
v6: clean up and fix return value.
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add process and thread names and PIDs, and a function to extract this
info from the relevant amdgpu_vm.
v2: Add documentation and fix indentation.
v3: Add getter and setter functions for amdgpu_task_info.
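A rough sketch of what the added structure and a v3-style accessor could
look like (field and function names are assumptions for illustration):

  struct amdgpu_task_info {
          char    process_name[TASK_COMM_LEN];
          char    task_name[TASK_COMM_LEN];
          pid_t   pid;   /* thread pid */
          pid_t   tgid;  /* process pid */
  };

  /* v3-style getter: hand out a copy so callers don't poke the VM */
  static void example_vm_get_task_info(struct amdgpu_vm *vm,
                                       struct amdgpu_task_info *task_info)
  {
          if (vm)
                  *task_info = vm->task_info;
  }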
Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Acked-by: Jim Qu <Jim.Qu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Only the moved state needs a separate spin lock protection. All other
states are protected by reserving the VM anyway.
v2: fix some more incorrect cases
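Illustrative sketch of the resulting rule (the moved_lock and list names
follow the patch; the helper itself is made up):

  static void example_bo_moved(struct amdgpu_vm *vm,
                               struct amdgpu_vm_bo_base *bo_base)
  {
          /* the moved list can be touched without the VM reserved,
           * so it keeps its own spinlock... */
          spin_lock(&vm->moved_lock);
          list_move(&bo_base->vm_status, &vm->moved);
          spin_unlock(&vm->moved_lock);
  }

  /* ...while e.g. the relocated and idle lists are only changed with
   * the VM reservation held and need no extra lock. */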
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This change prepares for a workaround in amdkfd for a GFX9 HW bug. The
workaround requires the control stack memory of compute queues, which is
allocated from the second page of the MQD GART BOs, to have MTYPE NC
rather than the default UC.
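A hedged sketch of what requesting that memory type could look like (the
macro names approximate amdgpu's GFX9 PTE layout; the exact call site is
an assumption):

  static uint64_t example_ctl_stack_pte_flags(void)
  {
          /* second page of the MQD BO: MTYPE NC instead of the
           * default UC */
          return AMDGPU_PTE_READABLE | AMDGPU_PTE_WRITEABLE |
                 AMDGPU_PTE_MTYPE(AMDGPU_MTYPE_NC);
  }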
Signed-off-by: Yong Zhao <yong.zhao@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
v2: Removed updating and checking of vm->vm_context
v3: Enable amdgpu_vm_clear_bo in amdgpu_vm_make_compute
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
Remove struct amdkfd_vm and move the fields into struct amdgpu_vm.
This will allow turning a VM created by a DRM render node into a
KFD VM.
v2: Removed vm_context field
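A hedged sketch of the conversion entry point this enables (signature and
body are assumptions for illustration):

  static int example_vm_make_compute(struct amdgpu_device *adev,
                                     struct amdgpu_vm *vm)
  {
          int r = amdgpu_bo_reserve(vm->root.base.bo, true);

          if (r)
                  return r;
          /* ...verify the render-node VM is still clean, then flip
           * the compute-specific per-VM state... */
          amdgpu_bo_unreserve(vm->root.base.bo);
          return 0;
  }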
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
Instead of falling back to two levels and a very limited address space,
use 2+1 PD support and 128TB + 512GB of virtual address space.
v2: cleanup defines, rebase on top of level enum
v3: fix inverted check in hardware setup
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-and-Tested-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This moves and renames the AMDGPU scheduler to a common location in DRM
in order to facilitate re-use by other drivers. This is mostly a
straightforward rename with no code changes.
One notable exception is the function to_drm_sched_fence(), which is no
longer an inline header function, to avoid the need to export the
drm_sched_fence_ops_scheduled and drm_sched_fence_ops_finished structures.
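For reference, such an out-of-line helper can look like the sketch below
(close to, but not necessarily identical with, the renamed code):

  struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f)
  {
          if (f->ops == &drm_sched_fence_ops_scheduled)
                  return container_of(f, struct drm_sched_fence,
                                      scheduled);

          if (f->ops == &drm_sched_fence_ops_finished)
                  return container_of(f, struct drm_sched_fence,
                                      finished);

          return NULL;
  }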
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
More new stuff for 4.15. Highlights:
- Add clock query interface for raven
- Add new FENCE_TO_HANDLE ioctl
- UVD video encode ring support on polaris
- transparent huge page DMA support
- deadlock fixes
- compute pipe lru tweaks
- powerplay cleanups and regression fixes
- fix duplicate symbol issue with radeon and amdgpu
- misc bug fixes
* 'drm-next-4.15' of git://people.freedesktop.org/~agd5f/linux: (72 commits)
drm/radeon/dp: make radeon_dp_get_dp_link_config static
drm/radeon: move ci_send_msg_to_smc to where it's used
drm/amd/sched: fix deadlock caused by unsignaled fences of deleted jobs
drm/amd/sched: NULL out the s_fence field after run_job
drm/amd/sched: move adding finish callback to amd_sched_job_begin
drm/amd/sched: fix an outdated comment
drm/amd/sched: rename amd_sched_entity_pop_job
drm/amdgpu: minor coding style fix
drm/ttm: add transparent huge page support for DMA allocations v2
drm/ttm: add support for different pool sizes
drm/ttm: remove unsued options from ttm_mem_global_alloc_page
drm/amdgpu: add uvd enc irq
drm/amdgpu: add uvd enc ib test
drm/amdgpu: add uvd enc ring test
drm/amdgpu: add uvd enc vm functions (v2)
drm/amdgpu: add uvd enc into run queue
drm/amdgpu: add uvd enc rings
drm/amdgpu: add new uvd enc ring methods
drm/amdgpu: add uvd enc command in header
drm/amdgpu: add uvd enc registers in header
...
When many wavefronts cause VM faults at the same time, it can
overwhelm the interrupt handler and cause IH ring overflows before
the driver can notify or kill the faulting application.
As a workaround I'm introducing a limited per-VM fault credit. After
that many VM faults have occurred, further VM faults are filtered out
at the prescreen stage of processing.
This depends on the PASID in the interrupt packet, so it currently
only works for KFD contexts.
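A hedged sketch of the credit check at prescreen time (names approximate
the series but are not authoritative):

  static bool example_vm_pasid_fault_credit(struct amdgpu_device *adev,
                                            unsigned int pasid)
  {
          struct amdgpu_vm *vm;
          bool allowed;

          spin_lock(&adev->vm_manager.pasid_lock);
          vm = idr_find(&adev->vm_manager.pasid_idr, pasid);
          if (!vm) {
                  /* no tracked VM behind this PASID: don't filter */
                  spin_unlock(&adev->vm_manager.pasid_lock);
                  return true;
          }

          /* each fault consumes one credit until none are left */
          allowed = vm->fault_credit > 0;
          if (allowed)
                  vm->fault_credit--;
          spin_unlock(&adev->vm_manager.pasid_lock);
          return allowed;
  }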
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
First feature pull for 4.15. Highlights:
- Per VM BO support
- Lots of powerplay cleanups
- Powerplay support for CI
- pasid mgr for kfd
- interrupt infrastructure for recoverable page faults
- SR-IOV fixes
- initial GPU reset for vega10
- prime mmap support
- ttm page table debugging improvements
- lots of bug fixes
* 'drm-next-4.15' of git://people.freedesktop.org/~agd5f/linux: (232 commits)
drm/amdgpu: clarify license in amdgpu_trace_points.c
drm/amdgpu: Add gem_prime_mmap support
drm/amd/powerplay: delete dead code in smumgr
drm/amd/powerplay: delete SMUM_FIELD_MASK
drm/amd/powerplay: delete SMUM_WAIT_INDIRECT_FIELD
drm/amd/powerplay: delete SMUM_READ_FIELD
drm/amd/powerplay: delete SMUM_SET_FIELD
drm/amd/powerplay: delete SMUM_READ_VFPF_INDIRECT_FIELD
drm/amd/powerplay: delete SMUM_WRITE_VFPF_INDIRECT_FIELD
drm/amd/powerplay: delete SMUM_WRITE_FIELD
drm/amd/powerplay: delete SMU_WRITE_INDIRECT_FIELD
drm/amd/powerplay: move macros to hwmgr.h
drm/amd/powerplay: move PHM_WAIT_VFPF_INDIRECT_FIELD to hwmgr.h
drm/amd/powerplay: move SMUM_WAIT_VFPF_INDIRECT_FIELD_UNEQUAL to hwmgr.h
drm/amd/powerplay: move SMUM_WAIT_INDIRECT_FIELD_UNEQUAL to hwmgr.h
drm/amd/powerplay: add new helper functions in hwmgr.h
drm/amd/powerplay: use SMU_IND_INDEX/DATA_11 pair
drm/amd/powerplay: refine powerplay code.
drm/amd/powerplay: delete dead code in hwmgr.h
drm/amd/powerplay: refine interface in struct pp_smumgr_func
...
IH tracks pending retry faults in a hash table for fast lookup in
interrupt context. Each VM has a short FIFO of pending VM faults for
processing in a bottom half.
The IH prescreening stage adds retry faults and filters out repeated
retry interrupts to minimize the impact of interrupt storms.
It's the VM's responsibility to remove pending faults once they are
handled. For now this is only done when the VM is destroyed.
v2:
- Made the hash table smaller and the FIFO longer. I never want the
FIFO to fill up, because that would make prescreen take longer.
128 pending page faults should be enough to keep migrations busy.
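A hedged sketch of the per-VM FIFO with the v2 sizing (the IH-side hash
table is omitted; names are illustrative):

  #include <linux/kfifo.h>

  /* v2: long enough that prescreen never stalls on a full FIFO */
  #define EXAMPLE_VM_FAULT_FIFO_LEN 128

  struct example_vm_faults {
          DECLARE_KFIFO(faults, u64, EXAMPLE_VM_FAULT_FIFO_LEN);
  };

  /* interrupt context: push a fault key only if prescreen hasn't
   * already seen it, e.g. kfifo_put(&vmf->faults, key); the VM
   * consumes and removes entries once the fault is handled. */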
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Acked-by: Christian König <christian.koenig@amd.com> (v1)
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Allows assigning a PASID to a VM for identifying VMs involved in page
faults. The global PASID manager is also exported in the KFD
interface so that AMDGPU and KFD can share the PASID space.
PASIDs of different sizes can be requested. On APUs, the PASID size
is determined by the capabilities of the IOMMU, so KFD must be able
to allocate PASIDs in a smaller range.
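A hedged sketch of size-limited PASID allocation using an IDA (the helper
name is illustrative):

  static DEFINE_IDA(example_pasid_ida);

  /* 'bits' caps the PASID width, so APUs can respect IOMMU limits */
  static int example_pasid_alloc(unsigned int bits)
  {
          int pasid = -EINVAL;

          for (; bits >= 8; bits--) {
                  /* PASID 0 stays reserved */
                  pasid = ida_simple_get(&example_pasid_ida, 1,
                                         1U << bits, GFP_KERNEL);
                  if (pasid != -ENOSPC)
                          break;
          }
          return pasid;
  }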
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Oded Gabbay <oded.gabbay@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
There is no guarantee that the last BO_VA actually needed an update.
In addition to that, all command submissions must wait for moved BOs
to be cleared, not just the first one.
v2: Don't overwrite any newer fence.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Use the VM instead of the BO list to find the BO for a virtual address.
This fixes UVD/VCE in physical mode with VM local BOs.
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Leo Liu <leo.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Per VM BOs are handled like VM PDs and PTs. They are always valid and don't
need to be specified in the BO lists.
v2: validate PDs/PTs first
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
We changed this to use an extra list a while back, but for the next
series I need a separate flag again.
v2: reorder to avoid unlocked list access
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Instead of using the vm_state, use a separate flag to note
that the BO was moved.
v2: reorder patches to avoid temporary lockless access
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Stop checking the mapped BO itself, because that one is certainly
not a page table.
In addition, move the code into amdgpu_vm.c.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This should save us a bunch of command submission overhead.
v2: move the LRU move to the right place to avoid the move for the root BO
and handle the shadow BOs as well. This turned out to be a bug fix because
the move needs to happen before the kmap.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
We achieved that by setting the S (SYSTEM) and P (PDE as PTE) bits to 1
for PDEs, and setting the S bit to 1 for PTEs, when the corresponding
addresses are not occupied by GPU driver allocated buffers.
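Illustrative values (bit positions follow amdgpu's GFX9 PTE layout; the
EX_ names are made up for this sketch):

  #define EX_PTE_SYSTEM  (1ULL << 1)   /* S: address is in system memory */
  #define EX_PDE_PTE     (1ULL << 54)  /* P: treat this PDE as a PTE */

  /* entries covering unoccupied address ranges */
  static const uint64_t ex_dummy_pde = EX_PTE_SYSTEM | EX_PDE_PTE;
  static const uint64_t ex_dummy_pte = EX_PTE_SYSTEM;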
Signed-off-by: Yong Zhao <Yong.Zhao@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>