Splitting it into KB/MB is just confusing.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Not sure why somebody thought that this is a long.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Just use the system queue now that we don't block any more.
v2: handle DAL as well.
v3: agd: split DAL changes out
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Mykola Lysenko <mykola.lysenko@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com> (v1)
Just register a callback and reschedule the work item if necessary.
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
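A minimal sketch of the mechanism described above, written against the current dma_fence naming rather than the driver's actual code; the context struct and work item are hypothetical:

  #include <linux/dma-fence.h>
  #include <linux/workqueue.h>

  struct example_ctx {
          struct dma_fence_cb cb;
          struct work_struct work;
  };

  /* reschedule the work item once the fence signals instead of blocking */
  static void example_fence_cb(struct dma_fence *f, struct dma_fence_cb *cb)
  {
          struct example_ctx *ctx = container_of(cb, struct example_ctx, cb);

          schedule_work(&ctx->work);
  }

  static void example_defer(struct example_ctx *ctx, struct dma_fence *fence)
  {
          /* a nonzero return (-ENOENT) means the fence already signaled */
          if (dma_fence_add_callback(fence, &ctx->cb, example_fence_cb))
                  schedule_work(&ctx->work);
  }
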
wait_event() never returns before the fence is signaled.
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
The release notifier can get called a second time from
mmu_notifier_unregister depending on a race between
__mmu_notifier_release and amdgpu_mn_destroy. Use
mmu_notifier_unregister_no_release to avoid this.
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
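A hedged sketch of the teardown path using the _no_release variant named above; the struct layout (an embedded struct mmu_notifier plus a struct mm_struct pointer) is assumed to mirror the driver's:

  /*
   * ->release() may already have run via __mmu_notifier_release(), so use
   * the _no_release variant to avoid invoking it a second time.
   */
  static void example_mn_destroy(struct example_mn *emn)
  {
          mmu_notifier_unregister_no_release(&emn->mn, emn->mm);
  }
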
Rafael Antognolli's new DRM_DP_AUX_CHARDEV feature causes a WARN_ON
if drm_dp_aux->dev == drm_connector->kdev and drm_dp_aux_unregister()
is called after drm_connector_unregister(). radeon is the only driver
affected by this besides i915. (amdgpu calls drm_dp_aux_unregister()
before drm_connector_unregister().)
Cc: Rafael Antognolli <rafael.antognolli@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
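Illustrative teardown ordering only, not the actual radeon hunk; the connector wrapper and its aux member are placeholders:

  static void example_connector_destroy(struct drm_connector *connector)
  {
          struct example_dp_connector *conn = to_example_dp_connector(connector);

          drm_dp_aux_unregister(&conn->aux);      /* before the connector goes away */
          drm_connector_unregister(connector);
          drm_connector_cleanup(connector);
          kfree(conn);
  }
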
Split the sw and hw parts into separate functions.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Read back harvest configuration from registers and simplify
calculations. No need to program the raster config registers.
These are programmed as golden registers and the user mode
drivers program them as well.
v2: rebase on Tom's patches
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Don't use pointer arithmetic and fix the indentation.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
This allows us to remove the kernel context and use a better
priority for the submissions.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
This allows us to remove the kernel context and use a better
priority for the submissions.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
This allows us to remove the global kernel context.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Distribute the load on both rings.
v2: use a loop for the initialization
v3: agd: rebase on upstream
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
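A hypothetical sketch of the distribution scheme; the counter-based round-robin pick is an assumption, not the driver's exact code:

  /* instances are set up in a loop, work is handed out round-robin */
  static unsigned int example_pick_ring(atomic_t *counter, unsigned int num_rings)
  {
          return atomic_inc_return(counter) % num_rings;
  }
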
Updates from different VMs can be processed independently.
v2: agd: rebase on upstream
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Avoid a lock inversion problem by just using the mmap_sem to
protect the entries of the interval tree.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
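A minimal sketch of the locking scheme, using the pre-rename mmap_sem and the plain rb_root interval tree API from the era of this patch; the tree root and node are placeholders:

  #include <linux/interval_tree.h>
  #include <linux/mm_types.h>

  static void example_insert_range(struct mm_struct *mm,
                                   struct interval_tree_node *node,
                                   struct rb_root *objects)
  {
          /* mmap_sem doubles as the lock for the interval tree entries */
          down_write(&mm->mmap_sem);
          interval_tree_insert(node, objects);
          up_write(&mm->mmap_sem);
  }
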
Reduce the for loop with a bitmask to a simple complement and mask.
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
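The simplification in isolation, with made-up values; the old per-bit loop collapses to a complement plus mask:

  /* enabled units = NOT(disable bits), limited to the number of units */
  static unsigned int enabled_mask(unsigned int disabled_bits,
                                   unsigned int num_units)
  {
          return ~disabled_bits & ((1u << num_units) - 1);
  }
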
Replaces switch statements with direct assignments to
reduce line count significantly.
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
In case CONFIG_DRM_AMD_POWERPLAY is defined and amdgpu.powerplay=0,
some functions in powerplay can still be called by DAL with *adev as
the input parameter. Just checking that the pointer is not NULL is not
enough and leads to a NULL pointer error.
Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
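A hedged sketch of the kind of guard this implies, with field names assumed from that era's struct amdgpu_device; the point is that the runtime enable flag and the powerplay handle must be checked, not just adev itself:

  static bool example_pp_usable(struct amdgpu_device *adev)
  {
          return adev && adev->pp_enabled && adev->powerplay.pp_handle;
  }
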
Signed-off-by: Eric Yang <eric.yang2@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Use the minimum required system clock calculated by DAL.
Signed-off-by: Harry Wentland <harry.wentland@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: David Rokhvarg <David.Rokhvarg@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Rename amd_pp_dal_clock_info to amd_pp_simple_clock_info.
Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Harry Wentland <harry.wentland@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Prior to the actual MMIO flip we need to acquire the DAL mutex to
guard our target state, which gets modified during a mode reset.
Assign the page flip status before the actual flip to handle the
possible race condition with the interrupt.
Signed-off-by: Vitaly Prosyak <vitaly.prosyak@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
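Illustrative ordering only, with placeholder names rather than the DAL code:

  static void example_page_flip(struct example_crtc *crtc, u64 base)
  {
          mutex_lock(&crtc->dal_mutex);                 /* guard the target state */
          crtc->pflip_status = EXAMPLE_PFLIP_SUBMITTED; /* before the MMIO write */
          example_program_flip_mmio(crtc, base);        /* hypothetical helper */
          mutex_unlock(&crtc->dal_mutex);
  }
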
Signed-off-by: Harry Wentland <harry.wentland@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
No need to keep that for every IB.
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
We need the IB test for GPU resets as well and
the scheduler should be stopped then.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
We need the IB test for GPU resets as well and
the scheduler should be stopped then.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
That's probably a better matching name.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add a job_alloc_with_ib helper and proper job submission.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
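A hedged sketch of how such a helper is meant to be used; the wrapper names and fields are placeholders and the real amdgpu signatures may differ:

  static int example_submit(struct amdgpu_device *adev)
  {
          struct amdgpu_job *job;
          int r;

          /* allocate a job that already carries one IB of the requested size */
          r = example_job_alloc_with_ib(adev, 256, &job);
          if (r)
                  return r;

          /* fill job->ibs[0] with commands, then hand the job to the scheduler */
          return example_job_submit(job);
  }
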
There is no point in sending them through the scheduler.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
We can't submit to multiple rings at the same time anyway.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The padding depends on the firmware version and we need it for BO moves as
well, not only for VM updates.
v2: new approach of making pad_ib a ring function
v3: fix typo in macro name
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
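Rough shape of the v2 approach, with simplified placeholder names; making the padding a per-ring callback lets BO moves and VM updates share the same firmware-aware NOP padding:

  struct example_ring;
  struct example_ib;

  struct example_ring_funcs {
          /* pad the IB to whatever alignment this engine/firmware needs */
          void (*pad_ib)(struct example_ring *ring, struct example_ib *ib);
  };

  struct example_ring {
          const struct example_ring_funcs *funcs;
  };

  #define example_ring_pad_ib(r, ib) ((r)->funcs->pad_ib((r), (ib)))
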