Further testing showed that the chash approach doesn't work as expected.
In particular, we can't predict when entries can be removed from the
hash again.
So replace the chash with a ring buffer/hash mix where entries in the container
age automatically based on their timestamp.
v2: use ring buffer / hash mix
v3: check the timeout to make sure all entries age
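
A rough standalone sketch of the container (filter(), RING_SIZE and
TIMEOUT are names invented here for illustration; the driver's actual
structures differ in detail): each ring slot is also chained into a
small hash table, lookups stop following a chain once timestamps fall
outside the timeout window, and insertion simply overwrites the oldest
ring slot, so entries age out without any explicit removal.

  #include <stdbool.h>
  #include <stdint.h>

  #define RING_SIZE 256              /* power of two */
  #define HASH_SIZE 64               /* power of two */
  #define TIMEOUT   5000             /* age after which an entry is stale */

  struct entry {
          uint64_t key;
          uint64_t timestamp;
          uint16_t next;             /* next ring index in this hash chain */
  };

  static struct entry ring[RING_SIZE];
  static uint16_t hash_head[HASH_SIZE];
  static uint16_t last;              /* oldest ring slot, overwritten next */

  static unsigned int hash_key(uint64_t key)
  {
          return (unsigned int)((key * 0x9E3779B97F4A7C15ULL) >> 58) &
                 (HASH_SIZE - 1);
  }

  /* Returns true if key was already seen within TIMEOUT, otherwise
   * records it and returns false. */
  static bool filter(uint64_t key, uint64_t timestamp)
  {
          uint64_t cutoff = timestamp > TIMEOUT ? timestamp - TIMEOUT : 0;
          unsigned int h = hash_key(key);
          struct entry *e = &ring[hash_head[h]];

          /* Only walk entries young enough to still be valid. */
          while (e->timestamp >= cutoff && e->timestamp <= timestamp) {
                  uint64_t prev = e->timestamp;

                  if (e->key == key)
                          return true;

                  e = &ring[e->next];
                  /* Timestamps must decrease along a chain; if they
                   * don't, the slot was recycled and the chain ends. */
                  if (e->timestamp >= prev)
                          break;
          }

          /* Overwrite the oldest slot and make it the chain head. */
          e = &ring[last];
          e->key = key;
          e->timestamp = timestamp;
          e->next = hash_head[h];
          hash_head[h] = last;
          last = (last + 1) & (RING_SIZE - 1);
          return false;
  }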
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> (v2)
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
All the gmc_*_set_pde_pte functions are the same across different ASICs,
so we can eliminate the set_pde_pte function pointer and instead use a
generic function.
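
As a rough illustration, the shared helper boils down to packing the
page address and flag bits into one 64-bit value and storing it at the
given slot (standalone C with invented names; the driver's real
function takes its device and page-table structs and writes through an
ioremapped mapping):

  #include <stdint.h>

  /* Write one PDE/PTE: bits 47:12 carry the 4KB-aligned address,
   * the remaining bits carry hardware flags. */
  static void set_pte_pde(uint64_t *cpu_pt_addr, uint32_t gpu_page_idx,
                          uint64_t addr, uint64_t flags)
  {
          uint64_t value = (addr & 0x0000FFFFFFFFF000ULL) | flags;

          cpu_pt_addr[gpu_page_idx] = value;
  }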
Signed-off-by: Yong Zhao <Yong.Zhao@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
When vram_start is 0, the gart range will be from 0x0000FFFF00000000
to 0x0000FFFF1FFFFFFF, which causes the engine to hang.
So to avoid the hole, limit the max mc address to AMDGPU_GMC_HOLE_START.
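
A standalone sketch of the clamping (helper names are invented, and
the AMDGPU_GMC_HOLE_START value is assumed to match the driver's
constant for the start of the 48-bit sign-extension hole):

  #include <stdint.h>

  #define AMDGPU_GMC_HOLE_START 0x0000800000000000ULL

  static uint64_t min_u64(uint64_t a, uint64_t b)
  {
          return a < b ? a : b;
  }

  /* Place the GART at the top of the usable MC space. Clamping the
   * max MC address below the hole keeps ranges like
   * 0x0000FFFF00000000-0x0000FFFF1FFFFFFF from ever being used. */
  static uint64_t place_gart_high(uint64_t mc_mask, uint64_t gart_size)
  {
          uint64_t four_gb = 0x100000000ULL;
          uint64_t max_mc_address = min_u64(mc_mask,
                                            AMDGPU_GMC_HOLE_START - 1);

          /* Align the base down to a 4GB boundary. */
          return (max_mc_address - gart_size + 1) & ~(four_gb - 1);
  }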
Signed-off-by: Emily Deng <Emily.Deng@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
On hives with xgmi enabled, the fb_location aperture defines the
total framebuffer size of all nodes in the hive. Each GPU in the
hive has the same view via the fb_location aperture: GPU0 starts at
offset (0 * segment size), GPU1 at offset (1 * segment size), etc.
For access to local vram on each GPU, we need to take this offset
into account. This includes setting up the GPUVM page table and the
GART table (see the sketch below).
v2: squash in "drm/amdgpu: Init correct fb region for none XGMI configuration"
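
A minimal sketch of the offset computation (standalone C; the field
names are modeled on the idea and are not guaranteed to match the
driver's structs exactly):

  #include <stdint.h>

  struct xgmi_info {
          uint32_t physical_node_id;   /* this GPU's index in the hive */
          uint64_t node_segment_size;  /* per-node framebuffer segment */
  };

  /* Base of this GPU's local vram inside the shared fb_location
   * aperture: node 0 at offset 0, node 1 one segment up, etc. */
  static uint64_t local_vram_base(uint64_t fb_location_start,
                                  const struct xgmi_info *xgmi)
  {
          return fb_location_start +
                 (uint64_t)xgmi->physical_node_id *
                 xgmi->node_segment_size;
  }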
Acked-by: Huang Rui <ray.huang@amd.com>
Acked-by: Slava Abramov <slava.abramov@amd.com>
Signed-off-by: Shaoyun Liu <Shaoyun.Liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>