This is really just an alias of mmap_gtt. The 'mmap offset' nomenclature
comes from the value returned by this ioctl, which is the offset into the
device fd that userspace uses with mmap(2).
mmap_gtt was our initial mmap_offset implementation; this extends
our CPU mmap support to allow additional fault handlers that depend on
the object's backing pages.
Note that we multiplex mmap_gtt and mmap_offset through the same ioctl,
and use the zero-extending behaviour of drm to differentiate between
them when we inspect the flags.
To support multiple mmap types on an object we need to support multiple
mmap_offsets for an object (each offset in the global device address
space corresponding to a unique instance of the object for a file + mmap
type). As we drop the simplified drm core idea of a single mmap_offset,
we need to provide replacement hooks for the dumb mmap interface as
well.
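For reference, the extended ioctl argument is shaped roughly as below;
treat this as a sketch of the idea rather than the authoritative uAPI,
which lives in include/uapi/drm/i915_drm.h:

    struct drm_i915_gem_mmap_offset {
            __u32 handle;     /* object to map */
            __u32 pad;
            __u64 offset;     /* out: fake offset to pass to mmap(2) */
            __u64 flags;      /* in: requested mmap type */
    #define I915_MMAP_OFFSET_GTT 0 /* zero: what legacy callers get */
    #define I915_MMAP_OFFSET_WC  1
    #define I915_MMAP_OFFSET_WB  2
    #define I915_MMAP_OFFSET_UC  3
            __u64 extensions;
    };

Because drm zero-extends the shorter legacy mmap_gtt struct, old
callers see flags == 0 == I915_MMAP_OFFSET_GTT and keep their
existing behaviour.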
Link: https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1675
Testcase: igt/gem_mmap_offset
Signed-off-by: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191204120032.3682839-1-chris@chris-wilson.co.uk
Backmerge to get dfce90259d ("Backmerge i915 security patches from
commit 'ea0b163b13ff' into drm-next") and thus 100d46bd72 ("Merge
Intel Gen8/Gen9 graphics fixes from Jon Bloomfield.").
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
This backmerges the branch that ended up in Linus' tree. It removes
all the changes for the rc6 patches from Linus' tree in favour of
a patch that is based on a large refactor that occurred.
Otherwise it all looks good.
Signed-off-by: Dave Airlie <airlied@redhat.com>
In some circumstances the RC6 context can get corrupted. We can detect
this and take the required action, that is, disable RC6 and runtime PM.
The HW recovers from the corrupted state after a system suspend/resume
cycle, so detect the recovery and re-enable RC6 and runtime PM.
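As a rough outline of the flow (a sketch only; per v6 below the helpers
use the intel_rc6_ prefix, but the bodies and members here are
hypothetical):

    /* Sketch: detect corruption once, then park RC6 and runtime PM. */
    static void intel_rc6_ctx_wa_check(struct intel_rc6 *rc6)
    {
            if (rc6->ctx_corrupted || !rc6_ctx_corrupted(rc6))
                    return;
            rc6->ctx_corrupted = true;
            /* Hold a wakeref so runtime PM cannot be entered. */
            rc6->wakeref = intel_runtime_pm_get(rc6->rpm);
            __rc6_disable(rc6);
    }

with the inverse (releasing the wakeref and re-enabling RC6) run after
a suspend/resume cycle once the context is seen to be intact again.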
v2: rebase (Mika)
v3:
- Move intel_suspend_gt_powersave() to the end of the GEM suspend
sequence.
- Add commit message.
v4:
- Rebased on intel_uncore_forcewake_put(i915->uncore, ...) API
change.
v5:
- Rebased on latest upstream gt_pm refactoring.
v6:
- s/i915_rc6_/intel_rc6_/
- Don't return a value from i915_rc6_ctx_wa_check().
v7:
- Rebased on latest gt rc6 refactoring.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
[airlied: pull this later version of this patch into drm-next
to make resolving the conflict mess easier.]
Signed-off-by: Dave Airlie <airlied@redhat.com>
In some circumstances the RC6 context can get corrupted. We can detect
this and take the required action, that is, disable RC6 and runtime PM.
The HW recovers from the corrupted state after a system suspend/resume
cycle, so detect the recovery and re-enable RC6 and runtime PM.
v2: rebase (Mika)
v3:
- Move intel_suspend_gt_powersave() to the end of the GEM suspend
sequence.
- Add commit message.
v4:
- Rebased on intel_uncore_forcewake_put(i915->uncore, ...) API
change.
v5: rebased on gem/gt split (Mika)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
To keep things manageable, the pre-gen9 cmdparser does not
attempt to track any form of nested BB_START's. This did not
prevent usermode from using nested starts, or even chained
batches, because the cmdparser is not strictly enforced pre-gen9.
Instead, the existence of a nested BB_START would cause the batch
to be emitted in insecure mode, and any privileged capabilities
would not be available.
For Gen9, the cmdparser becomes mandatory (for BCS at least), and
so not providing any form of nested BB_START support becomes
overly restrictive. Any such batch will simply not run.
We make heavy use of backward jumps in igt, and it is much easier
to add support for this restricted subset of nested jumps than to
rewrite the whole of our test suite to avoid them.
Add the required logic to support limited backward jumps, to
instructions that have already been validated by the parser.
Note that it's not sufficient to simply approve any BB_START
that jumps backwards in the buffer because this would allow an
attacker to embed a rogue instruction sequence within the
operand words of a harmless instruction (say LRI) and jump to
that.
We introduce a bit array to track every instr offset successfully
validated, and test the target of BB_START against this. If the
target offset hits, it is re-written to the same offset in the
shadow buffer and the BB_START cmd is allowed.
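In outline (an illustrative sketch using the kernel's generic bitmap
helpers; the variable names are hypothetical):

    unsigned long *jump_whitelist;  /* one bit per dword of the batch */

    /* After the command at dword 'offset' passes validation: */
    __set_bit(offset, jump_whitelist);

    /* On hitting a backward MI_BATCH_BUFFER_START: */
    if (target >= batch_dwords || !test_bit(target, jump_whitelist))
            return -EINVAL; /* jump into unvalidated bytes: reject */
    /* else rewrite the jump to the same offset in the shadow batch */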
Note: This patch deliberately ignores checkpatch issues in the
cmdtables, in order to match the style of the surrounding code.
We'll correct the entire file in one go in a later patch.
v2: set dispatch secure late (Mika)
v3: rebase (Mika)
v4: Clear whitelist on each parse
Minor review updates (Chris)
v5: Correct backward jump batching
v6: fix compilation error due to struct eb shuffle (Mika)
Cc: Tony Luck <tony.luck@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Jon Bloomfield <jon.bloomfield@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Chris Wilson <chris.p.wilson@intel.com>
For Gen7, the original cmdparser motive was to permit limited
use of register read/write instructions in unprivileged BB's.
This worked by copying the user-supplied bb to a kmd-owned
bb, and running it in secure mode, from the ggtt, only if
the scanner finds no unsafe commands or registers.
For Gen8+ we can't use this same technique because running bb's
from the ggtt also disables access to ppgtt space. But we also
do not actually require 'secure' execution since we are only
trying to reduce the available command/register set. Instead we
will copy the user buffer to a kmd-owned read-only bb in ppgtt,
and run it in the usual non-secure mode.
Note that ro pages are only supported by ppgtt (not ggtt), but
luckily that's exactly what we need.
Add the required paths to map the shadow buffer to ppgtt ro for Gen8+.
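Schematically, the Gen8+ path becomes (a simplified sketch; the helper
names here are illustrative, I915_DISPATCH_SECURE aside):

    /* kmd-owned shadow copy of the user batch, bound in ppgtt */
    shadow = shadow_batch_pin(user_batch);
    if (INTEL_GEN(i915) >= 8) {
            make_shadow_readonly(shadow); /* RO PTEs: ppgtt only */
            dispatch_flags &= ~I915_DISPATCH_SECURE; /* non-secure run */
    } else {
            dispatch_flags |= I915_DISPATCH_SECURE; /* Gen7 ggtt path */
    }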
v2: IS_GEN7/IS_GEN (Mika)
v3: rebase
v4: rebase
v5: rebase
Signed-off-by: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Chris Wilson <chris.p.wilson@intel.com>
Retroactively stop reporting support for secure batches
through the API for gen6+ so that older binaries trigger
the fallback path instead.
Older binaries use secure batches pre-gen6 to access resources
that are not available to normal usermode processes. However,
all known userspace explicitly checks for HAS_SECURE_BATCHES
before relying on the secure batch feature.
Since there are no known binaries relying on this for newer gens,
we can kill secure batches from gen6 onwards, via
I915_PARAM_HAS_SECURE_BATCHES.
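The getparam side of the change then amounts to something like this
(a sketch; HAS_SECURE_BATCHES() is the predicate implied above,
keyed off the gen):

    case I915_PARAM_HAS_SECURE_BATCHES:
            /* Secure batches: admin-only, and now only pre-gen6. */
            value = HAS_SECURE_BATCHES(i915) && capable(CAP_SYS_ADMIN);
            break;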
v2: rebase (Mika)
v3: rebase (Mika)
Signed-off-by: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Chris Wilson <chris.p.wilson@intel.com>
Vanshidhar Konda asked for the simplest test "to verify that the kernel
can submit and hardware can execute batch buffers on all the command
streamers in parallel." We have a number of tests in userspace that
submit load to each engine and verify that it is present, but strictly
we have no selftest to prove that the kernel can _simultaneously_
execute on all known engines. (We have tests to demonstrate that we can
submit to HW in parallel, but we don't insist that they execute in
parallel.)
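The resulting selftest is shaped roughly like this (a sketch using the
igt_spinner helpers mentioned in v2 below; setup and error handling
elided):

    /* Start a spinner on every engine before checking any of them... */
    for_each_engine(engine, gt, id) {
            rq[id] = igt_spinner_create_request(&spin[id], ce[id],
                                                MI_NOOP);
            i915_request_add(rq[id]);
    }
    /* ...then insist each one is executing on HW at the same time. */
    for_each_engine(engine, gt, id)
            if (!igt_wait_for_spinner(&spin[id], rq[id]))
                    err = -EIO; /* engine never reached its spinner */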
v2: Improve the igt_spinner support for older gen.
Suggested-by: Vanshidhar Konda <vanshidhar.r.konda@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Vanshidhar Konda <vanshidhar.r.konda@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Vanshidhar Konda <vanshidhar.r.konda@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191101101528.10553-1-chris@chris-wilson.co.uk
So far we've sort of protected the global state under dev_priv with
the connection_mutex. I want to change that so that we can change the
cdclk even for pure plane updates. To that end let's formalize the
protection of the global state to follow what I started with the cdclk
code already (though not entirely properly), such that any crtc mutex
will suffice as a read lock, and all crtc mutexes together act as the
write lock.
We'll also pimp intel_atomic_state_clear() to clear the entire global
state, so that we don't accidentally leak stale information between
the locking retries.
As a slight optimization we'll only lock the crtc mutexes to protect
the global state, however if and when we actually have to poke the
hw (eg. if the actual cdclk changes) we must serialize commits
across all crtcs so that a parallel nonblocking commit can't get
ahead of the cdclk reprogramming. We do that by adding all crtcs to
the state.
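In pseudo-code, the rule this formalizes is roughly (an illustrative
sketch; only drm_modeset_lock() and for_each_intel_crtc() are existing
names here):

    /* Read side: holding any one crtc lock pins the global state. */
    ret = drm_modeset_lock(&crtc->base.mutex, ctx);

    /* Write side: every crtc lock must be held to change it. */
    for_each_intel_crtc(&i915->drm, crtc) {
            ret = drm_modeset_lock(&crtc->base.mutex, ctx);
            if (ret)
                    return ret;
    }
    /* If the hw will be touched (e.g. an actual cdclk change), also
     * add all crtcs to the state so nonblocking commits serialize. */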
TODO: the old global state examined during commit may still
be a problem since it always looks at the _latest_ swapped state
in dev_priv. Need to add proper old/new state for that too I think.
v2: Remember to serialize the commits if necessary
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191015193035.25982-3-ville.syrjala@linux.intel.com
Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
The medium term goal is to eliminate the i915->engine[] array, and to
get there we have recently introduced an equivalent array in intel_gt.
Now we need to migrate the code further towards this state.
The next step is to eliminate usage of i915->engine[] from the
for_each_engine_masked iterator.
For this to work we also need to use engine->id as the index when
populating the gt->engine[] array, and adjust the default engine set
indexing to use engine->legacy_idx instead of assuming gt->engine[]
indexing.
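After the change the iterator walks gt->engine[] directly; roughly
(a sketch modelled on the mask-walking form of the macro, not the
verbatim final code):

    #define for_each_engine_masked(engine__, gt__, mask__, tmp__) \
        for ((tmp__) = (mask__) & INTEL_INFO((gt__)->i915)->engine_mask; \
             (tmp__) ? \
             ((engine__) = (gt__)->engine[__mask_next_bit(tmp__)]), 1 : \
             0;)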
v2:
* Populate gt->engine[] earlier.
* Check that we don't duplicate engine->legacy_idx
v3:
* Work around the initialization order issue between default_engines()
and intel_engines_driver_register() which sets engine->legacy_idx for
now. It will be fixed properly later.
v4:
* Merge with forgotten v2.5.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191017161852.8836-1-tvrtko.ursulin@linux.intel.com
DC3CO is a useful power state for when DMC detects a PSR2 idle frame
during active video playback; playing a 30fps video on a 60Hz panel
is the classic example of this use case.
B.Specs:49196 has a restriction to enable DC3CO only for Video Playback.
It is worthwhile to enable DC3CO after completion of each pageflip
and switch back to DC5 when the display is idle, because the driver
doesn't differentiate between video playback and a normal pageflip.
We will use the frontbuffer flush call tgl_dc3co_flush() to enable the
DC3CO state only for the ORIGIN_FLIP flush call, because the DC3CO
state is primarily targeted at the video playback use case. We are not
interested in frontbuffer invalidate calls here because those trigger
a PSR2 exit, which will explicitly disable DC3CO.
DC5 and DC6 save more power, but can't be entered during video
playback because there are not enough idle frames in a row to meet
most PSR2 panels' deep sleep entry requirement, typically 4 frames.
As the existing PSR2 implementation uses a minimum of 6 idle frames
for deep sleep, it is safer to enable DC5/6 after 6 idle frames
(by scheduling a delayed work of 6 idle frames' duration once DC3CO
has been enabled after a pageflip).
After manually waiting for 6 idle frames, DC5/6 will be enabled and
the PSR2 deep sleep idle frame count will be restored to 6; at that
point DMC will trigger DC5/6 once PSR2 enters deep sleep after
6 idle frames.
In the future, when we enable S/W PSR2 tracking, we can change the
required PSR2 deep sleep idle frame count to 1 so that DMC can trigger
DC5/6 immediately after the S/W manual wait of 6 idle frames completes.
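In outline, the flush hook does something like this (a sketch;
tgl_dc3co_flush() and dc3co_exit_delay come from this patch, the rest
is simplified and partly hypothetical):

    static void tgl_dc3co_flush(struct drm_i915_private *i915,
                                unsigned int frontbuffer_bits,
                                enum fb_op_origin origin)
    {
            if (origin != ORIGIN_FLIP) /* invalidates exit PSR2 anyway */
                    return;
            if (!(frontbuffer_bits &
                  INTEL_FRONTBUFFER_ALL_MASK(i915->psr.pipe)))
                    return;
            tgl_psr2_enable_dc3co(i915);
            /* Re-arm the fallback to DC5/6 after ~6 idle frames. */
            mod_delayed_work(system_wq, &i915->psr.dc3co_work,
                             i915->psr.dc3co_exit_delay);
    }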
v2: Calculated the s/w state to switch over to dc3co when there is an
update. [Imre]
Used cancel_delayed_work_sync() in order to avoid any race
with an already scheduled delayed work. [Imre]
v3: cancel_delayed_work_sync() may block the commit work, hence
dropping it; dc5_idle_thread() checks for a valid wakeref before
putting the reference count, which avoids any chance of dropping
a zero wakeref. [Imre (IRC)]
v4: Used frontbuffer flush mechanism. [Imre]
v5: Used psr.pipe to extract frontbuffer busy bits. [Imre]
Used cancel_delayed_work_sync() in encoder disable path. [Imre]
Used mod_delayed_work() instead of cancelling and scheduling a
delayed work. [Imre]
Used psr.lock in tgl_dc5_idle_thread() to enable psr2 deep
sleep. [Imre]
Removed DC5_REQ_IDLE_FRAMES macro. [Imre]
v6: Used dc3co_exitline check instead of TGL and dc3co allowed_dc_mask
checks, used delayed_work_pending with the psr lock and removed the
psr2_deep_slp_disabled flag. [Imre]
v7: Code refactoring, moved most of the functional code to intel_psr.c. [Imre]
Using frontbuffer_bits on psr.pipe check instead of
busy_frontbuffer_bits. [Imre]
Calculating dc3co_exit_delay in intel_psr_enable_locked. [Imre]
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Animesh Manna <animesh.manna@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Anshuman Gupta <anshuman.gupta@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191003081738.22101-6-anshuman.gupta@intel.com
Add target_dc_state, used by the set_target_dc_state API, in order
to enable the DC3CO state alongside the existing DC states.
target_dc_state will enable/disable the desired DC state in the
DC_STATE_EN reg when the "DC Off" power well gets disabled/enabled.
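Per the v9 note below, the "DC off" power well disable callback ends
up as a switch over the new field; roughly (a sketch, with
tgl_enable_dc3co() as an assumed name for the DC3CO enable helper):

    switch (i915->csr.target_dc_state) {
    case DC_STATE_EN_DC3CO:
            tgl_enable_dc3co(i915); /* assumed helper name */
            break;
    case DC_STATE_EN_UPTO_DC6:
            skl_enable_dc6(i915);
            break;
    case DC_STATE_EN_UPTO_DC5:
            gen9_enable_dc5(i915);
            break;
    }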
v2: commit log improvement.
v3: Used intel_wait_for_register to wait for DC3CO exit. [Imre]
Used gen9_set_dc_state() to allow/disallow DC3CO. [Imre]
Moved transcoder psr2 exit line enablement from tgl_allow_dc3co()
to an appropriate place, haswell_crtc_enable(). [Imre]
Changed the DC3CO power well enabled call back logic as
recommended in review comments. [Imre]
v4: Used wait_for_us() instead of intel_wait_for_reg(). [Imre (IRC)]
v5: using udelay() instead of waiting for DC3CO exit status.
v6: Fixed minor unwanted change.
v7: Removed DC3CO powerwell and POWER_DOMAIN_VIDEO.
v8: Uniform checks by using only target_dc_state instead of allowed_dc_mask
in "DC off" power well callback. [Imre]
Adding "DC off" power well id to older platforms. [Imre]
Removed psr2_deep_sleep flag from tgl_set_target_dc_state. [Imre]
v9: Used switch case for target DC state in
gen9_dc_off_power_well_disable(), checking DC3CO state against
allowed DC mask, using WARN_ON() in
tgl_set_target_dc_state(). [Imre]
v10: Code refactoring and using sanitize_target_dc_state(). [Imre]
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Animesh Manna <animesh.manna@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Anshuman Gupta <anshuman.gupta@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191003081738.22101-4-anshuman.gupta@intel.com