SKL+ needs >4K alignment for tiled surfaces, so make
intel_compute_page_offset() handle it.
We first compute the closest tile boundary, as before, and then
figure out how many whole tiles we have to walk to reach the
desired alignment. The difference in the offset is then folded
back into the x/y offsets.
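A rough sketch of that adjustment (simplified; the unit comments
follow the v2 cleanup below):

  static u32 intel_adjust_tile_offset(int *x, int *y,
                                      unsigned int tile_width,  /* in pixels */
                                      unsigned int tile_height, /* in scanlines */
                                      unsigned int tile_size,   /* in bytes */
                                      unsigned int pitch_tiles, /* tiles per row */
                                      u32 old_offset,           /* tile aligned */
                                      u32 new_offset)           /* extra aligned */
  {
          unsigned int tiles = (old_offset - new_offset) / tile_size;

          /* fold the skipped whole tiles back into the x/y offsets */
          *y += tiles / pitch_tiles * tile_height;
          *x += tiles % pitch_tiles * tile_width;

          return new_offset;
  }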
v2: Be less confusing wrt. units (pixels vs. bytes) (Daniel)
v3: Use u32 for offsets
Have intel_adjust_tile_offset() return the new offset (will be
useful later)
Add an offset_aligned variable (Daniel)
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1455569699-27905-5-git-send-email-ville.syrjala@linux.intel.com
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The page aligned surface address calculation needs to know which way
things are rotated. The contract now says that the caller must pass the
rotated x/y coordinates, as well as the tile_height aligned stride in
the tile_height direction. This will make it fairly simple to deal with
90/270 degree rotation on SKL+ where we have to deal with the rotated
view into the GTT.
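Inside the offset computation that contract boils down to a
fragment like this (a sketch; cpp is bytes per pixel):

  if (intel_rotation_90_or_270(rotation)) {
          /* the caller gave us rotated x/y and a stride
           * measured in tile_height units */
          pitch_tiles = pitch / tile_height;
          swap(tile_width, tile_height);
  } else {
          pitch_tiles = pitch / (tile_width * cpp);
  }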
v2: Pass rotation instead of bool, even though we only care about 0/180 vs. 90/270
v3: Introduce intel_tile_dims(), and don't mix up different units so much
v4: Unconfuse bytes vs. pixels even more
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1455569699-27905-4-git-send-email-ville.syrjala@linux.intel.com
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In addition to calculating final watermarks, let's also pre-calculate a
set of intermediate watermark values at atomic check time. These
intermediate watermarks are a combination of the watermarks for the old
state and the new state; they should satisfy the requirements of both
states which means they can be programmed immediately when we commit the
atomic state (without waiting for a vblank). Once the vblank does
happen, we can then re-program watermarks to the more optimal final
value.
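As a sketch of what "satisfy both" can mean for ILK-style
watermarks (simplified; assumes taking the worst case of the two
states is safe):

  /* build intermediate (i) from old (o) and new (n) values */
  i->pipe_enabled = o->pipe_enabled || n->pipe_enabled;
  for (level = 0; level <= max_level; level++) {
          i->wm[level].enable = o->wm[level].enable && n->wm[level].enable;
          i->wm[level].pri_val = max(o->wm[level].pri_val, n->wm[level].pri_val);
          i->wm[level].spr_val = max(o->wm[level].spr_val, n->wm[level].spr_val);
          i->wm[level].cur_val = max(o->wm[level].cur_val, n->wm[level].cur_val);
  }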
v2: Significant rebasing/rewriting.
v3:
- Move 'need_postvbl_update' flag to CRTC state (Daniel)
- Don't forget to check intermediate watermark values for validity
(Maarten)
- Don't do async watermark optimization; just do it at the end of the
atomic transaction, after waiting for vblanks. We do want it to be
async eventually, but adding that now will cause more trouble for
Maarten's in-progress work. (Maarten)
- Don't allocate space in crtc_state for intermediate watermarks on
platforms that don't need it (gen9+).
- Move WaCxSRDisabledForSpriteScaling:ivb into intel_begin_crtc_commit
now that ilk_update_wm is gone.
v4:
- Add a wm_mutex to cover updates to intel_crtc->active and the
need_postvbl_update flag. Since we don't have async yet it isn't
terribly important yet, but might as well add it now.
- Change interface to program watermarks. Platforms will now expose
.initial_watermarks() and .optimize_watermarks() functions to do
watermark programming. These should lock wm_mutex, copy the
appropriate state values into intel_crtc->active, and then call
the internal program watermarks function (see the sketch after
this changelog).
v5:
- Skip intermediate watermark calculation/check during initial hardware
readout since we don't trust the existing HW values (and don't have
valid values of our own yet).
- Don't try to call .optimize_watermarks() on platforms that don't have
atomic watermarks yet. (Maarten)
v6:
- Rebase
v7:
- Further rebase
v8:
- A few minor indentation and line length fixes
v9:
- Yet another rebase since Maarten's patches reworked a bunch of the
code (wm_pre, wm_post, etc.) that this was previously based on.
v10:
- Move wm_mutex to dev_priv to protect against racing commits against
disjoint CRTC sets. (Maarten)
- Drop unnecessary clearing of cstate->wm.need_postvbl_update (Maarten)
v11:
- Now that we've moved to atomic watermark updates, make sure we call
the proper function to program watermarks in
{ironlake,haswell}_crtc_enable(); the failure to do so on the
previous patch iteration led to us not actually programming the
watermarks before turning on the CRTC, which was the cause of the
underruns that the CI system was seeing.
- Fix inverted logic for determining when to optimize watermarks. We
were needlessly optimizing when the intermediate/optimal values were
the same (harmless), but not actually optimizing when they differed
(also harmless, but wasteful from a power/bandwidth perspective).
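Schematically, the commit flow with the v4 interface then looks
like this (a simplified sketch, not the exact diff):

  /* before the pipe/planes are committed: program values that
   * are safe for both the old and the new state */
  if (dev_priv->display.initial_watermarks)
          dev_priv->display.initial_watermarks(pipe_config);

  /* ... commit planes, wait for vblanks ... */

  /* after the vblank: switch to the optimal values */
  if (dev_priv->display.optimize_watermarks &&
      pipe_config->wm.need_postvbl_update)
          dev_priv->display.optimize_watermarks(pipe_config);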
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1456276813-5689-1-git-send-email-matthew.d.roper@intel.com
Currently we perform our own vblank wait in post_plane_update,
but the atomic core performs another one in wait_for_vblanks.
This means that we wait for two vblanks whenever a fb is changed,
which is a bit overkill.
Merge them by creating a helper function that takes a crtc mask
for the planes to wait on.
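A sketch of such a helper (simplified; error handling is trimmed
and the timeout value is illustrative):

  static void intel_atomic_wait_for_vblanks(struct drm_device *dev,
                                            struct drm_i915_private *dev_priv,
                                            unsigned int crtc_mask)
  {
          struct intel_crtc *crtc;
          u32 last_vblank_count[I915_MAX_PIPES];

          /* sample the current vblank counts for all relevant pipes */
          for_each_intel_crtc(dev, crtc) {
                  if (!(crtc_mask & drm_crtc_mask(&crtc->base)))
                          continue;

                  if (WARN_ON(drm_crtc_vblank_get(&crtc->base))) {
                          crtc_mask &= ~drm_crtc_mask(&crtc->base);
                          continue;
                  }

                  last_vblank_count[crtc->pipe] =
                          drm_crtc_vblank_count(&crtc->base);
          }

          /* now wait for each of them to tick over once */
          for_each_intel_crtc(dev, crtc) {
                  if (!(crtc_mask & drm_crtc_mask(&crtc->base)))
                          continue;

                  wait_event_timeout(dev->vblank[crtc->pipe].queue,
                                     last_vblank_count[crtc->pipe] !=
                                             drm_crtc_vblank_count(&crtc->base),
                                     msecs_to_jiffies(50));

                  drm_crtc_vblank_put(&crtc->base);
          }
  }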
The broadwell vblank workaround may look like it is gone entirely,
but this is not the case: pipe_config->wm_changed is set to true
when any plane is turned on, which forces a vblank wait.
Changes since v1:
- The removal of the double vblank wait on broadwell was moved to its own commit.
Changes since v2:
- Move POWER_DOMAIN_MODESET handling out to its own commit.
Changes since v3:
- Do not wait for vblank on legacy cursor updates. (Ville)
- Move broadwell vblank workaround comment to page_flip_finished. (Ville)
Changes since v4:
- Compile fix, legacy_cursor_flip -> *_update.
Changes since v5:
- Kill brackets.
- Add WARN_ON when wait_for_vblanks fails.
- Remove extra newlines.
- Split the checks whether vblank is needed to a separate function,
with comments why a vblank is needed.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/56CD84DA.5030507@linux.intel.com
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
The RPS lock must be taken before struct_mutex to avoid
lock inversion. So stop holding struct_mutex across the whole
powersave initialization and instead take it only during
the sections which need it.
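The resulting pattern, as a sketch (lock names from the i915 code;
call sites simplified):

  /* RPS/RC6/PCU setup only needs the dedicated lock */
  mutex_lock(&dev_priv->rps.hw_lock);
  /* ... program RPS/RC6 ... */
  mutex_unlock(&dev_priv->rps.hw_lock);

  /* struct_mutex only around the GEM bits that need it */
  mutex_lock(&dev->struct_mutex);
  /* ... e.g. allocate/pin the power context ... */
  mutex_unlock(&dev->struct_mutex);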
Also, struct_mutex is not needed any more since the dedicated
RPS lock was added in:
commit 4fc688ce79
Author: Jesse Barnes <jbarnes@virtuousgeek.org>
Date: Fri Nov 2 11:14:01 2012 -0700
drm/i915: protect RPS/RC6 related accesses (including PCU) with a new mutex
Based on prototype patch by Chris Wilson and a subsequent
mailing list discussion involving Ville, Imre, Chris and
Daniel.
v2: More details in the commit.
v3: Use drm_gem_object_unreference_unlocked. (Chris Wilson)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
VMA creation and GEM list management need the big lock.
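As a sketch of the pattern this converges on (see v4 below), the
lock is taken by the caller and merely asserted in the helper:

  /* caller */
  mutex_lock(&dev->struct_mutex);
  obj = i915_gem_object_create_stolen_for_preallocated(dev, ...);
  mutex_unlock(&dev->struct_mutex);

  /* inside the helper */
  lockdep_assert_held(&dev->struct_mutex);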
v2:
Mutex unlock ended up on the wrong path somehow. (0-day, Julia Lawall)
Not to mention drm_gem_object_unreference was there in existing
code with no mutex held.
v3:
Some callers of i915_gem_object_create_stolen_for_preallocated
already hold the lock, so move the mutex into the other caller
as well.
v4:
Changed to lockdep_assert_held. (Chris Wilson)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
This allows iteration over encoders without requiring connection_mutex.
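A simplified sketch of the set_best_encoder() helper mentioned in
the changelog below (the WARN_ON paranoia from v3 is omitted;
details are assumptions):

  static void set_best_encoder(struct drm_atomic_state *state,
                               struct drm_connector_state *conn_state,
                               struct drm_encoder *encoder)
  {
          struct drm_crtc_state *crtc_state;
          struct drm_crtc *crtc;

          /* clear the old encoder's bit in its crtc's encoder_mask */
          if (conn_state->best_encoder) {
                  crtc = conn_state->connector->state->crtc;
                  if (crtc) {
                          crtc_state = drm_atomic_get_existing_crtc_state(state, crtc);
                          crtc_state->encoder_mask &=
                                  ~(1 << drm_encoder_index(conn_state->best_encoder));
                  }
          }

          /* set the new encoder's bit in the new crtc's encoder_mask */
          if (encoder) {
                  crtc_state = drm_atomic_get_existing_crtc_state(state,
                                                                  conn_state->crtc);
                  crtc_state->encoder_mask |=
                          1 << drm_encoder_index(encoder);
          }

          conn_state->best_encoder = encoder;
  }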
Changes since v1:
- Add a set_best_encoder helper function and update encoder_mask inside
it.
Changes since v2:
- Relax the WARN_ON(!crtc), with explanation.
- Call set_best_encoder when connector is moved between crtc's.
- Add some paranoia to steal_encoder to prevent accidentally setting
best_encoder to NULL.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/56AA200A.6070501@linux.intel.com
First drm-misc pull req for 4.6. Big one is the drm_event cleanup, which
is also prep work for adding android fence support to kms (Gustavo is
planning to do that). Otherwise random small bits all over.
* tag 'topic/drm-misc-2016-02-08' of git://anongit.freedesktop.org/drm-intel: (33 commits)
gma500: clean up an excessive and confusing helper
drm/gma500: remove helper function
drm/vmwgfx: Nuke preclose hook
drm/vc4: Nuke preclose hook
drm/tilcdc: Nuke preclose hook
drm/tegra: Stop cancelling page flip events
drm/shmob: Nuke preclose hook
drm/rcar: Nuke preclose hook
drm/omap: Nuke close hooks
drm/msm: Nuke preclose hooks
drm/imx: Unconfuse preclose logic
drm/exynos: Remove event cancelling from postclose
drm/atmel: Nuke preclose
drm/i915: Nuke intel_modeset_preclose
drm: Nuke vblank event file cleanup code
drm: Clean up pending events in the core
drm/vblank: Use drm_event_reserve_init
drm/vmwgfx: fix a NULL dereference
drm/crtc-helper: Add caveat to disable_unused_functions doc
drm/gma500: Remove empty preclose hook
...
Older FBC platforms have this restriction where FBC can't be enabled
if multiple pipes are enabled. In the current code, we disable FBC
before the second pipe becomes visible.
One of the problems with this code is that the current
multiple_pipes_ok() implementation just iterates through all CRTCs
looking at their states, but it doesn't make sure that the state
locks are grabbed. It also can't just grab the locks for every CRTC
since this would kill one of the biggest advantages of atomic
modesetting.
After the recent FBC changes we now hold the appropriate locks for
the given CRTC, so we can just keep track of the state of each CRTC
and update it when intel_fbc_pre_update is called.
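A sketch of what multiple_pipes_ok() can then look like
(simplified; details may differ from the actual patch):

  static bool multiple_pipes_ok(struct intel_crtc *crtc)
  {
          struct drm_i915_private *dev_priv = crtc->base.dev->dev_private;
          struct intel_fbc *fbc = &dev_priv->fbc;
          enum pipe pipe = crtc->pipe;

          /* only the older platforms have this restriction */
          if (!no_fbc_on_multiple_pipes(dev_priv))
                  return true;

          /* we hold this CRTC's locks, so its state can be trusted */
          if (to_intel_plane_state(crtc->base.primary->state)->visible)
                  fbc->visible_pipes_mask |= (1 << pipe);
          else
                  fbc->visible_pipes_mask &= ~(1 << pipe);

          /* ok only if no other pipe is known to be visible */
          return (fbc->visible_pipes_mask & ~(1 << pipe)) == 0;
  }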
As a last note, I don't have gen 2/3 machines to test this code. My
current plan is to enable FBC on just the newer platforms, so this
patch is just an attempt to get the gen 2/3 code at least looking
sane, so if one day someone decides to fix FBC on these platforms, they
may have less work to do.
Not-tested-by: Paulo Zanoni <paulo.r.zanoni@intel.com> (only on HSW+)
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453210558-7875-16-git-send-email-paulo.r.zanoni@intel.com
We'll now call intel_fbc_pre_update instead of intel_fbc_deactivate
during atomic commits. This will continue to guarantee that we
deactivate FBC and it will also update the state checking structures
at the correct time. Then, later, at the point where we were calling
intel_fbc_update, we'll only need to call intel_fbc_post_update.
Also add the proper warnings in case we don't have the appropriate
locks. Daniel mentioned the warnings will have to be removed for async
commits, but let's keep them here while we can.
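A sketch of those checks (simplified; the function body is elided):

  void intel_fbc_pre_update(struct intel_crtc *crtc)
  {
          struct drm_i915_private *dev_priv = crtc->base.dev->dev_private;

          WARN_ON(!mutex_is_locked(&crtc->base.dev->struct_mutex));
          WARN_ON(!drm_modeset_is_locked(&crtc->base.mutex));

          mutex_lock(&dev_priv->fbc.lock);
          /* deactivate FBC and record the new plane/crtc state for
           * the decisions made later in intel_fbc_post_update() */
          mutex_unlock(&dev_priv->fbc.lock);
  }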
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453210558-7875-12-git-send-email-paulo.r.zanoni@intel.com
We unconditionally disable/update FBC even during the page flip
IOCTLs, and an unconditional disable/update at every atomic commit
touching the primary plane shouldn't impact PC state residency
noticeably. Besides, the code that checks for rotation is a good hint
that we may be forgetting something else, so let's leave all the
decisions to intel_fbc.c, making the code much safer.
Once we have the code to properly make FBC enable/update decisions
based on atomic states, with proper locking, then we'll be able to
evaluate whether it will be worth trying to optimize the cases where a
disable isn't needed.
v2: Upstream moved and now our patch needs to remove dev_priv.
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453406837-10511-1-git-send-email-paulo.r.zanoni@intel.com
Before this patch, page flips would call intel_frontbuffer_flip() and
intel_frontbuffer_flip_complete(), which would call intel_fbc_flush(),
which would call intel_fbc_update(). The problem is that drawing
operations also trigger intel_fbc_flush() calls, so it's not
guaranteed that we have the CRTC and FB locks grabbed when
intel_fbc_flush() happens, since the call trace may come from the
rendering path.
We're trying to make the FBC code grab the appropriate CRTC/FB locks,
so split the drawing and the flipping logic in order to achieve that
in later patches. So now the frontbuffer tracking code is just going
to be used for frontbuffer drawing, and intel_fbc_update() is going to
be used directly for actual page flips.
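Schematically (simplified):

  Before:
    page flip -> intel_frontbuffer_flip{,_complete}()
                   -> intel_fbc_flush() -> intel_fbc_update()
    drawing   -> intel_fbc_flush() -> intel_fbc_update()

  After:
    page flip -> intel_fbc_update()   (locks held by the caller)
    drawing   -> intel_fbc_flush()    (frontbuffer tracking only)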
As a note, we don't need to call intel_fbc_flip() in the two
places where we call intel_frontbuffer_flip(), since in one of them we
already have an intel_fbc_update() call, and in the other we have the
planes disabled.
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453210558-7875-7-git-send-email-paulo.r.zanoni@intel.com
There are a number of places where the driver needs a request, but isn't
working on behalf of any specific user or in a specific context. At
present, we associate them with the per-engine default context. A future
patch will abolish those per-engine context pointers; but we can already
eliminate a lot of the references to them, just by making the allocator
allow NULL as a shorthand for "an appropriate context for this ring",
which will mean that the callers don't need to know anything about how
the "appropriate context" is found (e.g. per-ring vs per-device, etc).
So this patch renames the existing i915_gem_request_alloc() and
makes it local (static inline), replacing it with a wrapper that
provides a default if the context is NULL and has a nicer calling
convention (it doesn't require a pointer to an output parameter). Then we
change all callers to use the new convention:
OLD:

        err = i915_gem_request_alloc(ring, user_ctx, &req);
        if (err) ...

NEW:

        req = i915_gem_request_alloc(ring, user_ctx);
        if (IS_ERR(req)) ...

OLD:

        err = i915_gem_request_alloc(ring, ring->default_context, &req);
        if (err) ...

NEW:

        req = i915_gem_request_alloc(ring, NULL);
        if (IS_ERR(req)) ...
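And the wrapper itself, as a sketch (the __i915_gem_request_alloc
name for the renamed internal function is an assumption):

        struct drm_i915_gem_request *
        i915_gem_request_alloc(struct intel_engine_cs *ring,
                               struct intel_context *ctx)
        {
                struct drm_i915_gem_request *req;
                int err;

                if (ctx == NULL)
                        ctx = ring->default_context; /* the "appropriate context" */

                err = __i915_gem_request_alloc(ring, ctx, &req);
                return err ? ERR_PTR(err) : req;
        }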
v4: Rebased
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Reviewed-by: Nick Hoath <nicholas.hoath@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1453230175-19330-2-git-send-email-david.s.gordon@intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>