Just like Haswell, but with the small twist that the panel fitter for pipe A is
now also in the always-on power well.
v2: Use the new HAS_POWER_WELL macro.
v3: Rebase on top of intel_using_power_well patches.
v4: This time actually update the PFIT check correctly so that the
pipe A pfit is in the always-on domain.
v5: Rebase on top of the VGA power domain addition.
v6: Rebase on top of the new power domain infrastructure. Also pimp the commit
message a bit while at it.
v7: Use IS_BROADWELL instead of IS_GEN8 (Ville).
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> (v1)
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For now it's just equivalent to IS_GEN8, but in the future we might
want to change that (e.g., on Gen 7 we have IS_VALLEYVIEW,
IS_IVYBRIDGE and IS_HASWELL).
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Aside from the potential size increase of the PPGTT, the primary
difference from previous hardware is that the Page Directories are no
longer carved out of the Global GTT.
Note that the PDE allocation is done as an 8MB contiguous allocation;
this eventually needs to be fixed (since driver reloading will be a
pain otherwise). Also, this will be a no-go for real PPGTT support.
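As a rough illustration (function names here are hypothetical, not the
actual i915 code), the contiguous page-directory block can be allocated
and freed like this; note that get_order() takes a size in bytes, not a
page count:

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* e.g. size = 8MB worth of page directory entries */
    static struct page *alloc_pd_block(size_t size)
    {
        /* one physically contiguous allocation for all PDEs */
        return alloc_pages(GFP_KERNEL, get_order(size));
    }

    static void free_pd_block(struct page *pages, size_t size)
    {
        /* must use the same order that was used when allocating */
        __free_pages(pages, get_order(size));
    }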
v2: Move vtable initialization
v3: Resolve conflicts due to patch series reordering.
v4: Rebase on top of the address space refactoring of the PPGTT
support. Drop Imre's r-b tag for v2, too outdated by now.
v5: Free the correct amount of memory, "get_order takes size not a page
count." (Imre)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The interrupt handling implementation remains the same as on previous
generations, with the four types of registers: status, identity, mask,
and enable. However, the layout of where the bits go has changed entirely.
To address these changes, all of the interrupt vfuncs needed special
gen8 code.
The way it works is that there is now a top level status register which
informs the interrupt service routine which unit caused the interrupt,
and therefore which interrupt registers to read to process it. For
display the division is quite logical: a set of interrupt registers for
each pipe, and in addition to those, a set each for "misc" and port.
For GT things get a bit hairy, as seen in the code. Each of the GT
units has its own bits defined. They all look *very similar* and
reside in 16 bits of a GT register. As an example, RCS and BCS share
register 0. To compact the code a bit, at a slight expense in
complexity, this is exactly how the code works as well. Two new members
are added to the ring buffer structure so that our ring buffer interrupt
handling code knows which ring shares the interrupt registers, and a
shift value (i.e. the top or bottom 16 bits of the register).
The above allows us to keep the interrupt register caching scheme, the
per-interrupt enables, and the code to mask and unmask interrupts
relatively clean (again at the cost of some more complexity).
Most of the GT units mentioned above are command streamers, and so the
symmetry should work quite well even for the yet-to-be-implemented rings
which Broadwell adds.
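To illustrate the register-sharing idea (all names below are made up
for the sketch, not the actual i915 fields), each ring only needs to
know which shared GT interrupt register it lives in and whether it uses
the low or high 16 bits:

    #include <linux/types.h>
    #include <linux/wait.h>

    struct example_ring {
        unsigned int gt_iir_index;   /* which shared GT interrupt register */
        unsigned int irq_shift;      /* 0 or 16: low or high half of it */
        wait_queue_head_t irq_queue;
    };

    static void example_handle_ring_irq(struct example_ring *ring, u32 gt_iir)
    {
        /* extract this ring's 16-bit slice of the shared register */
        u32 bits = (gt_iir >> ring->irq_shift) & 0xffff;

        if (bits & 1)                /* e.g. a "user interrupt" style bit */
            wake_up_all(&ring->irq_queue);
    }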
v2: Fixes up a couple of bugs, and is more verbose about errors in the
Broadwell interrupt handler.
v3: fix DE_MISC IER offset
v4: Simplify interrupts:
I totally misread the docs the first time I implemented interrupts, and
so this should greatly simplify the mess. Unlike GEN6, we never touch
the regular mask registers in irq_get/put.
v5: Rebased on top of recent PCH hotplug setup changes.
v6: Fixup on top of moving num_pipes to intel_info.
v7: Rebased on top of Egbert Eich's hpd irq handling rework. Also
wired up ibx_hpd_irq_setup for gen8.
v8: Rebase on top of Jani's asle handling rework.
v9: Rebase on top of Ben's VECS enabling for Haswell, where he
unfortunately went OCD on the gt irq #defines. Note that they're still
not yet fully consistent:
- Used the GT_RENDER_ #defines + bdw shifts.
- Dropped the shift from the L3_PARITY stuff, seemed clearer.
- s/irq_refcount/irq_refcount.gt/
v10: Squash in VECS enabling patches and the gen8_gt_irq_handler
refactoring from Zhao Yakui <yakui.zhao@intel.com>
v11: Rebase on top of the interrupt cleanups in upstream.
v12: Rebase on top of Ben's DPF changes in upstream.
v13: Drop bdw from the HAS_L3_DPF feature flag for now, it's unclear what
exactly needs to be done. Requested by Ben.
v14: Fix the patch.
- Drop the mask of reserved bits and assorted logic, it doesn't match
the spec.
- Do the posting read unconditionally instead of commenting it out.
- Add a GEN8_MASTER_IRQ_CONTROL definition and use it.
- Fix up the GEN8_PIPE interrupt defines and give them the GEN8_ prefix -
we will actually need to use them.
- Enclose macros in do {} while (0) (checkpatch).
- Clear DE_MISC interrupt bits only after having processed them.
- Fix whitespace fail (checkpatch).
- Fix overtly long lines where appropriate (checkpatch).
- Don't use typedef'ed private_t (maintainer-scripts).
- Align the function parameter list correctly.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net> (v4)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
bikeshed
No PCI ids yet, so nothing should happen.
Rebase-Note: This one needs replacement ;-)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
I want to merge in the new Broadwell support as a late hw enabling
pull request. But since the internal branch was based upon our
drm-intel-nightly integration branch I need to resolve all the
outstanding conflicts in drm/i915 with a backmerge to make the 60+
patches apply properly.
We'll probably have some fun because Linus will come up with a
slightly different merge solution.
Conflicts:
drivers/gpu/drm/i915/i915_dma.c
drivers/gpu/drm/i915/i915_drv.c
drivers/gpu/drm/i915/intel_crt.c
drivers/gpu/drm/i915/intel_ddi.c
drivers/gpu/drm/i915/intel_display.c
drivers/gpu/drm/i915/intel_dp.c
drivers/gpu/drm/i915/intel_drv.h
All rather simple adjacent lines changed or partial backports from
-next to -fixes, with the exception of the thaw code in i915_dma.c.
That one needed a bit of shuffling to restore the intent.
Oh and the massive header file reordering in intel_drv.h is a bit of
trouble. But not much.
v2: Also don't forget the fixup for the silent conflict that results
in compile fail ...
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On gmch platforms the normal pipe source CRC registers don't work for
DP and TV encoders. And on newer platforms the single pipe CRC has
been replaced by a set of CRCs at different stages in the pipeline.
Now most of our userspace tests don't care one bit about the exact
CRC; they simply want something that reflects any changes on the
screen. Hence add a new "auto" target for platform-agnostic tests to
use.
v2: Pass back the adjusted source so that it can be shown in debugfs.
v3: I seem to be unable to get a stable CRC for DP ports. So let's
just disable them for now when using the auto mode. Note that
testcases need to be restructured so that they can dynamically skip
connectors. They also first need to set up the desired mode
configuration, since otherwise the auto mode won't do the right thing.
v4: Don't leak the modeset mutex on error paths.
v5: Spelling fix for the i9xx auto_source function.
Cc: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Requested by Ville in his review of the CRC stuff. This converts
everything but ilk_display_irq_handler since that needs a bit more
than a simple search&replace to look nice.
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The bbstate contains useful bits of debugging information such as
whether the batch is being read from GTT or PPGTT, or whether it is
allowed to execute privileged instructions.
v2: Only record BB_STATE for gen4+
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The only real need for this field was in
i915_{request,release}_power_well, but there we can get at it with some
container_of magic. Also, since in the future we'll have multiple power
wells, each with its own power_well struct, it makes sense to remove the
field from there, where it would be pure redundancy.
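A minimal sketch of the container_of trick (struct names simplified,
not the exact i915 layout): with the power well embedded in the
power-domains state, which is itself embedded in the device private,
the backpointer falls out naturally:

    #include <linux/kernel.h>

    struct example_power_well { int count; };
    struct example_power_domains { struct example_power_well well; };
    struct example_dev_priv { struct example_power_domains power_domains; };

    static struct example_dev_priv *
    well_to_dev_priv(struct example_power_well *well)
    {
        struct example_power_domains *domains =
            container_of(well, struct example_power_domains, well);

        return container_of(domains, struct example_dev_priv, power_domains);
    }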
Suggested-by: Paulo Zanoni <paulo.zanoni@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Currently we make sure that all power domains are enabled during driver
init and turn off unneeded ones only after the first modeset. Similarly
during suspend we enable all power domains, which will remain on through
the following resume until the first modeset.
This logic is supported by intel_set_power_well() in the power domain
framework. It would be nice to simplify the API, so that we only have
get/put functions and make it more explicit on the higher level how this
"power well on during init" logic works. This will make it also easier
if in the future we want to shorten the time the power wells are on.
For this add a new device private flag tracking whether we have the
power wells on because of init/suspend and use only
intel_display_power_get()/put(). As nothing else uses
intel_set_power_well() we can remove it.
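As a rough, simplified sketch of that logic (flag and function names
are illustrative; the real power domain calls are only hinted at in
comments):

    #include <linux/types.h>

    struct example_power_domains {
        bool init_power_on;
    };

    static void example_power_domains_init_hw(struct example_power_domains *pd)
    {
        /* keep everything powered until the first modeset */
        if (!pd->init_power_on) {
            pd->init_power_on = true;
            /* intel_display_power_get(dev, POWER_DOMAIN_INIT); */
        }
    }

    static void example_modeset_power_update(struct example_power_domains *pd)
    {
        /* first modeset: drop the reference taken during init/resume */
        if (pd->init_power_on) {
            pd->init_power_on = false;
            /* intel_display_power_put(dev, POWER_DOMAIN_INIT); */
        }
    }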
This also fixes
commit 6efdf354dd
Author: Imre Deak <imre.deak@intel.com>
Date: Wed Oct 16 17:25:52 2013 +0300
drm/i915: enable only the needed power domains during modeset
where removing intel_set_power_well() resulted in not releasing the
reference on the power well that was taken during init and thus leaving
the power well on all the time. Regression reported by Paulo.
v2:
- move the init_power_on flag to the power_domains struct (Daniel)
v3:
- add note about this being a regression fix too (Paulo)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In the future we'll need to support multiple power wells, so prepare for
that here. Create a new power domains struct which contains all
power domain/well specific fields. Since we'll have one lock protecting
all power wells, move power_well->lock to the new struct too.
No functional change.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Daniel pointed out that it was hard to get anything lockless to work
correctly, so don't even try for this non-critical piece of code and
just use a spin lock.
v2: Make intel_pipe_crc->opened a bool
v3: Use assert_spin_locked() instead of a comment (Daniel Vetter)
v4: Use spin_lock_irq() in the debugfs functions (they can only be
called from process context),
Use spin_lock() in the pipe_crc_update() function that can only be
called from an interrupt handler,
Use wait_event_interruptible_lock_irq() when waiting for data in the
circular buffer to ensure proper locking around the condition we are
waiting for, as sketched below. (Daniel Vetter)
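A reduced sketch of that locking scheme (structure and field names are
simplified, not the actual i915 ones): process context takes the lock
with spin_lock_irq(), the interrupt handler uses plain spin_lock(), and
the reader blocks with wait_event_interruptible_lock_irq() so the
condition is always checked under the lock:

    #include <linux/spinlock.h>
    #include <linux/wait.h>

    struct example_pipe_crc {
        spinlock_t lock;
        wait_queue_head_t wq;
        int head, tail;              /* circular buffer indices */
    };

    /* process context (debugfs read) */
    static int example_crc_wait(struct example_pipe_crc *crc)
    {
        int ret;

        spin_lock_irq(&crc->lock);
        ret = wait_event_interruptible_lock_irq(crc->wq,
                                                crc->head != crc->tail,
                                                crc->lock);
        spin_unlock_irq(&crc->lock);

        return ret;                  /* 0, or -ERESTARTSYS on a signal */
    }

    /* interrupt context (CRC interrupt handler) */
    static void example_crc_add_entry(struct example_pipe_crc *crc)
    {
        spin_lock(&crc->lock);
        crc->head = (crc->head + 1) & 0x7f;   /* advance ring buffer */
        spin_unlock(&crc->lock);

        wake_up_interruptible(&crc->wq);
    }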
Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Adding stuff to the bottom of struct drm_i915_driver_private is
nowadays considered uncool.
Cc: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
There is no hard need for this to be a spin lock, as we don't take these
locks in irq context from anywhere. An upcoming patch will add calls to
punit read/write functions from within regions protected by this lock
and those functions need a mutex in turn. As a solution for that,
convert the spin lock into a mutex.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It is just cleaner this way and makes it easier to add support for
other HW generations with always-on power wells powering a different
set of domains.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Upcoming patches will add tracking for a set of power domains via a
bitmask; to make things simple, remove the current gap in the enum
values.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On pre-gen5 and vlv we can't use the pipe source when TV-out or a DP
port is connected to the pipe. Hence we need to expose new CRC
sources.
Also simplify the existing pipe source platform code a bit by
rejecting all unhandled sources by default.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Since we use intel_enable_rc6() now for more than just when we're
enabling RC6, we'll see this message many times, and it is just
confusing.
As an example, calc_residency calls this function whenever poked via
sysfs. This leaves the impression in dmesg that we're constantly
re-enabling RC6.
While at it, move the defines and description from drv.h to intel_pm.c,
since these are only ever used in that code.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Once the machine gets to a certain point in the suspend process, we
expect the GPU to be idle. If it is not, we might corrupt memory.
Empirically (with an early version of this patch) we have seen this is
not the case. We cannot currently explain why the latent GPU writes
occur.
In the technical sense, this patch is a workaround in that we have an
issue we can't explain, and the patch indirectly solves the issue.
However, it's really better than a workaround because we understand why
it works, and it really should be a safe thing to do in all cases.
The noticeable effect other than the debug messages would be an increase
in the suspend time. I have not measured how expensive it actually is.
I think it would be good to spend further time to root cause why we're
seeing these latent writes, but it shouldn't preclude preventing the
fallout.
NOTE: It should be safe (and makes some sense IMO) to also keep the
VALID bit unset on resume when we clear_range(). I've opted not to do
this as properly clearing those bits at some later point would be extra
work.
v2: Fix bugzilla link
Bugzilla: http://bugs.freedesktop.org/show_bug.cgi?id=65496
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=59321
Tested-by: Takashi Iwai <tiwai@suse.de>
Tested-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Tested-By: Todd Previte <tprevite@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We need this to work around a corruption when the boot kernel image
loads the hibernated kernel image from swap on Haswell systems -
somehow not everything is properly shut off.
This is just the prep work, the next patch will implement the actual
workaround.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Add a commit message suitable for -fixes and add cc: stable]
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We enable the interrupt unconditionally and only control it
through the enable bit in the CRC control register.
v2: Extract per-platform helpers to compute the register values.
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This will be needed for setting the HDMI pixel clock for audio
config. No functional changes.
v2: Now with a commit message.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
At least on Linux sizeof(long) == sizeof(void*), and the thinking
is that you can grab about as many references as there's memory.
Doesn't really matter, just a bit of OCD since a fixed-size data
type in a pure in-kernel data structure looks off.
v2: Ville asked for an overflow check since no one prevents userspace
from incrementing the pin count forever.
v3: s/INT/LONG/, noticed by Chris.
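As a purely illustrative sketch of the overflow guard (field, limit and
error value are all hypothetical, not the actual i915 code):

    #include <linux/bug.h>
    #include <linux/errno.h>
    #include <linux/kernel.h>

    #define EXAMPLE_MAX_PIN_COUNT   (LONG_MAX - 1)

    struct example_obj {
        long pin_count;     /* long: as many refs as there is memory */
    };

    static int example_pin(struct example_obj *obj)
    {
        /* userspace could keep pinning forever, so refuse near overflow */
        if (WARN_ON(obj->pin_count >= EXAMPLE_MAX_PIN_COUNT))
            return -EBUSY;

        obj->pin_count++;
        return 0;
    }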
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Assuming that all framebuffer related metadata is invariant simplifies
our userspace input data checking. And current userspace always first
updates the tiling of an object before creating a framebuffer with it.
This allows us to upconvert a check in pin_and_fence to a WARN.
In the future it should also be helpful to know which buffer objects
are potential scanout targets for e.g. frontbuffer rendering tracking
and similar things.
Note that SNA shipped for one prerelease with code which will be
broken by this patch. But users shouldn't notice since it's
purely an optimization and will transparently fall back to allocating
a new fb. i-g-t also had offending code (now fixed), but we don't
really care about breaking the test-suite.
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Grumpily-reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We have two once very similar functions, i915_gpu_idle() and
i915_gem_idle(). The former is used as the lower level operation to
flush work on the GPU, whereas the latter is the high level interface to
flush the GEM bookkeeping in addition to flushing the GPU. As such
i915_gem_idle() also clears out the request and activity lists and
cancels the delayed work. This is what we need for unloading the driver;
unfortunately we called i915_gpu_idle() instead.
In the process, make sure that when cancelling the delayed work and
timer, which is synchronous, we do not hold any locks, to prevent a
deadlock if the work item is already waiting upon the mutex. This
requires us to push the mutex down from the caller to i915_gem_idle().
v2: s/i915_gem_idle/i915_gem_suspend/
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70334
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: xunx.fang@intel.com
[danvet: Only set ums.suspended for !kms as discussed earlier. Chris
noticed that this slipped through.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Also use #ifdef to keep consistent with all other such cases.
Cc: Damien Lespiau <damien.lespiau@intel.com>
Acked-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It doesn't really make sense to have two processes dequeueing the CRC
values at the same time. Forbid that usage.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
seq_file is not quite the right interface for these ones. We have a
circular buffer with a new entry per vblank on one side and a process
wanting to dequeue the CRC with a read().
It's quite racy to wait for vblank in userland and then try to read a
pipe_crc file: sometimes the CRC interrupt hasn't fired yet and we end
up with an EOF.
So, let's have the read on the pipe_crc file block until the interrupt
gives us a new entry. At that point we can wake the reading process.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
So we don't eat that memory when not needed.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
There are a few good properties to a circular buffer; for instance it
tracks the number of valid entries (before, we were always dumping the
full buffer).
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Note the "return -ENODEV;" in pipe_crc_set_source(). The ctl file is
disabled until the end of the series to be able to do incremental
improvements.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
There are several points in the display pipeline where CRCs can be
computed on the bits flowing there. For instance, it's usually possible
to compute the CRCs of the primary plane, the sprite plane or the CRCs
of the bits after the panel fitter (collectively called pipe CRCs).
v2: Quite a bit of rework here and there (Damien)
Signed-off-by: Shuang He <shuang.he@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Fix intermediate compile file reported by Wu Fengguang's
kernel builder.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
I've sent this patch several times for various reasons. It essentially
cleans up a lot of code where we need to do something per ring, and want
to query whether or not the ring exists on that hardware.
It has various uses coming up, but for now it shouldn't be too
offensive.
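A small sketch of the per-ring iteration pattern this enables (all
names are invented for the example, not the i915 ones): a helper
reports whether a ring actually exists on the hardware, so per-ring
loops simply skip the missing ones:

    #include <linux/types.h>

    struct example_ring {
        bool initialized;        /* only set when the HW has this ring */
    };

    static bool example_ring_initialized(const struct example_ring *ring)
    {
        return ring->initialized;
    }

    static void example_for_all_rings(struct example_ring *rings, int num)
    {
        int i;

        for (i = 0; i < num; i++) {
            if (!example_ring_initialized(&rings[i]))
                continue;        /* ring not present on this HW */
            /* per-ring work goes here */
        }
    }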
v2: Big conflict resolution on Damien's DEV_INFO_FOR_EACH stuff
v3: Resolved vebox addition
v4: Rebased after months of disuse. Also made failed ringbuffer init
cleaner.
v5: Remove the init cleaner from v4. There is a better way to do it.
(Chris)
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
To make it easier to check what watermark updates are actually
necessary, keep copies of the relevant bits that match the current
hardware state.
Also add DDB partitioning into hsw_wm_values as that's another piece
of state we want to track.
We don't read out the hardware state on init yet, so we can't really
start using this right away, but it will be used later.
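Roughly, the idea looks like the following sketch (fields simplified;
the real hsw_wm_values has more state): keep a software copy of the
programmed values, compare, and skip the update when nothing changed.
This relies on the structs being zero-initialized so that any padding
compares equal in memcmp():

    #include <linux/string.h>
    #include <linux/types.h>

    struct example_wm_values {
        u32 wm_pipe[3];              /* per-pipe watermark registers */
        u32 wm_lp[3];                /* LP watermark registers */
        bool partitioning_5_6;       /* DDB partitioning: 5/6 vs 1/2 */
    };

    static bool example_wm_update_needed(const struct example_wm_values *hw,
                                         const struct example_wm_values *want)
    {
        /* only hit the registers when something actually changed */
        return memcmp(hw, want, sizeof(*want)) != 0;
    }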
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
[danvet: Paulo asked for a comment around the memcmp to say that we
depend upon zero-initializing the entire structures due to padding.
But a later patch in this series removes the memcmp again. So this is
ok as-is.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Boots Just Fine (tm)!
The only glitch seems to be that at least on Fedora the boot splash
gets confused and doesn't display much at all.
And since there's no ugly console flickering anymore in between, the
flicker while switching between X servers (VT support is still enabled)
is even more jarring.
Also, I'm unsure whether we need to somehow kick out vgacon now
that nothing else gets in the way. But stuff seems to work, so I
don't care. Also everything still works with VGA_CONSOLE=n.
Also the #ifdef mess needs a bit of a cleanup, follow-up patches will
do just that.
To keep the Kconfig tidy, extract all the i915 options into its own
file.
v2:
- Rebase on top of the preliminary hw support option and the
intel_drv.h cleanup.
- Shut up warnings in i915_debugfs.c
v3: Use the right CONFIG variable, spotted by Chon Ming.
Cc: Lee, Chon Ming <chon.ming.lee@intel.com>
Cc: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Chon Ming Lee <chon.ming.lee@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As we delay the initial RPS enabling (upon boot and after resume), there
is a chance that we may start to render and trigger RPS boosts before we
set up the punit. Any changes we make could result in inconsistent
hardware state, with a danger of causing undefined behaviour. However,
as the boosting is an optional tweak to RPS, we can simply ignore it
whilst RPS is not yet enabled.
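In essence the guard looks like the following sketch (names simplified
and hypothetical):

    #include <linux/types.h>

    struct example_rps {
        bool enabled;            /* set once delayed RPS init has run */
        u8 cur_freq;
        u8 boost_freq;
    };

    static void example_rps_boost(struct example_rps *rps)
    {
        /* boosting is optional: do nothing until RPS is set up */
        if (!rps->enabled)
            return;

        if (rps->cur_freq < rps->boost_freq)
            rps->cur_freq = rps->boost_freq;   /* request the boost */
    }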
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that MMIO has been split up into gen-specific functions, it is
obvious when HAS_FPGA_DBG_UNCLAIMED and HAS_FORCE_WAKE are needed. As
such, we can remove this extraneous condition.
As a result of this, as well as previously existing function pointers
for forcewake, we no longer need the has_force_wake member in the device
specific data structure.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In preparation for having per-gen MMIO functions, create and start
using MMIO function pointers in our uncore data structure. This simply
makes the transition easier by allowing us to just plug in the per-gen
stuff later.
For simplicity, I moved the intel_uncore_init() function down since
those rely on static functions defined lower in the file. This is most
of the churn in this patch.
I made one unrelated change here by using off_t datatype for the offset
of the register to write. I like the clarity that this brings to the
code. If I did it as a separate patch, I am pretty certain it would get
bikeshedded to oblivion.
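A bare-bones sketch of the dispatch idea (struct and function names are
illustrative only): the uncore structure carries read/write function
pointers, so register accessors no longer need to check the generation
at every call site, and the register offset is carried as an off_t:

    #include <linux/types.h>

    struct example_uncore_funcs {
        u32 (*mmio_readl)(void __iomem *regs, off_t reg);
        void (*mmio_writel)(void __iomem *regs, off_t reg, u32 val);
    };

    struct example_uncore {
        void __iomem *regs;
        struct example_uncore_funcs funcs;   /* filled in per gen at init */
    };

    static u32 example_read32(struct example_uncore *uncore, off_t reg)
    {
        return uncore->funcs.mmio_readl(uncore->regs, reg);
    }

    static void example_write32(struct example_uncore *uncore, off_t reg, u32 val)
    {
        uncore->funcs.mmio_writel(uncore->regs, reg, val);
    }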
Requested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The conflict in intel_drv.h tripped me up a bit since a patch in dinq
moves all the functions around, but another one in drm-next removes a
single function. So I've figured baking this into a backmerge would
be good.
i915_dma.c is just adjacent lines changed, nothing nefarious there.
Conflicts:
drivers/gpu/drm/i915/i915_dma.c
drivers/gpu/drm/i915/intel_drv.h
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We can get the PCI vendor and device IDs via dev->pdev. So we can drop
the duplicated information.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
All drivers embed gem-objects into their own buffer objects. There is no
reason to keep drm_gem_object_alloc(), gem->driver_private and
->gem_init_object() anymore.
New drivers are highly encouraged to do the same. There is no benefit in
allocating gem-objects separately.
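The embedding pattern looks roughly like this (the driver struct names
are hypothetical, and the drm_gem.h header path is the one used by
current kernels):

    #include <linux/kernel.h>
    #include <drm/drm_gem.h>

    struct example_bo {
        struct drm_gem_object base;   /* embedded, not separately allocated */
        /* driver-specific fields follow ... */
    };

    static struct example_bo *to_example_bo(struct drm_gem_object *gem)
    {
        /* recover the driver object without gem->driver_private */
        return container_of(gem, struct example_bo, base);
    }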
Cc: Dave Airlie <airlied@gmail.com>
Cc: Alex Deucher <alexdeucher@gmail.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Ben Skeggs <skeggsb@gmail.com>
Cc: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
for igt test case.
v2: remove trailing spaces and fix conflicts
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
[danvet:
- make it compile
- s/IS_HASWELL/HAS_PSR/]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
After applying wait-boost we often find ourselves stuck at higher clocks
than required. The current threshold value requires the GPU to be
continuously and completely idle for 313ms before it is dropped by one
bin. Conversely, we require the GPU to be busy for an average of 90% over
a 84ms period before we upclock. So the current thresholds almost never
downclock the GPU, and respond very slowly to sudden demands for more
power. It is easy to observe that we currently lock into the wrong bin
and both underperform in benchmarks and consume more power than optimal
(just by repeating the task and measuring the different results).
An alternative approach, as discussed in the bspec, is to use a
continuous threshold for upclocking, and an average value for downclocking.
This is good for quickly detecting and reacting to state changes within a
frame; however, it fails with the common throttling method of waiting
upon the outstanding frame - at least it is difficult to choose a
threshold that works well at 15,000fps and at 60fps. So continue to use
average busy/idle loads to determine frequency change.
v2: Use 3 power zones to keep frequencies low in steady-state mostly
idle (e.g. scrolling, interactive 2D drawing), and frequencies high
for demanding games. In between those end-states, we use a
fast-reclocking algorithm to converge more quickly on the desired bin.
v3: Bug fixes - make sure we reset adj after switching power zones.
v4: Tune - drop the continuous busy thresholds as it prevents us from
choosing the right frequency for glxgears style swap benchmarks. Instead
the goal is to be able to find the right clocks irrespective of the
wait-boost.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: Stéphane Marchesin <stephane.marchesin@gmail.com>
Cc: Owen Taylor <otaylor@redhat.com>
Cc: "Meng, Mengmeng" <mengmeng.meng@intel.com>
Cc: "Zhuang, Lena" <lena.zhuang@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If we encounter a situation where the CPU blocks waiting for results
from the GPU, give the GPU a kick to boost its frequency.
This should work to reduce user interface stalls and to quickly promote
mesa to high frequencies - but the cost is that our requested frequency
stalls high (as we do not idle for long enough before rc6 to start
reducing frequencies, nor are we aggressive at downclocking an
underused GPU). However, this should be mitigated by rc6 itself powering
off the GPU when idle, and that energy use is dependent upon the workload
of the GPU in addition to its frequency (e.g. the math or sampler
functions only consume power when used). Still, this is likely to
adversely affect light workloads.
In particular, this nearly eliminates the highly noticeable wake-up lag
in animations from idle. For example, expose or workspace transitions.
(However, given the situation where we fail to downclock, our requested
frequency is almost always the maximum, except for Baytrail where we
manually downclock upon idling. This often masks the latency of
upclocking after being idle, so animations are typically smooth - at the
cost of increased power consumption.)
Stéphane raised the concern that this will punish good applications and
reward bad applications - but due to the nature of how mesa performs its
client throttling, I believe all mesa applications will be roughly
equally affected. To address this concern, and to prevent applications
like compositors from permanently boosting the RPS state, we ratelimit the
frequency of the wait-boosts each client receives.
Unfortunately, this technique is ineffective with Ironlake - which also
has dynamic render power states and suffers just as dramatically. For
Ironlake, the thermal/power headroom is shared with the CPU through
Intelligent Power Sharing and the intel-ips module. This leaves us with
no GPU boost frequencies available when coming out of idle, and due to
hardware limitations we cannot change the arbitration between the CPU and
GPU quickly enough to be effective.
v2: Limit each client to receiving a single boost for each active period.
Tested by QA to only marginally increase power, and to demonstrably
increase throughput in games. No latency measurements yet.
v3: Cater for front-buffer rendering with manual throttling.
v4: Tidy up.
v5: Sadly the compositor needs frequent boosts as it may never idle, but
due to its picking mechanism (using ReadPixels) may require frequent
waits. Those waits, along with the waits for the vrefresh swap, conspire
to keep the GPU at low frequencies despite the interactive latency. To
overcome this we ditch the one-boost-per-active-period and just ratelimit
the number of wait-boosts each client can receive.
Reported-and-tested-by: Paul Neumann <paul104x@yahoo.de>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=68716
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: Stéphane Marchesin <stephane.marchesin@gmail.com>
Cc: Owen Taylor <otaylor@redhat.com>
Cc: "Meng, Mengmeng" <mengmeng.meng@intel.com>
Cc: "Zhuang, Lena" <lena.zhuang@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
[danvet: No extern for function prototypes in headers.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When we switched to always using a timeout in conjunction with
wait_seqno, we lost the ability to detect missed interrupts. Since we
have had issues with interrupts on a number of generations, and they are
required to be delivered in a timely fashion for a smooth UX, it is
important that we do log errors found in the wild and prevent the
display stalling for upwards of 1s every time the seqno interrupt is
missed.
Rather than continue to fix up the timeouts to work around the interface
impedance in wait_event_*(), open code the combination of
wait_event[_interruptible][_timeout], and use the exposed timer to
poll for seqno should we detect a lost interrupt.
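A heavily reduced sketch of the open-coded wait (not the real
__wait_seqno; the completion check is abstracted behind a callback and
header needs may differ by kernel version): a bounded
schedule_timeout() lets us notice a completed seqno even when the
interrupt never arrives, and flag the miss:

    #include <linux/errno.h>
    #include <linux/jiffies.h>
    #include <linux/sched.h>
    #include <linux/wait.h>

    static int example_wait_seqno(wait_queue_head_t *wq,
                                  bool (*seqno_done)(void *data), void *data,
                                  bool *missed_irq)
    {
        DEFINE_WAIT(wait);
        int ret = 0;

        for (;;) {
            prepare_to_wait(wq, &wait, TASK_INTERRUPTIBLE);

            if (seqno_done(data))
                break;

            if (signal_pending(current)) {
                ret = -ERESTARTSYS;
                break;
            }

            /* bounded sleep: re-check the seqno even without an irq */
            if (schedule_timeout(msecs_to_jiffies(10)) == 0 &&
                seqno_done(data)) {
                *missed_irq = true;     /* worth logging */
                break;
            }
        }
        finish_wait(wq, &wait);

        return ret;
    }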
v2: In order to satisfy the debug requirement of logging missed
interrupts with the real world requirements of making machines work even
if interrupts are hosed, we revert to polling after detecting a missed
interrupt.
v3: Throw in a debugfs interface to simulate broken hw not reporting
interrupts.
v4: s/EGAIN/EAGAIN/ (Imre)
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Imre Deak <imre.deak@intel.com>
[danvet: Don't use the struct typedef in new code.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Add the missing cache-level to the describe_obj() function for debug and
error reporting.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Even though we track object activity and not VMAs, because the
active_list is based on the VM it makes the most sense to use VMAs in
the APIs.
NOTE: Daniel intends to eventually rip out active/inactive LRUs, but for
now, leave them be.
v2: Remove leftover hunk from the previous patch which didn't keep
i915_gem_object_move_to_active. That patch had to rely on the ring to
get the dev instead of the obj. (Chris)
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
"We do fairly often lookup the ggtt vma for an obj." - Chris Wilson. As
such, provide a function to offer slightly cheaper access to the vma.
Not performance tested. By my quick estimation it saves at least 3
pointer dereferences from the existing mechanism.
This patch mostly matches code from Chris in
<20130911221430.GB7825@nuc-i3427.alporthouse.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>