forked from Minki/linux
Merge tag 'drm-misc-next-2017-03-06' of git://anongit.freedesktop.org/git/drm-misc into drm-next

First slice of drm-misc-next for 4.12:

Core/subsystem-wide:
- link status core patch from Manasi, for signalling link train fail to
  userspace. I also had the i915 patch in here, but that had a small buglet
  in our CI, so reverted.
- more debugfs_remove removal from Noralf, almost there now (Noralf said
  he'll try to follow up with the stragglers).
- drm todo moved into kerneldoc, for better visibility (see
  Documentation/gpu/todo.rst), lots of starter tasks in there.
- devm_ of helpers + use it in sti (from Ben Gaignard, acked by Rob Herring)
- extended framebuffer fbdev support (for fbdev flipping), and vblank wait
  ioctl fbdev support (Maxime Ripard)
- misc small things all over, as usual
- add vblank callbacks to drm_crtc_funcs, plus make lots of good use of this
  to simplify drivers (Shawn Guo)
- new atomic iterator macros to unconfuse old vs. new state

Small drivers:
- vc4 improvements from Eric
- vc4 kerneldocs (Eric)!
- tons of improvements for dw-mipi-dsi in rockchip from John Keeping and
  Chris Zhong.
- MAINTAINERS entries for drivers managed in drm-misc. It's not yet official,
  still an experiment, but definitely not complete fail and better to avoid
  confusion. We kinda screwed that up with drm-misc a bit when we started
  committers last year.
- qxl atomic conversion (Gabriel Krisman)
- bunch of virtual driver polish (qxl, virgl, ...)
- misc tiny patches all over

This is the first time we've done the same merge-window blackout for drm-misc
as we've done for drm-intel for ages, hence why we have a _lot_ of stuff
queued already. But it's still only half of drm-intel (room to grow!), and the
drivers-in-drm-misc experiment seems to work, at least insofar as you also get
lots of driver updates here already.

* tag 'drm-misc-next-2017-03-06' of git://anongit.freedesktop.org/git/drm-misc: (141 commits)
  drm/vc4: Fix OOPSes from trying to cache a partially constructed BO.
  drm/vc4: Fulfill user BO creation requests from the kernel BO cache.
  Revert "drm/i915: Implement Link Rate fallback on Link training failure"
  drm/fb-helper: implement ioctl FBIO_WAITFORVSYNC
  drm: Update drm_fbdev_cma_init documentation
  drm/rockchip/dsi: add dw-mipi power domain support
  drm/rockchip/dsi: fix insufficient bandwidth of some panel
  dt-bindings: add power domain node for dw-mipi-rockchip
  drm/rockchip/dsi: remove mode_valid function
  drm/rockchip/dsi: dw-mipi: correct the coding style
  drm/rockchip/dsi: dw-mipi: support RK3399 mipi dsi
  dt-bindings: add rk3399 support for dw-mipi-rockchip
  drm/rockchip: dw-mipi-dsi: add reset control
  drm/rockchip: dw-mipi-dsi: support non-burst modes
  drm/rockchip: dw-mipi-dsi: defer probe if panel is not loaded
  drm/rockchip: vop: test for P{H,V}SYNC
  drm/rockchip: dw-mipi-dsi: use positive check for N{H, V}SYNC
  drm/rockchip: dw-mipi-dsi: use specific poll helper
  drm/rockchip: dw-mipi-dsi: improve PLL configuration
  drm/rockchip: dw-mipi-dsi: properly configure PHY timing
  ...

This commit is contained in:
commit b558dfd56a
@@ -5,14 +5,19 @@ Required properties:
- #address-cells: Should be <1>.
- #size-cells: Should be <0>.
- compatible: "rockchip,rk3288-mipi-dsi", "snps,dw-mipi-dsi".
	      "rockchip,rk3399-mipi-dsi", "snps,dw-mipi-dsi".
- reg: Represent the physical address range of the controller.
- interrupts: Represent the controller's interrupt to the CPU(s).
- clocks, clock-names: Phandles to the controller's pll reference
  clock(ref) and APB clock(pclk), as described in [1].
  clock(ref) and APB clock(pclk). For RK3399, a phy config clock
  (phy_cfg) is additionally required, as described in [1].
- rockchip,grf: this SoC should set GRF regs to mux vopl/vopb.
- ports: contain a port node with endpoint definitions as defined in [2].
  For vopb, set reg = <0>; for vopl, set reg = <1>.

Optional properties:
- power-domains: a phandle to mipi dsi power domain node.

[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
[2] Documentation/devicetree/bindings/media/video-interfaces.txt
@@ -183,14 +183,12 @@ GEM Objects Lifetime
--------------------

All GEM objects are reference-counted by the GEM core. References can be
acquired and released by calling :c:func:`drm_gem_object_reference()` and
:c:func:`drm_gem_object_unreference()` respectively. The caller
must hold the :c:type:`struct drm_device <drm_device>`
struct_mutex lock when calling
:c:func:`drm_gem_object_reference()`. As a convenience, GEM
provides :c:func:`drm_gem_object_unreference_unlocked()`
functions that can be called without holding the lock.
acquired and released by calling :c:func:`drm_gem_object_get()` and
:c:func:`drm_gem_object_put()` respectively. The caller must hold the
:c:type:`struct drm_device <drm_device>` struct_mutex lock when calling
:c:func:`drm_gem_object_get()`. As a convenience, GEM provides
:c:func:`drm_gem_object_put_unlocked()` functions that can be called without
holding the lock.

When the last reference to a GEM object is released the GEM core calls
the :c:type:`struct drm_driver <drm_driver>` gem_free_object
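Purely as an illustration of the renamed helpers documented above (not part of this patch), a minimal sketch of the old-name to new-name mapping in a driver; the surrounding function and variable names are hypothetical:

#include <drm/drmP.h>
#include <drm/drm_gem.h>

/* Hypothetical helper: hold a reference across some work on the object. */
static void example_hold_bo(struct drm_device *dev, struct drm_gem_object *obj)
{
	/* was drm_gem_object_reference(); taken under struct_mutex per the
	 * locking rule quoted in the text above */
	mutex_lock(&dev->struct_mutex);
	drm_gem_object_get(obj);
	mutex_unlock(&dev->struct_mutex);

	/* ... use obj ... */

	/* was drm_gem_object_unreference_unlocked(); no lock needed */
	drm_gem_object_put_unlocked(obj);
}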
@@ -12,8 +12,10 @@ Linux GPU Driver Developer's Guide
   drm-uapi
   i915
   tinydrm
   vc4
   vga-switcheroo
   vgaarbiter
   todo

.. only:: subproject and html
@@ -50,3 +50,13 @@ names are "Notes" with information for dangerous or tricky corner cases,
and "FIXME" where the interface could be cleaned up.

Also read the :ref:`guidelines for the kernel documentation at large <doc_guide>`.

Getting Started
===============

Developers interested in helping out with the DRM subsystem are very welcome.
Often people will resort to sending in patches for various issues reported by
checkpatch or sparse. We welcome such contributions.

Anyone looking to kick it up a notch can find a list of janitorial tasks on
the :ref:`TODO list <todo>`.
321	Documentation/gpu/todo.rst	Normal file
@@ -0,0 +1,321 @@
.. _todo:

=========
TODO list
=========

This section contains a list of smaller janitorial tasks in the kernel DRM
graphics subsystem useful as newbie projects. Or for slow rainy days.

Subsystem-wide refactorings
===========================

De-midlayer drivers
-------------------

With the recent ``drm_bus`` cleanup patches for 3.17 it is no longer required
to have a ``drm_bus`` structure set up. Drivers can directly set up the
``drm_device`` structure instead of relying on bus methods in ``drm_usb.c``
and ``drm_platform.c``. The goal is to get rid of the driver's ``->load`` /
``->unload`` callbacks and open-code the load/unload sequence properly, using
the new two-stage ``drm_device`` setup/teardown.

Once all existing drivers are converted we can also remove those bus support
files for USB and platform devices.

All you need is a GPU for a non-converted driver (currently almost all of
them, but also all the virtual ones used by KVM, so everyone qualifies).

Contact: Daniel Vetter, Thierry Reding, respective driver maintainers
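As an aside, a hedged sketch (all driver names hypothetical) of what the open-coded two-stage setup looks like for a platform device, instead of a ``->load`` callback:

#include <linux/platform_device.h>
#include <drm/drmP.h>

static struct drm_driver foo_driver;	/* hypothetical, no .load/.unload */

static int foo_probe(struct platform_device *pdev)
{
	struct drm_device *drm;
	int ret;

	drm = drm_dev_alloc(&foo_driver, &pdev->dev);
	if (IS_ERR(drm))
		return PTR_ERR(drm);

	/* open-coded "load": set up clocks, modeset, irq, ... here */

	ret = drm_dev_register(drm, 0);
	if (ret)
		drm_dev_unref(drm);
	return ret;
}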
Switch from reference/unreference to get/put
--------------------------------------------

For some reason DRM core uses ``reference``/``unreference`` suffixes for
refcounting functions, but the kernel uses ``get``/``put`` (e.g.
``kref_get()``/``kref_put()``). It would be good to switch over for
consistency, and it's shorter. Needs to be done in 3 steps for each pair of
functions:

* Create new ``get``/``put`` functions, define the old names as compatibility
  wrappers (see the sketch below).
* Switch over each file/driver using a cocci-generated spatch.
* Once all users of the old names are gone, remove them.

This way drivers/patches in the progress of getting merged won't break.

Contact: Daniel Vetter
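To illustrate step 1 (a hedged sketch, loosely modelled on the shape of the GEM helpers in include/drm/drm_gem.h rather than copied from the tree): the new name does the work, the old name becomes a trivial wrapper until all callers are converted.

/* new, kernel-idiomatic name */
static inline void drm_gem_object_get(struct drm_gem_object *obj)
{
	kref_get(&obj->refcount);
}

/* old name kept only as a compatibility wrapper during the conversion */
static inline void drm_gem_object_reference(struct drm_gem_object *obj)
{
	drm_gem_object_get(obj);
}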
Convert existing KMS drivers to atomic modesetting
--------------------------------------------------

3.19 has the atomic modeset interfaces and helpers, so drivers can now be
converted over. Modern compositors like Wayland or Surfaceflinger on Android
really want an atomic modeset interface, so this is all about the bright
future.

There is a conversion guide for atomic and all you need is a GPU for a
non-converted driver (again virtual HW drivers for KVM are still all
suitable).

As part of this drivers also need to convert to universal plane (which means
exposing primary & cursor as proper plane objects). But that's much easier to
do by directly using the new atomic helper driver callbacks.

Contact: Daniel Vetter, respective driver maintainers

Clean up the clipped coordinates confusion around planes
---------------------------------------------------------

We have a helper to get this right with drm_plane_helper_check_update(), but
it's not consistently used. This should be fixed, preferably in the atomic
helpers (and drivers then moved over to clipped coordinates). Probably the
helper should also be moved from drm_plane_helper.c to the atomic helpers, to
avoid confusion - the other helpers in that file are all deprecated legacy
helpers.

Contact: Ville Syrjälä, Daniel Vetter, driver maintainers

Implement deferred fbdev setup in the helper
--------------------------------------------

Many drivers (especially embedded ones) want to delay fbdev setup until there's
a real screen plugged in. This is to avoid the dreaded fallback to the low-res
fbdev default. Many drivers have a hacked-up (and often broken) version of this;
better to do it once in the shared helpers. Thierry has a patch series, but that
one needs to be rebased and final polish applied.

Contact: Thierry Reding, Daniel Vetter, driver maintainers

Convert early atomic drivers to async commit helpers
----------------------------------------------------

For the first year the atomic modeset helpers didn't support asynchronous /
nonblocking commits, and every driver had to hand-roll them. This is fixed
now, but there's still a pile of existing drivers that could easily be
converted over to the new infrastructure.

One issue with the helpers is that they require that drivers handle completion
events for atomic commits correctly. But fixing these bugs is good anyway.

Contact: Daniel Vetter, respective driver maintainers

Fallout from atomic KMS
-----------------------

``drm_atomic_helper.c`` provides a batch of functions which implement legacy
IOCTLs on top of the new atomic driver interface. Which is really nice for
gradual conversion of drivers, but unfortunately the semantic mismatches are
a bit too severe. So there's some follow-up work to adjust the function
interfaces to fix these issues:

* Atomic needs the lock acquire context. At the moment that's passed around
  implicitly with some horrible hacks, and it's also allocated with
  ``GFP_NOFAIL`` behind the scenes. All legacy paths need to start allocating
  the acquire context explicitly on the stack and then also pass it down into
  drivers explicitly so that the legacy-on-atomic functions can use them.

* A bunch of the vtable hooks are now in the wrong place: DRM has a split
  between core vfunc tables (named ``drm_foo_funcs``), which are used to
  implement the userspace ABI, and the optional hooks for the helper
  libraries (named ``drm_foo_helper_funcs``), which are purely for
  internal use. Some of these hooks should be moved from ``_funcs`` to
  ``_helper_funcs`` since they are not part of the core ABI. There's a
  ``FIXME`` comment in the kerneldoc for each such case in ``drm_crtc.h``.

* There's a new helper ``drm_atomic_helper_best_encoder()`` which could be
  used by all atomic drivers which don't select the encoder for a given
  connector at runtime. That's almost all of them, and would allow us to get
  rid of a lot of ``best_encoder`` boilerplate in drivers (see the sketch
  below).

Contact: Daniel Vetter
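A hedged sketch of that last bullet (driver names hypothetical): a connector whose encoder is fixed can simply plug the shared helper into its helper vtable instead of open-coding ``best_encoder``.

#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_edid.h>

static int foo_connector_get_modes(struct drm_connector *connector)
{
	/* hypothetical: advertise standard modes up to 1920x1080, no EDID */
	return drm_add_modes_noedid(connector, 1920, 1080);
}

static const struct drm_connector_helper_funcs foo_connector_helper_funcs = {
	.get_modes	= foo_connector_get_modes,
	/* replaces a hand-rolled foo_best_encoder() */
	.best_encoder	= drm_atomic_helper_best_encoder,
};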
Get rid of dev->struct_mutex from GEM drivers
---------------------------------------------

``dev->struct_mutex`` is the Big DRM Lock from legacy days and has infested
everything. Nowadays in modern drivers the only bit where it's mandatory is
serializing GEM buffer object destruction. Which unfortunately means drivers
have to keep track of that lock and either call ``unreference`` or
``unreference_locked`` depending upon context.

Core GEM doesn't have a need for ``struct_mutex`` any more since kernel 4.8,
and there's a ``gem_free_object_unlocked`` callback for any drivers which are
entirely ``struct_mutex`` free.

For drivers that need ``struct_mutex`` it should be replaced with a driver-
private lock. The tricky part is the BO free functions, since those can't
reliably take that lock any more. Instead state needs to be protected with
suitable subordinate locks or some cleanup work pushed to a worker thread. For
performance-critical drivers it might also be better to go with a more
fine-grained per-buffer object and per-context locking scheme. Currently the
following drivers still use ``struct_mutex``: ``msm``, ``omapdrm`` and
``udl``.

Contact: Daniel Vetter
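One possible shape of that driver-private lock, as a hedged sketch only (every ``foo_*`` name is hypothetical, and the driver is assumed to have stored its private struct in ``dev_private``):

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <drm/drm_gem.h>

struct foo_bo {
	struct drm_gem_object base;
	struct list_head node;		/* on foo_device.bo_list */
};

struct foo_device {
	struct mutex bo_lock;		/* driver-private, replaces struct_mutex */
	struct list_head bo_list;
};

/* wired up as drm_driver.gem_free_object_unlocked */
static void foo_gem_free_object_unlocked(struct drm_gem_object *obj)
{
	struct foo_bo *bo = container_of(obj, struct foo_bo, base);
	struct foo_device *foo = obj->dev->dev_private;

	mutex_lock(&foo->bo_lock);
	list_del(&bo->node);
	mutex_unlock(&foo->bo_lock);

	drm_gem_object_release(obj);
	kfree(bo);
}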
Core refactorings
=================

Use new IDR deletion interface to clean up drm_gem_handle_delete()
------------------------------------------------------------------

See the "This is gross" comment -- apparently the IDR system now can return an
error code instead of oopsing.

Clean up the DRM header mess
----------------------------

Currently the DRM subsystem has only one global header, ``drmP.h``. This is
used both for functions exported to helper libraries and drivers and functions
only used internally in the ``drm.ko`` module. The goal would be to move all
header declarations not needed outside of ``drm.ko`` into
``drivers/gpu/drm/drm_*_internal.h`` header files. ``EXPORT_SYMBOL`` also
needs to be dropped for these functions.

This would nicely tie in with the below task to create kerneldoc after the API
is cleaned up. Or with the "hide legacy cruft better" task.

Note that this is well in progress, but ``drmP.h`` is still huge. The updated
plan is to switch to per-file driver API headers, which will also structure
the kerneldoc better. This should also allow more fine-grained ``#include``
directives.

Contact: Daniel Vetter

Add missing kerneldoc for exported functions
--------------------------------------------

The DRM reference documentation is still lacking kerneldoc in a few areas. The
task would be to clean up interfaces like moving functions around between
files to better group them and improving the interfaces like dropping return
values for functions that never fail. Then write kerneldoc for all exported
functions and an overview section and integrate it all into the drm DocBook.

See https://dri.freedesktop.org/docs/drm/ for what's there already.

Contact: Daniel Vetter

Hide legacy cruft better
------------------------

Way back DRM supported only drivers which shadow-attached to PCI devices with
userspace or fbdev drivers setting up outputs. Modern DRM drivers take charge
of the entire device; you can spot them with the DRIVER_MODESET flag.

Unfortunately there's still a large pile of legacy code around which needs to
be hidden so that driver writers don't accidentally end up using it. And to
prevent security issues in those legacy IOCTLs from being exploited on modern
drivers. This has multiple possible subtasks:

* Make sure legacy IOCTLs can't be used on modern drivers.
* Extract support code for legacy features into a ``drm-legacy.ko`` kernel
  module and compile it only when one of the legacy drivers is enabled.
* Extract legacy functions into their own headers and remove them from the
  monolithic ``drmP.h`` header.
* Remove any lingering cruft from the OS abstraction layer from modern
  drivers.

This is mostly done; the only thing left is to split up ``drm_irq.c`` into
legacy cruft and the parts needed by modern KMS drivers.

Contact: Daniel Vetter

Make panic handling work
------------------------

This is a really varied task with lots of little bits and pieces:

* The panic path can't be tested currently, leading to constant breakage. The
  main issue here is that panics can be triggered from hardirq contexts and
  hence all panic related callbacks can run in hardirq context. It would be
  awesome if we could test at least the fbdev helper code and driver code by
  e.g. triggering calls through drm debugfs files. hardirq context could be
  achieved by using an IPI to the local processor.

* There's a massive confusion of different panic handlers. DRM fbdev emulation
  helpers have one, but on top of that the fbcon code itself also has one. We
  need to make sure that they stop fighting over one another.

* ``drm_can_sleep()`` is a mess. It hides real bugs in normal operations and
  isn't a full solution for panic paths. We need to make sure that it only
  returns true if there's a panic going on for real, and fix up all the
  fallout.

* The panic handler must never sleep, which also means it can't ever
  ``mutex_lock()``. Also it can't grab any other lock unconditionally, not
  even spinlocks (because NMI and hardirq can panic too). We need to either
  make sure to not call such paths, or trylock everything. Really tricky.

* For the above locking-trouble reasons it's pretty much impossible to
  attempt a synchronous modeset from panic handlers. The only thing we could
  try to achieve is an atomic ``set_base`` of the primary plane, and hope that
  it shows up. Everything else probably needs to be delayed to some worker or
  something else which happens later on. Otherwise it just kills the box
  harder, preventing the panic from going out on e.g. netconsole.

* There's also a proposal for a simplified DRM console instead of the full-blown
  fbcon and DRM fbdev emulation. Any kind of panic handling tricks should
  obviously work for both consoles, in case we ever get kmslog merged.

Contact: Daniel Vetter

Better Testing
==============

Enable trinity for DRM
----------------------

And fix up the fallout. Should be really interesting ...

Make KMS tests in i-g-t generic
-------------------------------

The i915 driver team maintains an extensive testsuite for the i915 DRM driver,
including tons of testcases for corner-cases in the modesetting API. It would
be awesome if those tests (at least the ones not relying on Intel-specific GEM
features) could be made to run on any KMS driver.

Basic work to run i-g-t tests on non-i915 is done; what's now missing is mass-
converting things over. For modeset tests we also first need a bit of
infrastructure to use dumb buffers for untiled buffers, to be able to run all
the non-i915-specific modeset tests.

Contact: Daniel Vetter

Create a virtual KMS driver for testing (vkms)
----------------------------------------------

With all the latest helpers it should be fairly simple to create a virtual KMS
driver useful for testing, or for running X or similar on headless machines
(to be able to still use the GPU). This would be similar to vgem, but aimed at
the modeset side.

Once the basics are there, there's tons of possibilities to extend it.

Contact: Daniel Vetter

Driver Specific
===============

Outside DRM
===========

Better kerneldoc
----------------

This is pretty much done, but there are some advanced topics:

Come up with a way to hyperlink to struct members. Currently you can hyperlink
to the struct using ``#struct_name``, but not to a member within. Would need
buy-in from kerneldoc maintainers, and the big question is how to make it work
without totally unsightly
``drm_foo_bar_really_long_structure->even_longer_member`` all over the text,
which breaks text flow.

Figure out how to integrate the asciidoc support for ascii-diagrams. We have a
few of those (e.g. to describe mode timings), and asciidoc supports converting
some ascii-art dialect into pngs. Would be really pretty to make that work.

Contact: Daniel Vetter, Jani Nikula

Jani is working on this already, hopefully lands in 4.8.
89	Documentation/gpu/vc4.rst	Normal file
@@ -0,0 +1,89 @@
=====================================
 drm/vc4 Broadcom VC4 Graphics Driver
=====================================

.. kernel-doc:: drivers/gpu/drm/vc4/vc4_drv.c
   :doc: Broadcom VC4 Graphics Driver

Display Hardware Handling
=========================

This section covers everything related to the display hardware including
the mode setting infrastructure, plane, sprite and cursor handling and
display, output probing and related topics.

Pixel Valve (DRM CRTC)
----------------------

.. kernel-doc:: drivers/gpu/drm/vc4/vc4_crtc.c
   :doc: VC4 CRTC module

HVS
---

.. kernel-doc:: drivers/gpu/drm/vc4/vc4_hvs.c
   :doc: VC4 HVS module.

HVS planes
----------

.. kernel-doc:: drivers/gpu/drm/vc4/vc4_plane.c
   :doc: VC4 plane module

HDMI encoder
------------

.. kernel-doc:: drivers/gpu/drm/vc4/vc4_hdmi.c
   :doc: VC4 Falcon HDMI module

DSI encoder
-----------

.. kernel-doc:: drivers/gpu/drm/vc4/vc4_dsi.c
   :doc: VC4 DSI0/DSI1 module

DPI encoder
-----------

.. kernel-doc:: drivers/gpu/drm/vc4/vc4_dpi.c
   :doc: VC4 DPI module

VEC (Composite TV out) encoder
------------------------------

.. kernel-doc:: drivers/gpu/drm/vc4/vc4_vec.c
   :doc: VC4 SDTV module

Memory Management and 3D Command Submission
===========================================

This section covers the GEM implementation in the vc4 driver.

GPU buffer object (BO) management
---------------------------------

.. kernel-doc:: drivers/gpu/drm/vc4/vc4_bo.c
   :doc: VC4 GEM BO management support

V3D binner command list (BCL) validation
----------------------------------------

.. kernel-doc:: drivers/gpu/drm/vc4/vc4_validate.c
   :doc: Command list validator for VC4.

V3D render command list (RCL) generation
----------------------------------------

.. kernel-doc:: drivers/gpu/drm/vc4/vc4_render_cl.c
   :doc: Render command list generation

Shader validator for VC4
------------------------

.. kernel-doc:: drivers/gpu/drm/vc4/vc4_validate_shaders.c
   :doc: Shader validator for VC4.

V3D Interrupts
--------------

.. kernel-doc:: drivers/gpu/drm/vc4/vc4_irq.c
   :doc: Interrupt management for the V3D engine
14	MAINTAINERS
@@ -4174,7 +4174,7 @@ F:	drivers/gpu/drm/bridge/
DRM DRIVER FOR BOCHS VIRTUAL GPU
M:	Gerd Hoffmann <kraxel@redhat.com>
L:	virtualization@lists.linux-foundation.org
T:	git git://git.kraxel.org/linux drm-qemu
T:	git git://anongit.freedesktop.org/drm/drm-misc
S:	Maintained
F:	drivers/gpu/drm/bochs/

@@ -4182,7 +4182,7 @@ DRM DRIVER FOR QEMU'S CIRRUS DEVICE
M:	Dave Airlie <airlied@redhat.com>
M:	Gerd Hoffmann <kraxel@redhat.com>
L:	virtualization@lists.linux-foundation.org
T:	git git://git.kraxel.org/linux drm-qemu
T:	git git://anongit.freedesktop.org/drm/drm-misc
S:	Obsolete
W:	https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
F:	drivers/gpu/drm/cirrus/

@@ -4239,6 +4239,7 @@ L:	dri-devel@lists.freedesktop.org
S:	Supported
F:	drivers/gpu/drm/atmel-hlcdc/
F:	Documentation/devicetree/bindings/drm/atmel/
T:	git git://anongit.freedesktop.org/drm/drm-misc

DRM DRIVERS FOR ALLWINNER A10
M:	Maxime Ripard <maxime.ripard@free-electrons.com>

@@ -4255,6 +4256,8 @@ W:	http://linux-meson.com/
S:	Supported
F:	drivers/gpu/drm/meson/
F:	Documentation/devicetree/bindings/display/amlogic,meson-vpu.txt
T:	git git://anongit.freedesktop.org/drm/drm-meson
T:	git git://anongit.freedesktop.org/drm/drm-misc

DRM DRIVERS FOR EXYNOS
M:	Inki Dae <inki.dae@samsung.com>

@@ -4385,7 +4388,7 @@ DRM DRIVER FOR QXL VIRTUAL GPU
M:	Dave Airlie <airlied@redhat.com>
M:	Gerd Hoffmann <kraxel@redhat.com>
L:	virtualization@lists.linux-foundation.org
T:	git git://git.kraxel.org/linux drm-qemu
T:	git git://anongit.freedesktop.org/drm/drm-misc
S:	Maintained
F:	drivers/gpu/drm/qxl/
F:	include/uapi/drm/qxl_drm.h

@@ -4396,6 +4399,7 @@ L:	dri-devel@lists.freedesktop.org
S:	Maintained
F:	drivers/gpu/drm/rockchip/
F:	Documentation/devicetree/bindings/display/rockchip/
T:	git git://anongit.freedesktop.org/drm/drm-misc

DRM DRIVER FOR SAVAGE VIDEO CARDS
S:	Orphan / Obsolete

@@ -4454,6 +4458,7 @@ S:	Supported
F:	drivers/gpu/drm/vc4/
F:	include/uapi/drm/vc4_drm.h
F:	Documentation/devicetree/bindings/display/brcm,bcm-vc4.txt
T:	git git://anongit.freedesktop.org/drm/drm-misc

DRM DRIVERS FOR TI OMAP
M:	Tomi Valkeinen <tomi.valkeinen@ti.com>

@@ -4476,6 +4481,7 @@ L:	dri-devel@lists.freedesktop.org
S:	Maintained
F:	drivers/gpu/drm/zte/
F:	Documentation/devicetree/bindings/display/zte,vou.txt
T:	git git://anongit.freedesktop.org/drm/drm-misc

DSBR100 USB FM RADIO DRIVER
M:	Alexey Klimov <klimov.linux@gmail.com>

@@ -13314,7 +13320,7 @@ M:	David Airlie <airlied@linux.ie>
M:	Gerd Hoffmann <kraxel@redhat.com>
L:	dri-devel@lists.freedesktop.org
L:	virtualization@lists.linux-foundation.org
T:	git git://git.kraxel.org/linux drm-qemu
T:	git git://anongit.freedesktop.org/drm/drm-misc
S:	Maintained
F:	drivers/gpu/drm/virtio/
F:	include/uapi/linux/virtio_gpu.h
@@ -240,6 +240,8 @@ EXPORT_SYMBOL(dma_fence_enable_sw_signaling);
 * after it signals with dma_fence_signal. The callback itself can be called
 * from irq context.
 *
 * Returns 0 in case of success, -ENOENT if the fence is already signaled
 * and -EINVAL in case of error.
 */
int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
			   dma_fence_func_t func)
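A hedged usage sketch of the return values documented above (not part of the patch; the callback and caller are hypothetical):

#include <linux/dma-fence.h>

static void example_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
{
	/* runs once the fence signals; may be called from irq context */
}

static void example_wait_async(struct dma_fence *fence, struct dma_fence_cb *cb)
{
	int ret = dma_fence_add_callback(fence, cb, example_fence_cb);

	if (ret == -ENOENT) {
		/* fence already signaled: handle completion synchronously */
	} else if (ret) {
		/* -EINVAL or another error */
	}
}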
@@ -99,6 +99,15 @@ config DRM_FBDEV_EMULATION

	  If in doubt, say "Y".

config DRM_FBDEV_OVERALLOC
	int "Overallocation of the fbdev buffer"
	depends on DRM_FBDEV_EMULATION
	default 100
	help
	  Defines the fbdev buffer overallocation in percent. Default
	  is 100. Typical values for double buffering will be 200,
	  triple buffering 300.

config DRM_LOAD_EDID_FIRMWARE
	bool "Allow to specify an EDID data set instead of probing for it"
	depends on DRM_KMS_HELPER
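In other words, the fbdev buffer is sized to N% of a single screenful, so 200 leaves room for double buffering. A hedged sketch of that arithmetic (the helper name is illustrative, not the exact fbdev-helper code):

#include <linux/kernel.h>

/*
 * Illustrative only: scale the allocated scanout height by the configured
 * overallocation percentage (CONFIG_DRM_FBDEV_OVERALLOC, default 100).
 */
static u32 example_fbdev_buffer_size(u32 pitch, u32 vdisplay, u32 overalloc_percent)
{
	u32 height = DIV_ROUND_UP(vdisplay * overalloc_percent, 100);

	return pitch * height;	/* bytes to allocate for the fbdev BO */
}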
@ -224,7 +224,7 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
|
||||
info = drm_fb_helper_alloc_fbi(helper);
|
||||
if (IS_ERR(info)) {
|
||||
ret = PTR_ERR(info);
|
||||
goto out_unref;
|
||||
goto out;
|
||||
}
|
||||
|
||||
info->par = rfbdev;
|
||||
@ -233,7 +233,7 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
|
||||
ret = amdgpu_framebuffer_init(adev->ddev, &rfbdev->rfb, &mode_cmd, gobj);
|
||||
if (ret) {
|
||||
DRM_ERROR("failed to initialize framebuffer %d\n", ret);
|
||||
goto out_destroy_fbi;
|
||||
goto out;
|
||||
}
|
||||
|
||||
fb = &rfbdev->rfb.base;
|
||||
@ -266,7 +266,7 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
|
||||
|
||||
if (info->screen_base == NULL) {
|
||||
ret = -ENOSPC;
|
||||
goto out_destroy_fbi;
|
||||
goto out;
|
||||
}
|
||||
|
||||
DRM_INFO("fb mappable at 0x%lX\n", info->fix.smem_start);
|
||||
@ -278,9 +278,7 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
|
||||
vga_switcheroo_client_fb_set(adev->ddev->pdev, info);
|
||||
return 0;
|
||||
|
||||
out_destroy_fbi:
|
||||
drm_fb_helper_release_fbi(helper);
|
||||
out_unref:
|
||||
out:
|
||||
if (abo) {
|
||||
|
||||
}
|
||||
@ -304,7 +302,6 @@ static int amdgpu_fbdev_destroy(struct drm_device *dev, struct amdgpu_fbdev *rfb
|
||||
struct amdgpu_framebuffer *rfb = &rfbdev->rfb;
|
||||
|
||||
drm_fb_helper_unregister_fbi(&rfbdev->helper);
|
||||
drm_fb_helper_release_fbi(&rfbdev->helper);
|
||||
|
||||
if (rfb->obj) {
|
||||
amdgpufb_destroy_pinned_object(rfb->obj);
|
||||
|
@ -175,7 +175,6 @@ static struct drm_driver arcpgu_drm_driver = {
|
||||
.dumb_create = drm_gem_cma_dumb_create,
|
||||
.dumb_map_offset = drm_gem_cma_dumb_map_offset,
|
||||
.dumb_destroy = drm_gem_dumb_destroy,
|
||||
.get_vblank_counter = drm_vblank_no_hw_counter,
|
||||
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
|
||||
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
|
||||
.gem_free_object_unlocked = drm_gem_cma_free_object,
|
||||
|
@ -42,6 +42,24 @@ static void hdlcd_crtc_cleanup(struct drm_crtc *crtc)
|
||||
drm_crtc_cleanup(crtc);
|
||||
}
|
||||
|
||||
static int hdlcd_crtc_enable_vblank(struct drm_crtc *crtc)
|
||||
{
|
||||
struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc);
|
||||
unsigned int mask = hdlcd_read(hdlcd, HDLCD_REG_INT_MASK);
|
||||
|
||||
hdlcd_write(hdlcd, HDLCD_REG_INT_MASK, mask | HDLCD_INTERRUPT_VSYNC);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void hdlcd_crtc_disable_vblank(struct drm_crtc *crtc)
|
||||
{
|
||||
struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc);
|
||||
unsigned int mask = hdlcd_read(hdlcd, HDLCD_REG_INT_MASK);
|
||||
|
||||
hdlcd_write(hdlcd, HDLCD_REG_INT_MASK, mask & ~HDLCD_INTERRUPT_VSYNC);
|
||||
}
|
||||
|
||||
static const struct drm_crtc_funcs hdlcd_crtc_funcs = {
|
||||
.destroy = hdlcd_crtc_cleanup,
|
||||
.set_config = drm_atomic_helper_set_config,
|
||||
@ -49,6 +67,8 @@ static const struct drm_crtc_funcs hdlcd_crtc_funcs = {
|
||||
.reset = drm_atomic_helper_crtc_reset,
|
||||
.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
|
||||
.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
|
||||
.enable_vblank = hdlcd_crtc_enable_vblank,
|
||||
.disable_vblank = hdlcd_crtc_disable_vblank,
|
||||
};
|
||||
|
||||
static struct simplefb_format supported_formats[] = SIMPLEFB_FORMATS;
|
||||
|
@ -199,24 +199,6 @@ static void hdlcd_irq_uninstall(struct drm_device *drm)
|
||||
hdlcd_write(hdlcd, HDLCD_REG_INT_MASK, irq_mask);
|
||||
}
|
||||
|
||||
static int hdlcd_enable_vblank(struct drm_device *drm, unsigned int crtc)
|
||||
{
|
||||
struct hdlcd_drm_private *hdlcd = drm->dev_private;
|
||||
unsigned int mask = hdlcd_read(hdlcd, HDLCD_REG_INT_MASK);
|
||||
|
||||
hdlcd_write(hdlcd, HDLCD_REG_INT_MASK, mask | HDLCD_INTERRUPT_VSYNC);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void hdlcd_disable_vblank(struct drm_device *drm, unsigned int crtc)
|
||||
{
|
||||
struct hdlcd_drm_private *hdlcd = drm->dev_private;
|
||||
unsigned int mask = hdlcd_read(hdlcd, HDLCD_REG_INT_MASK);
|
||||
|
||||
hdlcd_write(hdlcd, HDLCD_REG_INT_MASK, mask & ~HDLCD_INTERRUPT_VSYNC);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_DEBUG_FS
|
||||
static int hdlcd_show_underrun_count(struct seq_file *m, void *arg)
|
||||
{
|
||||
@ -278,9 +260,6 @@ static struct drm_driver hdlcd_driver = {
|
||||
.irq_preinstall = hdlcd_irq_preinstall,
|
||||
.irq_postinstall = hdlcd_irq_postinstall,
|
||||
.irq_uninstall = hdlcd_irq_uninstall,
|
||||
.get_vblank_counter = drm_vblank_no_hw_counter,
|
||||
.enable_vblank = hdlcd_enable_vblank,
|
||||
.disable_vblank = hdlcd_disable_vblank,
|
||||
.gem_free_object_unlocked = drm_gem_cma_free_object,
|
||||
.gem_vm_ops = &drm_gem_cma_vm_ops,
|
||||
.dumb_create = drm_gem_cma_dumb_create,
|
||||
|
@ -167,6 +167,25 @@ static const struct drm_crtc_helper_funcs malidp_crtc_helper_funcs = {
|
||||
.atomic_check = malidp_crtc_atomic_check,
|
||||
};
|
||||
|
||||
static int malidp_crtc_enable_vblank(struct drm_crtc *crtc)
|
||||
{
|
||||
struct malidp_drm *malidp = crtc_to_malidp_device(crtc);
|
||||
struct malidp_hw_device *hwdev = malidp->dev;
|
||||
|
||||
malidp_hw_enable_irq(hwdev, MALIDP_DE_BLOCK,
|
||||
hwdev->map.de_irq_map.vsync_irq);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void malidp_crtc_disable_vblank(struct drm_crtc *crtc)
|
||||
{
|
||||
struct malidp_drm *malidp = crtc_to_malidp_device(crtc);
|
||||
struct malidp_hw_device *hwdev = malidp->dev;
|
||||
|
||||
malidp_hw_disable_irq(hwdev, MALIDP_DE_BLOCK,
|
||||
hwdev->map.de_irq_map.vsync_irq);
|
||||
}
|
||||
|
||||
static const struct drm_crtc_funcs malidp_crtc_funcs = {
|
||||
.destroy = drm_crtc_cleanup,
|
||||
.set_config = drm_atomic_helper_set_config,
|
||||
@ -174,6 +193,8 @@ static const struct drm_crtc_funcs malidp_crtc_funcs = {
|
||||
.reset = drm_atomic_helper_crtc_reset,
|
||||
.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
|
||||
.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
|
||||
.enable_vblank = malidp_crtc_enable_vblank,
|
||||
.disable_vblank = malidp_crtc_disable_vblank,
|
||||
};
|
||||
|
||||
int malidp_crtc_init(struct drm_device *drm)
|
||||
|
@ -100,7 +100,7 @@ static void malidp_atomic_commit_tail(struct drm_atomic_state *state)
|
||||
drm_atomic_helper_cleanup_planes(drm, state);
|
||||
}
|
||||
|
||||
static struct drm_mode_config_helper_funcs malidp_mode_config_helpers = {
|
||||
static const struct drm_mode_config_helper_funcs malidp_mode_config_helpers = {
|
||||
.atomic_commit_tail = malidp_atomic_commit_tail,
|
||||
};
|
||||
|
||||
@ -111,25 +111,6 @@ static const struct drm_mode_config_funcs malidp_mode_config_funcs = {
|
||||
.atomic_commit = drm_atomic_helper_commit,
|
||||
};
|
||||
|
||||
static int malidp_enable_vblank(struct drm_device *drm, unsigned int crtc)
|
||||
{
|
||||
struct malidp_drm *malidp = drm->dev_private;
|
||||
struct malidp_hw_device *hwdev = malidp->dev;
|
||||
|
||||
malidp_hw_enable_irq(hwdev, MALIDP_DE_BLOCK,
|
||||
hwdev->map.de_irq_map.vsync_irq);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void malidp_disable_vblank(struct drm_device *drm, unsigned int pipe)
|
||||
{
|
||||
struct malidp_drm *malidp = drm->dev_private;
|
||||
struct malidp_hw_device *hwdev = malidp->dev;
|
||||
|
||||
malidp_hw_disable_irq(hwdev, MALIDP_DE_BLOCK,
|
||||
hwdev->map.de_irq_map.vsync_irq);
|
||||
}
|
||||
|
||||
static int malidp_init(struct drm_device *drm)
|
||||
{
|
||||
int ret;
|
||||
@ -213,9 +194,6 @@ static struct drm_driver malidp_driver = {
|
||||
.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC |
|
||||
DRIVER_PRIME,
|
||||
.lastclose = malidp_lastclose,
|
||||
.get_vblank_counter = drm_vblank_no_hw_counter,
|
||||
.enable_vblank = malidp_enable_vblank,
|
||||
.disable_vblank = malidp_disable_vblank,
|
||||
.gem_free_object_unlocked = drm_gem_cma_free_object,
|
||||
.gem_vm_ops = &drm_gem_cma_vm_ops,
|
||||
.dumb_create = drm_gem_cma_dumb_create,
|
||||
|
@ -418,6 +418,25 @@ static bool armada_drm_crtc_mode_fixup(struct drm_crtc *crtc,
|
||||
return true;
|
||||
}
|
||||
|
||||
/* These are locked by dev->vbl_lock */
|
||||
static void armada_drm_crtc_disable_irq(struct armada_crtc *dcrtc, u32 mask)
|
||||
{
|
||||
if (dcrtc->irq_ena & mask) {
|
||||
dcrtc->irq_ena &= ~mask;
|
||||
writel(dcrtc->irq_ena, dcrtc->base + LCD_SPU_IRQ_ENA);
|
||||
}
|
||||
}
|
||||
|
||||
static void armada_drm_crtc_enable_irq(struct armada_crtc *dcrtc, u32 mask)
|
||||
{
|
||||
if ((dcrtc->irq_ena & mask) != mask) {
|
||||
dcrtc->irq_ena |= mask;
|
||||
writel(dcrtc->irq_ena, dcrtc->base + LCD_SPU_IRQ_ENA);
|
||||
if (readl_relaxed(dcrtc->base + LCD_SPU_IRQ_ISR) & mask)
|
||||
writel(0, dcrtc->base + LCD_SPU_IRQ_ISR);
|
||||
}
|
||||
}
|
||||
|
||||
static void armada_drm_crtc_irq(struct armada_crtc *dcrtc, u32 stat)
|
||||
{
|
||||
void __iomem *base = dcrtc->base;
|
||||
@ -491,25 +510,6 @@ static irqreturn_t armada_drm_irq(int irq, void *arg)
|
||||
return IRQ_NONE;
|
||||
}
|
||||
|
||||
/* These are locked by dev->vbl_lock */
|
||||
void armada_drm_crtc_disable_irq(struct armada_crtc *dcrtc, u32 mask)
|
||||
{
|
||||
if (dcrtc->irq_ena & mask) {
|
||||
dcrtc->irq_ena &= ~mask;
|
||||
writel(dcrtc->irq_ena, dcrtc->base + LCD_SPU_IRQ_ENA);
|
||||
}
|
||||
}
|
||||
|
||||
void armada_drm_crtc_enable_irq(struct armada_crtc *dcrtc, u32 mask)
|
||||
{
|
||||
if ((dcrtc->irq_ena & mask) != mask) {
|
||||
dcrtc->irq_ena |= mask;
|
||||
writel(dcrtc->irq_ena, dcrtc->base + LCD_SPU_IRQ_ENA);
|
||||
if (readl_relaxed(dcrtc->base + LCD_SPU_IRQ_ISR) & mask)
|
||||
writel(0, dcrtc->base + LCD_SPU_IRQ_ISR);
|
||||
}
|
||||
}
|
||||
|
||||
static uint32_t armada_drm_crtc_calculate_csc(struct armada_crtc *dcrtc)
|
||||
{
|
||||
struct drm_display_mode *adj = &dcrtc->crtc.mode;
|
||||
@ -1109,6 +1109,22 @@ armada_drm_crtc_set_property(struct drm_crtc *crtc,
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* These are called under the vbl_lock. */
|
||||
static int armada_drm_crtc_enable_vblank(struct drm_crtc *crtc)
|
||||
{
|
||||
struct armada_crtc *dcrtc = drm_to_armada_crtc(crtc);
|
||||
|
||||
armada_drm_crtc_enable_irq(dcrtc, VSYNC_IRQ_ENA);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void armada_drm_crtc_disable_vblank(struct drm_crtc *crtc)
|
||||
{
|
||||
struct armada_crtc *dcrtc = drm_to_armada_crtc(crtc);
|
||||
|
||||
armada_drm_crtc_disable_irq(dcrtc, VSYNC_IRQ_ENA);
|
||||
}
|
||||
|
||||
static const struct drm_crtc_funcs armada_crtc_funcs = {
|
||||
.cursor_set = armada_drm_crtc_cursor_set,
|
||||
.cursor_move = armada_drm_crtc_cursor_move,
|
||||
@ -1116,6 +1132,8 @@ static const struct drm_crtc_funcs armada_crtc_funcs = {
|
||||
.set_config = drm_crtc_helper_set_config,
|
||||
.page_flip = armada_drm_crtc_page_flip,
|
||||
.set_property = armada_drm_crtc_set_property,
|
||||
.enable_vblank = armada_drm_crtc_enable_vblank,
|
||||
.disable_vblank = armada_drm_crtc_disable_vblank,
|
||||
};
|
||||
|
||||
static const struct drm_plane_funcs armada_primary_plane_funcs = {
|
||||
|
@ -104,8 +104,6 @@ struct armada_crtc {
|
||||
|
||||
void armada_drm_crtc_gamma_set(struct drm_crtc *, u16, u16, u16, int);
|
||||
void armada_drm_crtc_gamma_get(struct drm_crtc *, u16 *, u16 *, u16 *, int);
|
||||
void armada_drm_crtc_disable_irq(struct armada_crtc *, u32);
|
||||
void armada_drm_crtc_enable_irq(struct armada_crtc *, u32);
|
||||
void armada_drm_crtc_update_regs(struct armada_crtc *, struct armada_regs *);
|
||||
|
||||
void armada_drm_crtc_plane_disable(struct armada_crtc *dcrtc,
|
||||
|
@ -107,40 +107,9 @@ static struct drm_info_list armada_debugfs_list[] = {
|
||||
};
|
||||
#define ARMADA_DEBUGFS_ENTRIES ARRAY_SIZE(armada_debugfs_list)
|
||||
|
||||
static int drm_add_fake_info_node(struct drm_minor *minor, struct dentry *ent,
|
||||
const void *key)
|
||||
{
|
||||
struct drm_info_node *node;
|
||||
|
||||
node = kmalloc(sizeof(struct drm_info_node), GFP_KERNEL);
|
||||
if (!node) {
|
||||
debugfs_remove(ent);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
node->minor = minor;
|
||||
node->dent = ent;
|
||||
node->info_ent = (void *) key;
|
||||
|
||||
mutex_lock(&minor->debugfs_lock);
|
||||
list_add(&node->list, &minor->debugfs_list);
|
||||
mutex_unlock(&minor->debugfs_lock);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int armada_debugfs_create(struct dentry *root, struct drm_minor *minor,
|
||||
const char *name, umode_t mode, const struct file_operations *fops)
|
||||
{
|
||||
struct dentry *de;
|
||||
|
||||
de = debugfs_create_file(name, mode, root, minor->dev, fops);
|
||||
|
||||
return drm_add_fake_info_node(minor, de, fops);
|
||||
}
|
||||
|
||||
int armada_drm_debugfs_init(struct drm_minor *minor)
|
||||
{
|
||||
struct dentry *de;
|
||||
int ret;
|
||||
|
||||
ret = drm_debugfs_create_files(armada_debugfs_list,
|
||||
@ -149,29 +118,15 @@ int armada_drm_debugfs_init(struct drm_minor *minor)
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = armada_debugfs_create(minor->debugfs_root, minor,
|
||||
"reg", S_IFREG | S_IRUSR, &fops_reg_r);
|
||||
if (ret)
|
||||
goto err_1;
|
||||
de = debugfs_create_file("reg", S_IFREG | S_IRUSR,
|
||||
minor->debugfs_root, minor->dev, &fops_reg_r);
|
||||
if (!de)
|
||||
return -ENOMEM;
|
||||
|
||||
ret = armada_debugfs_create(minor->debugfs_root, minor,
|
||||
"reg_wr", S_IFREG | S_IWUSR, &fops_reg_w);
|
||||
if (ret)
|
||||
goto err_2;
|
||||
return ret;
|
||||
de = debugfs_create_file("reg_wr", S_IFREG | S_IWUSR,
|
||||
minor->debugfs_root, minor->dev, &fops_reg_w);
|
||||
if (!de)
|
||||
return -ENOMEM;
|
||||
|
||||
err_2:
|
||||
drm_debugfs_remove_files((struct drm_info_list *)&fops_reg_r, 1, minor);
|
||||
err_1:
|
||||
drm_debugfs_remove_files(armada_debugfs_list, ARMADA_DEBUGFS_ENTRIES,
|
||||
minor);
|
||||
return ret;
|
||||
}
|
||||
|
||||
void armada_drm_debugfs_cleanup(struct drm_minor *minor)
|
||||
{
|
||||
drm_debugfs_remove_files((struct drm_info_list *)&fops_reg_w, 1, minor);
|
||||
drm_debugfs_remove_files((struct drm_info_list *)&fops_reg_r, 1, minor);
|
||||
drm_debugfs_remove_files(armada_debugfs_list, ARMADA_DEBUGFS_ENTRIES,
|
||||
minor);
|
||||
return 0;
|
||||
}
|
||||
|
@ -90,6 +90,5 @@ void armada_fbdev_fini(struct drm_device *);
|
||||
int armada_overlay_plane_create(struct drm_device *, unsigned long);
|
||||
|
||||
int armada_drm_debugfs_init(struct drm_minor *);
|
||||
void armada_drm_debugfs_cleanup(struct drm_minor *);
|
||||
|
||||
#endif
|
||||
|
@ -49,20 +49,6 @@ void armada_drm_queue_unref_work(struct drm_device *dev,
|
||||
spin_unlock_irqrestore(&dev->event_lock, flags);
|
||||
}
|
||||
|
||||
/* These are called under the vbl_lock. */
|
||||
static int armada_drm_enable_vblank(struct drm_device *dev, unsigned int pipe)
|
||||
{
|
||||
struct armada_private *priv = dev->dev_private;
|
||||
armada_drm_crtc_enable_irq(priv->dcrtc[pipe], VSYNC_IRQ_ENA);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void armada_drm_disable_vblank(struct drm_device *dev, unsigned int pipe)
|
||||
{
|
||||
struct armada_private *priv = dev->dev_private;
|
||||
armada_drm_crtc_disable_irq(priv->dcrtc[pipe], VSYNC_IRQ_ENA);
|
||||
}
|
||||
|
||||
static struct drm_ioctl_desc armada_ioctls[] = {
|
||||
DRM_IOCTL_DEF_DRV(ARMADA_GEM_CREATE, armada_gem_create_ioctl,0),
|
||||
DRM_IOCTL_DEF_DRV(ARMADA_GEM_MMAP, armada_gem_mmap_ioctl, 0),
|
||||
@ -87,9 +73,6 @@ static const struct file_operations armada_drm_fops = {
|
||||
|
||||
static struct drm_driver armada_drm_driver = {
|
||||
.lastclose = armada_drm_lastclose,
|
||||
.get_vblank_counter = drm_vblank_no_hw_counter,
|
||||
.enable_vblank = armada_drm_enable_vblank,
|
||||
.disable_vblank = armada_drm_disable_vblank,
|
||||
.gem_free_object_unlocked = armada_gem_free_object,
|
||||
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
|
||||
.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
|
||||
@ -226,9 +209,6 @@ static void armada_drm_unbind(struct device *dev)
|
||||
drm_kms_helper_poll_fini(&priv->drm);
|
||||
armada_fbdev_fini(&priv->drm);
|
||||
|
||||
#ifdef CONFIG_DEBUG_FS
|
||||
armada_drm_debugfs_cleanup(priv->drm.primary);
|
||||
#endif
|
||||
drm_dev_unregister(&priv->drm);
|
||||
|
||||
component_unbind_all(dev, &priv->drm);
|
||||
|
@ -157,7 +157,6 @@ int armada_fbdev_init(struct drm_device *dev)
|
||||
|
||||
return 0;
|
||||
err_fb_setup:
|
||||
drm_fb_helper_release_fbi(fbh);
|
||||
drm_fb_helper_fini(fbh);
|
||||
err_fb_helper:
|
||||
priv->fbdev = NULL;
|
||||
@ -179,7 +178,6 @@ void armada_fbdev_fini(struct drm_device *dev)
|
||||
|
||||
if (fbh) {
|
||||
drm_fb_helper_unregister_fbi(fbh);
|
||||
drm_fb_helper_release_fbi(fbh);
|
||||
|
||||
drm_fb_helper_fini(fbh);
|
||||
|
||||
|
@ -215,13 +215,13 @@ static int astfb_create(struct drm_fb_helper *helper,
|
||||
info = drm_fb_helper_alloc_fbi(helper);
|
||||
if (IS_ERR(info)) {
|
||||
ret = PTR_ERR(info);
|
||||
goto err_free_vram;
|
||||
goto out;
|
||||
}
|
||||
info->par = afbdev;
|
||||
|
||||
ret = ast_framebuffer_init(dev, &afbdev->afb, &mode_cmd, gobj);
|
||||
if (ret)
|
||||
goto err_release_fbi;
|
||||
goto out;
|
||||
|
||||
afbdev->sysram = sysram;
|
||||
afbdev->size = size;
|
||||
@ -250,9 +250,7 @@ static int astfb_create(struct drm_fb_helper *helper,
|
||||
|
||||
return 0;
|
||||
|
||||
err_release_fbi:
|
||||
drm_fb_helper_release_fbi(helper);
|
||||
err_free_vram:
|
||||
out:
|
||||
vfree(sysram);
|
||||
return ret;
|
||||
}
|
||||
@ -287,7 +285,6 @@ static void ast_fbdev_destroy(struct drm_device *dev,
|
||||
struct ast_framebuffer *afb = &afbdev->afb;
|
||||
|
||||
drm_fb_helper_unregister_fbi(&afbdev->helper);
|
||||
drm_fb_helper_release_fbi(&afbdev->helper);
|
||||
|
||||
if (afb->obj) {
|
||||
drm_gem_object_unreference_unlocked(afb->obj);
|
||||
|
@ -1,6 +1,5 @@
|
||||
atmel-hlcdc-dc-y := atmel_hlcdc_crtc.o \
|
||||
atmel_hlcdc_dc.o \
|
||||
atmel_hlcdc_layer.o \
|
||||
atmel_hlcdc_output.o \
|
||||
atmel_hlcdc_plane.o
|
||||
|
||||
|
@ -434,6 +434,25 @@ static void atmel_hlcdc_crtc_destroy_state(struct drm_crtc *crtc,
|
||||
kfree(state);
|
||||
}
|
||||
|
||||
static int atmel_hlcdc_crtc_enable_vblank(struct drm_crtc *c)
|
||||
{
|
||||
struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
|
||||
struct regmap *regmap = crtc->dc->hlcdc->regmap;
|
||||
|
||||
/* Enable SOF (Start Of Frame) interrupt for vblank counting */
|
||||
regmap_write(regmap, ATMEL_HLCDC_IER, ATMEL_HLCDC_SOF);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void atmel_hlcdc_crtc_disable_vblank(struct drm_crtc *c)
|
||||
{
|
||||
struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
|
||||
struct regmap *regmap = crtc->dc->hlcdc->regmap;
|
||||
|
||||
regmap_write(regmap, ATMEL_HLCDC_IDR, ATMEL_HLCDC_SOF);
|
||||
}
|
||||
|
||||
static const struct drm_crtc_funcs atmel_hlcdc_crtc_funcs = {
|
||||
.page_flip = drm_atomic_helper_page_flip,
|
||||
.set_config = drm_atomic_helper_set_config,
|
||||
@ -441,12 +460,14 @@ static const struct drm_crtc_funcs atmel_hlcdc_crtc_funcs = {
|
||||
.reset = atmel_hlcdc_crtc_reset,
|
||||
.atomic_duplicate_state = atmel_hlcdc_crtc_duplicate_state,
|
||||
.atomic_destroy_state = atmel_hlcdc_crtc_destroy_state,
|
||||
.enable_vblank = atmel_hlcdc_crtc_enable_vblank,
|
||||
.disable_vblank = atmel_hlcdc_crtc_disable_vblank,
|
||||
};
|
||||
|
||||
int atmel_hlcdc_crtc_create(struct drm_device *dev)
|
||||
{
|
||||
struct atmel_hlcdc_plane *primary = NULL, *cursor = NULL;
|
||||
struct atmel_hlcdc_dc *dc = dev->dev_private;
|
||||
struct atmel_hlcdc_planes *planes = dc->planes;
|
||||
struct atmel_hlcdc_crtc *crtc;
|
||||
int ret;
|
||||
int i;
|
||||
@ -457,20 +478,41 @@ int atmel_hlcdc_crtc_create(struct drm_device *dev)
|
||||
|
||||
crtc->dc = dc;
|
||||
|
||||
ret = drm_crtc_init_with_planes(dev, &crtc->base,
|
||||
&planes->primary->base,
|
||||
planes->cursor ? &planes->cursor->base : NULL,
|
||||
&atmel_hlcdc_crtc_funcs, NULL);
|
||||
for (i = 0; i < ATMEL_HLCDC_MAX_LAYERS; i++) {
|
||||
if (!dc->layers[i])
|
||||
continue;
|
||||
|
||||
switch (dc->layers[i]->desc->type) {
|
||||
case ATMEL_HLCDC_BASE_LAYER:
|
||||
primary = atmel_hlcdc_layer_to_plane(dc->layers[i]);
|
||||
break;
|
||||
|
||||
case ATMEL_HLCDC_CURSOR_LAYER:
|
||||
cursor = atmel_hlcdc_layer_to_plane(dc->layers[i]);
|
||||
break;
|
||||
|
||||
default:
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
ret = drm_crtc_init_with_planes(dev, &crtc->base, &primary->base,
|
||||
&cursor->base, &atmel_hlcdc_crtc_funcs,
|
||||
NULL);
|
||||
if (ret < 0)
|
||||
goto fail;
|
||||
|
||||
crtc->id = drm_crtc_index(&crtc->base);
|
||||
|
||||
if (planes->cursor)
|
||||
planes->cursor->base.possible_crtcs = 1 << crtc->id;
|
||||
for (i = 0; i < ATMEL_HLCDC_MAX_LAYERS; i++) {
|
||||
struct atmel_hlcdc_plane *overlay;
|
||||
|
||||
for (i = 0; i < planes->noverlays; i++)
|
||||
planes->overlays[i]->base.possible_crtcs = 1 << crtc->id;
|
||||
if (dc->layers[i] &&
|
||||
dc->layers[i]->desc->type == ATMEL_HLCDC_OVERLAY_LAYER) {
|
||||
overlay = atmel_hlcdc_layer_to_plane(dc->layers[i]);
|
||||
overlay->base.possible_crtcs = 1 << crtc->id;
|
||||
}
|
||||
}
|
||||
|
||||
drm_crtc_helper_add(&crtc->base, &lcdc_crtc_helper_funcs);
|
||||
drm_crtc_vblank_reset(&crtc->base);
|
||||
|
@ -36,7 +36,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_at91sam9n12_layers[] = {
|
||||
.regs_offset = 0x40,
|
||||
.id = 0,
|
||||
.type = ATMEL_HLCDC_BASE_LAYER,
|
||||
.nconfigs = 5,
|
||||
.cfgs_offset = 0x2c,
|
||||
.layout = {
|
||||
.xstride = { 2 },
|
||||
.default_color = 3,
|
||||
@ -65,7 +65,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_at91sam9x5_layers[] = {
|
||||
.regs_offset = 0x40,
|
||||
.id = 0,
|
||||
.type = ATMEL_HLCDC_BASE_LAYER,
|
||||
.nconfigs = 5,
|
||||
.cfgs_offset = 0x2c,
|
||||
.layout = {
|
||||
.xstride = { 2 },
|
||||
.default_color = 3,
|
||||
@ -80,7 +80,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_at91sam9x5_layers[] = {
|
||||
.regs_offset = 0x100,
|
||||
.id = 1,
|
||||
.type = ATMEL_HLCDC_OVERLAY_LAYER,
|
||||
.nconfigs = 10,
|
||||
.cfgs_offset = 0x2c,
|
||||
.layout = {
|
||||
.pos = 2,
|
||||
.size = 3,
|
||||
@ -98,7 +98,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_at91sam9x5_layers[] = {
|
||||
.regs_offset = 0x280,
|
||||
.id = 2,
|
||||
.type = ATMEL_HLCDC_OVERLAY_LAYER,
|
||||
.nconfigs = 17,
|
||||
.cfgs_offset = 0x4c,
|
||||
.layout = {
|
||||
.pos = 2,
|
||||
.size = 3,
|
||||
@ -109,6 +109,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_at91sam9x5_layers[] = {
|
||||
.chroma_key = 10,
|
||||
.chroma_key_mask = 11,
|
||||
.general_config = 12,
|
||||
.scaler_config = 13,
|
||||
.csc = 14,
|
||||
},
|
||||
},
|
||||
@ -118,9 +119,9 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_at91sam9x5_layers[] = {
|
||||
.regs_offset = 0x340,
|
||||
.id = 3,
|
||||
.type = ATMEL_HLCDC_CURSOR_LAYER,
|
||||
.nconfigs = 10,
|
||||
.max_width = 128,
|
||||
.max_height = 128,
|
||||
.cfgs_offset = 0x2c,
|
||||
.layout = {
|
||||
.pos = 2,
|
||||
.size = 3,
|
||||
@ -153,7 +154,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
|
||||
.regs_offset = 0x40,
|
||||
.id = 0,
|
||||
.type = ATMEL_HLCDC_BASE_LAYER,
|
||||
.nconfigs = 7,
|
||||
.cfgs_offset = 0x2c,
|
||||
.layout = {
|
||||
.xstride = { 2 },
|
||||
.default_color = 3,
|
||||
@ -168,7 +169,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
|
||||
.regs_offset = 0x140,
|
||||
.id = 1,
|
||||
.type = ATMEL_HLCDC_OVERLAY_LAYER,
|
||||
.nconfigs = 10,
|
||||
.cfgs_offset = 0x2c,
|
||||
.layout = {
|
||||
.pos = 2,
|
||||
.size = 3,
|
||||
@ -186,7 +187,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
|
||||
.regs_offset = 0x240,
|
||||
.id = 2,
|
||||
.type = ATMEL_HLCDC_OVERLAY_LAYER,
|
||||
.nconfigs = 10,
|
||||
.cfgs_offset = 0x2c,
|
||||
.layout = {
|
||||
.pos = 2,
|
||||
.size = 3,
|
||||
@ -204,7 +205,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
|
||||
.regs_offset = 0x340,
|
||||
.id = 3,
|
||||
.type = ATMEL_HLCDC_OVERLAY_LAYER,
|
||||
.nconfigs = 42,
|
||||
.cfgs_offset = 0x4c,
|
||||
.layout = {
|
||||
.pos = 2,
|
||||
.size = 3,
|
||||
@ -215,6 +216,11 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
|
||||
.chroma_key = 10,
|
||||
.chroma_key_mask = 11,
|
||||
.general_config = 12,
|
||||
.scaler_config = 13,
|
||||
.phicoeffs = {
|
||||
.x = 17,
|
||||
.y = 33,
|
||||
},
|
||||
.csc = 14,
|
||||
},
|
||||
},
|
||||
@ -224,9 +230,9 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
|
||||
.regs_offset = 0x440,
|
||||
.id = 4,
|
||||
.type = ATMEL_HLCDC_CURSOR_LAYER,
|
||||
.nconfigs = 10,
|
||||
.max_width = 128,
|
||||
.max_height = 128,
|
||||
.cfgs_offset = 0x2c,
|
||||
.layout = {
|
||||
.pos = 2,
|
||||
.size = 3,
|
||||
@ -236,6 +242,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
|
||||
.chroma_key = 7,
|
||||
.chroma_key_mask = 8,
|
||||
.general_config = 9,
|
||||
.scaler_config = 13,
|
||||
},
|
||||
},
|
||||
};
|
||||
@ -260,7 +267,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d4_layers[] = {
|
||||
.regs_offset = 0x40,
|
||||
.id = 0,
|
||||
.type = ATMEL_HLCDC_BASE_LAYER,
|
||||
.nconfigs = 7,
|
||||
.cfgs_offset = 0x2c,
|
||||
.layout = {
|
||||
.xstride = { 2 },
|
||||
.default_color = 3,
|
||||
@ -275,7 +282,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d4_layers[] = {
|
||||
.regs_offset = 0x140,
|
||||
.id = 1,
|
||||
.type = ATMEL_HLCDC_OVERLAY_LAYER,
|
||||
.nconfigs = 10,
|
||||
.cfgs_offset = 0x2c,
|
||||
.layout = {
|
||||
.pos = 2,
|
||||
.size = 3,
|
||||
@ -293,7 +300,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d4_layers[] = {
|
||||
.regs_offset = 0x240,
|
||||
.id = 2,
|
||||
.type = ATMEL_HLCDC_OVERLAY_LAYER,
|
||||
.nconfigs = 10,
|
||||
.cfgs_offset = 0x2c,
|
||||
.layout = {
|
||||
.pos = 2,
|
||||
.size = 3,
|
||||
@ -311,7 +318,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d4_layers[] = {
|
||||
.regs_offset = 0x340,
|
||||
.id = 3,
|
||||
.type = ATMEL_HLCDC_OVERLAY_LAYER,
|
||||
.nconfigs = 42,
|
||||
.cfgs_offset = 0x4c,
|
||||
.layout = {
|
||||
.pos = 2,
|
||||
.size = 3,
|
||||
@ -322,6 +329,11 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d4_layers[] = {
|
||||
.chroma_key = 10,
|
||||
.chroma_key_mask = 11,
|
||||
.general_config = 12,
|
||||
.scaler_config = 13,
|
||||
.phicoeffs = {
|
||||
.x = 17,
|
||||
.y = 33,
|
||||
},
|
||||
.csc = 14,
|
||||
},
|
||||
},
|
||||
@ -392,6 +404,17 @@ int atmel_hlcdc_dc_mode_valid(struct atmel_hlcdc_dc *dc,
|
||||
return MODE_OK;
|
||||
}
|
||||
|
||||
static void atmel_hlcdc_layer_irq(struct atmel_hlcdc_layer *layer)
|
||||
{
|
||||
if (!layer)
|
||||
return;
|
||||
|
||||
if (layer->desc->type == ATMEL_HLCDC_BASE_LAYER ||
|
||||
layer->desc->type == ATMEL_HLCDC_OVERLAY_LAYER ||
|
||||
layer->desc->type == ATMEL_HLCDC_CURSOR_LAYER)
|
||||
atmel_hlcdc_plane_irq(atmel_hlcdc_layer_to_plane(layer));
|
||||
}
|
||||
|
||||
static irqreturn_t atmel_hlcdc_dc_irq_handler(int irq, void *data)
|
||||
{
|
||||
struct drm_device *dev = data;
|
||||
@@ -410,12 +433,8 @@ static irqreturn_t atmel_hlcdc_dc_irq_handler(int irq, void *data)
 	atmel_hlcdc_crtc_irq(dc->crtc);
 
 	for (i = 0; i < ATMEL_HLCDC_MAX_LAYERS; i++) {
-		struct atmel_hlcdc_layer *layer = dc->layers[i];
-
-		if (!(ATMEL_HLCDC_LAYER_STATUS(i) & status) || !layer)
-			continue;
-
-		atmel_hlcdc_layer_irq(layer);
+		if (ATMEL_HLCDC_LAYER_STATUS(i) & status)
+			atmel_hlcdc_layer_irq(dc->layers[i]);
 	}
 
 	return IRQ_HANDLED;
@@ -537,9 +556,7 @@ static const struct drm_mode_config_funcs mode_config_funcs = {
static int atmel_hlcdc_dc_modeset_init(struct drm_device *dev)
|
||||
{
|
||||
struct atmel_hlcdc_dc *dc = dev->dev_private;
|
||||
struct atmel_hlcdc_planes *planes;
|
||||
int ret;
|
||||
int i;
|
||||
|
||||
drm_mode_config_init(dev);
|
||||
|
||||
@@ -549,25 +566,12 @@ static int atmel_hlcdc_dc_modeset_init(struct drm_device *dev)
return ret;
|
||||
}
|
||||
|
||||
planes = atmel_hlcdc_create_planes(dev);
|
||||
if (IS_ERR(planes)) {
|
||||
dev_err(dev->dev, "failed to create planes\n");
|
||||
return PTR_ERR(planes);
|
||||
ret = atmel_hlcdc_create_planes(dev);
|
||||
if (ret) {
|
||||
dev_err(dev->dev, "failed to create planes: %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
dc->planes = planes;
|
||||
|
||||
dc->layers[planes->primary->layer.desc->id] =
|
||||
&planes->primary->layer;
|
||||
|
||||
if (planes->cursor)
|
||||
dc->layers[planes->cursor->layer.desc->id] =
|
||||
&planes->cursor->layer;
|
||||
|
||||
for (i = 0; i < planes->noverlays; i++)
|
||||
dc->layers[planes->overlays[i]->layer.desc->id] =
|
||||
&planes->overlays[i]->layer;
|
||||
|
||||
ret = atmel_hlcdc_crtc_create(dev);
|
||||
if (ret) {
|
||||
dev_err(dev->dev, "failed to create crtc\n");
|
||||
@@ -720,25 +724,6 @@ static void atmel_hlcdc_dc_irq_uninstall(struct drm_device *dev)
regmap_read(dc->hlcdc->regmap, ATMEL_HLCDC_ISR, &isr);
|
||||
}
|
||||
|
||||
static int atmel_hlcdc_dc_enable_vblank(struct drm_device *dev,
|
||||
unsigned int pipe)
|
||||
{
|
||||
struct atmel_hlcdc_dc *dc = dev->dev_private;
|
||||
|
||||
/* Enable SOF (Start Of Frame) interrupt for vblank counting */
|
||||
regmap_write(dc->hlcdc->regmap, ATMEL_HLCDC_IER, ATMEL_HLCDC_SOF);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void atmel_hlcdc_dc_disable_vblank(struct drm_device *dev,
|
||||
unsigned int pipe)
|
||||
{
|
||||
struct atmel_hlcdc_dc *dc = dev->dev_private;
|
||||
|
||||
regmap_write(dc->hlcdc->regmap, ATMEL_HLCDC_IDR, ATMEL_HLCDC_SOF);
|
||||
}
|
||||
|
||||
static const struct file_operations fops = {
|
||||
.owner = THIS_MODULE,
|
||||
.open = drm_open,
|
||||
@@ -760,9 +745,6 @@ static struct drm_driver atmel_hlcdc_dc_driver = {
.irq_preinstall = atmel_hlcdc_dc_irq_uninstall,
|
||||
.irq_postinstall = atmel_hlcdc_dc_irq_postinstall,
|
||||
.irq_uninstall = atmel_hlcdc_dc_irq_uninstall,
|
||||
.get_vblank_counter = drm_vblank_no_hw_counter,
|
||||
.enable_vblank = atmel_hlcdc_dc_enable_vblank,
|
||||
.disable_vblank = atmel_hlcdc_dc_disable_vblank,
|
||||
.gem_free_object_unlocked = drm_gem_cma_free_object,
|
||||
.gem_vm_ops = &drm_gem_cma_vm_ops,
|
||||
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
|
||||
|
@@ -23,7 +23,9 @@
#define DRM_ATMEL_HLCDC_H
|
||||
|
||||
#include <linux/clk.h>
|
||||
#include <linux/dmapool.h>
|
||||
#include <linux/irqdomain.h>
|
||||
#include <linux/mfd/atmel-hlcdc.h>
|
||||
#include <linux/pwm.h>
|
||||
|
||||
#include <drm/drm_atomic.h>
|
||||
@@ -36,14 +38,276 @@
#include <drm/drm_plane_helper.h>
|
||||
#include <drm/drmP.h>
|
||||
|
||||
#include "atmel_hlcdc_layer.h"
|
||||
#define ATMEL_HLCDC_LAYER_CHER 0x0
|
||||
#define ATMEL_HLCDC_LAYER_CHDR 0x4
|
||||
#define ATMEL_HLCDC_LAYER_CHSR 0x8
|
||||
#define ATMEL_HLCDC_LAYER_EN BIT(0)
|
||||
#define ATMEL_HLCDC_LAYER_UPDATE BIT(1)
|
||||
#define ATMEL_HLCDC_LAYER_A2Q BIT(2)
|
||||
#define ATMEL_HLCDC_LAYER_RST BIT(8)
|
||||
|
||||
#define ATMEL_HLCDC_MAX_LAYERS 5
|
||||
#define ATMEL_HLCDC_LAYER_IER 0xc
|
||||
#define ATMEL_HLCDC_LAYER_IDR 0x10
|
||||
#define ATMEL_HLCDC_LAYER_IMR 0x14
|
||||
#define ATMEL_HLCDC_LAYER_ISR 0x18
|
||||
#define ATMEL_HLCDC_LAYER_DFETCH BIT(0)
|
||||
#define ATMEL_HLCDC_LAYER_LFETCH BIT(1)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_IRQ(p) BIT(2 + (8 * (p)))
|
||||
#define ATMEL_HLCDC_LAYER_DSCR_IRQ(p) BIT(3 + (8 * (p)))
|
||||
#define ATMEL_HLCDC_LAYER_ADD_IRQ(p) BIT(4 + (8 * (p)))
|
||||
#define ATMEL_HLCDC_LAYER_DONE_IRQ(p) BIT(5 + (8 * (p)))
|
||||
#define ATMEL_HLCDC_LAYER_OVR_IRQ(p) BIT(6 + (8 * (p)))
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_PLANE_HEAD(p) (((p) * 0x10) + 0x1c)
|
||||
#define ATMEL_HLCDC_LAYER_PLANE_ADDR(p) (((p) * 0x10) + 0x20)
|
||||
#define ATMEL_HLCDC_LAYER_PLANE_CTRL(p) (((p) * 0x10) + 0x24)
|
||||
#define ATMEL_HLCDC_LAYER_PLANE_NEXT(p) (((p) * 0x10) + 0x28)
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_DMA_CFG 0
|
||||
#define ATMEL_HLCDC_LAYER_DMA_SIF BIT(0)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_BLEN_MASK GENMASK(5, 4)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_BLEN_SINGLE (0 << 4)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_BLEN_INCR4 (1 << 4)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_BLEN_INCR8 (2 << 4)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_BLEN_INCR16 (3 << 4)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_DLBO BIT(8)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_ROTDIS BIT(12)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_LOCKDIS BIT(13)
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_FORMAT_CFG 1
|
||||
#define ATMEL_HLCDC_LAYER_RGB (0 << 0)
|
||||
#define ATMEL_HLCDC_LAYER_CLUT (1 << 0)
|
||||
#define ATMEL_HLCDC_LAYER_YUV (2 << 0)
|
||||
#define ATMEL_HLCDC_RGB_MODE(m) \
|
||||
(ATMEL_HLCDC_LAYER_RGB | (((m) & 0xf) << 4))
|
||||
#define ATMEL_HLCDC_CLUT_MODE(m) \
|
||||
(ATMEL_HLCDC_LAYER_CLUT | (((m) & 0x3) << 8))
|
||||
#define ATMEL_HLCDC_YUV_MODE(m) \
|
||||
(ATMEL_HLCDC_LAYER_YUV | (((m) & 0xf) << 12))
|
||||
#define ATMEL_HLCDC_YUV422ROT BIT(16)
|
||||
#define ATMEL_HLCDC_YUV422SWP BIT(17)
|
||||
#define ATMEL_HLCDC_DSCALEOPT BIT(20)
|
||||
|
||||
#define ATMEL_HLCDC_XRGB4444_MODE ATMEL_HLCDC_RGB_MODE(0)
|
||||
#define ATMEL_HLCDC_ARGB4444_MODE ATMEL_HLCDC_RGB_MODE(1)
|
||||
#define ATMEL_HLCDC_RGBA4444_MODE ATMEL_HLCDC_RGB_MODE(2)
|
||||
#define ATMEL_HLCDC_RGB565_MODE ATMEL_HLCDC_RGB_MODE(3)
|
||||
#define ATMEL_HLCDC_ARGB1555_MODE ATMEL_HLCDC_RGB_MODE(4)
|
||||
#define ATMEL_HLCDC_XRGB8888_MODE ATMEL_HLCDC_RGB_MODE(9)
|
||||
#define ATMEL_HLCDC_RGB888_MODE ATMEL_HLCDC_RGB_MODE(10)
|
||||
#define ATMEL_HLCDC_ARGB8888_MODE ATMEL_HLCDC_RGB_MODE(12)
|
||||
#define ATMEL_HLCDC_RGBA8888_MODE ATMEL_HLCDC_RGB_MODE(13)
|
||||
|
||||
#define ATMEL_HLCDC_AYUV_MODE ATMEL_HLCDC_YUV_MODE(0)
|
||||
#define ATMEL_HLCDC_YUYV_MODE ATMEL_HLCDC_YUV_MODE(1)
|
||||
#define ATMEL_HLCDC_UYVY_MODE ATMEL_HLCDC_YUV_MODE(2)
|
||||
#define ATMEL_HLCDC_YVYU_MODE ATMEL_HLCDC_YUV_MODE(3)
|
||||
#define ATMEL_HLCDC_VYUY_MODE ATMEL_HLCDC_YUV_MODE(4)
|
||||
#define ATMEL_HLCDC_NV61_MODE ATMEL_HLCDC_YUV_MODE(5)
|
||||
#define ATMEL_HLCDC_YUV422_MODE ATMEL_HLCDC_YUV_MODE(6)
|
||||
#define ATMEL_HLCDC_NV21_MODE ATMEL_HLCDC_YUV_MODE(7)
|
||||
#define ATMEL_HLCDC_YUV420_MODE ATMEL_HLCDC_YUV_MODE(8)
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_POS(x, y) ((x) | ((y) << 16))
|
||||
#define ATMEL_HLCDC_LAYER_SIZE(w, h) (((w) - 1) | (((h) - 1) << 16))
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_CRKEY BIT(0)
|
||||
#define ATMEL_HLCDC_LAYER_INV BIT(1)
|
||||
#define ATMEL_HLCDC_LAYER_ITER2BL BIT(2)
|
||||
#define ATMEL_HLCDC_LAYER_ITER BIT(3)
|
||||
#define ATMEL_HLCDC_LAYER_REVALPHA BIT(4)
|
||||
#define ATMEL_HLCDC_LAYER_GAEN BIT(5)
|
||||
#define ATMEL_HLCDC_LAYER_LAEN BIT(6)
|
||||
#define ATMEL_HLCDC_LAYER_OVR BIT(7)
|
||||
#define ATMEL_HLCDC_LAYER_DMA BIT(8)
|
||||
#define ATMEL_HLCDC_LAYER_REP BIT(9)
|
||||
#define ATMEL_HLCDC_LAYER_DSTKEY BIT(10)
|
||||
#define ATMEL_HLCDC_LAYER_DISCEN BIT(11)
|
||||
#define ATMEL_HLCDC_LAYER_GA_SHIFT 16
|
||||
#define ATMEL_HLCDC_LAYER_GA_MASK \
|
||||
GENMASK(23, ATMEL_HLCDC_LAYER_GA_SHIFT)
|
||||
#define ATMEL_HLCDC_LAYER_GA(x) \
|
||||
((x) << ATMEL_HLCDC_LAYER_GA_SHIFT)
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_DISC_POS(x, y) ((x) | ((y) << 16))
|
||||
#define ATMEL_HLCDC_LAYER_DISC_SIZE(w, h) (((w) - 1) | (((h) - 1) << 16))
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_SCALER_FACTORS(x, y) ((x) | ((y) << 16))
|
||||
#define ATMEL_HLCDC_LAYER_SCALER_ENABLE BIT(31)
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_MAX_PLANES 3
|
||||
|
||||
#define ATMEL_HLCDC_DMA_CHANNEL_DSCR_RESERVED BIT(0)
|
||||
#define ATMEL_HLCDC_DMA_CHANNEL_DSCR_LOADED BIT(1)
|
||||
#define ATMEL_HLCDC_DMA_CHANNEL_DSCR_DONE BIT(2)
|
||||
#define ATMEL_HLCDC_DMA_CHANNEL_DSCR_OVERRUN BIT(3)
|
||||
|
||||
#define ATMEL_HLCDC_MAX_LAYERS 6
|
||||
|
||||
/**
|
||||
* Atmel HLCDC Layer registers layout structure
|
||||
*
|
||||
* Each HLCDC layer has its own register organization and a given register
|
||||
* can be placed differently on 2 different layers depending on its
|
||||
* capabilities.
|
||||
* This structure stores common registers layout for a given layer and is
|
||||
* used by HLCDC layer code to choose the appropriate register to write to
|
||||
* or to read from.
|
||||
*
|
||||
* For all fields, a value of zero means "unsupported".
|
||||
*
|
||||
* See Atmel's datasheet for a detailed description of these registers.
|
||||
*
|
||||
* @xstride: xstride registers
|
||||
* @pstride: pstride registers
|
||||
* @pos: position register
|
||||
* @size: displayed size register
|
||||
* @memsize: memory size register
|
||||
* @default_color: default color register
|
||||
* @chroma_key: chroma key register
|
||||
* @chroma_key_mask: chroma key mask register
|
||||
* @general_config: general layer config register
|
||||
* @scaler_config: scaler factors register
|
||||
* @phicoeffs: X/Y PHI coefficient registers
|
||||
* @disc_pos: discard area position register
|
||||
* @disc_size: discard area size register
|
||||
* @csc: color space conversion register
|
||||
*/
|
||||
struct atmel_hlcdc_layer_cfg_layout {
|
||||
int xstride[ATMEL_HLCDC_LAYER_MAX_PLANES];
|
||||
int pstride[ATMEL_HLCDC_LAYER_MAX_PLANES];
|
||||
int pos;
|
||||
int size;
|
||||
int memsize;
|
||||
int default_color;
|
||||
int chroma_key;
|
||||
int chroma_key_mask;
|
||||
int general_config;
|
||||
int scaler_config;
|
||||
struct {
|
||||
int x;
|
||||
int y;
|
||||
} phicoeffs;
|
||||
int disc_pos;
|
||||
int disc_size;
|
||||
int csc;
|
||||
};
|
||||
|
||||
/**
|
||||
* Atmel HLCDC DMA descriptor structure
|
||||
*
|
||||
* This structure is used by the HLCDC DMA engine to schedule a DMA transfer.
|
||||
*
|
||||
* The structure fields must remain in this specific order, because they're
|
||||
* used by the HLCDC DMA engine, which expects them in this order.
|
||||
* HLCDC DMA descriptors must be aligned on 64 bits.
|
||||
*
|
||||
* @addr: buffer DMA address
|
||||
* @ctrl: DMA transfer options
|
||||
* @next: next DMA descriptor to fetch
|
||||
* @self: descriptor DMA address
|
||||
*/
|
||||
struct atmel_hlcdc_dma_channel_dscr {
|
||||
dma_addr_t addr;
|
||||
u32 ctrl;
|
||||
dma_addr_t next;
|
||||
dma_addr_t self;
|
||||
} __aligned(sizeof(u64));
|
||||
|
||||
/**
|
||||
* Atmel HLCDC layer types
|
||||
*/
|
||||
enum atmel_hlcdc_layer_type {
|
||||
ATMEL_HLCDC_NO_LAYER,
|
||||
ATMEL_HLCDC_BASE_LAYER,
|
||||
ATMEL_HLCDC_OVERLAY_LAYER,
|
||||
ATMEL_HLCDC_CURSOR_LAYER,
|
||||
ATMEL_HLCDC_PP_LAYER,
|
||||
};
|
||||
|
||||
/**
|
||||
* Atmel HLCDC Supported formats structure
|
||||
*
|
||||
* This structure lists all the formats supported by a given layer.
|
||||
*
|
||||
* @nformats: number of supported formats
|
||||
* @formats: supported formats
|
||||
*/
|
||||
struct atmel_hlcdc_formats {
|
||||
int nformats;
|
||||
u32 *formats;
|
||||
};
|
||||
|
||||
/**
|
||||
* Atmel HLCDC Layer description structure
|
||||
*
|
||||
* This structure describes the capabilities provided by a given layer.
|
||||
*
|
||||
* @name: layer name
|
||||
* @type: layer type
|
||||
* @id: layer id
|
||||
* @regs_offset: offset of the layer registers from the HLCDC registers base
|
||||
* @cfgs_offset: CFGX registers offset from the layer registers base
|
||||
* @formats: supported formats
|
||||
* @layout: config registers layout
|
||||
* @max_width: maximum width supported by this layer (0 means unlimited)
|
||||
* @max_height: maximum height supported by this layer (0 means unlimited)
|
||||
*/
|
||||
struct atmel_hlcdc_layer_desc {
|
||||
const char *name;
|
||||
enum atmel_hlcdc_layer_type type;
|
||||
int id;
|
||||
int regs_offset;
|
||||
int cfgs_offset;
|
||||
struct atmel_hlcdc_formats *formats;
|
||||
struct atmel_hlcdc_layer_cfg_layout layout;
|
||||
int max_width;
|
||||
int max_height;
|
||||
};
|
||||
|
||||
/**
|
||||
* Atmel HLCDC Layer.
|
||||
*
|
||||
* A layer can be a DRM plane or a post processing layer used to render
|
||||
* HLCDC composition into memory.
|
||||
*
|
||||
* @desc: layer description
|
||||
* @regmap: pointer to the HLCDC regmap
|
||||
*/
|
||||
struct atmel_hlcdc_layer {
|
||||
const struct atmel_hlcdc_layer_desc *desc;
|
||||
struct regmap *regmap;
|
||||
};
|
||||
|
||||
/**
|
||||
* Atmel HLCDC Plane.
|
||||
*
|
||||
* @base: base DRM plane structure
|
||||
* @layer: HLCDC layer structure
|
||||
* @properties: pointer to the property definitions structure
|
||||
*/
|
||||
struct atmel_hlcdc_plane {
|
||||
struct drm_plane base;
|
||||
struct atmel_hlcdc_layer layer;
|
||||
struct atmel_hlcdc_plane_properties *properties;
|
||||
};
|
||||
|
||||
static inline struct atmel_hlcdc_plane *
|
||||
drm_plane_to_atmel_hlcdc_plane(struct drm_plane *p)
|
||||
{
|
||||
return container_of(p, struct atmel_hlcdc_plane, base);
|
||||
}
|
||||
|
||||
static inline struct atmel_hlcdc_plane *
|
||||
atmel_hlcdc_layer_to_plane(struct atmel_hlcdc_layer *layer)
|
||||
{
|
||||
return container_of(layer, struct atmel_hlcdc_plane, layer);
|
||||
}
|
||||
|
||||
/**
|
||||
* Atmel HLCDC Display Controller description structure.
|
||||
*
|
||||
* This structure describe the HLCDC IP capabilities and depends on the
|
||||
* This structure describes the HLCDC IP capabilities and depends on the
|
||||
* HLCDC IP version (or Atmel SoC family).
|
||||
*
|
||||
* @min_width: minimum width supported by the Display Controller
|
||||
@@ -83,68 +347,25 @@ struct atmel_hlcdc_plane_properties {
struct drm_property *alpha;
|
||||
};
|
||||
|
||||
/**
|
||||
* Atmel HLCDC Plane.
|
||||
*
|
||||
* @base: base DRM plane structure
|
||||
* @layer: HLCDC layer structure
|
||||
* @properties: pointer to the property definitions structure
|
||||
* @rotation: current rotation status
|
||||
*/
|
||||
struct atmel_hlcdc_plane {
|
||||
struct drm_plane base;
|
||||
struct atmel_hlcdc_layer layer;
|
||||
struct atmel_hlcdc_plane_properties *properties;
|
||||
};
|
||||
|
||||
static inline struct atmel_hlcdc_plane *
|
||||
drm_plane_to_atmel_hlcdc_plane(struct drm_plane *p)
|
||||
{
|
||||
return container_of(p, struct atmel_hlcdc_plane, base);
|
||||
}
|
||||
|
||||
static inline struct atmel_hlcdc_plane *
|
||||
atmel_hlcdc_layer_to_plane(struct atmel_hlcdc_layer *l)
|
||||
{
|
||||
return container_of(l, struct atmel_hlcdc_plane, layer);
|
||||
}
|
||||
|
||||
/**
|
||||
* Atmel HLCDC Planes.
|
||||
*
|
||||
* This structure stores the instantiated HLCDC Planes and can be accessed by
|
||||
* the HLCDC Display Controller or the HLCDC CRTC.
|
||||
*
|
||||
* @primary: primary plane
|
||||
* @cursor: hardware cursor plane
|
||||
* @overlays: overlay plane table
|
||||
* @noverlays: number of overlay planes
|
||||
*/
|
||||
struct atmel_hlcdc_planes {
|
||||
struct atmel_hlcdc_plane *primary;
|
||||
struct atmel_hlcdc_plane *cursor;
|
||||
struct atmel_hlcdc_plane **overlays;
|
||||
int noverlays;
|
||||
};
|
||||
|
||||
/**
|
||||
* Atmel HLCDC Display Controller.
|
||||
*
|
||||
* @desc: HLCDC Display Controller description
|
||||
* @dscrpool: DMA coherent pool used to allocate DMA descriptors
|
||||
* @hlcdc: pointer to the atmel_hlcdc structure provided by the MFD device
|
||||
* @fbdev: framebuffer device attached to the Display Controller
|
||||
* @crtc: CRTC provided by the display controller
|
||||
* @planes: instantiated planes
|
||||
* @layers: active HLCDC layer
|
||||
* @layers: active HLCDC layers
|
||||
* @wq: display controller workqueue
|
||||
* @commit: used for async commit handling
|
||||
*/
|
||||
struct atmel_hlcdc_dc {
|
||||
const struct atmel_hlcdc_dc_desc *desc;
|
||||
struct dma_pool *dscrpool;
|
||||
struct atmel_hlcdc *hlcdc;
|
||||
struct drm_fbdev_cma *fbdev;
|
||||
struct drm_crtc *crtc;
|
||||
struct atmel_hlcdc_planes *planes;
|
||||
struct atmel_hlcdc_layer *layers[ATMEL_HLCDC_MAX_LAYERS];
|
||||
struct workqueue_struct *wq;
|
||||
struct {
|
||||
@@ -156,11 +377,51 @@ struct atmel_hlcdc_dc {
extern struct atmel_hlcdc_formats atmel_hlcdc_plane_rgb_formats;
|
||||
extern struct atmel_hlcdc_formats atmel_hlcdc_plane_rgb_and_yuv_formats;
|
||||
|
||||
static inline void atmel_hlcdc_layer_write_reg(struct atmel_hlcdc_layer *layer,
|
||||
unsigned int reg, u32 val)
|
||||
{
|
||||
regmap_write(layer->regmap, layer->desc->regs_offset + reg, val);
|
||||
}
|
||||
|
||||
static inline u32 atmel_hlcdc_layer_read_reg(struct atmel_hlcdc_layer *layer,
|
||||
unsigned int reg)
|
||||
{
|
||||
u32 val;
|
||||
|
||||
regmap_read(layer->regmap, layer->desc->regs_offset + reg, &val);
|
||||
|
||||
return val;
|
||||
}
|
||||
|
||||
static inline void atmel_hlcdc_layer_write_cfg(struct atmel_hlcdc_layer *layer,
|
||||
unsigned int cfgid, u32 val)
|
||||
{
|
||||
atmel_hlcdc_layer_write_reg(layer,
|
||||
layer->desc->cfgs_offset +
|
||||
(cfgid * sizeof(u32)), val);
|
||||
}
|
||||
|
||||
static inline u32 atmel_hlcdc_layer_read_cfg(struct atmel_hlcdc_layer *layer,
|
||||
unsigned int cfgid)
|
||||
{
|
||||
return atmel_hlcdc_layer_read_reg(layer,
|
||||
layer->desc->cfgs_offset +
|
||||
(cfgid * sizeof(u32)));
|
||||
}
|
||||
|
||||
static inline void atmel_hlcdc_layer_init(struct atmel_hlcdc_layer *layer,
|
||||
const struct atmel_hlcdc_layer_desc *desc,
|
||||
struct regmap *regmap)
|
||||
{
|
||||
layer->desc = desc;
|
||||
layer->regmap = regmap;
|
||||
}
|
||||
|
||||
int atmel_hlcdc_dc_mode_valid(struct atmel_hlcdc_dc *dc,
|
||||
struct drm_display_mode *mode);
|
||||
|
||||
struct atmel_hlcdc_planes *
|
||||
atmel_hlcdc_create_planes(struct drm_device *dev);
|
||||
int atmel_hlcdc_create_planes(struct drm_device *dev);
|
||||
void atmel_hlcdc_plane_irq(struct atmel_hlcdc_plane *plane);
|
||||
|
||||
int atmel_hlcdc_plane_prepare_disc_area(struct drm_crtc_state *c_state);
|
||||
int atmel_hlcdc_plane_prepare_ahb_routing(struct drm_crtc_state *c_state);
|
||||
|
@@ -1,666 +0,0 @@
/*
|
||||
* Copyright (C) 2014 Free Electrons
|
||||
* Copyright (C) 2014 Atmel
|
||||
*
|
||||
* Author: Boris BREZILLON <boris.brezillon@free-electrons.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms of the GNU General Public License version 2 as published by
|
||||
* the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
#include <linux/dma-mapping.h>
|
||||
#include <linux/interrupt.h>
|
||||
|
||||
#include "atmel_hlcdc_dc.h"
|
||||
|
||||
static void
|
||||
atmel_hlcdc_layer_fb_flip_release(struct drm_flip_work *work, void *val)
|
||||
{
|
||||
struct atmel_hlcdc_layer_fb_flip *flip = val;
|
||||
|
||||
if (flip->fb)
|
||||
drm_framebuffer_unreference(flip->fb);
|
||||
kfree(flip);
|
||||
}
|
||||
|
||||
static void
|
||||
atmel_hlcdc_layer_fb_flip_destroy(struct atmel_hlcdc_layer_fb_flip *flip)
|
||||
{
|
||||
if (flip->fb)
|
||||
drm_framebuffer_unreference(flip->fb);
|
||||
kfree(flip->task);
|
||||
kfree(flip);
|
||||
}
|
||||
|
||||
static void
|
||||
atmel_hlcdc_layer_fb_flip_release_queue(struct atmel_hlcdc_layer *layer,
|
||||
struct atmel_hlcdc_layer_fb_flip *flip)
|
||||
{
|
||||
int i;
|
||||
|
||||
if (!flip)
|
||||
return;
|
||||
|
||||
for (i = 0; i < layer->max_planes; i++) {
|
||||
if (!flip->dscrs[i])
|
||||
break;
|
||||
|
||||
flip->dscrs[i]->status = 0;
|
||||
flip->dscrs[i] = NULL;
|
||||
}
|
||||
|
||||
drm_flip_work_queue_task(&layer->gc, flip->task);
|
||||
drm_flip_work_commit(&layer->gc, layer->wq);
|
||||
}
|
||||
|
||||
static void atmel_hlcdc_layer_update_reset(struct atmel_hlcdc_layer *layer,
|
||||
int id)
|
||||
{
|
||||
struct atmel_hlcdc_layer_update *upd = &layer->update;
|
||||
struct atmel_hlcdc_layer_update_slot *slot;
|
||||
|
||||
if (id < 0 || id > 1)
|
||||
return;
|
||||
|
||||
slot = &upd->slots[id];
|
||||
bitmap_clear(slot->updated_configs, 0, layer->desc->nconfigs);
|
||||
memset(slot->configs, 0,
|
||||
sizeof(*slot->configs) * layer->desc->nconfigs);
|
||||
|
||||
if (slot->fb_flip) {
|
||||
atmel_hlcdc_layer_fb_flip_release_queue(layer, slot->fb_flip);
|
||||
slot->fb_flip = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
static void atmel_hlcdc_layer_update_apply(struct atmel_hlcdc_layer *layer)
|
||||
{
|
||||
struct atmel_hlcdc_layer_dma_channel *dma = &layer->dma;
|
||||
const struct atmel_hlcdc_layer_desc *desc = layer->desc;
|
||||
struct atmel_hlcdc_layer_update *upd = &layer->update;
|
||||
struct regmap *regmap = layer->hlcdc->regmap;
|
||||
struct atmel_hlcdc_layer_update_slot *slot;
|
||||
struct atmel_hlcdc_layer_fb_flip *fb_flip;
|
||||
struct atmel_hlcdc_dma_channel_dscr *dscr;
|
||||
unsigned int cfg;
|
||||
u32 action = 0;
|
||||
int i = 0;
|
||||
|
||||
if (upd->pending < 0 || upd->pending > 1)
|
||||
return;
|
||||
|
||||
slot = &upd->slots[upd->pending];
|
||||
|
||||
for_each_set_bit(cfg, slot->updated_configs, layer->desc->nconfigs) {
|
||||
regmap_write(regmap,
|
||||
desc->regs_offset +
|
||||
ATMEL_HLCDC_LAYER_CFG(layer, cfg),
|
||||
slot->configs[cfg]);
|
||||
action |= ATMEL_HLCDC_LAYER_UPDATE;
|
||||
}
|
||||
|
||||
fb_flip = slot->fb_flip;
|
||||
|
||||
if (!fb_flip->fb)
|
||||
goto apply;
|
||||
|
||||
if (dma->status == ATMEL_HLCDC_LAYER_DISABLED) {
|
||||
for (i = 0; i < fb_flip->ngems; i++) {
|
||||
dscr = fb_flip->dscrs[i];
|
||||
dscr->ctrl = ATMEL_HLCDC_LAYER_DFETCH |
|
||||
ATMEL_HLCDC_LAYER_DMA_IRQ |
|
||||
ATMEL_HLCDC_LAYER_ADD_IRQ |
|
||||
ATMEL_HLCDC_LAYER_DONE_IRQ;
|
||||
|
||||
regmap_write(regmap,
|
||||
desc->regs_offset +
|
||||
ATMEL_HLCDC_LAYER_PLANE_ADDR(i),
|
||||
dscr->addr);
|
||||
regmap_write(regmap,
|
||||
desc->regs_offset +
|
||||
ATMEL_HLCDC_LAYER_PLANE_CTRL(i),
|
||||
dscr->ctrl);
|
||||
regmap_write(regmap,
|
||||
desc->regs_offset +
|
||||
ATMEL_HLCDC_LAYER_PLANE_NEXT(i),
|
||||
dscr->next);
|
||||
}
|
||||
|
||||
action |= ATMEL_HLCDC_LAYER_DMA_CHAN;
|
||||
dma->status = ATMEL_HLCDC_LAYER_ENABLED;
|
||||
} else {
|
||||
for (i = 0; i < fb_flip->ngems; i++) {
|
||||
dscr = fb_flip->dscrs[i];
|
||||
dscr->ctrl = ATMEL_HLCDC_LAYER_DFETCH |
|
||||
ATMEL_HLCDC_LAYER_DMA_IRQ |
|
||||
ATMEL_HLCDC_LAYER_DSCR_IRQ |
|
||||
ATMEL_HLCDC_LAYER_DONE_IRQ;
|
||||
|
||||
regmap_write(regmap,
|
||||
desc->regs_offset +
|
||||
ATMEL_HLCDC_LAYER_PLANE_HEAD(i),
|
||||
dscr->next);
|
||||
}
|
||||
|
||||
action |= ATMEL_HLCDC_LAYER_A2Q;
|
||||
}
|
||||
|
||||
/* Release unneeded descriptors */
|
||||
for (i = fb_flip->ngems; i < layer->max_planes; i++) {
|
||||
fb_flip->dscrs[i]->status = 0;
|
||||
fb_flip->dscrs[i] = NULL;
|
||||
}
|
||||
|
||||
dma->queue = fb_flip;
|
||||
slot->fb_flip = NULL;
|
||||
|
||||
apply:
|
||||
if (action)
|
||||
regmap_write(regmap,
|
||||
desc->regs_offset + ATMEL_HLCDC_LAYER_CHER,
|
||||
action);
|
||||
|
||||
atmel_hlcdc_layer_update_reset(layer, upd->pending);
|
||||
|
||||
upd->pending = -1;
|
||||
}
|
||||
|
||||
void atmel_hlcdc_layer_irq(struct atmel_hlcdc_layer *layer)
|
||||
{
|
||||
struct atmel_hlcdc_layer_dma_channel *dma = &layer->dma;
|
||||
const struct atmel_hlcdc_layer_desc *desc = layer->desc;
|
||||
struct regmap *regmap = layer->hlcdc->regmap;
|
||||
struct atmel_hlcdc_layer_fb_flip *flip;
|
||||
unsigned long flags;
|
||||
unsigned int isr, imr;
|
||||
unsigned int status;
|
||||
unsigned int plane_status;
|
||||
u32 flip_status;
|
||||
|
||||
int i;
|
||||
|
||||
regmap_read(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_IMR, &imr);
|
||||
regmap_read(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_ISR, &isr);
|
||||
status = imr & isr;
|
||||
if (!status)
|
||||
return;
|
||||
|
||||
spin_lock_irqsave(&layer->lock, flags);
|
||||
|
||||
flip = dma->queue ? dma->queue : dma->cur;
|
||||
|
||||
if (!flip) {
|
||||
spin_unlock_irqrestore(&layer->lock, flags);
|
||||
return;
|
||||
}
|
||||
|
||||
/*
|
||||
* Set LOADED and DONE flags: they'll be cleared if at least one
|
||||
* memory plane is not LOADED or DONE.
|
||||
*/
|
||||
flip_status = ATMEL_HLCDC_DMA_CHANNEL_DSCR_LOADED |
|
||||
ATMEL_HLCDC_DMA_CHANNEL_DSCR_DONE;
|
||||
for (i = 0; i < flip->ngems; i++) {
|
||||
plane_status = (status >> (8 * i));
|
||||
|
||||
if (plane_status &
|
||||
(ATMEL_HLCDC_LAYER_ADD_IRQ |
|
||||
ATMEL_HLCDC_LAYER_DSCR_IRQ) &
|
||||
~flip->dscrs[i]->ctrl) {
|
||||
flip->dscrs[i]->status |=
|
||||
ATMEL_HLCDC_DMA_CHANNEL_DSCR_LOADED;
|
||||
flip->dscrs[i]->ctrl |=
|
||||
ATMEL_HLCDC_LAYER_ADD_IRQ |
|
||||
ATMEL_HLCDC_LAYER_DSCR_IRQ;
|
||||
}
|
||||
|
||||
if (plane_status &
|
||||
ATMEL_HLCDC_LAYER_DONE_IRQ &
|
||||
~flip->dscrs[i]->ctrl) {
|
||||
flip->dscrs[i]->status |=
|
||||
ATMEL_HLCDC_DMA_CHANNEL_DSCR_DONE;
|
||||
flip->dscrs[i]->ctrl |=
|
||||
ATMEL_HLCDC_LAYER_DONE_IRQ;
|
||||
}
|
||||
|
||||
if (plane_status & ATMEL_HLCDC_LAYER_OVR_IRQ)
|
||||
flip->dscrs[i]->status |=
|
||||
ATMEL_HLCDC_DMA_CHANNEL_DSCR_OVERRUN;
|
||||
|
||||
/*
|
||||
* Clear LOADED and DONE flags if the memory plane is either
|
||||
* not LOADED or not DONE.
|
||||
*/
|
||||
if (!(flip->dscrs[i]->status &
|
||||
ATMEL_HLCDC_DMA_CHANNEL_DSCR_LOADED))
|
||||
flip_status &= ~ATMEL_HLCDC_DMA_CHANNEL_DSCR_LOADED;
|
||||
|
||||
if (!(flip->dscrs[i]->status &
|
||||
ATMEL_HLCDC_DMA_CHANNEL_DSCR_DONE))
|
||||
flip_status &= ~ATMEL_HLCDC_DMA_CHANNEL_DSCR_DONE;
|
||||
|
||||
/*
|
||||
* An overrun on one memory plane impact the whole framebuffer
|
||||
* transfer, hence we set the OVERRUN flag as soon as there's
|
||||
* one memory plane reporting such an overrun.
|
||||
*/
|
||||
flip_status |= flip->dscrs[i]->status &
|
||||
ATMEL_HLCDC_DMA_CHANNEL_DSCR_OVERRUN;
|
||||
}
|
||||
|
||||
/* Get changed bits */
|
||||
flip_status ^= flip->status;
|
||||
flip->status |= flip_status;
|
||||
|
||||
if (flip_status & ATMEL_HLCDC_DMA_CHANNEL_DSCR_LOADED) {
|
||||
atmel_hlcdc_layer_fb_flip_release_queue(layer, dma->cur);
|
||||
dma->cur = dma->queue;
|
||||
dma->queue = NULL;
|
||||
}
|
||||
|
||||
if (flip_status & ATMEL_HLCDC_DMA_CHANNEL_DSCR_DONE) {
|
||||
atmel_hlcdc_layer_fb_flip_release_queue(layer, dma->cur);
|
||||
dma->cur = NULL;
|
||||
}
|
||||
|
||||
if (flip_status & ATMEL_HLCDC_DMA_CHANNEL_DSCR_OVERRUN) {
|
||||
regmap_write(regmap,
|
||||
desc->regs_offset + ATMEL_HLCDC_LAYER_CHDR,
|
||||
ATMEL_HLCDC_LAYER_RST);
|
||||
if (dma->queue)
|
||||
atmel_hlcdc_layer_fb_flip_release_queue(layer,
|
||||
dma->queue);
|
||||
|
||||
if (dma->cur)
|
||||
atmel_hlcdc_layer_fb_flip_release_queue(layer,
|
||||
dma->cur);
|
||||
|
||||
dma->cur = NULL;
|
||||
dma->queue = NULL;
|
||||
}
|
||||
|
||||
if (!dma->queue) {
|
||||
atmel_hlcdc_layer_update_apply(layer);
|
||||
|
||||
if (!dma->cur)
|
||||
dma->status = ATMEL_HLCDC_LAYER_DISABLED;
|
||||
}
|
||||
|
||||
spin_unlock_irqrestore(&layer->lock, flags);
|
||||
}
|
||||
|
||||
void atmel_hlcdc_layer_disable(struct atmel_hlcdc_layer *layer)
|
||||
{
|
||||
struct atmel_hlcdc_layer_dma_channel *dma = &layer->dma;
|
||||
struct atmel_hlcdc_layer_update *upd = &layer->update;
|
||||
struct regmap *regmap = layer->hlcdc->regmap;
|
||||
const struct atmel_hlcdc_layer_desc *desc = layer->desc;
|
||||
unsigned long flags;
|
||||
unsigned int isr;
|
||||
|
||||
spin_lock_irqsave(&layer->lock, flags);
|
||||
|
||||
/* Disable the layer */
|
||||
regmap_write(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_CHDR,
|
||||
ATMEL_HLCDC_LAYER_RST | ATMEL_HLCDC_LAYER_A2Q |
|
||||
ATMEL_HLCDC_LAYER_UPDATE);
|
||||
|
||||
/* Clear all pending interrupts */
|
||||
regmap_read(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_ISR, &isr);
|
||||
|
||||
/* Discard current and queued framebuffer transfers. */
|
||||
if (dma->cur) {
|
||||
atmel_hlcdc_layer_fb_flip_release_queue(layer, dma->cur);
|
||||
dma->cur = NULL;
|
||||
}
|
||||
|
||||
if (dma->queue) {
|
||||
atmel_hlcdc_layer_fb_flip_release_queue(layer, dma->queue);
|
||||
dma->queue = NULL;
|
||||
}
|
||||
|
||||
/*
|
||||
* Then discard the pending update request (if any) to prevent
|
||||
* DMA irq handler from restarting the DMA channel after it has
|
||||
* been disabled.
|
||||
*/
|
||||
if (upd->pending >= 0) {
|
||||
atmel_hlcdc_layer_update_reset(layer, upd->pending);
|
||||
upd->pending = -1;
|
||||
}
|
||||
|
||||
dma->status = ATMEL_HLCDC_LAYER_DISABLED;
|
||||
|
||||
spin_unlock_irqrestore(&layer->lock, flags);
|
||||
}
|
||||
|
||||
int atmel_hlcdc_layer_update_start(struct atmel_hlcdc_layer *layer)
|
||||
{
|
||||
struct atmel_hlcdc_layer_dma_channel *dma = &layer->dma;
|
||||
struct atmel_hlcdc_layer_update *upd = &layer->update;
|
||||
struct regmap *regmap = layer->hlcdc->regmap;
|
||||
struct atmel_hlcdc_layer_fb_flip *fb_flip;
|
||||
struct atmel_hlcdc_layer_update_slot *slot;
|
||||
unsigned long flags;
|
||||
int i, j = 0;
|
||||
|
||||
fb_flip = kzalloc(sizeof(*fb_flip), GFP_KERNEL);
|
||||
if (!fb_flip)
|
||||
return -ENOMEM;
|
||||
|
||||
fb_flip->task = drm_flip_work_allocate_task(fb_flip, GFP_KERNEL);
|
||||
if (!fb_flip->task) {
|
||||
kfree(fb_flip);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
spin_lock_irqsave(&layer->lock, flags);
|
||||
|
||||
upd->next = upd->pending ? 0 : 1;
|
||||
|
||||
slot = &upd->slots[upd->next];
|
||||
|
||||
for (i = 0; i < layer->max_planes * 4; i++) {
|
||||
if (!dma->dscrs[i].status) {
|
||||
fb_flip->dscrs[j++] = &dma->dscrs[i];
|
||||
dma->dscrs[i].status =
|
||||
ATMEL_HLCDC_DMA_CHANNEL_DSCR_RESERVED;
|
||||
if (j == layer->max_planes)
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (j < layer->max_planes) {
|
||||
for (i = 0; i < j; i++)
|
||||
fb_flip->dscrs[i]->status = 0;
|
||||
}
|
||||
|
||||
if (j < layer->max_planes) {
|
||||
spin_unlock_irqrestore(&layer->lock, flags);
|
||||
atmel_hlcdc_layer_fb_flip_destroy(fb_flip);
|
||||
return -EBUSY;
|
||||
}
|
||||
|
||||
slot->fb_flip = fb_flip;
|
||||
|
||||
if (upd->pending >= 0) {
|
||||
memcpy(slot->configs,
|
||||
upd->slots[upd->pending].configs,
|
||||
layer->desc->nconfigs * sizeof(u32));
|
||||
memcpy(slot->updated_configs,
|
||||
upd->slots[upd->pending].updated_configs,
|
||||
DIV_ROUND_UP(layer->desc->nconfigs,
|
||||
BITS_PER_BYTE * sizeof(unsigned long)) *
|
||||
sizeof(unsigned long));
|
||||
slot->fb_flip->fb = upd->slots[upd->pending].fb_flip->fb;
|
||||
if (upd->slots[upd->pending].fb_flip->fb) {
|
||||
slot->fb_flip->fb =
|
||||
upd->slots[upd->pending].fb_flip->fb;
|
||||
slot->fb_flip->ngems =
|
||||
upd->slots[upd->pending].fb_flip->ngems;
|
||||
drm_framebuffer_reference(slot->fb_flip->fb);
|
||||
}
|
||||
} else {
|
||||
regmap_bulk_read(regmap,
|
||||
layer->desc->regs_offset +
|
||||
ATMEL_HLCDC_LAYER_CFG(layer, 0),
|
||||
upd->slots[upd->next].configs,
|
||||
layer->desc->nconfigs);
|
||||
}
|
||||
|
||||
spin_unlock_irqrestore(&layer->lock, flags);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
void atmel_hlcdc_layer_update_rollback(struct atmel_hlcdc_layer *layer)
|
||||
{
|
||||
struct atmel_hlcdc_layer_update *upd = &layer->update;
|
||||
|
||||
atmel_hlcdc_layer_update_reset(layer, upd->next);
|
||||
upd->next = -1;
|
||||
}
|
||||
|
||||
void atmel_hlcdc_layer_update_set_fb(struct atmel_hlcdc_layer *layer,
|
||||
struct drm_framebuffer *fb,
|
||||
unsigned int *offsets)
|
||||
{
|
||||
struct atmel_hlcdc_layer_update *upd = &layer->update;
|
||||
struct atmel_hlcdc_layer_fb_flip *fb_flip;
|
||||
struct atmel_hlcdc_layer_update_slot *slot;
|
||||
struct atmel_hlcdc_dma_channel_dscr *dscr;
|
||||
struct drm_framebuffer *old_fb;
|
||||
int nplanes = 0;
|
||||
int i;
|
||||
|
||||
if (upd->next < 0 || upd->next > 1)
|
||||
return;
|
||||
|
||||
if (fb)
|
||||
nplanes = fb->format->num_planes;
|
||||
|
||||
if (nplanes > layer->max_planes)
|
||||
return;
|
||||
|
||||
slot = &upd->slots[upd->next];
|
||||
|
||||
fb_flip = slot->fb_flip;
|
||||
old_fb = slot->fb_flip->fb;
|
||||
|
||||
for (i = 0; i < nplanes; i++) {
|
||||
struct drm_gem_cma_object *gem;
|
||||
|
||||
dscr = slot->fb_flip->dscrs[i];
|
||||
gem = drm_fb_cma_get_gem_obj(fb, i);
|
||||
dscr->addr = gem->paddr + offsets[i];
|
||||
}
|
||||
|
||||
fb_flip->ngems = nplanes;
|
||||
fb_flip->fb = fb;
|
||||
|
||||
if (fb)
|
||||
drm_framebuffer_reference(fb);
|
||||
|
||||
if (old_fb)
|
||||
drm_framebuffer_unreference(old_fb);
|
||||
}
|
||||
|
||||
void atmel_hlcdc_layer_update_cfg(struct atmel_hlcdc_layer *layer, int cfg,
|
||||
u32 mask, u32 val)
|
||||
{
|
||||
struct atmel_hlcdc_layer_update *upd = &layer->update;
|
||||
struct atmel_hlcdc_layer_update_slot *slot;
|
||||
|
||||
if (upd->next < 0 || upd->next > 1)
|
||||
return;
|
||||
|
||||
if (cfg >= layer->desc->nconfigs)
|
||||
return;
|
||||
|
||||
slot = &upd->slots[upd->next];
|
||||
slot->configs[cfg] &= ~mask;
|
||||
slot->configs[cfg] |= (val & mask);
|
||||
set_bit(cfg, slot->updated_configs);
|
||||
}
|
||||
|
||||
void atmel_hlcdc_layer_update_commit(struct atmel_hlcdc_layer *layer)
|
||||
{
|
||||
struct atmel_hlcdc_layer_dma_channel *dma = &layer->dma;
|
||||
struct atmel_hlcdc_layer_update *upd = &layer->update;
|
||||
struct atmel_hlcdc_layer_update_slot *slot;
|
||||
unsigned long flags;
|
||||
|
||||
if (upd->next < 0 || upd->next > 1)
|
||||
return;
|
||||
|
||||
slot = &upd->slots[upd->next];
|
||||
|
||||
spin_lock_irqsave(&layer->lock, flags);
|
||||
|
||||
/*
|
||||
* Release pending update request and replace it by the new one.
|
||||
*/
|
||||
if (upd->pending >= 0)
|
||||
atmel_hlcdc_layer_update_reset(layer, upd->pending);
|
||||
|
||||
upd->pending = upd->next;
|
||||
upd->next = -1;
|
||||
|
||||
if (!dma->queue)
|
||||
atmel_hlcdc_layer_update_apply(layer);
|
||||
|
||||
spin_unlock_irqrestore(&layer->lock, flags);
|
||||
|
||||
|
||||
upd->next = -1;
|
||||
}
|
||||
|
||||
static int atmel_hlcdc_layer_dma_init(struct drm_device *dev,
|
||||
struct atmel_hlcdc_layer *layer)
|
||||
{
|
||||
struct atmel_hlcdc_layer_dma_channel *dma = &layer->dma;
|
||||
dma_addr_t dma_addr;
|
||||
int i;
|
||||
|
||||
dma->dscrs = dma_alloc_coherent(dev->dev,
|
||||
layer->max_planes * 4 *
|
||||
sizeof(*dma->dscrs),
|
||||
&dma_addr, GFP_KERNEL);
|
||||
if (!dma->dscrs)
|
||||
return -ENOMEM;
|
||||
|
||||
for (i = 0; i < layer->max_planes * 4; i++) {
|
||||
struct atmel_hlcdc_dma_channel_dscr *dscr = &dma->dscrs[i];
|
||||
|
||||
dscr->next = dma_addr + (i * sizeof(*dscr));
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void atmel_hlcdc_layer_dma_cleanup(struct drm_device *dev,
|
||||
struct atmel_hlcdc_layer *layer)
|
||||
{
|
||||
struct atmel_hlcdc_layer_dma_channel *dma = &layer->dma;
|
||||
int i;
|
||||
|
||||
for (i = 0; i < layer->max_planes * 4; i++) {
|
||||
struct atmel_hlcdc_dma_channel_dscr *dscr = &dma->dscrs[i];
|
||||
|
||||
dscr->status = 0;
|
||||
}
|
||||
|
||||
dma_free_coherent(dev->dev, layer->max_planes * 4 *
|
||||
sizeof(*dma->dscrs), dma->dscrs,
|
||||
dma->dscrs[0].next);
|
||||
}
|
||||
|
||||
static int atmel_hlcdc_layer_update_init(struct drm_device *dev,
|
||||
struct atmel_hlcdc_layer *layer,
|
||||
const struct atmel_hlcdc_layer_desc *desc)
|
||||
{
|
||||
struct atmel_hlcdc_layer_update *upd = &layer->update;
|
||||
int updated_size;
|
||||
void *buffer;
|
||||
int i;
|
||||
|
||||
updated_size = DIV_ROUND_UP(desc->nconfigs,
|
||||
BITS_PER_BYTE *
|
||||
sizeof(unsigned long));
|
||||
|
||||
buffer = devm_kzalloc(dev->dev,
|
||||
((desc->nconfigs * sizeof(u32)) +
|
||||
(updated_size * sizeof(unsigned long))) * 2,
|
||||
GFP_KERNEL);
|
||||
if (!buffer)
|
||||
return -ENOMEM;
|
||||
|
||||
for (i = 0; i < 2; i++) {
|
||||
upd->slots[i].updated_configs = buffer;
|
||||
buffer += updated_size * sizeof(unsigned long);
|
||||
upd->slots[i].configs = buffer;
|
||||
buffer += desc->nconfigs * sizeof(u32);
|
||||
}
|
||||
|
||||
upd->pending = -1;
|
||||
upd->next = -1;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int atmel_hlcdc_layer_init(struct drm_device *dev,
|
||||
struct atmel_hlcdc_layer *layer,
|
||||
const struct atmel_hlcdc_layer_desc *desc)
|
||||
{
|
||||
struct atmel_hlcdc_dc *dc = dev->dev_private;
|
||||
struct regmap *regmap = dc->hlcdc->regmap;
|
||||
unsigned int tmp;
|
||||
int ret;
|
||||
int i;
|
||||
|
||||
layer->hlcdc = dc->hlcdc;
|
||||
layer->wq = dc->wq;
|
||||
layer->desc = desc;
|
||||
|
||||
regmap_write(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_CHDR,
|
||||
ATMEL_HLCDC_LAYER_RST);
|
||||
for (i = 0; i < desc->formats->nformats; i++) {
|
||||
int nplanes = drm_format_num_planes(desc->formats->formats[i]);
|
||||
|
||||
if (nplanes > layer->max_planes)
|
||||
layer->max_planes = nplanes;
|
||||
}
|
||||
|
||||
spin_lock_init(&layer->lock);
|
||||
drm_flip_work_init(&layer->gc, desc->name,
|
||||
atmel_hlcdc_layer_fb_flip_release);
|
||||
ret = atmel_hlcdc_layer_dma_init(dev, layer);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = atmel_hlcdc_layer_update_init(dev, layer, desc);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/* Flush Status Register */
|
||||
regmap_write(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_IDR,
|
||||
0xffffffff);
|
||||
regmap_read(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_ISR,
|
||||
&tmp);
|
||||
|
||||
tmp = 0;
|
||||
for (i = 0; i < layer->max_planes; i++)
|
||||
tmp |= (ATMEL_HLCDC_LAYER_DMA_IRQ |
|
||||
ATMEL_HLCDC_LAYER_DSCR_IRQ |
|
||||
ATMEL_HLCDC_LAYER_ADD_IRQ |
|
||||
ATMEL_HLCDC_LAYER_DONE_IRQ |
|
||||
ATMEL_HLCDC_LAYER_OVR_IRQ) << (8 * i);
|
||||
|
||||
regmap_write(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_IER, tmp);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
void atmel_hlcdc_layer_cleanup(struct drm_device *dev,
|
||||
struct atmel_hlcdc_layer *layer)
|
||||
{
|
||||
const struct atmel_hlcdc_layer_desc *desc = layer->desc;
|
||||
struct regmap *regmap = layer->hlcdc->regmap;
|
||||
|
||||
regmap_write(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_IDR,
|
||||
0xffffffff);
|
||||
regmap_write(regmap, desc->regs_offset + ATMEL_HLCDC_LAYER_CHDR,
|
||||
ATMEL_HLCDC_LAYER_RST);
|
||||
|
||||
atmel_hlcdc_layer_dma_cleanup(dev, layer);
|
||||
drm_flip_work_cleanup(&layer->gc);
|
||||
}
|
@@ -1,399 +0,0 @@
/*
|
||||
* Copyright (C) 2014 Free Electrons
|
||||
* Copyright (C) 2014 Atmel
|
||||
*
|
||||
* Author: Boris BREZILLON <boris.brezillon@free-electrons.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms of the GNU General Public License version 2 as published by
|
||||
* the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
#ifndef DRM_ATMEL_HLCDC_LAYER_H
|
||||
#define DRM_ATMEL_HLCDC_LAYER_H
|
||||
|
||||
#include <linux/mfd/atmel-hlcdc.h>
|
||||
|
||||
#include <drm/drm_crtc.h>
|
||||
#include <drm/drm_flip_work.h>
|
||||
#include <drm/drmP.h>
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_CHER 0x0
|
||||
#define ATMEL_HLCDC_LAYER_CHDR 0x4
|
||||
#define ATMEL_HLCDC_LAYER_CHSR 0x8
|
||||
#define ATMEL_HLCDC_LAYER_DMA_CHAN BIT(0)
|
||||
#define ATMEL_HLCDC_LAYER_UPDATE BIT(1)
|
||||
#define ATMEL_HLCDC_LAYER_A2Q BIT(2)
|
||||
#define ATMEL_HLCDC_LAYER_RST BIT(8)
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_IER 0xc
|
||||
#define ATMEL_HLCDC_LAYER_IDR 0x10
|
||||
#define ATMEL_HLCDC_LAYER_IMR 0x14
|
||||
#define ATMEL_HLCDC_LAYER_ISR 0x18
|
||||
#define ATMEL_HLCDC_LAYER_DFETCH BIT(0)
|
||||
#define ATMEL_HLCDC_LAYER_LFETCH BIT(1)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_IRQ BIT(2)
|
||||
#define ATMEL_HLCDC_LAYER_DSCR_IRQ BIT(3)
|
||||
#define ATMEL_HLCDC_LAYER_ADD_IRQ BIT(4)
|
||||
#define ATMEL_HLCDC_LAYER_DONE_IRQ BIT(5)
|
||||
#define ATMEL_HLCDC_LAYER_OVR_IRQ BIT(6)
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_PLANE_HEAD(n) (((n) * 0x10) + 0x1c)
|
||||
#define ATMEL_HLCDC_LAYER_PLANE_ADDR(n) (((n) * 0x10) + 0x20)
|
||||
#define ATMEL_HLCDC_LAYER_PLANE_CTRL(n) (((n) * 0x10) + 0x24)
|
||||
#define ATMEL_HLCDC_LAYER_PLANE_NEXT(n) (((n) * 0x10) + 0x28)
|
||||
#define ATMEL_HLCDC_LAYER_CFG(p, c) (((c) * 4) + ((p)->max_planes * 0x10) + 0x1c)
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_DMA_CFG_ID 0
|
||||
#define ATMEL_HLCDC_LAYER_DMA_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, ATMEL_HLCDC_LAYER_DMA_CFG_ID)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_SIF BIT(0)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_BLEN_MASK GENMASK(5, 4)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_BLEN_SINGLE (0 << 4)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_BLEN_INCR4 (1 << 4)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_BLEN_INCR8 (2 << 4)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_BLEN_INCR16 (3 << 4)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_DLBO BIT(8)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_ROTDIS BIT(12)
|
||||
#define ATMEL_HLCDC_LAYER_DMA_LOCKDIS BIT(13)
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_FORMAT_CFG_ID 1
|
||||
#define ATMEL_HLCDC_LAYER_FORMAT_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, ATMEL_HLCDC_LAYER_FORMAT_CFG_ID)
|
||||
#define ATMEL_HLCDC_LAYER_RGB (0 << 0)
|
||||
#define ATMEL_HLCDC_LAYER_CLUT (1 << 0)
|
||||
#define ATMEL_HLCDC_LAYER_YUV (2 << 0)
|
||||
#define ATMEL_HLCDC_RGB_MODE(m) (((m) & 0xf) << 4)
|
||||
#define ATMEL_HLCDC_CLUT_MODE(m) (((m) & 0x3) << 8)
|
||||
#define ATMEL_HLCDC_YUV_MODE(m) (((m) & 0xf) << 12)
|
||||
#define ATMEL_HLCDC_YUV422ROT BIT(16)
|
||||
#define ATMEL_HLCDC_YUV422SWP BIT(17)
|
||||
#define ATMEL_HLCDC_DSCALEOPT BIT(20)
|
||||
|
||||
#define ATMEL_HLCDC_XRGB4444_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(0))
|
||||
#define ATMEL_HLCDC_ARGB4444_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(1))
|
||||
#define ATMEL_HLCDC_RGBA4444_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(2))
|
||||
#define ATMEL_HLCDC_RGB565_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(3))
|
||||
#define ATMEL_HLCDC_ARGB1555_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(4))
|
||||
#define ATMEL_HLCDC_XRGB8888_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(9))
|
||||
#define ATMEL_HLCDC_RGB888_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(10))
|
||||
#define ATMEL_HLCDC_ARGB8888_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(12))
|
||||
#define ATMEL_HLCDC_RGBA8888_MODE (ATMEL_HLCDC_LAYER_RGB | ATMEL_HLCDC_RGB_MODE(13))
|
||||
|
||||
#define ATMEL_HLCDC_AYUV_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(0))
|
||||
#define ATMEL_HLCDC_YUYV_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(1))
|
||||
#define ATMEL_HLCDC_UYVY_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(2))
|
||||
#define ATMEL_HLCDC_YVYU_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(3))
|
||||
#define ATMEL_HLCDC_VYUY_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(4))
|
||||
#define ATMEL_HLCDC_NV61_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(5))
|
||||
#define ATMEL_HLCDC_YUV422_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(6))
|
||||
#define ATMEL_HLCDC_NV21_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(7))
|
||||
#define ATMEL_HLCDC_YUV420_MODE (ATMEL_HLCDC_LAYER_YUV | ATMEL_HLCDC_YUV_MODE(8))
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_POS_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.pos)
|
||||
#define ATMEL_HLCDC_LAYER_SIZE_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.size)
|
||||
#define ATMEL_HLCDC_LAYER_MEMSIZE_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.memsize)
|
||||
#define ATMEL_HLCDC_LAYER_XSTRIDE_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.xstride)
|
||||
#define ATMEL_HLCDC_LAYER_PSTRIDE_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.pstride)
|
||||
#define ATMEL_HLCDC_LAYER_DFLTCOLOR_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.default_color)
|
||||
#define ATMEL_HLCDC_LAYER_CRKEY_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.chroma_key)
|
||||
#define ATMEL_HLCDC_LAYER_CRKEY_MASK_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.chroma_key_mask)
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_GENERAL_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.general_config)
|
||||
#define ATMEL_HLCDC_LAYER_CRKEY BIT(0)
|
||||
#define ATMEL_HLCDC_LAYER_INV BIT(1)
|
||||
#define ATMEL_HLCDC_LAYER_ITER2BL BIT(2)
|
||||
#define ATMEL_HLCDC_LAYER_ITER BIT(3)
|
||||
#define ATMEL_HLCDC_LAYER_REVALPHA BIT(4)
|
||||
#define ATMEL_HLCDC_LAYER_GAEN BIT(5)
|
||||
#define ATMEL_HLCDC_LAYER_LAEN BIT(6)
|
||||
#define ATMEL_HLCDC_LAYER_OVR BIT(7)
|
||||
#define ATMEL_HLCDC_LAYER_DMA BIT(8)
|
||||
#define ATMEL_HLCDC_LAYER_REP BIT(9)
|
||||
#define ATMEL_HLCDC_LAYER_DSTKEY BIT(10)
|
||||
#define ATMEL_HLCDC_LAYER_DISCEN BIT(11)
|
||||
#define ATMEL_HLCDC_LAYER_GA_SHIFT 16
|
||||
#define ATMEL_HLCDC_LAYER_GA_MASK GENMASK(23, ATMEL_HLCDC_LAYER_GA_SHIFT)
|
||||
#define ATMEL_HLCDC_LAYER_GA(x) ((x) << ATMEL_HLCDC_LAYER_GA_SHIFT)
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_CSC_CFG(p, o) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.csc + o)
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_DISC_POS_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.disc_pos)
|
||||
|
||||
#define ATMEL_HLCDC_LAYER_DISC_SIZE_CFG(p) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.disc_size)
|
||||
|
||||
#define ATMEL_HLCDC_MAX_PLANES 3
|
||||
|
||||
#define ATMEL_HLCDC_DMA_CHANNEL_DSCR_RESERVED BIT(0)
|
||||
#define ATMEL_HLCDC_DMA_CHANNEL_DSCR_LOADED BIT(1)
|
||||
#define ATMEL_HLCDC_DMA_CHANNEL_DSCR_DONE BIT(2)
|
||||
#define ATMEL_HLCDC_DMA_CHANNEL_DSCR_OVERRUN BIT(3)
|
||||
|
||||
/**
|
||||
* Atmel HLCDC Layer registers layout structure
|
||||
*
|
||||
* Each HLCDC layer has its own register organization and a given register
|
||||
* can be placed differently on 2 different layers depending on its
|
||||
* capabilities.
|
||||
* This structure stores common registers layout for a given layer and is
|
||||
* used by HLCDC layer code to choose the appropriate register to write to
|
||||
* or to read from.
|
||||
*
|
||||
* For all fields, a value of zero means "unsupported".
|
||||
*
|
||||
* See Atmel's datasheet for a detailled description of these registers.
|
||||
*
|
||||
* @xstride: xstride registers
|
||||
* @pstride: pstride registers
|
||||
* @pos: position register
|
||||
* @size: displayed size register
|
||||
* @memsize: memory size register
|
||||
* @default_color: default color register
|
||||
* @chroma_key: chroma key register
|
||||
* @chroma_key_mask: chroma key mask register
|
||||
* @general_config: general layer config register
|
||||
* @disc_pos: discard area position register
|
||||
* @disc_size: discard area size register
|
||||
* @csc: color space conversion register
|
||||
*/
|
||||
struct atmel_hlcdc_layer_cfg_layout {
|
||||
int xstride[ATMEL_HLCDC_MAX_PLANES];
|
||||
int pstride[ATMEL_HLCDC_MAX_PLANES];
|
||||
int pos;
|
||||
int size;
|
||||
int memsize;
|
||||
int default_color;
|
||||
int chroma_key;
|
||||
int chroma_key_mask;
|
||||
int general_config;
|
||||
int disc_pos;
|
||||
int disc_size;
|
||||
int csc;
|
||||
};
|
||||
|
||||
/**
|
||||
* Atmel HLCDC framebuffer flip structure
|
||||
*
|
||||
* This structure is allocated when someone asked for a layer update (most
|
||||
* likely a DRM plane update, either primary, overlay or cursor plane) and
|
||||
* released when the layer do not need to reference the framebuffer object
|
||||
* anymore (i.e. the layer was disabled or updated).
|
||||
*
|
||||
* @dscrs: DMA descriptors
|
||||
* @fb: the referenced framebuffer object
|
||||
* @ngems: number of GEM objects referenced by the fb element
|
||||
* @status: fb flip operation status
|
||||
*/
|
||||
struct atmel_hlcdc_layer_fb_flip {
|
||||
struct atmel_hlcdc_dma_channel_dscr *dscrs[ATMEL_HLCDC_MAX_PLANES];
|
||||
struct drm_flip_task *task;
|
||||
struct drm_framebuffer *fb;
|
||||
int ngems;
|
||||
u32 status;
|
||||
};
|
||||
|
||||
/**
|
||||
* Atmel HLCDC DMA descriptor structure
|
||||
*
|
||||
* This structure is used by the HLCDC DMA engine to schedule a DMA transfer.
|
||||
*
|
||||
* The structure fields must remain in this specific order, because they're
|
||||
* used by the HLCDC DMA engine, which expect them in this order.
|
||||
* HLCDC DMA descriptors must be aligned on 64 bits.
|
||||
*
|
||||
* @addr: buffer DMA address
|
||||
* @ctrl: DMA transfer options
|
||||
* @next: next DMA descriptor to fetch
|
||||
* @gem_flip: the attached gem_flip operation
|
||||
*/
|
||||
struct atmel_hlcdc_dma_channel_dscr {
|
||||
dma_addr_t addr;
|
||||
u32 ctrl;
|
||||
dma_addr_t next;
|
||||
u32 status;
|
||||
} __aligned(sizeof(u64));
|
||||
|
||||
/**
|
||||
* Atmel HLCDC layer types
|
||||
*/
|
||||
enum atmel_hlcdc_layer_type {
|
||||
ATMEL_HLCDC_BASE_LAYER,
|
||||
ATMEL_HLCDC_OVERLAY_LAYER,
|
||||
ATMEL_HLCDC_CURSOR_LAYER,
|
||||
ATMEL_HLCDC_PP_LAYER,
|
||||
};
|
||||
|
||||
/**
|
||||
* Atmel HLCDC Supported formats structure
|
||||
*
|
||||
* This structure list all the formats supported by a given layer.
|
||||
*
|
||||
* @nformats: number of supported formats
|
||||
* @formats: supported formats
|
||||
*/
|
||||
struct atmel_hlcdc_formats {
|
||||
int nformats;
|
||||
uint32_t *formats;
|
||||
};
|
||||
|
||||
/**
|
||||
* Atmel HLCDC Layer description structure
|
||||
*
|
||||
* This structure describe the capabilities provided by a given layer.
|
||||
*
|
||||
* @name: layer name
|
||||
* @type: layer type
|
||||
* @id: layer id
|
||||
* @regs_offset: offset of the layer registers from the HLCDC registers base
|
||||
* @nconfigs: number of config registers provided by this layer
|
||||
* @formats: supported formats
|
||||
* @layout: config registers layout
|
||||
* @max_width: maximum width supported by this layer (0 means unlimited)
|
||||
* @max_height: maximum height supported by this layer (0 means unlimited)
|
||||
*/
|
||||
struct atmel_hlcdc_layer_desc {
|
||||
const char *name;
|
||||
enum atmel_hlcdc_layer_type type;
|
||||
int id;
|
||||
int regs_offset;
|
||||
int nconfigs;
|
||||
struct atmel_hlcdc_formats *formats;
|
||||
struct atmel_hlcdc_layer_cfg_layout layout;
|
||||
int max_width;
|
||||
int max_height;
|
||||
};
|
||||
|
||||
/**
|
||||
* Atmel HLCDC Layer Update Slot structure
|
||||
*
|
||||
* This structure stores layer update requests to be applied on next frame.
|
||||
* This is the base structure behind the atomic layer update infrastructure.
|
||||
*
|
||||
* Atomic layer update provides a way to update all layer's parameters
|
||||
* simultaneously. This is needed to avoid incompatible sequential updates
|
||||
* like this one:
|
||||
* 1) update layer format from RGB888 (1 plane/buffer) to YUV422
|
||||
* (2 planes/buffers)
|
||||
* 2) the format update is applied but the DMA channel for the second
|
||||
* plane/buffer is not enabled
|
||||
* 3) enable the DMA channel for the second plane
|
||||
*
|
||||
* @fb_flip: fb_flip object
|
||||
* @updated_configs: bitmask used to record modified configs
|
||||
* @configs: new config values
|
||||
*/
|
||||
struct atmel_hlcdc_layer_update_slot {
|
||||
struct atmel_hlcdc_layer_fb_flip *fb_flip;
|
||||
unsigned long *updated_configs;
|
||||
u32 *configs;
|
||||
};
|
||||
|
||||
/**
|
||||
* Atmel HLCDC Layer Update structure
|
||||
*
|
||||
* This structure provides a way to queue layer update requests.
|
||||
*
|
||||
* At a given time there is at most:
|
||||
* - one pending update request, which means the update request has been
|
||||
* committed (or validated) and is waiting for the DMA channel(s) to be
|
||||
* available
|
||||
* - one request being prepared, which means someone started a layer update
|
||||
* but has not committed it yet. There cannot be more than one started
|
||||
* request, because the update lock is taken when starting a layer update
|
||||
* and release when committing or rolling back the request.
|
||||
*
|
||||
* @slots: update slots. One is used for pending request and the other one
|
||||
* for started update request
|
||||
* @pending: the pending slot index or -1 if no request is pending
|
||||
* @next: the started update slot index or -1 no update has been started
|
||||
*/
|
||||
struct atmel_hlcdc_layer_update {
|
||||
struct atmel_hlcdc_layer_update_slot slots[2];
|
||||
int pending;
|
||||
int next;
|
||||
};
|
||||
|
||||
enum atmel_hlcdc_layer_dma_channel_status {
|
||||
ATMEL_HLCDC_LAYER_DISABLED,
|
||||
ATMEL_HLCDC_LAYER_ENABLED,
|
||||
ATMEL_HLCDC_LAYER_DISABLING,
|
||||
};
|
||||
|
||||
/**
|
||||
* Atmel HLCDC Layer DMA channel structure
|
||||
*
|
||||
* This structure stores information on the DMA channel associated to a
|
||||
* given layer.
|
||||
*
|
||||
* @status: DMA channel status
|
||||
* @cur: current framebuffer
|
||||
* @queue: next framebuffer
|
||||
* @dscrs: allocated DMA descriptors
|
||||
*/
|
||||
struct atmel_hlcdc_layer_dma_channel {
|
||||
enum atmel_hlcdc_layer_dma_channel_status status;
|
||||
struct atmel_hlcdc_layer_fb_flip *cur;
|
||||
struct atmel_hlcdc_layer_fb_flip *queue;
|
||||
struct atmel_hlcdc_dma_channel_dscr *dscrs;
|
||||
};
|
||||
|
||||
/**
|
||||
* Atmel HLCDC Layer structure
|
||||
*
|
||||
* This structure stores information on the layer instance.
|
||||
*
|
||||
* @desc: layer description
|
||||
* @max_planes: maximum planes/buffers that can be associated with this layer.
|
||||
* This depends on the supported formats.
|
||||
* @hlcdc: pointer to the atmel_hlcdc structure provided by the MFD device
|
||||
* @dma: dma channel
|
||||
* @gc: fb flip garbage collector
|
||||
* @update: update handler
|
||||
* @lock: layer lock
|
||||
*/
|
||||
struct atmel_hlcdc_layer {
|
||||
const struct atmel_hlcdc_layer_desc *desc;
|
||||
int max_planes;
|
||||
struct atmel_hlcdc *hlcdc;
|
||||
struct workqueue_struct *wq;
|
||||
struct drm_flip_work gc;
|
||||
struct atmel_hlcdc_layer_dma_channel dma;
|
||||
struct atmel_hlcdc_layer_update update;
|
||||
spinlock_t lock;
|
||||
};
|
||||
|
||||
void atmel_hlcdc_layer_irq(struct atmel_hlcdc_layer *layer);
|
||||
|
||||
int atmel_hlcdc_layer_init(struct drm_device *dev,
|
||||
struct atmel_hlcdc_layer *layer,
|
||||
const struct atmel_hlcdc_layer_desc *desc);
|
||||
|
||||
void atmel_hlcdc_layer_cleanup(struct drm_device *dev,
|
||||
struct atmel_hlcdc_layer *layer);
|
||||
|
||||
void atmel_hlcdc_layer_disable(struct atmel_hlcdc_layer *layer);
|
||||
|
||||
int atmel_hlcdc_layer_update_start(struct atmel_hlcdc_layer *layer);
|
||||
|
||||
void atmel_hlcdc_layer_update_cfg(struct atmel_hlcdc_layer *layer, int cfg,
|
||||
u32 mask, u32 val);
|
||||
|
||||
void atmel_hlcdc_layer_update_set_fb(struct atmel_hlcdc_layer *layer,
|
||||
struct drm_framebuffer *fb,
|
||||
unsigned int *offsets);
|
||||
|
||||
void atmel_hlcdc_layer_update_set_finished(struct atmel_hlcdc_layer *layer,
|
||||
void (*finished)(void *data),
|
||||
void *finished_data);
|
||||
|
||||
void atmel_hlcdc_layer_update_rollback(struct atmel_hlcdc_layer *layer);
|
||||
|
||||
void atmel_hlcdc_layer_update_commit(struct atmel_hlcdc_layer *layer);
|
||||
|
||||
#endif /* DRM_ATMEL_HLCDC_LAYER_H */
|
@@ -32,12 +32,16 @@
* @src_w: buffer width
|
||||
* @src_h: buffer height
|
||||
* @alpha: alpha blending of the plane
|
||||
* @disc_x: x discard position
|
||||
* @disc_y: y discard position
|
||||
* @disc_w: discard width
|
||||
* @disc_h: discard height
|
||||
* @bpp: bytes per pixel deduced from pixel_format
|
||||
* @offsets: offsets to apply to the GEM buffers
|
||||
* @xstride: value to add to the pixel pointer between each line
|
||||
* @pstride: value to add to the pixel pointer between each pixel
|
||||
* @nplanes: number of planes (deduced from pixel_format)
|
||||
* @prepared: plane update has been prepared
|
||||
* @dscrs: DMA descriptors
|
||||
*/
|
||||
struct atmel_hlcdc_plane_state {
|
||||
struct drm_plane_state base;
|
||||
@@ -52,8 +56,6 @@ struct atmel_hlcdc_plane_state {
|
||||
u8 alpha;
|
||||
|
||||
bool disc_updated;
|
||||
|
||||
int disc_x;
|
||||
int disc_y;
|
||||
int disc_w;
|
||||
@@ -62,12 +64,14 @@ struct atmel_hlcdc_plane_state {
int ahb_id;
|
||||
|
||||
/* These fields are private and should not be touched */
|
||||
int bpp[ATMEL_HLCDC_MAX_PLANES];
|
||||
unsigned int offsets[ATMEL_HLCDC_MAX_PLANES];
|
||||
int xstride[ATMEL_HLCDC_MAX_PLANES];
|
||||
int pstride[ATMEL_HLCDC_MAX_PLANES];
|
||||
int bpp[ATMEL_HLCDC_LAYER_MAX_PLANES];
|
||||
unsigned int offsets[ATMEL_HLCDC_LAYER_MAX_PLANES];
|
||||
int xstride[ATMEL_HLCDC_LAYER_MAX_PLANES];
|
||||
int pstride[ATMEL_HLCDC_LAYER_MAX_PLANES];
|
||||
int nplanes;
|
||||
bool prepared;
|
||||
|
||||
/* DMA descriptors. */
|
||||
struct atmel_hlcdc_dma_channel_dscr *dscrs[ATMEL_HLCDC_LAYER_MAX_PLANES];
|
||||
};
|
||||
|
||||
static inline struct atmel_hlcdc_plane_state *
|
||||
@ -259,125 +263,145 @@ static u32 heo_upscaling_ycoef[] = {
|
||||
0x00205907,
|
||||
};
|
||||
|
||||
#define ATMEL_HLCDC_XPHIDEF 4
|
||||
#define ATMEL_HLCDC_YPHIDEF 4
|
||||
|
||||
static u32 atmel_hlcdc_plane_phiscaler_get_factor(u32 srcsize,
|
||||
u32 dstsize,
|
||||
u32 phidef)
|
||||
{
|
||||
u32 factor, max_memsize;
|
||||
|
||||
factor = (256 * ((8 * (srcsize - 1)) - phidef)) / (dstsize - 1);
|
||||
max_memsize = ((factor * (dstsize - 1)) + (256 * phidef)) / 2048;
|
||||
|
||||
if (max_memsize > srcsize - 1)
|
||||
factor--;
|
||||
|
||||
return factor;
|
||||
}
|
||||
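To make the factor formula above concrete, here is a worked example with illustrative numbers (not taken from this commit or the datasheet), written as a comment:

/*
 * Example: upscaling a 320-pixel source to a 640-pixel destination with the
 * default phase (phidef = 4):
 *
 *   factor      = (256 * ((8 * (320 - 1)) - 4)) / (640 - 1)   = 1020
 *   max_memsize = ((1020 * (640 - 1)) + (256 * 4)) / 2048     = 318
 *
 * 318 does not exceed srcsize - 1 (319), so the factor stays at 1020;
 * otherwise it would be decremented by one.
 */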
|
||||
static void
|
||||
atmel_hlcdc_plane_scaler_set_phicoeff(struct atmel_hlcdc_plane *plane,
|
||||
const u32 *coeff_tab, int size,
|
||||
unsigned int cfg_offs)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i < size; i++)
|
||||
atmel_hlcdc_layer_write_cfg(&plane->layer, cfg_offs + i,
|
||||
coeff_tab[i]);
|
||||
}
|
||||
|
||||
void atmel_hlcdc_plane_setup_scaler(struct atmel_hlcdc_plane *plane,
|
||||
struct atmel_hlcdc_plane_state *state)
|
||||
{
|
||||
const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
|
||||
u32 xfactor, yfactor;
|
||||
|
||||
if (!desc->layout.scaler_config)
|
||||
return;
|
||||
|
||||
if (state->crtc_w == state->src_w && state->crtc_h == state->src_h) {
|
||||
atmel_hlcdc_layer_write_cfg(&plane->layer,
|
||||
desc->layout.scaler_config, 0);
|
||||
return;
|
||||
}
|
||||
|
||||
if (desc->layout.phicoeffs.x) {
|
||||
xfactor = atmel_hlcdc_plane_phiscaler_get_factor(state->src_w,
|
||||
state->crtc_w,
|
||||
ATMEL_HLCDC_XPHIDEF);
|
||||
|
||||
yfactor = atmel_hlcdc_plane_phiscaler_get_factor(state->src_h,
|
||||
state->crtc_h,
|
||||
ATMEL_HLCDC_YPHIDEF);
|
||||
|
||||
atmel_hlcdc_plane_scaler_set_phicoeff(plane,
|
||||
state->crtc_w < state->src_w ?
|
||||
heo_downscaling_xcoef :
|
||||
heo_upscaling_xcoef,
|
||||
ARRAY_SIZE(heo_upscaling_xcoef),
|
||||
desc->layout.phicoeffs.x);
|
||||
|
||||
atmel_hlcdc_plane_scaler_set_phicoeff(plane,
|
||||
state->crtc_h < state->src_h ?
|
||||
heo_downscaling_ycoef :
|
||||
heo_upscaling_ycoef,
|
||||
ARRAY_SIZE(heo_upscaling_ycoef),
|
||||
desc->layout.phicoeffs.y);
|
||||
} else {
|
||||
xfactor = (1024 * state->src_w) / state->crtc_w;
|
||||
yfactor = (1024 * state->src_h) / state->crtc_h;
|
||||
}
|
||||
|
||||
atmel_hlcdc_layer_write_cfg(&plane->layer, desc->layout.scaler_config,
|
||||
ATMEL_HLCDC_LAYER_SCALER_ENABLE |
|
||||
ATMEL_HLCDC_LAYER_SCALER_FACTORS(xfactor,
|
||||
yfactor));
|
||||
}
|
||||
|
||||
static void
|
||||
atmel_hlcdc_plane_update_pos_and_size(struct atmel_hlcdc_plane *plane,
|
||||
struct atmel_hlcdc_plane_state *state)
|
||||
{
|
||||
const struct atmel_hlcdc_layer_cfg_layout *layout =
|
||||
&plane->layer.desc->layout;
|
||||
const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
|
||||
|
||||
if (layout->size)
|
||||
atmel_hlcdc_layer_update_cfg(&plane->layer,
|
||||
layout->size,
|
||||
0xffffffff,
|
||||
(state->crtc_w - 1) |
|
||||
((state->crtc_h - 1) << 16));
|
||||
if (desc->layout.size)
|
||||
atmel_hlcdc_layer_write_cfg(&plane->layer, desc->layout.size,
|
||||
ATMEL_HLCDC_LAYER_SIZE(state->crtc_w,
|
||||
state->crtc_h));
|
||||
|
||||
if (layout->memsize)
|
||||
atmel_hlcdc_layer_update_cfg(&plane->layer,
|
||||
layout->memsize,
|
||||
0xffffffff,
|
||||
(state->src_w - 1) |
|
||||
((state->src_h - 1) << 16));
|
||||
if (desc->layout.memsize)
|
||||
atmel_hlcdc_layer_write_cfg(&plane->layer,
|
||||
desc->layout.memsize,
|
||||
ATMEL_HLCDC_LAYER_SIZE(state->src_w,
|
||||
state->src_h));
|
||||
|
||||
if (layout->pos)
|
||||
atmel_hlcdc_layer_update_cfg(&plane->layer,
|
||||
layout->pos,
|
||||
0xffffffff,
|
||||
state->crtc_x |
|
||||
(state->crtc_y << 16));
|
||||
if (desc->layout.pos)
|
||||
atmel_hlcdc_layer_write_cfg(&plane->layer, desc->layout.pos,
|
||||
ATMEL_HLCDC_LAYER_POS(state->crtc_x,
|
||||
state->crtc_y));
|
||||
|
||||
/* TODO: rework the rescaling part */
|
||||
if (state->crtc_w != state->src_w || state->crtc_h != state->src_h) {
|
||||
u32 factor_reg = 0;
|
||||
|
||||
if (state->crtc_w != state->src_w) {
|
||||
int i;
|
||||
u32 factor;
|
||||
u32 *coeff_tab = heo_upscaling_xcoef;
|
||||
u32 max_memsize;
|
||||
|
||||
if (state->crtc_w < state->src_w)
|
||||
coeff_tab = heo_downscaling_xcoef;
|
||||
for (i = 0; i < ARRAY_SIZE(heo_upscaling_xcoef); i++)
|
||||
atmel_hlcdc_layer_update_cfg(&plane->layer,
|
||||
17 + i,
|
||||
0xffffffff,
|
||||
coeff_tab[i]);
|
||||
factor = ((8 * 256 * state->src_w) - (256 * 4)) /
|
||||
state->crtc_w;
|
||||
factor++;
|
||||
max_memsize = ((factor * state->crtc_w) + (256 * 4)) /
|
||||
2048;
|
||||
if (max_memsize > state->src_w)
|
||||
factor--;
|
||||
factor_reg |= factor | 0x80000000;
|
||||
}
|
||||
|
||||
if (state->crtc_h != state->src_h) {
|
||||
int i;
|
||||
u32 factor;
|
||||
u32 *coeff_tab = heo_upscaling_ycoef;
|
||||
u32 max_memsize;
|
||||
|
||||
if (state->crtc_h < state->src_h)
|
||||
coeff_tab = heo_downscaling_ycoef;
|
||||
for (i = 0; i < ARRAY_SIZE(heo_upscaling_ycoef); i++)
|
||||
atmel_hlcdc_layer_update_cfg(&plane->layer,
|
||||
33 + i,
|
||||
0xffffffff,
|
||||
coeff_tab[i]);
|
||||
factor = ((8 * 256 * state->src_h) - (256 * 4)) /
|
||||
state->crtc_h;
|
||||
factor++;
|
||||
max_memsize = ((factor * state->crtc_h) + (256 * 4)) /
|
||||
2048;
|
||||
if (max_memsize > state->src_h)
|
||||
factor--;
|
||||
factor_reg |= (factor << 16) | 0x80000000;
|
||||
}
|
||||
|
||||
atmel_hlcdc_layer_update_cfg(&plane->layer, 13, 0xffffffff,
|
||||
factor_reg);
|
||||
} else {
|
||||
atmel_hlcdc_layer_update_cfg(&plane->layer, 13, 0xffffffff, 0);
|
||||
}
|
||||
atmel_hlcdc_plane_setup_scaler(plane, state);
|
||||
}
|
||||
|
||||
static void
|
||||
atmel_hlcdc_plane_update_general_settings(struct atmel_hlcdc_plane *plane,
|
||||
struct atmel_hlcdc_plane_state *state)
|
||||
{
|
||||
const struct atmel_hlcdc_layer_cfg_layout *layout =
|
||||
&plane->layer.desc->layout;
|
||||
unsigned int cfg = ATMEL_HLCDC_LAYER_DMA;
|
||||
unsigned int cfg = ATMEL_HLCDC_LAYER_DMA_BLEN_INCR16 | state->ahb_id;
|
||||
const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
|
||||
u32 format = state->base.fb->format->format;
|
||||
|
||||
/*
|
||||
* Rotation optimization is not working on RGB888 (rotation is still
|
||||
* working but without any optimization).
|
||||
*/
|
||||
if (format == DRM_FORMAT_RGB888)
|
||||
cfg |= ATMEL_HLCDC_LAYER_DMA_ROTDIS;
|
||||
|
||||
atmel_hlcdc_layer_write_cfg(&plane->layer, ATMEL_HLCDC_LAYER_DMA_CFG,
|
||||
cfg);
|
||||
|
||||
cfg = ATMEL_HLCDC_LAYER_DMA;
|
||||
|
||||
if (plane->base.type != DRM_PLANE_TYPE_PRIMARY) {
|
||||
cfg |= ATMEL_HLCDC_LAYER_OVR | ATMEL_HLCDC_LAYER_ITER2BL |
|
||||
ATMEL_HLCDC_LAYER_ITER;
|
||||
|
||||
if (atmel_hlcdc_format_embeds_alpha(state->base.fb->format->format))
|
||||
if (atmel_hlcdc_format_embeds_alpha(format))
|
||||
cfg |= ATMEL_HLCDC_LAYER_LAEN;
|
||||
else
|
||||
cfg |= ATMEL_HLCDC_LAYER_GAEN |
|
||||
ATMEL_HLCDC_LAYER_GA(state->alpha);
|
||||
}
|
||||
|
||||
atmel_hlcdc_layer_update_cfg(&plane->layer,
|
||||
ATMEL_HLCDC_LAYER_DMA_CFG_ID,
|
||||
ATMEL_HLCDC_LAYER_DMA_BLEN_MASK |
|
||||
ATMEL_HLCDC_LAYER_DMA_SIF,
|
||||
ATMEL_HLCDC_LAYER_DMA_BLEN_INCR16 |
|
||||
state->ahb_id);
|
||||
if (state->disc_h && state->disc_w)
|
||||
cfg |= ATMEL_HLCDC_LAYER_DISCEN;
|
||||
|
||||
atmel_hlcdc_layer_update_cfg(&plane->layer, layout->general_config,
|
||||
ATMEL_HLCDC_LAYER_ITER2BL |
|
||||
ATMEL_HLCDC_LAYER_ITER |
|
||||
ATMEL_HLCDC_LAYER_GAEN |
|
||||
ATMEL_HLCDC_LAYER_GA_MASK |
|
||||
ATMEL_HLCDC_LAYER_LAEN |
|
||||
ATMEL_HLCDC_LAYER_OVR |
|
||||
ATMEL_HLCDC_LAYER_DMA, cfg);
|
||||
atmel_hlcdc_layer_write_cfg(&plane->layer, desc->layout.general_config,
|
||||
cfg);
|
||||
}
|
||||
|
||||
static void atmel_hlcdc_plane_update_format(struct atmel_hlcdc_plane *plane,
|
||||
@ -396,50 +420,50 @@ static void atmel_hlcdc_plane_update_format(struct atmel_hlcdc_plane *plane,
|
||||
drm_rotation_90_or_270(state->base.rotation))
|
||||
cfg |= ATMEL_HLCDC_YUV422ROT;
|
||||
|
||||
atmel_hlcdc_layer_update_cfg(&plane->layer,
|
||||
ATMEL_HLCDC_LAYER_FORMAT_CFG_ID,
|
||||
0xffffffff,
|
||||
cfg);
|
||||
|
||||
/*
|
||||
* Rotation optimization is not working on RGB888 (rotation is still
|
||||
* working but without any optimization).
|
||||
*/
|
||||
if (state->base.fb->format->format == DRM_FORMAT_RGB888)
|
||||
cfg = ATMEL_HLCDC_LAYER_DMA_ROTDIS;
|
||||
else
|
||||
cfg = 0;
|
||||
|
||||
atmel_hlcdc_layer_update_cfg(&plane->layer,
|
||||
ATMEL_HLCDC_LAYER_DMA_CFG_ID,
|
||||
ATMEL_HLCDC_LAYER_DMA_ROTDIS, cfg);
|
||||
atmel_hlcdc_layer_write_cfg(&plane->layer,
|
||||
ATMEL_HLCDC_LAYER_FORMAT_CFG, cfg);
|
||||
}
|
||||
|
||||
static void atmel_hlcdc_plane_update_buffers(struct atmel_hlcdc_plane *plane,
|
||||
struct atmel_hlcdc_plane_state *state)
|
||||
{
|
||||
struct atmel_hlcdc_layer *layer = &plane->layer;
|
||||
const struct atmel_hlcdc_layer_cfg_layout *layout =
|
||||
&layer->desc->layout;
|
||||
const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
|
||||
struct drm_framebuffer *fb = state->base.fb;
|
||||
u32 sr;
|
||||
int i;
|
||||
|
||||
atmel_hlcdc_layer_update_set_fb(&plane->layer, state->base.fb,
|
||||
state->offsets);
|
||||
sr = atmel_hlcdc_layer_read_reg(&plane->layer, ATMEL_HLCDC_LAYER_CHSR);
|
||||
|
||||
for (i = 0; i < state->nplanes; i++) {
|
||||
if (layout->xstride[i]) {
|
||||
atmel_hlcdc_layer_update_cfg(&plane->layer,
|
||||
layout->xstride[i],
|
||||
0xffffffff,
|
||||
state->xstride[i]);
|
||||
struct drm_gem_cma_object *gem = drm_fb_cma_get_gem_obj(fb, i);
|
||||
|
||||
state->dscrs[i]->addr = gem->paddr + state->offsets[i];
|
||||
|
||||
atmel_hlcdc_layer_write_reg(&plane->layer,
|
||||
ATMEL_HLCDC_LAYER_PLANE_HEAD(i),
|
||||
state->dscrs[i]->self);
|
||||
|
||||
if (!(sr & ATMEL_HLCDC_LAYER_EN)) {
|
||||
atmel_hlcdc_layer_write_reg(&plane->layer,
|
||||
ATMEL_HLCDC_LAYER_PLANE_ADDR(i),
|
||||
state->dscrs[i]->addr);
|
||||
atmel_hlcdc_layer_write_reg(&plane->layer,
|
||||
ATMEL_HLCDC_LAYER_PLANE_CTRL(i),
|
||||
state->dscrs[i]->ctrl);
|
||||
atmel_hlcdc_layer_write_reg(&plane->layer,
|
||||
ATMEL_HLCDC_LAYER_PLANE_NEXT(i),
|
||||
state->dscrs[i]->self);
|
||||
}
|
||||
|
||||
if (layout->pstride[i]) {
|
||||
atmel_hlcdc_layer_update_cfg(&plane->layer,
|
||||
layout->pstride[i],
|
||||
0xffffffff,
|
||||
state->pstride[i]);
|
||||
}
|
||||
if (desc->layout.xstride[i])
|
||||
atmel_hlcdc_layer_write_cfg(&plane->layer,
|
||||
desc->layout.xstride[i],
|
||||
state->xstride[i]);
|
||||
|
||||
if (desc->layout.pstride[i])
|
||||
atmel_hlcdc_layer_write_cfg(&plane->layer,
|
||||
desc->layout.pstride[i],
|
||||
state->pstride[i]);
|
||||
}
|
||||
}
|
||||
|
||||
@ -528,18 +552,10 @@ atmel_hlcdc_plane_prepare_disc_area(struct drm_crtc_state *c_state)
|
||||
disc_w = ovl_state->crtc_w;
|
||||
}
|
||||
|
||||
if (disc_x == primary_state->disc_x &&
|
||||
disc_y == primary_state->disc_y &&
|
||||
disc_w == primary_state->disc_w &&
|
||||
disc_h == primary_state->disc_h)
|
||||
return 0;
|
||||
|
||||
|
||||
primary_state->disc_x = disc_x;
|
||||
primary_state->disc_y = disc_y;
|
||||
primary_state->disc_w = disc_w;
|
||||
primary_state->disc_h = disc_h;
|
||||
primary_state->disc_updated = true;
|
||||
|
||||
return 0;
|
||||
}
|
||||
@ -548,32 +564,19 @@ static void
|
||||
atmel_hlcdc_plane_update_disc_area(struct atmel_hlcdc_plane *plane,
|
||||
struct atmel_hlcdc_plane_state *state)
|
||||
{
|
||||
const struct atmel_hlcdc_layer_cfg_layout *layout =
|
||||
&plane->layer.desc->layout;
|
||||
int disc_surface = 0;
|
||||
const struct atmel_hlcdc_layer_cfg_layout *layout;
|
||||
|
||||
if (!state->disc_updated)
|
||||
layout = &plane->layer.desc->layout;
|
||||
if (!layout->disc_pos || !layout->disc_size)
|
||||
return;
|
||||
|
||||
disc_surface = state->disc_h * state->disc_w;
|
||||
atmel_hlcdc_layer_write_cfg(&plane->layer, layout->disc_pos,
|
||||
ATMEL_HLCDC_LAYER_DISC_POS(state->disc_x,
|
||||
state->disc_y));
|
||||
|
||||
atmel_hlcdc_layer_update_cfg(&plane->layer, layout->general_config,
|
||||
ATMEL_HLCDC_LAYER_DISCEN,
|
||||
disc_surface ? ATMEL_HLCDC_LAYER_DISCEN : 0);
|
||||
|
||||
if (!disc_surface)
|
||||
return;
|
||||
|
||||
atmel_hlcdc_layer_update_cfg(&plane->layer,
|
||||
layout->disc_pos,
|
||||
0xffffffff,
|
||||
state->disc_x | (state->disc_y << 16));
|
||||
|
||||
atmel_hlcdc_layer_update_cfg(&plane->layer,
|
||||
layout->disc_size,
|
||||
0xffffffff,
|
||||
(state->disc_w - 1) |
|
||||
((state->disc_h - 1) << 16));
|
||||
atmel_hlcdc_layer_write_cfg(&plane->layer, layout->disc_size,
|
||||
ATMEL_HLCDC_LAYER_DISC_SIZE(state->disc_w,
|
||||
state->disc_h));
|
||||
}
|
||||
|
||||
static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
|
||||
@ -582,8 +585,7 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
|
||||
struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
|
||||
struct atmel_hlcdc_plane_state *state =
|
||||
drm_plane_state_to_atmel_hlcdc_plane_state(s);
|
||||
const struct atmel_hlcdc_layer_cfg_layout *layout =
|
||||
&plane->layer.desc->layout;
|
||||
const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
|
||||
struct drm_framebuffer *fb = state->base.fb;
|
||||
const struct drm_display_mode *mode;
|
||||
struct drm_crtc_state *crtc_state;
|
||||
@ -622,7 +624,7 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
|
||||
state->src_h >>= 16;
|
||||
|
||||
state->nplanes = fb->format->num_planes;
|
||||
if (state->nplanes > ATMEL_HLCDC_MAX_PLANES)
|
||||
if (state->nplanes > ATMEL_HLCDC_LAYER_MAX_PLANES)
|
||||
return -EINVAL;
|
||||
|
||||
/*
|
||||
@ -726,21 +728,19 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
|
||||
state->crtc_w = patched_crtc_w;
|
||||
state->crtc_h = patched_crtc_h;
|
||||
|
||||
if (!layout->size &&
|
||||
if (!desc->layout.size &&
|
||||
(mode->hdisplay != state->crtc_w ||
|
||||
mode->vdisplay != state->crtc_h))
|
||||
return -EINVAL;
|
||||
|
||||
if (plane->layer.desc->max_height &&
|
||||
state->crtc_h > plane->layer.desc->max_height)
|
||||
if (desc->max_height && state->crtc_h > desc->max_height)
|
||||
return -EINVAL;
|
||||
|
||||
if (plane->layer.desc->max_width &&
|
||||
state->crtc_w > plane->layer.desc->max_width)
|
||||
if (desc->max_width && state->crtc_w > desc->max_width)
|
||||
return -EINVAL;
|
||||
|
||||
if ((state->crtc_h != state->src_h || state->crtc_w != state->src_w) &&
|
||||
(!layout->memsize ||
|
||||
(!desc->layout.memsize ||
|
||||
atmel_hlcdc_format_embeds_alpha(state->base.fb->format->format)))
|
||||
return -EINVAL;
|
||||
|
||||
@ -754,65 +754,13 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int atmel_hlcdc_plane_prepare_fb(struct drm_plane *p,
|
||||
struct drm_plane_state *new_state)
|
||||
{
|
||||
/*
|
||||
* FIXME: we should avoid this const -> non-const cast but it's
|
||||
* currently the only solution we have to modify the ->prepared
|
||||
* state and rollback the update request.
|
||||
* Ideally, we should rework the code to attach all the resources
|
||||
* to atmel_hlcdc_plane_state (including the DMA desc allocation),
|
||||
* but this requires a complete rework of the atmel_hlcdc_layer
|
||||
* code.
|
||||
*/
|
||||
struct drm_plane_state *s = (struct drm_plane_state *)new_state;
|
||||
struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
|
||||
struct atmel_hlcdc_plane_state *state =
|
||||
drm_plane_state_to_atmel_hlcdc_plane_state(s);
|
||||
int ret;
|
||||
|
||||
ret = atmel_hlcdc_layer_update_start(&plane->layer);
|
||||
if (!ret)
|
||||
state->prepared = true;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void atmel_hlcdc_plane_cleanup_fb(struct drm_plane *p,
|
||||
struct drm_plane_state *old_state)
|
||||
{
|
||||
/*
|
||||
* FIXME: we should avoid this const -> non-const cast but it's
|
||||
* currently the only solution we have to modify the ->prepared
|
||||
* state and rollback the update request.
|
||||
* Ideally, we should rework the code to attach all the resources
|
||||
* to atmel_hlcdc_plane_state (including the DMA desc allocation),
|
||||
* but this requires a complete rework of the atmel_hlcdc_layer
|
||||
* code.
|
||||
*/
|
||||
struct drm_plane_state *s = (struct drm_plane_state *)old_state;
|
||||
struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
|
||||
struct atmel_hlcdc_plane_state *state =
|
||||
drm_plane_state_to_atmel_hlcdc_plane_state(s);
|
||||
|
||||
/*
|
||||
* The Request has already been applied or cancelled, nothing to do
|
||||
* here.
|
||||
*/
|
||||
if (!state->prepared)
|
||||
return;
|
||||
|
||||
atmel_hlcdc_layer_update_rollback(&plane->layer);
|
||||
state->prepared = false;
|
||||
}
|
||||
|
||||
static void atmel_hlcdc_plane_atomic_update(struct drm_plane *p,
|
||||
struct drm_plane_state *old_s)
|
||||
{
|
||||
struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
|
||||
struct atmel_hlcdc_plane_state *state =
|
||||
drm_plane_state_to_atmel_hlcdc_plane_state(p->state);
|
||||
u32 sr;
|
||||
|
||||
if (!p->state->crtc || !p->state->fb)
|
||||
return;
|
||||
@ -823,7 +771,18 @@ static void atmel_hlcdc_plane_atomic_update(struct drm_plane *p,
|
||||
atmel_hlcdc_plane_update_buffers(plane, state);
|
||||
atmel_hlcdc_plane_update_disc_area(plane, state);
|
||||
|
||||
atmel_hlcdc_layer_update_commit(&plane->layer);
|
||||
/* Enable the overrun interrupts. */
|
||||
atmel_hlcdc_layer_write_reg(&plane->layer, ATMEL_HLCDC_LAYER_IER,
|
||||
ATMEL_HLCDC_LAYER_OVR_IRQ(0) |
|
||||
ATMEL_HLCDC_LAYER_OVR_IRQ(1) |
|
||||
ATMEL_HLCDC_LAYER_OVR_IRQ(2));
|
||||
|
||||
/* Apply the new config at the next SOF event. */
|
||||
sr = atmel_hlcdc_layer_read_reg(&plane->layer, ATMEL_HLCDC_LAYER_CHSR);
|
||||
atmel_hlcdc_layer_write_reg(&plane->layer, ATMEL_HLCDC_LAYER_CHER,
|
||||
ATMEL_HLCDC_LAYER_UPDATE |
|
||||
(sr & ATMEL_HLCDC_LAYER_EN ?
|
||||
ATMEL_HLCDC_LAYER_A2Q : ATMEL_HLCDC_LAYER_EN));
|
||||
}
|
||||
|
||||
static void atmel_hlcdc_plane_atomic_disable(struct drm_plane *p,
|
||||
@ -831,7 +790,18 @@ static void atmel_hlcdc_plane_atomic_disable(struct drm_plane *p,
|
||||
{
|
||||
struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
|
||||
|
||||
atmel_hlcdc_layer_disable(&plane->layer);
|
||||
/* Disable interrupts */
|
||||
atmel_hlcdc_layer_write_reg(&plane->layer, ATMEL_HLCDC_LAYER_IDR,
|
||||
0xffffffff);
|
||||
|
||||
/* Disable the layer */
|
||||
atmel_hlcdc_layer_write_reg(&plane->layer, ATMEL_HLCDC_LAYER_CHDR,
|
||||
ATMEL_HLCDC_LAYER_RST |
|
||||
ATMEL_HLCDC_LAYER_A2Q |
|
||||
ATMEL_HLCDC_LAYER_UPDATE);
|
||||
|
||||
/* Clear all pending interrupts */
|
||||
atmel_hlcdc_layer_read_reg(&plane->layer, ATMEL_HLCDC_LAYER_ISR);
|
||||
}
|
||||
|
||||
static void atmel_hlcdc_plane_destroy(struct drm_plane *p)
|
||||
@ -841,10 +811,7 @@ static void atmel_hlcdc_plane_destroy(struct drm_plane *p)
|
||||
if (plane->base.fb)
|
||||
drm_framebuffer_unreference(plane->base.fb);
|
||||
|
||||
atmel_hlcdc_layer_cleanup(p->dev, &plane->layer);
|
||||
|
||||
drm_plane_cleanup(p);
|
||||
devm_kfree(p->dev->dev, plane);
|
||||
}
|
||||
|
||||
static int atmel_hlcdc_plane_atomic_set_property(struct drm_plane *p,
|
||||
@ -884,24 +851,15 @@ static int atmel_hlcdc_plane_atomic_get_property(struct drm_plane *p,
|
||||
}
|
||||
|
||||
static int atmel_hlcdc_plane_init_properties(struct atmel_hlcdc_plane *plane,
|
||||
const struct atmel_hlcdc_layer_desc *desc,
|
||||
struct atmel_hlcdc_plane_properties *props)
|
||||
struct atmel_hlcdc_plane_properties *props)
|
||||
{
|
||||
struct regmap *regmap = plane->layer.hlcdc->regmap;
|
||||
const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
|
||||
|
||||
if (desc->type == ATMEL_HLCDC_OVERLAY_LAYER ||
|
||||
desc->type == ATMEL_HLCDC_CURSOR_LAYER) {
|
||||
desc->type == ATMEL_HLCDC_CURSOR_LAYER)
|
||||
drm_object_attach_property(&plane->base.base,
|
||||
props->alpha, 255);
|
||||
|
||||
/* Set default alpha value */
|
||||
regmap_update_bits(regmap,
|
||||
desc->regs_offset +
|
||||
ATMEL_HLCDC_LAYER_GENERAL_CFG(&plane->layer),
|
||||
ATMEL_HLCDC_LAYER_GA_MASK,
|
||||
ATMEL_HLCDC_LAYER_GA_MASK);
|
||||
}
|
||||
|
||||
if (desc->layout.xstride && desc->layout.pstride) {
|
||||
int ret;
|
||||
|
||||
@ -920,31 +878,78 @@ static int atmel_hlcdc_plane_init_properties(struct atmel_hlcdc_plane *plane,
|
||||
* TODO: declare a "yuv-to-rgb-conv-factors" property to let
|
||||
* userspace modify these factors (using a BLOB property ?).
|
||||
*/
|
||||
regmap_write(regmap,
|
||||
desc->regs_offset +
|
||||
ATMEL_HLCDC_LAYER_CSC_CFG(&plane->layer, 0),
|
||||
0x4c900091);
|
||||
regmap_write(regmap,
|
||||
desc->regs_offset +
|
||||
ATMEL_HLCDC_LAYER_CSC_CFG(&plane->layer, 1),
|
||||
0x7a5f5090);
|
||||
regmap_write(regmap,
|
||||
desc->regs_offset +
|
||||
ATMEL_HLCDC_LAYER_CSC_CFG(&plane->layer, 2),
|
||||
0x40040890);
|
||||
atmel_hlcdc_layer_write_cfg(&plane->layer,
|
||||
desc->layout.csc,
|
||||
0x4c900091);
|
||||
atmel_hlcdc_layer_write_cfg(&plane->layer,
|
||||
desc->layout.csc + 1,
|
||||
0x7a5f5090);
|
||||
atmel_hlcdc_layer_write_cfg(&plane->layer,
|
||||
desc->layout.csc + 2,
|
||||
0x40040890);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
void atmel_hlcdc_plane_irq(struct atmel_hlcdc_plane *plane)
|
||||
{
|
||||
const struct atmel_hlcdc_layer_desc *desc = plane->layer.desc;
|
||||
u32 isr;
|
||||
|
||||
isr = atmel_hlcdc_layer_read_reg(&plane->layer, ATMEL_HLCDC_LAYER_ISR);
|
||||
|
||||
/*
|
||||
* There's not much we can do in case of overrun except informing
|
||||
* the user. However, we are in interrupt context here, hence the
|
||||
* use of dev_dbg().
|
||||
*/
|
||||
if (isr &
|
||||
(ATMEL_HLCDC_LAYER_OVR_IRQ(0) | ATMEL_HLCDC_LAYER_OVR_IRQ(1) |
|
||||
ATMEL_HLCDC_LAYER_OVR_IRQ(2)))
|
||||
dev_dbg(plane->base.dev->dev, "overrun on plane %s\n",
|
||||
desc->name);
|
||||
}
|
||||
|
||||
static struct drm_plane_helper_funcs atmel_hlcdc_layer_plane_helper_funcs = {
|
||||
.prepare_fb = atmel_hlcdc_plane_prepare_fb,
|
||||
.cleanup_fb = atmel_hlcdc_plane_cleanup_fb,
|
||||
.atomic_check = atmel_hlcdc_plane_atomic_check,
|
||||
.atomic_update = atmel_hlcdc_plane_atomic_update,
|
||||
.atomic_disable = atmel_hlcdc_plane_atomic_disable,
|
||||
};
|
||||
|
||||
static int atmel_hlcdc_plane_alloc_dscrs(struct drm_plane *p,
|
||||
struct atmel_hlcdc_plane_state *state)
|
||||
{
|
||||
struct atmel_hlcdc_dc *dc = p->dev->dev_private;
|
||||
int i;
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(state->dscrs); i++) {
|
||||
struct atmel_hlcdc_dma_channel_dscr *dscr;
|
||||
dma_addr_t dscr_dma;
|
||||
|
||||
dscr = dma_pool_alloc(dc->dscrpool, GFP_KERNEL, &dscr_dma);
|
||||
if (!dscr)
|
||||
goto err;
|
||||
|
||||
dscr->addr = 0;
|
||||
dscr->next = dscr_dma;
|
||||
dscr->self = dscr_dma;
|
||||
dscr->ctrl = ATMEL_HLCDC_LAYER_DFETCH;
|
||||
|
||||
state->dscrs[i] = dscr;
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
||||
err:
|
||||
for (i--; i >= 0; i--) {
|
||||
dma_pool_free(dc->dscrpool, state->dscrs[i],
|
||||
state->dscrs[i]->self);
|
||||
}
|
||||
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
static void atmel_hlcdc_plane_reset(struct drm_plane *p)
|
||||
{
|
||||
struct atmel_hlcdc_plane_state *state;
|
||||
@ -961,6 +966,13 @@ static void atmel_hlcdc_plane_reset(struct drm_plane *p)
|
||||
|
||||
state = kzalloc(sizeof(*state), GFP_KERNEL);
|
||||
if (state) {
|
||||
if (atmel_hlcdc_plane_alloc_dscrs(p, state)) {
|
||||
kfree(state);
|
||||
dev_err(p->dev->dev,
|
||||
"Failed to allocate initial plane state\n");
|
||||
return;
|
||||
}
|
||||
|
||||
state->alpha = 255;
|
||||
p->state = &state->base;
|
||||
p->state->plane = p;
|
||||
@ -978,8 +990,10 @@ atmel_hlcdc_plane_atomic_duplicate_state(struct drm_plane *p)
|
||||
if (!copy)
|
||||
return NULL;
|
||||
|
||||
copy->disc_updated = false;
|
||||
copy->prepared = false;
|
||||
if (atmel_hlcdc_plane_alloc_dscrs(p, copy)) {
|
||||
kfree(copy);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
if (copy->base.fb)
|
||||
drm_framebuffer_reference(copy->base.fb);
|
||||
@ -987,11 +1001,18 @@ atmel_hlcdc_plane_atomic_duplicate_state(struct drm_plane *p)
|
||||
return &copy->base;
|
||||
}
|
||||
|
||||
static void atmel_hlcdc_plane_atomic_destroy_state(struct drm_plane *plane,
|
||||
static void atmel_hlcdc_plane_atomic_destroy_state(struct drm_plane *p,
|
||||
struct drm_plane_state *s)
|
||||
{
|
||||
struct atmel_hlcdc_plane_state *state =
|
||||
drm_plane_state_to_atmel_hlcdc_plane_state(s);
|
||||
struct atmel_hlcdc_dc *dc = p->dev->dev_private;
|
||||
int i;
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(state->dscrs); i++) {
|
||||
dma_pool_free(dc->dscrpool, state->dscrs[i],
|
||||
state->dscrs[i]->self);
|
||||
}
|
||||
|
||||
if (s->fb)
|
||||
drm_framebuffer_unreference(s->fb);
|
||||
@ -1011,22 +1032,21 @@ static struct drm_plane_funcs layer_plane_funcs = {
|
||||
.atomic_get_property = atmel_hlcdc_plane_atomic_get_property,
|
||||
};
|
||||
|
||||
static struct atmel_hlcdc_plane *
|
||||
atmel_hlcdc_plane_create(struct drm_device *dev,
|
||||
const struct atmel_hlcdc_layer_desc *desc,
|
||||
struct atmel_hlcdc_plane_properties *props)
|
||||
static int atmel_hlcdc_plane_create(struct drm_device *dev,
|
||||
const struct atmel_hlcdc_layer_desc *desc,
|
||||
struct atmel_hlcdc_plane_properties *props)
|
||||
{
|
||||
struct atmel_hlcdc_dc *dc = dev->dev_private;
|
||||
struct atmel_hlcdc_plane *plane;
|
||||
enum drm_plane_type type;
|
||||
int ret;
|
||||
|
||||
plane = devm_kzalloc(dev->dev, sizeof(*plane), GFP_KERNEL);
|
||||
if (!plane)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
return -ENOMEM;
|
||||
|
||||
ret = atmel_hlcdc_layer_init(dev, &plane->layer, desc);
|
||||
if (ret)
|
||||
return ERR_PTR(ret);
|
||||
atmel_hlcdc_layer_init(&plane->layer, desc, dc->hlcdc->regmap);
|
||||
plane->properties = props;
|
||||
|
||||
if (desc->type == ATMEL_HLCDC_BASE_LAYER)
|
||||
type = DRM_PLANE_TYPE_PRIMARY;
|
||||
@ -1040,17 +1060,19 @@ atmel_hlcdc_plane_create(struct drm_device *dev,
|
||||
desc->formats->formats,
|
||||
desc->formats->nformats, type, NULL);
|
||||
if (ret)
|
||||
return ERR_PTR(ret);
|
||||
return ret;
|
||||
|
||||
drm_plane_helper_add(&plane->base,
|
||||
&atmel_hlcdc_layer_plane_helper_funcs);
|
||||
|
||||
/* Set default property values*/
|
||||
ret = atmel_hlcdc_plane_init_properties(plane, desc, props);
|
||||
ret = atmel_hlcdc_plane_init_properties(plane, props);
|
||||
if (ret)
|
||||
return ERR_PTR(ret);
|
||||
return ret;
|
||||
|
||||
return plane;
|
||||
dc->layers[desc->id] = &plane->layer;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct atmel_hlcdc_plane_properties *
|
||||
@ -1069,72 +1091,34 @@ atmel_hlcdc_plane_create_properties(struct drm_device *dev)
|
||||
return props;
|
||||
}
|
||||
|
||||
struct atmel_hlcdc_planes *
|
||||
atmel_hlcdc_create_planes(struct drm_device *dev)
|
||||
int atmel_hlcdc_create_planes(struct drm_device *dev)
|
||||
{
|
||||
struct atmel_hlcdc_dc *dc = dev->dev_private;
|
||||
struct atmel_hlcdc_plane_properties *props;
|
||||
struct atmel_hlcdc_planes *planes;
|
||||
const struct atmel_hlcdc_layer_desc *descs = dc->desc->layers;
|
||||
int nlayers = dc->desc->nlayers;
|
||||
int i;
|
||||
|
||||
planes = devm_kzalloc(dev->dev, sizeof(*planes), GFP_KERNEL);
|
||||
if (!planes)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
for (i = 0; i < nlayers; i++) {
|
||||
if (descs[i].type == ATMEL_HLCDC_OVERLAY_LAYER)
|
||||
planes->noverlays++;
|
||||
}
|
||||
|
||||
if (planes->noverlays) {
|
||||
planes->overlays = devm_kzalloc(dev->dev,
|
||||
planes->noverlays *
|
||||
sizeof(*planes->overlays),
|
||||
GFP_KERNEL);
|
||||
if (!planes->overlays)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
}
|
||||
int i, ret;
|
||||
|
||||
props = atmel_hlcdc_plane_create_properties(dev);
|
||||
if (IS_ERR(props))
|
||||
return ERR_CAST(props);
|
||||
return PTR_ERR(props);
|
||||
|
||||
dc->dscrpool = dmam_pool_create("atmel-hlcdc-dscr", dev->dev,
|
||||
sizeof(struct atmel_hlcdc_dma_channel_dscr),
|
||||
sizeof(u64), 0);
|
||||
if (!dc->dscrpool)
|
||||
return -ENOMEM;
|
||||
|
||||
planes->noverlays = 0;
|
||||
for (i = 0; i < nlayers; i++) {
|
||||
struct atmel_hlcdc_plane *plane;
|
||||
|
||||
if (descs[i].type == ATMEL_HLCDC_PP_LAYER)
|
||||
if (descs[i].type != ATMEL_HLCDC_BASE_LAYER &&
|
||||
descs[i].type != ATMEL_HLCDC_OVERLAY_LAYER &&
|
||||
descs[i].type != ATMEL_HLCDC_CURSOR_LAYER)
|
||||
continue;
|
||||
|
||||
plane = atmel_hlcdc_plane_create(dev, &descs[i], props);
|
||||
if (IS_ERR(plane))
|
||||
return ERR_CAST(plane);
|
||||
|
||||
plane->properties = props;
|
||||
|
||||
switch (descs[i].type) {
|
||||
case ATMEL_HLCDC_BASE_LAYER:
|
||||
if (planes->primary)
|
||||
return ERR_PTR(-EINVAL);
|
||||
planes->primary = plane;
|
||||
break;
|
||||
|
||||
case ATMEL_HLCDC_OVERLAY_LAYER:
|
||||
planes->overlays[planes->noverlays++] = plane;
|
||||
break;
|
||||
|
||||
case ATMEL_HLCDC_CURSOR_LAYER:
|
||||
if (planes->cursor)
|
||||
return ERR_PTR(-EINVAL);
|
||||
planes->cursor = plane;
|
||||
break;
|
||||
|
||||
default:
|
||||
break;
|
||||
}
|
||||
ret = atmel_hlcdc_plane_create(dev, &descs[i], props);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
return planes;
|
||||
return 0;
|
||||
}
|
||||
|
@ -107,10 +107,8 @@ static int bochsfb_create(struct drm_fb_helper *helper,
|
||||
info->par = &bochs->fb.helper;
|
||||
|
||||
ret = bochs_framebuffer_init(bochs->dev, &bochs->fb.gfb, &mode_cmd, gobj);
|
||||
if (ret) {
|
||||
drm_fb_helper_release_fbi(helper);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
bochs->fb.size = size;
|
||||
|
||||
@ -144,7 +142,6 @@ static int bochs_fbdev_destroy(struct bochs_device *bochs)
|
||||
DRM_DEBUG_DRIVER("\n");
|
||||
|
||||
drm_fb_helper_unregister_fbi(&bochs->fb.helper);
|
||||
drm_fb_helper_release_fbi(&bochs->fb.helper);
|
||||
|
||||
if (gfb->obj) {
|
||||
drm_gem_object_unreference_unlocked(gfb->obj);
|
||||
|
@ -2184,6 +2184,10 @@ static int sii8620_probe(struct i2c_client *client,
|
||||
sii8620_irq_thread,
|
||||
IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
|
||||
"sii8620", ctx);
|
||||
if (ret < 0) {
|
||||
dev_err(dev, "failed to install IRQ handler\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
ctx->gpio_reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
|
||||
if (IS_ERR(ctx->gpio_reset)) {
|
||||
|
@ -220,7 +220,7 @@ static const struct of_device_id tfp410_match[] = {
|
||||
};
|
||||
MODULE_DEVICE_TABLE(of, tfp410_match);
|
||||
|
||||
struct platform_driver tfp410_platform_driver = {
|
||||
static struct platform_driver tfp410_platform_driver = {
|
||||
.probe = tfp410_probe,
|
||||
.remove = tfp410_remove,
|
||||
.driver = {
|
||||
|
@ -250,7 +250,6 @@ static int cirrus_fbdev_destroy(struct drm_device *dev,
|
||||
struct cirrus_framebuffer *gfb = &gfbdev->gfb;
|
||||
|
||||
drm_fb_helper_unregister_fbi(&gfbdev->helper);
|
||||
drm_fb_helper_release_fbi(&gfbdev->helper);
|
||||
|
||||
if (gfb->obj) {
|
||||
drm_gem_object_unreference_unlocked(gfb->obj);
|
||||
|
@ -150,7 +150,7 @@ void drm_atomic_state_default_clear(struct drm_atomic_state *state)
|
||||
state->connectors[i].state);
|
||||
state->connectors[i].ptr = NULL;
|
||||
state->connectors[i].state = NULL;
|
||||
drm_connector_unreference(connector);
|
||||
drm_connector_put(connector);
|
||||
}
|
||||
|
||||
for (i = 0; i < config->num_crtc; i++) {
|
||||
@ -275,6 +275,8 @@ drm_atomic_get_crtc_state(struct drm_atomic_state *state,
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
state->crtcs[index].state = crtc_state;
|
||||
state->crtcs[index].old_state = crtc->state;
|
||||
state->crtcs[index].new_state = crtc_state;
|
||||
state->crtcs[index].ptr = crtc;
|
||||
crtc_state->state = state;
|
||||
|
||||
@ -322,7 +324,7 @@ int drm_atomic_set_mode_for_crtc(struct drm_crtc_state *state,
|
||||
if (mode && memcmp(&state->mode, mode, sizeof(*mode)) == 0)
|
||||
return 0;
|
||||
|
||||
drm_property_unreference_blob(state->mode_blob);
|
||||
drm_property_blob_put(state->mode_blob);
|
||||
state->mode_blob = NULL;
|
||||
|
||||
if (mode) {
|
||||
@ -368,7 +370,7 @@ int drm_atomic_set_mode_prop_for_crtc(struct drm_crtc_state *state,
|
||||
if (blob == state->mode_blob)
|
||||
return 0;
|
||||
|
||||
drm_property_unreference_blob(state->mode_blob);
|
||||
drm_property_blob_put(state->mode_blob);
|
||||
state->mode_blob = NULL;
|
||||
|
||||
memset(&state->mode, 0, sizeof(state->mode));
|
||||
@ -380,7 +382,7 @@ int drm_atomic_set_mode_prop_for_crtc(struct drm_crtc_state *state,
|
||||
blob->data))
|
||||
return -EINVAL;
|
||||
|
||||
state->mode_blob = drm_property_reference_blob(blob);
|
||||
state->mode_blob = drm_property_blob_get(blob);
|
||||
state->enable = true;
|
||||
DRM_DEBUG_ATOMIC("Set [MODE:%s] for CRTC state %p\n",
|
||||
state->mode.name, state);
|
||||
@ -413,9 +415,9 @@ drm_atomic_replace_property_blob(struct drm_property_blob **blob,
|
||||
if (old_blob == new_blob)
|
||||
return;
|
||||
|
||||
drm_property_unreference_blob(old_blob);
|
||||
drm_property_blob_put(old_blob);
|
||||
if (new_blob)
|
||||
drm_property_reference_blob(new_blob);
|
||||
drm_property_blob_get(new_blob);
|
||||
*blob = new_blob;
|
||||
*replaced = true;
|
||||
|
||||
@ -437,13 +439,13 @@ drm_atomic_replace_property_blob_from_id(struct drm_crtc *crtc,
|
||||
return -EINVAL;
|
||||
|
||||
if (expected_size > 0 && expected_size != new_blob->length) {
|
||||
drm_property_unreference_blob(new_blob);
|
||||
drm_property_blob_put(new_blob);
|
||||
return -EINVAL;
|
||||
}
|
||||
}
|
||||
|
||||
drm_atomic_replace_property_blob(blob, new_blob, replaced);
|
||||
drm_property_unreference_blob(new_blob);
|
||||
drm_property_blob_put(new_blob);
|
||||
|
||||
return 0;
|
||||
}
|
||||
@ -478,7 +480,7 @@ int drm_atomic_crtc_set_property(struct drm_crtc *crtc,
|
||||
struct drm_property_blob *mode =
|
||||
drm_property_lookup_blob(dev, val);
|
||||
ret = drm_atomic_set_mode_prop_for_crtc(state, mode);
|
||||
drm_property_unreference_blob(mode);
|
||||
drm_property_blob_put(mode);
|
||||
return ret;
|
||||
} else if (property == config->degamma_lut_property) {
|
||||
ret = drm_atomic_replace_property_blob_from_id(crtc,
|
||||
@ -621,8 +623,8 @@ static int drm_atomic_crtc_check(struct drm_crtc *crtc,
|
||||
* pipe.
|
||||
*/
|
||||
if (state->event && !state->active && !crtc->state->active) {
|
||||
DRM_DEBUG_ATOMIC("[CRTC:%d] requesting event but off\n",
|
||||
crtc->base.id);
|
||||
DRM_DEBUG_ATOMIC("[CRTC:%d:%s] requesting event but off\n",
|
||||
crtc->base.id, crtc->name);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
@ -689,6 +691,8 @@ drm_atomic_get_plane_state(struct drm_atomic_state *state,
|
||||
|
||||
state->planes[index].state = plane_state;
|
||||
state->planes[index].ptr = plane;
|
||||
state->planes[index].old_state = plane->state;
|
||||
state->planes[index].new_state = plane_state;
|
||||
plane_state->state = state;
|
||||
|
||||
DRM_DEBUG_ATOMIC("Added [PLANE:%d:%s] %p state to %p\n",
|
||||
@ -733,7 +737,7 @@ int drm_atomic_plane_set_property(struct drm_plane *plane,
|
||||
struct drm_framebuffer *fb = drm_framebuffer_lookup(dev, val);
|
||||
drm_atomic_set_fb_for_plane(state, fb);
|
||||
if (fb)
|
||||
drm_framebuffer_unreference(fb);
|
||||
drm_framebuffer_put(fb);
|
||||
} else if (property == config->prop_in_fence_fd) {
|
||||
if (state->fence)
|
||||
return -EINVAL;
|
||||
@ -1026,13 +1030,16 @@ drm_atomic_get_connector_state(struct drm_atomic_state *state,
|
||||
if (!connector_state)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
drm_connector_reference(connector);
|
||||
drm_connector_get(connector);
|
||||
state->connectors[index].state = connector_state;
|
||||
state->connectors[index].old_state = connector->state;
|
||||
state->connectors[index].new_state = connector_state;
|
||||
state->connectors[index].ptr = connector;
|
||||
connector_state->state = state;
|
||||
|
||||
DRM_DEBUG_ATOMIC("Added [CONNECTOR:%d] %p state to %p\n",
|
||||
connector->base.id, connector_state, state);
|
||||
DRM_DEBUG_ATOMIC("Added [CONNECTOR:%d:%s] %p state to %p\n",
|
||||
connector->base.id, connector->name,
|
||||
connector_state, state);
|
||||
|
||||
if (connector_state->crtc) {
|
||||
struct drm_crtc_state *crtc_state;
|
||||
@ -1102,6 +1109,20 @@ int drm_atomic_connector_set_property(struct drm_connector *connector,
|
||||
state->tv.saturation = val;
|
||||
} else if (property == config->tv_hue_property) {
|
||||
state->tv.hue = val;
|
||||
} else if (property == config->link_status_property) {
|
||||
/* Never downgrade from GOOD to BAD on userspace's request here,
 * only hw issues can do that.
 *
 * For an atomic property userspace doesn't need to be able
 * to understand all the properties, but needs to be able to
 * restore the state it wants on VT switch. So if userspace
 * tries to change the link_status from GOOD to BAD, the driver
 * silently rejects it and returns 0. This prevents userspace
 * from accidentally breaking the display when it restores the
 * state.
 */
if (state->link_status != DRM_LINK_STATUS_GOOD)
	state->link_status = val;
} else if (connector->funcs->atomic_set_property) {
|
||||
return connector->funcs->atomic_set_property(connector,
|
||||
state, property, val);
|
||||
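For context, a hedged sketch of the driver side that pairs with the check above: on a link-training failure the driver would mark the link bad and notify userspace, which can then restore or adjust the mode. The helper name comes from the documentation reference later in this series; the function name is made up, and locking and error handling are elided.

static void example_report_link_failure(struct drm_connector *connector)
{
	/* Flag the link as bad; the GOOD -> BAD transition rejected above
	 * cannot be undone by a blind userspace restore. */
	drm_mode_connector_set_link_status_property(connector,
						    DRM_MODE_LINK_STATUS_BAD);

	/* Ask userspace to come back with a fresh modeset attempt. */
	drm_kms_helper_hotplug_event(connector->dev);
}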
@ -1176,6 +1197,8 @@ drm_atomic_connector_get_property(struct drm_connector *connector,
|
||||
*val = state->tv.saturation;
|
||||
} else if (property == config->tv_hue_property) {
|
||||
*val = state->tv.hue;
|
||||
} else if (property == config->link_status_property) {
|
||||
*val = state->link_status;
|
||||
} else if (connector->funcs->atomic_get_property) {
|
||||
return connector->funcs->atomic_get_property(connector,
|
||||
state, property, val);
|
||||
@ -1357,7 +1380,7 @@ drm_atomic_set_crtc_for_connector(struct drm_connector_state *conn_state,
|
||||
crtc_state->connector_mask &=
|
||||
~(1 << drm_connector_index(conn_state->connector));
|
||||
|
||||
drm_connector_unreference(conn_state->connector);
|
||||
drm_connector_put(conn_state->connector);
|
||||
conn_state->crtc = NULL;
|
||||
}
|
||||
|
||||
@ -1369,7 +1392,7 @@ drm_atomic_set_crtc_for_connector(struct drm_connector_state *conn_state,
|
||||
crtc_state->connector_mask |=
|
||||
1 << drm_connector_index(conn_state->connector);
|
||||
|
||||
drm_connector_reference(conn_state->connector);
|
||||
drm_connector_get(conn_state->connector);
|
||||
conn_state->crtc = crtc;
|
||||
|
||||
DRM_DEBUG_ATOMIC("Link connector state %p to [CRTC:%d:%s]\n",
|
||||
@ -1408,8 +1431,13 @@ drm_atomic_add_affected_connectors(struct drm_atomic_state *state,
|
||||
struct drm_connector *connector;
|
||||
struct drm_connector_state *conn_state;
|
||||
struct drm_connector_list_iter conn_iter;
|
||||
struct drm_crtc_state *crtc_state;
|
||||
int ret;
|
||||
|
||||
crtc_state = drm_atomic_get_crtc_state(state, crtc);
|
||||
if (IS_ERR(crtc_state))
|
||||
return PTR_ERR(crtc_state);
|
||||
|
||||
ret = drm_modeset_lock(&config->connection_mutex, state->acquire_ctx);
|
||||
if (ret)
|
||||
return ret;
|
||||
@ -1418,21 +1446,21 @@ drm_atomic_add_affected_connectors(struct drm_atomic_state *state,
|
||||
crtc->base.id, crtc->name, state);
|
||||
|
||||
/*
|
||||
* Changed connectors are already in @state, so only need to look at the
|
||||
* current configuration.
|
||||
* Changed connectors are already in @state, so only need to look
|
||||
* at the connector_mask in crtc_state.
|
||||
*/
|
||||
drm_connector_list_iter_get(state->dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(state->dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter) {
|
||||
if (connector->state->crtc != crtc)
|
||||
if (!(crtc_state->connector_mask & (1 << drm_connector_index(connector))))
|
||||
continue;
|
||||
|
||||
conn_state = drm_atomic_get_connector_state(state, connector);
|
||||
if (IS_ERR(conn_state)) {
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
return PTR_ERR(conn_state);
|
||||
}
|
||||
}
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
|
||||
return 0;
|
||||
}
|
||||
@ -1546,7 +1574,7 @@ int drm_atomic_check_only(struct drm_atomic_state *state)
|
||||
|
||||
DRM_DEBUG_ATOMIC("checking %p\n", state);
|
||||
|
||||
for_each_plane_in_state(state, plane, plane_state, i) {
|
||||
for_each_new_plane_in_state(state, plane, plane_state, i) {
|
||||
ret = drm_atomic_plane_check(plane, plane_state);
|
||||
if (ret) {
|
||||
DRM_DEBUG_ATOMIC("[PLANE:%d:%s] atomic core check failed\n",
|
||||
@ -1555,7 +1583,7 @@ int drm_atomic_check_only(struct drm_atomic_state *state)
|
||||
}
|
||||
}
|
||||
|
||||
for_each_crtc_in_state(state, crtc, crtc_state, i) {
|
||||
for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
|
||||
ret = drm_atomic_crtc_check(crtc, crtc_state);
|
||||
if (ret) {
|
||||
DRM_DEBUG_ATOMIC("[CRTC:%d:%s] atomic core check failed\n",
|
||||
@ -1568,7 +1596,7 @@ int drm_atomic_check_only(struct drm_atomic_state *state)
|
||||
ret = config->funcs->atomic_check(state->dev, state);
|
||||
|
||||
if (!state->allow_modeset) {
|
||||
for_each_crtc_in_state(state, crtc, crtc_state, i) {
|
||||
for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
|
||||
if (drm_atomic_crtc_needs_modeset(crtc_state)) {
|
||||
DRM_DEBUG_ATOMIC("[CRTC:%d:%s] requires full modeset\n",
|
||||
crtc->base.id, crtc->name);
|
||||
@ -1652,13 +1680,13 @@ static void drm_atomic_print_state(const struct drm_atomic_state *state)
|
||||
|
||||
DRM_DEBUG_ATOMIC("checking %p\n", state);
|
||||
|
||||
for_each_plane_in_state(state, plane, plane_state, i)
|
||||
for_each_new_plane_in_state(state, plane, plane_state, i)
|
||||
drm_atomic_plane_print_state(&p, plane_state);
|
||||
|
||||
for_each_crtc_in_state(state, crtc, crtc_state, i)
|
||||
for_each_new_crtc_in_state(state, crtc, crtc_state, i)
|
||||
drm_atomic_crtc_print_state(&p, crtc_state);
|
||||
|
||||
for_each_connector_in_state(state, connector, connector_state, i)
|
||||
for_each_new_connector_in_state(state, connector, connector_state, i)
|
||||
drm_atomic_connector_print_state(&p, connector_state);
|
||||
}
|
||||
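This hunk is part of the series-wide switch to explicit old/new state iterators. As a hedged illustration (the function and debug message are made up; the macro is the one used elsewhere in this diff), a driver could inspect both sides of a transition like so:

static void example_log_crtc_transitions(struct drm_atomic_state *state)
{
	struct drm_crtc *crtc;
	struct drm_crtc_state *old_state, *new_state;
	int i;

	/* Walk the old and new CRTC states side by side. */
	for_each_oldnew_crtc_in_state(state, crtc, old_state, new_state, i)
		DRM_DEBUG_ATOMIC("[CRTC:%d:%s] active %d -> %d\n",
				 crtc->base.id, crtc->name,
				 old_state->active, new_state->active);
}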
|
||||
@ -1694,10 +1722,10 @@ void drm_state_dump(struct drm_device *dev, struct drm_printer *p)
|
||||
list_for_each_entry(crtc, &config->crtc_list, head)
|
||||
drm_atomic_crtc_print_state(p, crtc->state);
|
||||
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter)
|
||||
drm_atomic_connector_print_state(p, connector->state);
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_state_dump);
|
||||
|
||||
@ -1837,12 +1865,12 @@ void drm_atomic_clean_old_fb(struct drm_device *dev,
|
||||
if (ret == 0) {
|
||||
struct drm_framebuffer *new_fb = plane->state->fb;
|
||||
if (new_fb)
|
||||
drm_framebuffer_reference(new_fb);
|
||||
drm_framebuffer_get(new_fb);
|
||||
plane->fb = new_fb;
|
||||
plane->crtc = plane->state->crtc;
|
||||
|
||||
if (plane->old_fb)
|
||||
drm_framebuffer_unreference(plane->old_fb);
|
||||
drm_framebuffer_put(plane->old_fb);
|
||||
}
|
||||
plane->old_fb = NULL;
|
||||
}
|
||||
@ -1938,7 +1966,7 @@ static int prepare_crtc_signaling(struct drm_device *dev,
|
||||
if (arg->flags & DRM_MODE_ATOMIC_TEST_ONLY)
|
||||
return 0;
|
||||
|
||||
for_each_crtc_in_state(state, crtc, crtc_state, i) {
|
||||
for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
|
||||
s32 __user *fence_ptr;
|
||||
|
||||
fence_ptr = get_out_fence_for_crtc(crtc_state->state, crtc);
|
||||
@ -2018,7 +2046,7 @@ static void complete_crtc_signaling(struct drm_device *dev,
|
||||
return;
|
||||
}
|
||||
|
||||
for_each_crtc_in_state(state, crtc, crtc_state, i) {
|
||||
for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
|
||||
struct drm_pending_vblank_event *event = crtc_state->event;
|
||||
/*
|
||||
* Free the allocated event. drm_atomic_helper_setup_commit
|
||||
@ -2049,6 +2077,94 @@ static void complete_crtc_signaling(struct drm_device *dev,
|
||||
kfree(fence_state);
|
||||
}
|
||||
|
||||
int drm_atomic_remove_fb(struct drm_framebuffer *fb)
|
||||
{
|
||||
struct drm_modeset_acquire_ctx ctx;
|
||||
struct drm_device *dev = fb->dev;
|
||||
struct drm_atomic_state *state;
|
||||
struct drm_plane *plane;
|
||||
struct drm_connector *conn;
|
||||
struct drm_connector_state *conn_state;
|
||||
int i, ret = 0;
|
||||
unsigned plane_mask;
|
||||
|
||||
state = drm_atomic_state_alloc(dev);
|
||||
if (!state)
|
||||
return -ENOMEM;
|
||||
|
||||
drm_modeset_acquire_init(&ctx, 0);
|
||||
state->acquire_ctx = &ctx;
|
||||
|
||||
retry:
|
||||
plane_mask = 0;
|
||||
ret = drm_modeset_lock_all_ctx(dev, &ctx);
|
||||
if (ret)
|
||||
goto unlock;
|
||||
|
||||
drm_for_each_plane(plane, dev) {
|
||||
struct drm_plane_state *plane_state;
|
||||
|
||||
if (plane->state->fb != fb)
|
||||
continue;
|
||||
|
||||
plane_state = drm_atomic_get_plane_state(state, plane);
|
||||
if (IS_ERR(plane_state)) {
|
||||
ret = PTR_ERR(plane_state);
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
if (plane_state->crtc->primary == plane) {
|
||||
struct drm_crtc_state *crtc_state;
|
||||
|
||||
crtc_state = drm_atomic_get_existing_crtc_state(state, plane_state->crtc);
|
||||
|
||||
ret = drm_atomic_add_affected_connectors(state, plane_state->crtc);
|
||||
if (ret)
|
||||
goto unlock;
|
||||
|
||||
crtc_state->active = false;
|
||||
ret = drm_atomic_set_mode_for_crtc(crtc_state, NULL);
|
||||
if (ret)
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
drm_atomic_set_fb_for_plane(plane_state, NULL);
|
||||
ret = drm_atomic_set_crtc_for_plane(plane_state, NULL);
|
||||
if (ret)
|
||||
goto unlock;
|
||||
|
||||
plane_mask |= BIT(drm_plane_index(plane));
|
||||
|
||||
plane->old_fb = plane->fb;
|
||||
}
|
||||
|
||||
for_each_connector_in_state(state, conn, conn_state, i) {
|
||||
ret = drm_atomic_set_crtc_for_connector(conn_state, NULL);
|
||||
|
||||
if (ret)
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
if (plane_mask)
|
||||
ret = drm_atomic_commit(state);
|
||||
|
||||
unlock:
|
||||
if (plane_mask)
|
||||
drm_atomic_clean_old_fb(dev, plane_mask, ret);
|
||||
|
||||
if (ret == -EDEADLK) {
|
||||
drm_modeset_backoff(&ctx);
|
||||
goto retry;
|
||||
}
|
||||
|
||||
drm_atomic_state_put(state);
|
||||
|
||||
drm_modeset_drop_locks(&ctx);
|
||||
drm_modeset_acquire_fini(&ctx);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
int drm_mode_atomic_ioctl(struct drm_device *dev,
|
||||
void *data, struct drm_file *file_priv)
|
||||
{
|
||||
@ -2122,13 +2238,13 @@ retry:
|
||||
}
|
||||
|
||||
if (!obj->properties) {
|
||||
drm_mode_object_unreference(obj);
|
||||
drm_mode_object_put(obj);
|
||||
ret = -ENOENT;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (get_user(count_props, count_props_ptr + copied_objs)) {
|
||||
drm_mode_object_unreference(obj);
|
||||
drm_mode_object_put(obj);
|
||||
ret = -EFAULT;
|
||||
goto out;
|
||||
}
|
||||
@ -2141,14 +2257,14 @@ retry:
|
||||
struct drm_property *prop;
|
||||
|
||||
if (get_user(prop_id, props_ptr + copied_props)) {
|
||||
drm_mode_object_unreference(obj);
|
||||
drm_mode_object_put(obj);
|
||||
ret = -EFAULT;
|
||||
goto out;
|
||||
}
|
||||
|
||||
prop = drm_mode_obj_find_prop_id(obj, prop_id);
|
||||
if (!prop) {
|
||||
drm_mode_object_unreference(obj);
|
||||
drm_mode_object_put(obj);
|
||||
ret = -ENOENT;
|
||||
goto out;
|
||||
}
|
||||
@ -2156,14 +2272,14 @@ retry:
|
||||
if (copy_from_user(&prop_value,
|
||||
prop_values_ptr + copied_props,
|
||||
sizeof(prop_value))) {
|
||||
drm_mode_object_unreference(obj);
|
||||
drm_mode_object_put(obj);
|
||||
ret = -EFAULT;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = atomic_set_prop(state, obj, prop, prop_value);
|
||||
if (ret) {
|
||||
drm_mode_object_unreference(obj);
|
||||
drm_mode_object_put(obj);
|
||||
goto out;
|
||||
}
|
||||
|
||||
@ -2176,7 +2292,7 @@ retry:
|
||||
plane_mask |= (1 << drm_plane_index(plane));
|
||||
plane->old_fb = plane->fb;
|
||||
}
|
||||
drm_mode_object_unreference(obj);
|
||||
drm_mode_object_put(obj);
|
||||
}
|
||||
|
||||
ret = prepare_crtc_signaling(dev, state, arg, file_priv, &fence_state,
|
||||
|
@ -145,7 +145,7 @@ static int handle_conflicting_encoders(struct drm_atomic_state *state,
|
||||
* and the crtc is disabled if no encoder is left. This preserves
|
||||
* compatibility with the legacy set_config behavior.
|
||||
*/
|
||||
drm_connector_list_iter_get(state->dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(state->dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter) {
|
||||
struct drm_crtc_state *crtc_state;
|
||||
|
||||
@ -193,7 +193,7 @@ static int handle_conflicting_encoders(struct drm_atomic_state *state,
|
||||
}
|
||||
}
|
||||
out:
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
|
||||
return ret;
|
||||
}
|
||||
@ -322,10 +322,11 @@ update_connector_routing(struct drm_atomic_state *state,
|
||||
}
|
||||
|
||||
if (!drm_encoder_crtc_ok(new_encoder, connector_state->crtc)) {
|
||||
DRM_DEBUG_ATOMIC("[ENCODER:%d:%s] incompatible with [CRTC:%d]\n",
|
||||
DRM_DEBUG_ATOMIC("[ENCODER:%d:%s] incompatible with [CRTC:%d:%s]\n",
|
||||
new_encoder->base.id,
|
||||
new_encoder->name,
|
||||
connector_state->crtc->base.id);
|
||||
connector_state->crtc->base.id,
|
||||
connector_state->crtc->name);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
@ -529,6 +530,13 @@ drm_atomic_helper_check_modeset(struct drm_device *dev,
|
||||
connector_state);
|
||||
if (ret)
|
||||
return ret;
|
||||
if (connector->state->crtc) {
|
||||
crtc_state = drm_atomic_get_existing_crtc_state(state,
|
||||
connector->state->crtc);
|
||||
if (connector->state->link_status !=
|
||||
connector_state->link_status)
|
||||
crtc_state->connectors_changed = true;
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
@ -1119,7 +1127,8 @@ drm_atomic_helper_wait_for_vblanks(struct drm_device *dev,
|
||||
drm_crtc_vblank_count(crtc),
|
||||
msecs_to_jiffies(50));
|
||||
|
||||
WARN(!ret, "[CRTC:%d] vblank wait timed out\n", crtc->base.id);
|
||||
WARN(!ret, "[CRTC:%d:%s] vblank wait timed out\n",
|
||||
crtc->base.id, crtc->name);
|
||||
|
||||
drm_crtc_vblank_put(crtc);
|
||||
}
|
||||
@ -1170,7 +1179,7 @@ EXPORT_SYMBOL(drm_atomic_helper_commit_tail);
|
||||
static void commit_tail(struct drm_atomic_state *old_state)
|
||||
{
|
||||
struct drm_device *dev = old_state->dev;
|
||||
struct drm_mode_config_helper_funcs *funcs;
|
||||
const struct drm_mode_config_helper_funcs *funcs;
|
||||
|
||||
funcs = dev->mode_config.helper_private;
|
||||
|
||||
@ -1977,11 +1986,11 @@ void drm_atomic_helper_swap_state(struct drm_atomic_state *state,
|
||||
int i;
|
||||
long ret;
|
||||
struct drm_connector *connector;
|
||||
struct drm_connector_state *conn_state;
|
||||
struct drm_connector_state *conn_state, *old_conn_state;
|
||||
struct drm_crtc *crtc;
|
||||
struct drm_crtc_state *crtc_state;
|
||||
struct drm_crtc_state *crtc_state, *old_crtc_state;
|
||||
struct drm_plane *plane;
|
||||
struct drm_plane_state *plane_state;
|
||||
struct drm_plane_state *plane_state, *old_plane_state;
|
||||
struct drm_crtc_commit *commit;
|
||||
|
||||
if (stall) {
|
||||
@ -2005,13 +2014,17 @@ void drm_atomic_helper_swap_state(struct drm_atomic_state *state,
|
||||
}
|
||||
}
|
||||
|
||||
for_each_connector_in_state(state, connector, conn_state, i) {
|
||||
for_each_oldnew_connector_in_state(state, connector, old_conn_state, conn_state, i) {
|
||||
WARN_ON(connector->state != old_conn_state);
|
||||
|
||||
connector->state->state = state;
|
||||
swap(state->connectors[i].state, connector->state);
|
||||
connector->state->state = NULL;
|
||||
}
|
||||
|
||||
for_each_crtc_in_state(state, crtc, crtc_state, i) {
|
||||
for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, crtc_state, i) {
|
||||
WARN_ON(crtc->state != old_crtc_state);
|
||||
|
||||
crtc->state->state = state;
|
||||
swap(state->crtcs[i].state, crtc->state);
|
||||
crtc->state->state = NULL;
|
||||
@ -2026,7 +2039,9 @@ void drm_atomic_helper_swap_state(struct drm_atomic_state *state,
|
||||
}
|
||||
}
|
||||
|
||||
for_each_plane_in_state(state, plane, plane_state, i) {
|
||||
for_each_oldnew_plane_in_state(state, plane, old_plane_state, plane_state, i) {
|
||||
WARN_ON(plane->state != old_plane_state);
|
||||
|
||||
plane->state->state = state;
|
||||
swap(state->planes[i].state, plane->state);
|
||||
plane->state->state = NULL;
|
||||
@ -2233,6 +2248,8 @@ static int update_output_state(struct drm_atomic_state *state,
|
||||
NULL);
|
||||
if (ret)
|
||||
return ret;
|
||||
/* Make sure legacy setCrtc always re-trains */
|
||||
conn_state->link_status = DRM_LINK_STATUS_GOOD;
|
||||
}
|
||||
}
|
||||
|
||||
@ -2276,6 +2293,12 @@ static int update_output_state(struct drm_atomic_state *state,
|
||||
*
|
||||
* Provides a default crtc set_config handler using the atomic driver interface.
|
||||
*
|
||||
* NOTE: For backwards compatibility with old userspace this automatically
|
||||
* resets the "link-status" property to GOOD, to force any link
|
||||
* re-training. The SETCRTC ioctl does not define whether an update does
|
||||
* need a full modeset or just a plane update, hence we're allowed to do
|
||||
* that. See also drm_mode_connector_set_link_status_property().
|
||||
*
|
||||
* Returns:
|
||||
* Returns 0 on success, negative errno numbers on failure.
|
||||
*/
|
||||
@ -2419,9 +2442,13 @@ int drm_atomic_helper_disable_all(struct drm_device *dev,
|
||||
struct drm_modeset_acquire_ctx *ctx)
|
||||
{
|
||||
struct drm_atomic_state *state;
|
||||
struct drm_connector_state *conn_state;
|
||||
struct drm_connector *conn;
|
||||
struct drm_connector_list_iter conn_iter;
|
||||
int err;
|
||||
struct drm_plane_state *plane_state;
|
||||
struct drm_plane *plane;
|
||||
struct drm_crtc_state *crtc_state;
|
||||
struct drm_crtc *crtc;
|
||||
int ret, i;
|
||||
|
||||
state = drm_atomic_state_alloc(dev);
|
||||
if (!state)
|
||||
@ -2429,29 +2456,48 @@ int drm_atomic_helper_disable_all(struct drm_device *dev,
|
||||
|
||||
state->acquire_ctx = ctx;
|
||||
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(conn, &conn_iter) {
|
||||
struct drm_crtc *crtc = conn->state->crtc;
|
||||
struct drm_crtc_state *crtc_state;
|
||||
|
||||
if (!crtc || conn->dpms != DRM_MODE_DPMS_ON)
|
||||
continue;
|
||||
|
||||
drm_for_each_crtc(crtc, dev) {
|
||||
crtc_state = drm_atomic_get_crtc_state(state, crtc);
|
||||
if (IS_ERR(crtc_state)) {
|
||||
err = PTR_ERR(crtc_state);
|
||||
ret = PTR_ERR(crtc_state);
|
||||
goto free;
|
||||
}
|
||||
|
||||
crtc_state->active = false;
|
||||
|
||||
ret = drm_atomic_set_mode_prop_for_crtc(crtc_state, NULL);
|
||||
if (ret < 0)
|
||||
goto free;
|
||||
|
||||
ret = drm_atomic_add_affected_planes(state, crtc);
|
||||
if (ret < 0)
|
||||
goto free;
|
||||
|
||||
ret = drm_atomic_add_affected_connectors(state, crtc);
|
||||
if (ret < 0)
|
||||
goto free;
|
||||
}
|
||||
|
||||
err = drm_atomic_commit(state);
|
||||
for_each_connector_in_state(state, conn, conn_state, i) {
|
||||
ret = drm_atomic_set_crtc_for_connector(conn_state, NULL);
|
||||
if (ret < 0)
|
||||
goto free;
|
||||
}
|
||||
|
||||
for_each_plane_in_state(state, plane, plane_state, i) {
|
||||
ret = drm_atomic_set_crtc_for_plane(plane_state, NULL);
|
||||
if (ret < 0)
|
||||
goto free;
|
||||
|
||||
drm_atomic_set_fb_for_plane(plane_state, NULL);
|
||||
}
|
||||
|
||||
ret = drm_atomic_commit(state);
|
||||
free:
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_atomic_state_put(state);
|
||||
return err;
|
||||
return ret;
|
||||
}
|
||||
|
||||
EXPORT_SYMBOL(drm_atomic_helper_disable_all);
|
||||
|
||||
/**
|
||||
@ -2477,7 +2523,7 @@ EXPORT_SYMBOL(drm_atomic_helper_disable_all);
|
||||
*
|
||||
* See also:
|
||||
* drm_atomic_helper_duplicate_state(), drm_atomic_helper_disable_all(),
|
||||
* drm_atomic_helper_resume()
|
||||
* drm_atomic_helper_resume(), drm_atomic_helper_commit_duplicated_state()
|
||||
*/
|
||||
struct drm_atomic_state *drm_atomic_helper_suspend(struct drm_device *dev)
|
||||
{
|
||||
@@ -2517,6 +2563,47 @@ unlock:
}
EXPORT_SYMBOL(drm_atomic_helper_suspend);

/**
 * drm_atomic_helper_commit_duplicated_state - commit duplicated state
 * @state: duplicated atomic state to commit
 * @ctx: pointer to acquire_ctx to use for commit.
 *
 * The state returned by drm_atomic_helper_duplicate_state() and
 * drm_atomic_helper_suspend() is partially invalid, and needs to
 * be fixed up before commit.
 *
 * Returns:
 * 0 on success or a negative error code on failure.
 *
 * See also:
 * drm_atomic_helper_suspend()
 */
int drm_atomic_helper_commit_duplicated_state(struct drm_atomic_state *state,
                                              struct drm_modeset_acquire_ctx *ctx)
{
        int i;
        struct drm_plane *plane;
        struct drm_plane_state *plane_state;
        struct drm_connector *connector;
        struct drm_connector_state *conn_state;
        struct drm_crtc *crtc;
        struct drm_crtc_state *crtc_state;

        state->acquire_ctx = ctx;

        for_each_new_plane_in_state(state, plane, plane_state, i)
                state->planes[i].old_state = plane->state;

        for_each_new_crtc_in_state(state, crtc, crtc_state, i)
                state->crtcs[i].old_state = crtc->state;

        for_each_new_connector_in_state(state, connector, conn_state, i)
                state->connectors[i].old_state = connector->state;

        return drm_atomic_commit(state);
}
EXPORT_SYMBOL(drm_atomic_helper_commit_duplicated_state);

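To make the intended pairing concrete, here is a minimal sketch of suspend/resume callbacks built on these helpers. It is an illustration only; the struct my_driver wrapper and its suspend_state member are hypothetical names, not something this patch adds. Note that drm_atomic_helper_resume() (changed further down) now wraps drm_atomic_helper_commit_duplicated_state() internally, so most drivers only need the pair shown here.

/* Sketch, assuming a driver-private wrapper that stores the suspended state. */
struct my_driver {
        struct drm_device *drm;
        struct drm_atomic_state *suspend_state;
};

static int my_driver_suspend(struct my_driver *priv)
{
        struct drm_atomic_state *state;

        state = drm_atomic_helper_suspend(priv->drm);
        if (IS_ERR(state))
                return PTR_ERR(state);

        priv->suspend_state = state;
        return 0;
}

static int my_driver_resume(struct my_driver *priv)
{
        /* Re-commits the duplicated state that was saved at suspend time. */
        return drm_atomic_helper_resume(priv->drm, priv->suspend_state);
}
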
/**
|
||||
* drm_atomic_helper_resume - subsystem-level resume helper
|
||||
* @dev: DRM device
|
||||
@ -2540,9 +2627,9 @@ int drm_atomic_helper_resume(struct drm_device *dev,
|
||||
int err;
|
||||
|
||||
drm_mode_config_reset(dev);
|
||||
|
||||
drm_modeset_lock_all(dev);
|
||||
state->acquire_ctx = config->acquire_ctx;
|
||||
err = drm_atomic_commit(state);
|
||||
err = drm_atomic_helper_commit_duplicated_state(state, config->acquire_ctx);
|
||||
drm_modeset_unlock_all(dev);
|
||||
|
||||
return err;
|
||||
@ -2718,7 +2805,8 @@ static int page_flip_common(
|
||||
struct drm_atomic_state *state,
|
||||
struct drm_crtc *crtc,
|
||||
struct drm_framebuffer *fb,
|
||||
struct drm_pending_vblank_event *event)
|
||||
struct drm_pending_vblank_event *event,
|
||||
uint32_t flags)
|
||||
{
|
||||
struct drm_plane *plane = crtc->primary;
|
||||
struct drm_plane_state *plane_state;
|
||||
@ -2730,12 +2818,12 @@ static int page_flip_common(
|
||||
return PTR_ERR(crtc_state);
|
||||
|
||||
crtc_state->event = event;
|
||||
crtc_state->pageflip_flags = flags;
|
||||
|
||||
plane_state = drm_atomic_get_plane_state(state, plane);
|
||||
if (IS_ERR(plane_state))
|
||||
return PTR_ERR(plane_state);
|
||||
|
||||
|
||||
ret = drm_atomic_set_crtc_for_plane(plane_state, crtc);
|
||||
if (ret != 0)
|
||||
return ret;
|
||||
@ -2744,8 +2832,8 @@ static int page_flip_common(
|
||||
/* Make sure we don't accidentally do a full modeset. */
|
||||
state->allow_modeset = false;
|
||||
if (!crtc_state->active) {
|
||||
DRM_DEBUG_ATOMIC("[CRTC:%d] disabled, rejecting legacy flip\n",
|
||||
crtc->base.id);
|
||||
DRM_DEBUG_ATOMIC("[CRTC:%d:%s] disabled, rejecting legacy flip\n",
|
||||
crtc->base.id, crtc->name);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
@ -2762,10 +2850,6 @@ static int page_flip_common(
|
||||
* Provides a default &drm_crtc_funcs.page_flip implementation
|
||||
* using the atomic driver interface.
|
||||
*
|
||||
* Note that for now so called async page flips (i.e. updates which are not
|
||||
* synchronized to vblank) are not supported, since the atomic interfaces have
|
||||
* no provisions for this yet.
|
||||
*
|
||||
* Returns:
|
||||
* Returns 0 on success, negative errno numbers on failure.
|
||||
*
|
||||
@ -2781,9 +2865,6 @@ int drm_atomic_helper_page_flip(struct drm_crtc *crtc,
|
||||
struct drm_atomic_state *state;
|
||||
int ret = 0;
|
||||
|
||||
if (flags & DRM_MODE_PAGE_FLIP_ASYNC)
|
||||
return -EINVAL;
|
||||
|
||||
state = drm_atomic_state_alloc(plane->dev);
|
||||
if (!state)
|
||||
return -ENOMEM;
|
||||
@ -2791,7 +2872,7 @@ int drm_atomic_helper_page_flip(struct drm_crtc *crtc,
|
||||
state->acquire_ctx = drm_modeset_legacy_acquire_ctx(crtc);
|
||||
|
||||
retry:
|
||||
ret = page_flip_common(state, crtc, fb, event);
|
||||
ret = page_flip_common(state, crtc, fb, event, flags);
|
||||
if (ret != 0)
|
||||
goto fail;
|
||||
|
||||
@ -2846,9 +2927,6 @@ int drm_atomic_helper_page_flip_target(
|
||||
struct drm_crtc_state *crtc_state;
|
||||
int ret = 0;
|
||||
|
||||
if (flags & DRM_MODE_PAGE_FLIP_ASYNC)
|
||||
return -EINVAL;
|
||||
|
||||
state = drm_atomic_state_alloc(plane->dev);
|
||||
if (!state)
|
||||
return -ENOMEM;
|
||||
@ -2856,7 +2934,7 @@ int drm_atomic_helper_page_flip_target(
|
||||
state->acquire_ctx = drm_modeset_legacy_acquire_ctx(crtc);
|
||||
|
||||
retry:
|
||||
ret = page_flip_common(state, crtc, fb, event);
|
||||
ret = page_flip_common(state, crtc, fb, event, flags);
|
||||
if (ret != 0)
|
||||
goto fail;
|
||||
|
||||
@ -2940,7 +3018,7 @@ retry:
|
||||
|
||||
WARN_ON(!drm_modeset_is_locked(&config->connection_mutex));
|
||||
|
||||
drm_connector_list_iter_get(connector->dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(connector->dev, &conn_iter);
|
||||
drm_for_each_connector_iter(tmp_connector, &conn_iter) {
|
||||
if (tmp_connector->state->crtc != crtc)
|
||||
continue;
|
||||
@ -2950,7 +3028,7 @@ retry:
|
||||
break;
|
||||
}
|
||||
}
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
crtc_state->active = active;
|
||||
|
||||
ret = drm_atomic_commit(state);
|
||||
@ -3042,13 +3120,13 @@ void __drm_atomic_helper_crtc_duplicate_state(struct drm_crtc *crtc,
|
||||
memcpy(state, crtc->state, sizeof(*state));
|
||||
|
||||
if (state->mode_blob)
|
||||
drm_property_reference_blob(state->mode_blob);
|
||||
drm_property_blob_get(state->mode_blob);
|
||||
if (state->degamma_lut)
|
||||
drm_property_reference_blob(state->degamma_lut);
|
||||
drm_property_blob_get(state->degamma_lut);
|
||||
if (state->ctm)
|
||||
drm_property_reference_blob(state->ctm);
|
||||
drm_property_blob_get(state->ctm);
|
||||
if (state->gamma_lut)
|
||||
drm_property_reference_blob(state->gamma_lut);
|
||||
drm_property_blob_get(state->gamma_lut);
|
||||
state->mode_changed = false;
|
||||
state->active_changed = false;
|
||||
state->planes_changed = false;
|
||||
@ -3056,6 +3134,7 @@ void __drm_atomic_helper_crtc_duplicate_state(struct drm_crtc *crtc,
|
||||
state->color_mgmt_changed = false;
|
||||
state->zpos_changed = false;
|
||||
state->event = NULL;
|
||||
state->pageflip_flags = 0;
|
||||
}
|
||||
EXPORT_SYMBOL(__drm_atomic_helper_crtc_duplicate_state);
|
||||
|
||||
@ -3092,10 +3171,10 @@ EXPORT_SYMBOL(drm_atomic_helper_crtc_duplicate_state);
|
||||
*/
|
||||
void __drm_atomic_helper_crtc_destroy_state(struct drm_crtc_state *state)
|
||||
{
|
||||
drm_property_unreference_blob(state->mode_blob);
|
||||
drm_property_unreference_blob(state->degamma_lut);
|
||||
drm_property_unreference_blob(state->ctm);
|
||||
drm_property_unreference_blob(state->gamma_lut);
|
||||
drm_property_blob_put(state->mode_blob);
|
||||
drm_property_blob_put(state->degamma_lut);
|
||||
drm_property_blob_put(state->ctm);
|
||||
drm_property_blob_put(state->gamma_lut);
|
||||
}
|
||||
EXPORT_SYMBOL(__drm_atomic_helper_crtc_destroy_state);
|
||||
|
||||
@ -3151,7 +3230,7 @@ void __drm_atomic_helper_plane_duplicate_state(struct drm_plane *plane,
|
||||
memcpy(state, plane->state, sizeof(*state));
|
||||
|
||||
if (state->fb)
|
||||
drm_framebuffer_reference(state->fb);
|
||||
drm_framebuffer_get(state->fb);
|
||||
|
||||
state->fence = NULL;
|
||||
}
|
||||
@ -3191,7 +3270,7 @@ EXPORT_SYMBOL(drm_atomic_helper_plane_duplicate_state);
|
||||
void __drm_atomic_helper_plane_destroy_state(struct drm_plane_state *state)
|
||||
{
|
||||
if (state->fb)
|
||||
drm_framebuffer_unreference(state->fb);
|
||||
drm_framebuffer_put(state->fb);
|
||||
|
||||
if (state->fence)
|
||||
dma_fence_put(state->fence);
|
||||
@ -3272,7 +3351,7 @@ __drm_atomic_helper_connector_duplicate_state(struct drm_connector *connector,
|
||||
{
|
||||
memcpy(state, connector->state, sizeof(*state));
|
||||
if (state->crtc)
|
||||
drm_connector_reference(connector);
|
||||
drm_connector_get(connector);
|
||||
}
|
||||
EXPORT_SYMBOL(__drm_atomic_helper_connector_duplicate_state);
|
||||
|
||||
@ -3360,18 +3439,18 @@ drm_atomic_helper_duplicate_state(struct drm_device *dev,
|
||||
}
|
||||
}
|
||||
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(conn, &conn_iter) {
|
||||
struct drm_connector_state *conn_state;
|
||||
|
||||
conn_state = drm_atomic_get_connector_state(state, conn);
|
||||
if (IS_ERR(conn_state)) {
|
||||
err = PTR_ERR(conn_state);
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
goto free;
|
||||
}
|
||||
}
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
|
||||
/* clear the acquire context so that it isn't accidentally reused */
|
||||
state->acquire_ctx = NULL;
|
||||
@ -3398,7 +3477,7 @@ void
|
||||
__drm_atomic_helper_connector_destroy_state(struct drm_connector_state *state)
|
||||
{
|
||||
if (state->crtc)
|
||||
drm_connector_unreference(state->connector);
|
||||
drm_connector_put(state->connector);
|
||||
}
|
||||
EXPORT_SYMBOL(__drm_atomic_helper_connector_destroy_state);
|
||||
|
||||
@ -3493,7 +3572,7 @@ fail:
|
||||
goto backoff;
|
||||
|
||||
drm_atomic_state_put(state);
|
||||
drm_property_unreference_blob(blob);
|
||||
drm_property_blob_put(blob);
|
||||
return ret;
|
||||
|
||||
backoff:
|
||||
|
@ -88,7 +88,7 @@ drm_clflush_pages(struct page *pages[], unsigned long num_pages)
|
||||
}
|
||||
|
||||
if (wbinvd_on_all_cpus())
|
||||
printk(KERN_ERR "Timed out waiting for cache flush.\n");
|
||||
pr_err("Timed out waiting for cache flush\n");
|
||||
|
||||
#elif defined(__powerpc__)
|
||||
unsigned long i;
|
||||
@ -105,7 +105,7 @@ drm_clflush_pages(struct page *pages[], unsigned long num_pages)
|
||||
kunmap_atomic(page_virtual);
|
||||
}
|
||||
#else
|
||||
printk(KERN_ERR "Architecture has no drm_cache.c support\n");
|
||||
pr_err("Architecture has no drm_cache.c support\n");
|
||||
WARN_ON_ONCE(1);
|
||||
#endif
|
||||
}
|
||||
@ -134,9 +134,9 @@ drm_clflush_sg(struct sg_table *st)
|
||||
}
|
||||
|
||||
if (wbinvd_on_all_cpus())
|
||||
printk(KERN_ERR "Timed out waiting for cache flush.\n");
|
||||
pr_err("Timed out waiting for cache flush\n");
|
||||
#else
|
||||
printk(KERN_ERR "Architecture has no drm_cache.c support\n");
|
||||
pr_err("Architecture has no drm_cache.c support\n");
|
||||
WARN_ON_ONCE(1);
|
||||
#endif
|
||||
}
|
||||
@ -167,9 +167,9 @@ drm_clflush_virt_range(void *addr, unsigned long length)
|
||||
}
|
||||
|
||||
if (wbinvd_on_all_cpus())
|
||||
printk(KERN_ERR "Timed out waiting for cache flush.\n");
|
||||
pr_err("Timed out waiting for cache flush\n");
|
||||
#else
|
||||
printk(KERN_ERR "Architecture has no drm_cache.c support\n");
|
||||
pr_err("Architecture has no drm_cache.c support\n");
|
||||
WARN_ON_ONCE(1);
|
||||
#endif
|
||||
}
|
||||
|
@ -35,8 +35,8 @@
|
||||
* als fixed panels or anything else that can display pixels in some form. As
|
||||
* opposed to all other KMS objects representing hardware (like CRTC, encoder or
|
||||
* plane abstractions) connectors can be hotplugged and unplugged at runtime.
|
||||
* Hence they are reference-counted using drm_connector_reference() and
|
||||
* drm_connector_unreference().
|
||||
* Hence they are reference-counted using drm_connector_get() and
|
||||
* drm_connector_put().
|
||||
*
|
||||
* KMS driver must create, initialize, register and attach at a &struct
|
||||
* drm_connector for each such sink. The instance is created as other KMS
|
||||
@ -128,22 +128,8 @@ static void drm_connector_get_cmdline_mode(struct drm_connector *connector)
|
||||
return;
|
||||
|
||||
if (mode->force) {
|
||||
const char *s;
|
||||
|
||||
switch (mode->force) {
|
||||
case DRM_FORCE_OFF:
|
||||
s = "OFF";
|
||||
break;
|
||||
case DRM_FORCE_ON_DIGITAL:
|
||||
s = "ON - dig";
|
||||
break;
|
||||
default:
|
||||
case DRM_FORCE_ON:
|
||||
s = "ON";
|
||||
break;
|
||||
}
|
||||
|
||||
DRM_INFO("forcing %s connector %s\n", connector->name, s);
|
||||
DRM_INFO("forcing %s connector %s\n", connector->name,
|
||||
drm_get_connector_force_name(mode->force));
|
||||
connector->force = mode->force;
|
||||
}
|
||||
|
||||
@ -189,9 +175,9 @@ int drm_connector_init(struct drm_device *dev,
|
||||
struct ida *connector_ida =
|
||||
&drm_connector_enum_list[connector_type].ida;
|
||||
|
||||
ret = drm_mode_object_get_reg(dev, &connector->base,
|
||||
DRM_MODE_OBJECT_CONNECTOR,
|
||||
false, drm_connector_free);
|
||||
ret = __drm_mode_object_add(dev, &connector->base,
|
||||
DRM_MODE_OBJECT_CONNECTOR,
|
||||
false, drm_connector_free);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@ -244,6 +230,10 @@ int drm_connector_init(struct drm_device *dev,
|
||||
drm_object_attach_property(&connector->base,
|
||||
config->dpms_property, 0);
|
||||
|
||||
drm_object_attach_property(&connector->base,
|
||||
config->link_status_property,
|
||||
0);
|
||||
|
||||
if (drm_core_check_feature(dev, DRIVER_ATOMIC)) {
|
||||
drm_object_attach_property(&connector->base, config->prop_crtc_id, 0);
|
||||
}
|
||||
@ -445,10 +435,10 @@ void drm_connector_unregister_all(struct drm_device *dev)
|
||||
struct drm_connector *connector;
|
||||
struct drm_connector_list_iter conn_iter;
|
||||
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter)
|
||||
drm_connector_unregister(connector);
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
}
|
||||
|
||||
int drm_connector_register_all(struct drm_device *dev)
|
||||
@ -457,13 +447,13 @@ int drm_connector_register_all(struct drm_device *dev)
|
||||
struct drm_connector_list_iter conn_iter;
|
||||
int ret = 0;
|
||||
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter) {
|
||||
ret = drm_connector_register(connector);
|
||||
if (ret)
|
||||
break;
|
||||
}
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
|
||||
if (ret)
|
||||
drm_connector_unregister_all(dev);
|
||||
@ -488,6 +478,28 @@ const char *drm_get_connector_status_name(enum drm_connector_status status)
|
||||
}
|
||||
EXPORT_SYMBOL(drm_get_connector_status_name);
|
||||
|
||||
/**
|
||||
* drm_get_connector_force_name - return a string for connector force
|
||||
* @force: connector force to get name of
|
||||
*
|
||||
* Returns: const pointer to name.
|
||||
*/
|
||||
const char *drm_get_connector_force_name(enum drm_connector_force force)
|
||||
{
|
||||
switch (force) {
|
||||
case DRM_FORCE_UNSPECIFIED:
|
||||
return "unspecified";
|
||||
case DRM_FORCE_OFF:
|
||||
return "off";
|
||||
case DRM_FORCE_ON:
|
||||
return "on";
|
||||
case DRM_FORCE_ON_DIGITAL:
|
||||
return "digital";
|
||||
default:
|
||||
return "unknown";
|
||||
}
|
||||
}
|
||||
|
||||
#ifdef CONFIG_LOCKDEP
|
||||
static struct lockdep_map connector_list_iter_dep_map = {
|
||||
.name = "drm_connector_list_iter"
|
||||
@ -495,23 +507,23 @@ static struct lockdep_map connector_list_iter_dep_map = {
|
||||
#endif
|
||||
|
||||
/**
 * drm_connector_list_iter_get - initialize a connector_list iterator
 * drm_connector_list_iter_begin - initialize a connector_list iterator
 * @dev: DRM device
 * @iter: connector_list iterator
 *
 * Sets @iter up to walk the &drm_mode_config.connector_list of @dev. @iter
 * must always be cleaned up again by calling drm_connector_list_iter_put().
 * must always be cleaned up again by calling drm_connector_list_iter_end().
 * Iteration itself happens using drm_connector_list_iter_next() or
 * drm_for_each_connector_iter().
 */
void drm_connector_list_iter_get(struct drm_device *dev,
                                 struct drm_connector_list_iter *iter)
void drm_connector_list_iter_begin(struct drm_device *dev,
                                   struct drm_connector_list_iter *iter)
{
        iter->dev = dev;
        iter->conn = NULL;
        lock_acquire_shared_recursive(&connector_list_iter_dep_map, 0, 1, NULL, _RET_IP_);
}
EXPORT_SYMBOL(drm_connector_list_iter_get);
EXPORT_SYMBOL(drm_connector_list_iter_begin);

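For readers following the rename: the calling convention is still the usual begin/iterate/end pattern, paired with drm_for_each_connector_iter() and drm_connector_list_iter_end() as documented just below. A short sketch (the counting function is made up for illustration):

static unsigned int my_count_connectors(struct drm_device *dev)
{
        struct drm_connector_list_iter conn_iter;
        struct drm_connector *connector;
        unsigned int count = 0;

        drm_connector_list_iter_begin(dev, &conn_iter);
        drm_for_each_connector_iter(connector, &conn_iter)
                count++;        /* connector is only borrowed inside the loop */
        drm_connector_list_iter_end(&conn_iter);

        return count;
}
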
/**
|
||||
* drm_connector_list_iter_next - return next connector
|
||||
@ -545,14 +557,14 @@ drm_connector_list_iter_next(struct drm_connector_list_iter *iter)
|
||||
spin_unlock_irqrestore(&config->connector_list_lock, flags);
|
||||
|
||||
if (old_conn)
|
||||
drm_connector_unreference(old_conn);
|
||||
drm_connector_put(old_conn);
|
||||
|
||||
return iter->conn;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_connector_list_iter_next);
|
||||
|
||||
/**
|
||||
* drm_connector_list_iter_put - tear down a connector_list iterator
|
||||
* drm_connector_list_iter_end - tear down a connector_list iterator
|
||||
* @iter: connector_list iterator
|
||||
*
|
||||
* Tears down @iter and releases any resources (like &drm_connector references)
|
||||
@ -560,14 +572,14 @@ EXPORT_SYMBOL(drm_connector_list_iter_next);
|
||||
* iteration completes fully or when it was aborted without walking the entire
|
||||
* list.
|
||||
*/
|
||||
void drm_connector_list_iter_put(struct drm_connector_list_iter *iter)
|
||||
void drm_connector_list_iter_end(struct drm_connector_list_iter *iter)
|
||||
{
|
||||
iter->dev = NULL;
|
||||
if (iter->conn)
|
||||
drm_connector_unreference(iter->conn);
|
||||
drm_connector_put(iter->conn);
|
||||
lock_release(&connector_list_iter_dep_map, 0, _RET_IP_);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_connector_list_iter_put);
|
||||
EXPORT_SYMBOL(drm_connector_list_iter_end);
|
||||
|
||||
static const struct drm_prop_enum_list drm_subpixel_enum_list[] = {
|
||||
{ SubPixelUnknown, "Unknown" },
|
||||
@ -599,6 +611,12 @@ static const struct drm_prop_enum_list drm_dpms_enum_list[] = {
|
||||
};
|
||||
DRM_ENUM_NAME_FN(drm_get_dpms_name, drm_dpms_enum_list)
|
||||
|
||||
static const struct drm_prop_enum_list drm_link_status_enum_list[] = {
|
||||
{ DRM_MODE_LINK_STATUS_GOOD, "Good" },
|
||||
{ DRM_MODE_LINK_STATUS_BAD, "Bad" },
|
||||
};
|
||||
DRM_ENUM_NAME_FN(drm_get_link_status_name, drm_link_status_enum_list)
|
||||
|
||||
/**
|
||||
* drm_display_info_set_bus_formats - set the supported bus formats
|
||||
* @info: display info to store bus formats in
|
||||
@@ -718,6 +736,11 @@ DRM_ENUM_NAME_FN(drm_get_tv_subconnector_name,
 * tiling and virtualize both &drm_crtc and &drm_plane if needed. Drivers
 * should update this value using drm_mode_connector_set_tile_property().
 * Userspace cannot change this property.
 * link-status:
 * Connector link-status property to indicate the status of link. The default
 * value of link-status is "GOOD". If something fails during or after modeset,
 * the kernel driver may set this to "BAD" and issue a hotplug uevent. Drivers
 * should update this value using drm_mode_connector_set_link_status_property().
 *
 * Connectors also have one standardized atomic property:
 *
@ -759,6 +782,13 @@ int drm_connector_create_standard_properties(struct drm_device *dev)
|
||||
return -ENOMEM;
|
||||
dev->mode_config.tile_property = prop;
|
||||
|
||||
prop = drm_property_create_enum(dev, 0, "link-status",
|
||||
drm_link_status_enum_list,
|
||||
ARRAY_SIZE(drm_link_status_enum_list));
|
||||
if (!prop)
|
||||
return -ENOMEM;
|
||||
dev->mode_config.link_status_property = prop;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -1088,6 +1118,36 @@ int drm_mode_connector_update_edid_property(struct drm_connector *connector,
|
||||
}
|
||||
EXPORT_SYMBOL(drm_mode_connector_update_edid_property);
|
||||
|
||||
/**
 * drm_mode_connector_set_link_status_property - Set link status property of a connector
 * @connector: drm connector
 * @link_status: new value of link status property (0: Good, 1: Bad)
 *
 * In usual working scenario, this link status property will always be set to
 * "GOOD". If something fails during or after a mode set, the kernel driver
 * may set this link status property to "BAD". The caller then needs to send a
 * hotplug uevent for userspace to re-check the valid modes through
 * GET_CONNECTOR_IOCTL and retry modeset.
 *
 * Note: Drivers cannot rely on userspace to support this property and
 * issue a modeset. As such, they may choose to handle issues (like
 * re-training a link) without userspace's intervention.
 *
 * The reason for adding this property is to handle link training failures, but
 * it is not limited to DP or link training. For example, if we implement
 * asynchronous setcrtc, this property can be used to report any failures in that.
 */
void drm_mode_connector_set_link_status_property(struct drm_connector *connector,
                                                 uint64_t link_status)
{
        struct drm_device *dev = connector->dev;

        drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
        connector->state->link_status = link_status;
        drm_modeset_unlock(&dev->mode_config.connection_mutex);
}
EXPORT_SYMBOL(drm_mode_connector_set_link_status_property);

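As a rough usage sketch (not part of this patch): a driver that detects a link-training failure marks the connector bad and then kicks userspace with a hotplug uevent. The handler name is hypothetical; drm_kms_helper_hotplug_event() is the existing helper for sending the uevent.

static void my_handle_link_train_failure(struct drm_connector *connector)
{
        /* Tell userspace the link is bad so it re-probes modes and retries. */
        drm_mode_connector_set_link_status_property(connector,
                                                    DRM_MODE_LINK_STATUS_BAD);

        /* The hotplug uevent prompts a fresh GETCONNECTOR/modeset cycle. */
        drm_kms_helper_hotplug_event(connector->dev);
}
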
int drm_mode_connector_set_obj_prop(struct drm_mode_object *obj,
|
||||
struct drm_property *property,
|
||||
uint64_t value)
|
||||
@ -1249,7 +1309,7 @@ int drm_mode_getconnector(struct drm_device *dev, void *data,
|
||||
out:
|
||||
mutex_unlock(&dev->mode_config.mutex);
|
||||
out_unref:
|
||||
drm_connector_unreference(connector);
|
||||
drm_connector_put(connector);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
@ -282,7 +282,7 @@ int drm_crtc_init_with_planes(struct drm_device *dev, struct drm_crtc *crtc,
|
||||
spin_lock_init(&crtc->commit_lock);
|
||||
|
||||
drm_modeset_lock_init(&crtc->mutex);
|
||||
ret = drm_mode_object_get(dev, &crtc->base, DRM_MODE_OBJECT_CRTC);
|
||||
ret = drm_mode_object_add(dev, &crtc->base, DRM_MODE_OBJECT_CRTC);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@ -471,9 +471,9 @@ int drm_mode_set_config_internal(struct drm_mode_set *set)
|
||||
|
||||
drm_for_each_crtc(tmp, crtc->dev) {
|
||||
if (tmp->primary->fb)
|
||||
drm_framebuffer_reference(tmp->primary->fb);
|
||||
drm_framebuffer_get(tmp->primary->fb);
|
||||
if (tmp->primary->old_fb)
|
||||
drm_framebuffer_unreference(tmp->primary->old_fb);
|
||||
drm_framebuffer_put(tmp->primary->old_fb);
|
||||
tmp->primary->old_fb = NULL;
|
||||
}
|
||||
|
||||
@ -567,7 +567,7 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
|
||||
}
|
||||
fb = crtc->primary->fb;
|
||||
/* Make refcounting symmetric with the lookup path. */
|
||||
drm_framebuffer_reference(fb);
|
||||
drm_framebuffer_get(fb);
|
||||
} else {
|
||||
fb = drm_framebuffer_lookup(dev, crtc_req->fb_id);
|
||||
if (!fb) {
|
||||
@ -680,12 +680,12 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
|
||||
|
||||
out:
|
||||
if (fb)
|
||||
drm_framebuffer_unreference(fb);
|
||||
drm_framebuffer_put(fb);
|
||||
|
||||
if (connector_set) {
|
||||
for (i = 0; i < crtc_req->count_connectors; i++) {
|
||||
if (connector_set[i])
|
||||
drm_connector_unreference(connector_set[i]);
|
||||
drm_connector_put(connector_set[i]);
|
||||
}
|
||||
}
|
||||
kfree(connector_set);
|
||||
|
@ -102,14 +102,14 @@ bool drm_helper_encoder_in_use(struct drm_encoder *encoder)
|
||||
}
|
||||
|
||||
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter) {
|
||||
if (connector->encoder == encoder) {
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
return true;
|
||||
}
|
||||
}
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
return false;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_helper_encoder_in_use);
|
||||
@ -449,7 +449,7 @@ drm_crtc_helper_disable(struct drm_crtc *crtc)
|
||||
if (encoder->crtc != crtc)
|
||||
continue;
|
||||
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter) {
|
||||
if (connector->encoder != encoder)
|
||||
continue;
|
||||
@ -465,9 +465,9 @@ drm_crtc_helper_disable(struct drm_crtc *crtc)
|
||||
connector->dpms = DRM_MODE_DPMS_OFF;
|
||||
|
||||
/* we keep a reference while the encoder is bound */
|
||||
drm_connector_unreference(connector);
|
||||
drm_connector_put(connector);
|
||||
}
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
}
|
||||
|
||||
__drm_helper_disable_unused_functions(dev);
|
||||
@ -583,10 +583,10 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
|
||||
}
|
||||
|
||||
count = 0;
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter)
|
||||
save_connector_encoders[count++] = connector->encoder;
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
|
||||
save_set.crtc = set->crtc;
|
||||
save_set.mode = &set->crtc->mode;
|
||||
@ -623,12 +623,12 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
|
||||
for (ro = 0; ro < set->num_connectors; ro++) {
|
||||
if (set->connectors[ro]->encoder)
|
||||
continue;
|
||||
drm_connector_reference(set->connectors[ro]);
|
||||
drm_connector_get(set->connectors[ro]);
|
||||
}
|
||||
|
||||
/* a) traverse passed in connector list and get encoders for them */
|
||||
count = 0;
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter) {
|
||||
const struct drm_connector_helper_funcs *connector_funcs =
|
||||
connector->helper_private;
|
||||
@ -662,7 +662,7 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
|
||||
connector->encoder = new_encoder;
|
||||
}
|
||||
}
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
|
||||
if (fail) {
|
||||
ret = -EINVAL;
|
||||
@ -670,7 +670,7 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
|
||||
}
|
||||
|
||||
count = 0;
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter) {
|
||||
if (!connector->encoder)
|
||||
continue;
|
||||
@ -689,7 +689,7 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
|
||||
if (new_crtc &&
|
||||
!drm_encoder_crtc_ok(connector->encoder, new_crtc)) {
|
||||
ret = -EINVAL;
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
goto fail;
|
||||
}
|
||||
if (new_crtc != connector->encoder->crtc) {
|
||||
@ -706,7 +706,7 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
|
||||
connector->base.id, connector->name);
|
||||
}
|
||||
}
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
|
||||
/* mode_set_base is not a required function */
|
||||
if (fb_changed && !crtc_funcs->mode_set_base)
|
||||
@ -761,10 +761,10 @@ fail:
|
||||
}
|
||||
|
||||
count = 0;
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter)
|
||||
connector->encoder = save_connector_encoders[count++];
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
|
||||
/* after fail drop reference on all unbound connectors in set, let
|
||||
* bound connectors keep their reference
|
||||
@ -772,7 +772,7 @@ fail:
|
||||
for (ro = 0; ro < set->num_connectors; ro++) {
|
||||
if (set->connectors[ro]->encoder)
|
||||
continue;
|
||||
drm_connector_unreference(set->connectors[ro]);
|
||||
drm_connector_put(set->connectors[ro]);
|
||||
}
|
||||
|
||||
/* Try to restore the config */
|
||||
@ -794,12 +794,12 @@ static int drm_helper_choose_encoder_dpms(struct drm_encoder *encoder)
|
||||
struct drm_connector_list_iter conn_iter;
|
||||
struct drm_device *dev = encoder->dev;
|
||||
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter)
|
||||
if (connector->encoder == encoder)
|
||||
if (connector->dpms < dpms)
|
||||
dpms = connector->dpms;
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
|
||||
return dpms;
|
||||
}
|
||||
@ -835,12 +835,12 @@ static int drm_helper_choose_crtc_dpms(struct drm_crtc *crtc)
|
||||
struct drm_connector_list_iter conn_iter;
|
||||
struct drm_device *dev = crtc->dev;
|
||||
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter)
|
||||
if (connector->encoder && connector->encoder->crtc == crtc)
|
||||
if (connector->dpms < dpms)
|
||||
dpms = connector->dpms;
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
|
||||
return dpms;
|
||||
}
|
||||
|
@ -98,15 +98,13 @@ int drm_mode_destroyblob_ioctl(struct drm_device *dev,
|
||||
void *data, struct drm_file *file_priv);
|
||||
|
||||
/* drm_mode_object.c */
|
||||
int drm_mode_object_get_reg(struct drm_device *dev,
|
||||
struct drm_mode_object *obj,
|
||||
uint32_t obj_type,
|
||||
bool register_obj,
|
||||
void (*obj_free_cb)(struct kref *kref));
|
||||
int __drm_mode_object_add(struct drm_device *dev, struct drm_mode_object *obj,
|
||||
uint32_t obj_type, bool register_obj,
|
||||
void (*obj_free_cb)(struct kref *kref));
|
||||
int drm_mode_object_add(struct drm_device *dev, struct drm_mode_object *obj,
|
||||
uint32_t obj_type);
|
||||
void drm_mode_object_register(struct drm_device *dev,
|
||||
struct drm_mode_object *obj);
|
||||
int drm_mode_object_get(struct drm_device *dev,
|
||||
struct drm_mode_object *obj, uint32_t obj_type);
|
||||
struct drm_mode_object *__drm_mode_object_find(struct drm_device *dev,
|
||||
uint32_t id, uint32_t type);
|
||||
void drm_mode_object_unregister(struct drm_device *dev,
|
||||
@ -142,6 +140,7 @@ int drm_mode_connector_set_obj_prop(struct drm_mode_object *obj,
|
||||
struct drm_property *property,
|
||||
uint64_t value);
|
||||
int drm_connector_create_standard_properties(struct drm_device *dev);
|
||||
const char *drm_get_connector_force_name(enum drm_connector_force force);
|
||||
|
||||
/* IOCTL */
|
||||
int drm_mode_connector_property_set_ioctl(struct drm_device *dev,
|
||||
@ -183,6 +182,7 @@ int drm_atomic_get_property(struct drm_mode_object *obj,
|
||||
struct drm_property *property, uint64_t *val);
|
||||
int drm_mode_atomic_ioctl(struct drm_device *dev,
|
||||
void *data, struct drm_file *file_priv);
|
||||
int drm_atomic_remove_fb(struct drm_framebuffer *fb);
|
||||
|
||||
|
||||
/* drm_plane.c */
|
||||
|
@ -261,30 +261,8 @@ int drm_debugfs_cleanup(struct drm_minor *minor)
|
||||
static int connector_show(struct seq_file *m, void *data)
|
||||
{
|
||||
struct drm_connector *connector = m->private;
|
||||
const char *status;
|
||||
|
||||
switch (connector->force) {
|
||||
case DRM_FORCE_ON:
|
||||
status = "on\n";
|
||||
break;
|
||||
|
||||
case DRM_FORCE_ON_DIGITAL:
|
||||
status = "digital\n";
|
||||
break;
|
||||
|
||||
case DRM_FORCE_OFF:
|
||||
status = "off\n";
|
||||
break;
|
||||
|
||||
case DRM_FORCE_UNSPECIFIED:
|
||||
status = "unspecified\n";
|
||||
break;
|
||||
|
||||
default:
|
||||
return 0;
|
||||
}
|
||||
|
||||
seq_puts(m, status);
|
||||
seq_printf(m, "%s\n", drm_get_connector_force_name(connector->force));
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -386,6 +386,8 @@ const char *drm_dp_get_dual_mode_type_name(enum drm_dp_dual_mode_type type)
|
||||
return "type 2 DVI";
|
||||
case DRM_DP_DUAL_MODE_TYPE2_HDMI:
|
||||
return "type 2 HDMI";
|
||||
case DRM_DP_DUAL_MODE_LSPCON:
|
||||
return "lspcon";
|
||||
default:
|
||||
WARN_ON(type != DRM_DP_DUAL_MODE_UNKNOWN);
|
||||
return "unknown";
|
||||
|
@ -1131,23 +1131,26 @@ bool drm_edid_block_valid(u8 *raw_edid, int block, bool print_bad_edid,
|
||||
|
||||
csum = drm_edid_block_checksum(raw_edid);
|
||||
if (csum) {
|
||||
if (print_bad_edid) {
|
||||
DRM_ERROR("EDID checksum is invalid, remainder is %d\n", csum);
|
||||
}
|
||||
|
||||
if (edid_corrupt)
|
||||
*edid_corrupt = true;
|
||||
|
||||
/* allow CEA to slide through, switches mangle this */
|
||||
if (raw_edid[0] != 0x02)
|
||||
if (raw_edid[0] == CEA_EXT) {
|
||||
DRM_DEBUG("EDID checksum is invalid, remainder is %d\n", csum);
|
||||
DRM_DEBUG("Assuming a KVM switch modified the CEA block but left the original checksum\n");
|
||||
} else {
|
||||
if (print_bad_edid)
|
||||
DRM_NOTE("EDID checksum is invalid, remainder is %d\n", csum);
|
||||
|
||||
goto bad;
|
||||
}
|
||||
}
|
||||
|
||||
/* per-block-type checks */
|
||||
switch (raw_edid[0]) {
|
||||
case 0: /* base */
|
||||
if (edid->version != 1) {
|
||||
DRM_ERROR("EDID has major version %d, instead of 1\n", edid->version);
|
||||
DRM_NOTE("EDID has major version %d, instead of 1\n", edid->version);
|
||||
goto bad;
|
||||
}
|
||||
|
||||
@ -1164,11 +1167,12 @@ bool drm_edid_block_valid(u8 *raw_edid, int block, bool print_bad_edid,
|
||||
bad:
|
||||
if (print_bad_edid) {
|
||||
if (drm_edid_is_zero(raw_edid, EDID_LENGTH)) {
|
||||
printk(KERN_ERR "EDID block is all zeroes\n");
|
||||
pr_notice("EDID block is all zeroes\n");
|
||||
} else {
|
||||
printk(KERN_ERR "Raw EDID:\n");
|
||||
print_hex_dump(KERN_ERR, " \t", DUMP_PREFIX_NONE, 16, 1,
|
||||
raw_edid, EDID_LENGTH, false);
|
||||
pr_notice("Raw EDID:\n");
|
||||
print_hex_dump(KERN_NOTICE,
|
||||
" \t", DUMP_PREFIX_NONE, 16, 1,
|
||||
raw_edid, EDID_LENGTH, false);
|
||||
}
|
||||
}
|
||||
return false;
|
||||
@ -1424,7 +1428,10 @@ struct edid *drm_get_edid(struct drm_connector *connector,
|
||||
{
|
||||
struct edid *edid;
|
||||
|
||||
if (!drm_probe_ddc(adapter))
|
||||
if (connector->force == DRM_FORCE_OFF)
|
||||
return NULL;
|
||||
|
||||
if (connector->force == DRM_FORCE_UNSPECIFIED && !drm_probe_ddc(adapter))
|
||||
return NULL;
|
||||
|
||||
edid = drm_do_get_edid(connector, drm_do_probe_ddc_edid, adapter);
|
||||
@ -3433,6 +3440,9 @@ void drm_edid_to_eld(struct drm_connector *connector, struct edid *edid)
|
||||
connector->video_latency[1] = 0;
|
||||
connector->audio_latency[1] = 0;
|
||||
|
||||
if (!edid)
|
||||
return;
|
||||
|
||||
cea = drm_find_cea_extension(edid);
|
||||
if (!cea) {
|
||||
DRM_DEBUG_KMS("ELD: no CEA Extension found\n");
|
||||
@ -3999,7 +4009,7 @@ static int validate_displayid(u8 *displayid, int length, int idx)
|
||||
csum += displayid[i];
|
||||
}
|
||||
if (csum) {
|
||||
DRM_ERROR("DisplayID checksum invalid, remainder is %d\n", csum);
|
||||
DRM_NOTE("DisplayID checksum invalid, remainder is %d\n", csum);
|
||||
return -EINVAL;
|
||||
}
|
||||
return 0;
|
||||
|
@ -256,15 +256,14 @@ out:
|
||||
return edid;
|
||||
}
|
||||
|
||||
int drm_load_edid_firmware(struct drm_connector *connector)
|
||||
struct edid *drm_load_edid_firmware(struct drm_connector *connector)
|
||||
{
|
||||
const char *connector_name = connector->name;
|
||||
char *edidname, *last, *colon, *fwstr, *edidstr, *fallback = NULL;
|
||||
int ret;
|
||||
struct edid *edid;
|
||||
|
||||
if (edid_firmware[0] == '\0')
|
||||
return 0;
|
||||
return ERR_PTR(-ENOENT);
|
||||
|
||||
/*
|
||||
* If there are multiple edid files specified and separated
|
||||
@ -293,7 +292,7 @@ int drm_load_edid_firmware(struct drm_connector *connector)
|
||||
if (!edidname) {
|
||||
if (!fallback) {
|
||||
kfree(fwstr);
|
||||
return 0;
|
||||
return ERR_PTR(-ENOENT);
|
||||
}
|
||||
edidname = fallback;
|
||||
}
|
||||
@ -305,13 +304,5 @@ int drm_load_edid_firmware(struct drm_connector *connector)
|
||||
edid = edid_load(connector, edidname, connector_name);
|
||||
kfree(fwstr);
|
||||
|
||||
if (IS_ERR_OR_NULL(edid))
|
||||
return 0;
|
||||
|
||||
drm_mode_connector_update_edid_property(connector, edid);
|
||||
ret = drm_add_edid_modes(connector, edid);
|
||||
drm_edid_to_eld(connector, edid);
|
||||
kfree(edid);
|
||||
|
||||
return ret;
|
||||
return edid;
|
||||
}
|
||||
|
@ -110,7 +110,7 @@ int drm_encoder_init(struct drm_device *dev,
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = drm_mode_object_get(dev, &encoder->base, DRM_MODE_OBJECT_ENCODER);
|
||||
ret = drm_mode_object_add(dev, &encoder->base, DRM_MODE_OBJECT_ENCODER);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@ -188,7 +188,7 @@ static struct drm_crtc *drm_encoder_get_crtc(struct drm_encoder *encoder)
|
||||
|
||||
/* For atomic drivers only state objects are synchronously updated and
|
||||
* protected by modeset locks, so check those first. */
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter) {
|
||||
if (!connector->state)
|
||||
continue;
|
||||
@ -198,10 +198,10 @@ static struct drm_crtc *drm_encoder_get_crtc(struct drm_encoder *encoder)
|
||||
if (connector->state->best_encoder != encoder)
|
||||
continue;
|
||||
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
return connector->state->crtc;
|
||||
}
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
|
||||
/* Don't return stale data (e.g. pending async disable). */
|
||||
if (uses_atomic)
|
||||
|
@ -102,7 +102,7 @@ void drm_fb_cma_destroy(struct drm_framebuffer *fb)
|
||||
|
||||
for (i = 0; i < 4; i++) {
|
||||
if (fb_cma->obj[i])
|
||||
drm_gem_object_unreference_unlocked(&fb_cma->obj[i]->base);
|
||||
drm_gem_object_put_unlocked(&fb_cma->obj[i]->base);
|
||||
}
|
||||
|
||||
drm_framebuffer_cleanup(fb);
|
||||
@ -190,7 +190,7 @@ struct drm_framebuffer *drm_fb_cma_create_with_funcs(struct drm_device *dev,
|
||||
if (!obj) {
|
||||
dev_err(dev->dev, "Failed to lookup GEM object\n");
|
||||
ret = -ENXIO;
|
||||
goto err_gem_object_unreference;
|
||||
goto err_gem_object_put;
|
||||
}
|
||||
|
||||
min_size = (height - 1) * mode_cmd->pitches[i]
|
||||
@ -198,9 +198,9 @@ struct drm_framebuffer *drm_fb_cma_create_with_funcs(struct drm_device *dev,
|
||||
+ mode_cmd->offsets[i];
|
||||
|
||||
if (obj->size < min_size) {
|
||||
drm_gem_object_unreference_unlocked(obj);
|
||||
drm_gem_object_put_unlocked(obj);
|
||||
ret = -EINVAL;
|
||||
goto err_gem_object_unreference;
|
||||
goto err_gem_object_put;
|
||||
}
|
||||
objs[i] = to_drm_gem_cma_obj(obj);
|
||||
}
|
||||
@ -208,14 +208,14 @@ struct drm_framebuffer *drm_fb_cma_create_with_funcs(struct drm_device *dev,
|
||||
fb_cma = drm_fb_cma_alloc(dev, mode_cmd, objs, i, funcs);
|
||||
if (IS_ERR(fb_cma)) {
|
||||
ret = PTR_ERR(fb_cma);
|
||||
goto err_gem_object_unreference;
|
||||
goto err_gem_object_put;
|
||||
}
|
||||
|
||||
return &fb_cma->fb;
|
||||
|
||||
err_gem_object_unreference:
|
||||
err_gem_object_put:
|
||||
for (i--; i >= 0; i--)
|
||||
drm_gem_object_unreference_unlocked(&objs[i]->base);
|
||||
drm_gem_object_put_unlocked(&objs[i]->base);
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(drm_fb_cma_create_with_funcs);
|
||||
@ -475,9 +475,9 @@ drm_fbdev_cma_create(struct drm_fb_helper *helper,
|
||||
err_cma_destroy:
|
||||
drm_framebuffer_remove(&fbdev_cma->fb->fb);
|
||||
err_fb_info_destroy:
|
||||
drm_fb_helper_release_fbi(helper);
|
||||
drm_fb_helper_fini(helper);
|
||||
err_gem_free_object:
|
||||
drm_gem_object_unreference_unlocked(&obj->base);
|
||||
drm_gem_object_put_unlocked(&obj->base);
|
||||
return ret;
|
||||
}
|
||||
|
||||
@ -547,7 +547,6 @@ EXPORT_SYMBOL_GPL(drm_fbdev_cma_init_with_funcs);
|
||||
* drm_fbdev_cma_init() - Allocate and initializes a drm_fbdev_cma struct
|
||||
* @dev: DRM device
|
||||
* @preferred_bpp: Preferred bits per pixel for the device
|
||||
* @num_crtc: Number of CRTCs
|
||||
* @max_conn_count: Maximum number of connectors
|
||||
*
|
||||
* Returns a newly allocated drm_fbdev_cma struct or a ERR_PTR.
|
||||
@ -570,7 +569,6 @@ void drm_fbdev_cma_fini(struct drm_fbdev_cma *fbdev_cma)
|
||||
drm_fb_helper_unregister_fbi(&fbdev_cma->fb_helper);
|
||||
if (fbdev_cma->fb_helper.fbdev)
|
||||
drm_fbdev_cma_defio_fini(fbdev_cma->fb_helper.fbdev);
|
||||
drm_fb_helper_release_fbi(&fbdev_cma->fb_helper);
|
||||
|
||||
if (fbdev_cma->fb)
|
||||
drm_framebuffer_remove(&fbdev_cma->fb->fb);
|
||||
|
@ -48,6 +48,12 @@ module_param_named(fbdev_emulation, drm_fbdev_emulation, bool, 0600);
|
||||
MODULE_PARM_DESC(fbdev_emulation,
|
||||
"Enable legacy fbdev emulation [default=true]");
|
||||
|
||||
static int drm_fbdev_overalloc = CONFIG_DRM_FBDEV_OVERALLOC;
|
||||
module_param(drm_fbdev_overalloc, int, 0444);
|
||||
MODULE_PARM_DESC(drm_fbdev_overalloc,
|
||||
"Overallocation of the fbdev buffer (%) [default="
|
||||
__MODULE_STRING(CONFIG_DRM_FBDEV_OVERALLOC) "]");
|
||||
|
||||
static LIST_HEAD(kernel_fb_helper_list);
|
||||
static DEFINE_MUTEX(kernel_fb_helper_lock);
|
||||
|
||||
@ -63,7 +69,8 @@ static DEFINE_MUTEX(kernel_fb_helper_lock);
|
||||
* drm_fb_helper_init(), drm_fb_helper_single_add_all_connectors() and
|
||||
* drm_fb_helper_initial_config(). Drivers with fancier requirements than the
|
||||
* default behaviour can override the third step with their own code.
|
||||
* Teardown is done with drm_fb_helper_fini().
|
||||
* Teardown is done with drm_fb_helper_fini() after the fbdev device is
|
||||
* unregisters using drm_fb_helper_unregister_fbi().
|
||||
*
|
||||
* At runtime drivers should restore the fbdev console by calling
|
||||
* drm_fb_helper_restore_fbdev_mode_unlocked() from their &drm_driver.lastclose
|
||||
@ -127,7 +134,7 @@ int drm_fb_helper_single_add_all_connectors(struct drm_fb_helper *fb_helper)
|
||||
return 0;
|
||||
|
||||
mutex_lock(&dev->mode_config.mutex);
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter) {
|
||||
ret = drm_fb_helper_add_one_connector(fb_helper, connector);
|
||||
|
||||
@ -141,14 +148,14 @@ fail:
|
||||
struct drm_fb_helper_connector *fb_helper_connector =
|
||||
fb_helper->connector_info[i];
|
||||
|
||||
drm_connector_unreference(fb_helper_connector->connector);
|
||||
drm_connector_put(fb_helper_connector->connector);
|
||||
|
||||
kfree(fb_helper_connector);
|
||||
fb_helper->connector_info[i] = NULL;
|
||||
}
|
||||
fb_helper->connector_count = 0;
|
||||
out:
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
mutex_unlock(&dev->mode_config.mutex);
|
||||
|
||||
return ret;
|
||||
@ -178,7 +185,7 @@ int drm_fb_helper_add_one_connector(struct drm_fb_helper *fb_helper, struct drm_
|
||||
if (!fb_helper_connector)
|
||||
return -ENOMEM;
|
||||
|
||||
drm_connector_reference(connector);
|
||||
drm_connector_get(connector);
|
||||
fb_helper_connector->connector = connector;
|
||||
fb_helper->connector_info[fb_helper->connector_count++] = fb_helper_connector;
|
||||
return 0;
|
||||
@ -204,7 +211,7 @@ int drm_fb_helper_remove_one_connector(struct drm_fb_helper *fb_helper,
|
||||
if (i == fb_helper->connector_count)
|
||||
return -EINVAL;
|
||||
fb_helper_connector = fb_helper->connector_info[i];
|
||||
drm_connector_unreference(fb_helper_connector->connector);
|
||||
drm_connector_put(fb_helper_connector->connector);
|
||||
|
||||
for (j = i + 1; j < fb_helper->connector_count; j++) {
|
||||
fb_helper->connector_info[j - 1] = fb_helper->connector_info[j];
|
||||
@ -626,7 +633,7 @@ static void drm_fb_helper_modeset_release(struct drm_fb_helper *helper,
|
||||
int i;
|
||||
|
||||
for (i = 0; i < modeset->num_connectors; i++) {
|
||||
drm_connector_unreference(modeset->connectors[i]);
|
||||
drm_connector_put(modeset->connectors[i]);
|
||||
modeset->connectors[i] = NULL;
|
||||
}
|
||||
modeset->num_connectors = 0;
|
||||
@ -643,7 +650,7 @@ static void drm_fb_helper_crtc_free(struct drm_fb_helper *helper)
|
||||
int i;
|
||||
|
||||
for (i = 0; i < helper->connector_count; i++) {
|
||||
drm_connector_unreference(helper->connector_info[i]->connector);
|
||||
drm_connector_put(helper->connector_info[i]->connector);
|
||||
kfree(helper->connector_info[i]);
|
||||
}
|
||||
kfree(helper->connector_info);
|
||||
@ -709,7 +716,7 @@ void drm_fb_helper_prepare(struct drm_device *dev, struct drm_fb_helper *helper,
|
||||
EXPORT_SYMBOL(drm_fb_helper_prepare);
|
||||
|
||||
/**
|
||||
* drm_fb_helper_init - initialize a drm_fb_helper structure
|
||||
* drm_fb_helper_init - initialize a &struct drm_fb_helper
|
||||
* @dev: drm device
|
||||
* @fb_helper: driver-allocated fbdev helper structure to initialize
|
||||
* @max_conn_count: max connector count
|
||||
@ -780,7 +787,9 @@ EXPORT_SYMBOL(drm_fb_helper_init);
|
||||
* @fb_helper: driver-allocated fbdev helper
|
||||
*
|
||||
* A helper to alloc fb_info and the members cmap and apertures. Called
|
||||
* by the driver within the fb_probe fb_helper callback function.
|
||||
* by the driver within the fb_probe fb_helper callback function. Drivers do not
|
||||
* need to release the allocated fb_info structure themselves, this is
|
||||
* automatically done when calling drm_fb_helper_fini().
|
||||
*
|
||||
* RETURNS:
|
||||
* fb_info pointer if things went okay, pointer containing error code
|
||||
@ -823,7 +832,8 @@ EXPORT_SYMBOL(drm_fb_helper_alloc_fbi);
|
||||
* @fb_helper: driver-allocated fbdev helper
|
||||
*
|
||||
* A wrapper around unregister_framebuffer, to release the fb_info
|
||||
* framebuffer device
|
||||
* framebuffer device. This must be called before releasing all resources for
|
||||
* @fb_helper by calling drm_fb_helper_fini().
|
||||
*/
|
||||
void drm_fb_helper_unregister_fbi(struct drm_fb_helper *fb_helper)
|
||||
{
|
||||
@ -833,33 +843,27 @@ void drm_fb_helper_unregister_fbi(struct drm_fb_helper *fb_helper)
|
||||
EXPORT_SYMBOL(drm_fb_helper_unregister_fbi);
|
||||
|
||||
/**
|
||||
* drm_fb_helper_release_fbi - dealloc fb_info and its members
|
||||
* drm_fb_helper_fini - finialize a &struct drm_fb_helper
|
||||
* @fb_helper: driver-allocated fbdev helper
|
||||
*
|
||||
* A helper to free memory taken by fb_info and the members cmap and
|
||||
* apertures
|
||||
* This cleans up all remaining resources associated with @fb_helper. Must be
|
||||
* called after drm_fb_helper_unlink_fbi() was called.
|
||||
*/
|
||||
void drm_fb_helper_release_fbi(struct drm_fb_helper *fb_helper)
|
||||
{
|
||||
if (fb_helper) {
|
||||
struct fb_info *info = fb_helper->fbdev;
|
||||
|
||||
if (info) {
|
||||
if (info->cmap.len)
|
||||
fb_dealloc_cmap(&info->cmap);
|
||||
framebuffer_release(info);
|
||||
}
|
||||
|
||||
fb_helper->fbdev = NULL;
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL(drm_fb_helper_release_fbi);
|
||||
|
||||
void drm_fb_helper_fini(struct drm_fb_helper *fb_helper)
|
||||
{
|
||||
if (!drm_fbdev_emulation)
|
||||
struct fb_info *info;
|
||||
|
||||
if (!drm_fbdev_emulation || !fb_helper)
|
||||
return;
|
||||
|
||||
info = fb_helper->fbdev;
|
||||
if (info) {
|
||||
if (info->cmap.len)
|
||||
fb_dealloc_cmap(&info->cmap);
|
||||
framebuffer_release(info);
|
||||
}
|
||||
fb_helper->fbdev = NULL;
|
||||
|
||||
cancel_work_sync(&fb_helper->resume_work);
|
||||
cancel_work_sync(&fb_helper->dirty_work);
|
||||
|
||||
@ -1240,6 +1244,74 @@ int drm_fb_helper_setcmap(struct fb_cmap *cmap, struct fb_info *info)
|
||||
}
|
||||
EXPORT_SYMBOL(drm_fb_helper_setcmap);
|
||||
|
||||
/**
 * drm_fb_helper_ioctl - legacy ioctl implementation
 * @info: fbdev registered by the helper
 * @cmd: ioctl command
 * @arg: ioctl argument
 *
 * A helper to implement the standard fbdev ioctl. Only
 * FBIO_WAITFORVSYNC is implemented for now.
 */
int drm_fb_helper_ioctl(struct fb_info *info, unsigned int cmd,
|
||||
unsigned long arg)
|
||||
{
|
||||
struct drm_fb_helper *fb_helper = info->par;
|
||||
struct drm_device *dev = fb_helper->dev;
|
||||
struct drm_mode_set *mode_set;
|
||||
struct drm_crtc *crtc;
|
||||
int ret = 0;
|
||||
|
||||
mutex_lock(&dev->mode_config.mutex);
|
||||
if (!drm_fb_helper_is_bound(fb_helper)) {
|
||||
ret = -EBUSY;
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
switch (cmd) {
|
||||
case FBIO_WAITFORVSYNC:
|
||||
/*
|
||||
* Only consider the first CRTC.
|
||||
*
|
||||
* This ioctl is supposed to take the CRTC number as
|
||||
* an argument, but in fbdev times, what that number
|
||||
* was supposed to be was quite unclear, different
|
||||
* drivers were passing that argument differently
|
||||
* (some by reference, some by value), and most of the
|
||||
* userspace applications were just hardcoding 0 as an
|
||||
* argument.
|
||||
*
|
||||
* The first CRTC should be the integrated panel on
|
||||
* most drivers, so this is the best choice we can
|
||||
* make. If we're not smart enough here, one should
|
||||
* just consider switch the userspace to KMS.
|
||||
*/
|
||||
mode_set = &fb_helper->crtc_info[0].mode_set;
|
||||
crtc = mode_set->crtc;
|
||||
|
||||
/*
|
||||
* Only wait for a vblank event if the CRTC is
|
||||
* enabled, otherwise just don't do anythintg,
|
||||
* not even report an error.
|
||||
*/
|
||||
ret = drm_crtc_vblank_get(crtc);
|
||||
if (!ret) {
|
||||
drm_crtc_wait_one_vblank(crtc);
|
||||
drm_crtc_vblank_put(crtc);
|
||||
}
|
||||
|
||||
ret = 0;
|
||||
goto unlock;
|
||||
default:
|
||||
ret = -ENOTTY;
|
||||
}
|
||||
|
||||
unlock:
|
||||
mutex_unlock(&dev->mode_config.mutex);
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_fb_helper_ioctl);
|
||||
|
||||
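Drivers pick this up by pointing the standard fbdev ioctl hook at the new helper. A minimal sketch of the wiring; my_fb_ops is illustrative and the remaining hooks are elided:

static struct fb_ops my_fb_ops = {
        .owner          = THIS_MODULE,
        .fb_ioctl       = drm_fb_helper_ioctl, /* FBIO_WAITFORVSYNC */
        /* ... the usual drm_fb_helper_*() implementations for the rest ... */
};
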
/**
|
||||
* drm_fb_helper_check_var - implementation for &fb_ops.fb_check_var
|
||||
* @var: screeninfo to check
|
||||
@ -1580,6 +1652,10 @@ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
|
||||
sizes.fb_height = sizes.surface_height = 768;
|
||||
}
|
||||
|
||||
/* Handle our overallocation */
|
||||
sizes.surface_height *= drm_fbdev_overalloc;
|
||||
sizes.surface_height /= 100;
|
||||
|
||||
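        /*
         * Worked example (numbers are illustrative, not from this patch):
         * with drm_fbdev_overalloc=200 and a 768-line surface, the fbdev
         * buffer is sized for 768 * 200 / 100 = 1536 lines, i.e. a second
         * screen-sized page that fbdev panning/flipping can switch to.
         */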
/* push down into drivers */
|
||||
ret = (*fb_helper->funcs->fb_probe)(fb_helper, &sizes);
|
||||
if (ret < 0)
|
||||
@ -2184,7 +2260,7 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper,
|
||||
fb_crtc->y = offset->y;
|
||||
modeset->mode = drm_mode_duplicate(dev,
|
||||
fb_crtc->desired_mode);
|
||||
drm_connector_reference(connector);
|
||||
drm_connector_get(connector);
|
||||
modeset->connectors[modeset->num_connectors++] = connector;
|
||||
modeset->fb = fb_helper->fb;
|
||||
modeset->x = offset->x;
|
||||
|
@@ -52,13 +52,13 @@
 * metadata fields.
 *
 * The lifetime of a drm framebuffer is controlled with a reference count,
 * drivers can grab additional references with drm_framebuffer_reference() and
 * drop them again with drm_framebuffer_unreference(). For driver-private
 * framebuffers for which the last reference is never dropped (e.g. for the
 * fbdev framebuffer when the struct &struct drm_framebuffer is embedded into
 * the fbdev helper struct) drivers can manually clean up a framebuffer at
 * module unload time with drm_framebuffer_unregister_private(). But doing this
 * is not recommended, and it's better to have a normal free-standing &struct
 * drivers can grab additional references with drm_framebuffer_get() and drop
 * them again with drm_framebuffer_put(). For driver-private framebuffers for
 * which the last reference is never dropped (e.g. for the fbdev framebuffer
 * when the struct &struct drm_framebuffer is embedded into the fbdev helper
 * struct) drivers can manually clean up a framebuffer at module unload time
 * with drm_framebuffer_unregister_private(). But doing this is not
 * recommended, and it's better to have a normal free-standing &struct
 * drm_framebuffer.
 */

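A small illustration of the get/put discipline described above; the function and its debug print are only an example, and drm_framebuffer_lookup() already returns with a reference held:

static int my_dump_fb_size(struct drm_device *dev, u32 fb_id)
{
        struct drm_framebuffer *fb;

        fb = drm_framebuffer_lookup(dev, fb_id);        /* takes a reference */
        if (!fb)
                return -ENOENT;

        DRM_DEBUG_KMS("fb %u is %ux%u\n", fb_id, fb->width, fb->height);

        drm_framebuffer_put(fb);                        /* drop it again */
        return 0;
}
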
@ -374,7 +374,7 @@ int drm_mode_rmfb(struct drm_device *dev,
|
||||
mutex_unlock(&file_priv->fbs_lock);
|
||||
|
||||
/* drop the reference we picked up in framebuffer lookup */
|
||||
drm_framebuffer_unreference(fb);
|
||||
drm_framebuffer_put(fb);
|
||||
|
||||
/*
|
||||
* we now own the reference that was stored in the fbs list
|
||||
@ -394,12 +394,12 @@ int drm_mode_rmfb(struct drm_device *dev,
|
||||
flush_work(&arg.work);
|
||||
destroy_work_on_stack(&arg.work);
|
||||
} else
|
||||
drm_framebuffer_unreference(fb);
|
||||
drm_framebuffer_put(fb);
|
||||
|
||||
return 0;
|
||||
|
||||
fail_unref:
|
||||
drm_framebuffer_unreference(fb);
|
||||
drm_framebuffer_put(fb);
|
||||
return -ENOENT;
|
||||
}
|
||||
|
||||
@ -453,7 +453,7 @@ int drm_mode_getfb(struct drm_device *dev,
|
||||
ret = -ENODEV;
|
||||
}
|
||||
|
||||
drm_framebuffer_unreference(fb);
|
||||
drm_framebuffer_put(fb);
|
||||
|
||||
return ret;
|
||||
}
|
||||
@ -540,7 +540,7 @@ int drm_mode_dirtyfb_ioctl(struct drm_device *dev,
|
||||
out_err2:
|
||||
kfree(clips);
|
||||
out_err1:
|
||||
drm_framebuffer_unreference(fb);
|
||||
drm_framebuffer_put(fb);
|
||||
|
||||
return ret;
|
||||
}
|
||||
@ -580,7 +580,7 @@ void drm_fb_release(struct drm_file *priv)
|
||||
list_del_init(&fb->filp_head);
|
||||
|
||||
/* This drops the fpriv->fbs reference. */
|
||||
drm_framebuffer_unreference(fb);
|
||||
drm_framebuffer_put(fb);
|
||||
}
|
||||
}
|
||||
|
||||
@ -638,8 +638,8 @@ int drm_framebuffer_init(struct drm_device *dev, struct drm_framebuffer *fb,
|
||||
|
||||
fb->funcs = funcs;
|
||||
|
||||
ret = drm_mode_object_get_reg(dev, &fb->base, DRM_MODE_OBJECT_FB,
|
||||
false, drm_framebuffer_free);
|
||||
ret = __drm_mode_object_add(dev, &fb->base, DRM_MODE_OBJECT_FB,
|
||||
false, drm_framebuffer_free);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
@ -661,7 +661,7 @@ EXPORT_SYMBOL(drm_framebuffer_init);
|
||||
*
|
||||
* If successful, this grabs an additional reference to the framebuffer -
|
||||
* callers need to make sure to eventually unreference the returned framebuffer
|
||||
* again, using @drm_framebuffer_unreference.
|
||||
* again, using drm_framebuffer_put().
|
||||
*/
|
||||
struct drm_framebuffer *drm_framebuffer_lookup(struct drm_device *dev,
|
||||
uint32_t id)
|
||||
@ -687,8 +687,8 @@ EXPORT_SYMBOL(drm_framebuffer_lookup);
|
||||
*
|
||||
* NOTE: This function is deprecated. For driver-private framebuffers it is not
|
||||
* recommended to embed a framebuffer struct info fbdev struct, instead, a
|
||||
* framebuffer pointer is preferred and drm_framebuffer_unreference() should be
|
||||
* called when the framebuffer is to be cleaned up.
|
||||
* framebuffer pointer is preferred and drm_framebuffer_put() should be called
|
||||
* when the framebuffer is to be cleaned up.
|
||||
*/
|
||||
void drm_framebuffer_unregister_private(struct drm_framebuffer *fb)
|
||||
{
|
||||
@ -773,6 +773,12 @@ void drm_framebuffer_remove(struct drm_framebuffer *fb)
|
||||
* in this manner.
|
||||
*/
|
||||
if (drm_framebuffer_read_refcount(fb) > 1) {
|
||||
if (drm_drv_uses_atomic_modeset(dev)) {
|
||||
int ret = drm_atomic_remove_fb(fb);
|
||||
WARN(ret, "atomic remove_fb failed with %i\n", ret);
|
||||
goto out;
|
||||
}
|
||||
|
||||
drm_modeset_lock_all(dev);
|
||||
/* remove from any CRTC */
|
||||
drm_for_each_crtc(crtc, dev) {
|
||||
@ -790,7 +796,8 @@ void drm_framebuffer_remove(struct drm_framebuffer *fb)
|
||||
drm_modeset_unlock_all(dev);
|
||||
}
|
||||
|
||||
drm_framebuffer_unreference(fb);
|
||||
out:
|
||||
drm_framebuffer_put(fb);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_framebuffer_remove);
|
||||
|
||||
|
@ -218,7 +218,7 @@ static void drm_gem_object_exported_dma_buf_free(struct drm_gem_object *obj)
}

static void
drm_gem_object_handle_unreference_unlocked(struct drm_gem_object *obj)
drm_gem_object_handle_put_unlocked(struct drm_gem_object *obj)
{
struct drm_device *dev = obj->dev;
bool final = false;
@ -241,7 +241,7 @@ drm_gem_object_handle_unreference_unlocked(struct drm_gem_object *obj)
mutex_unlock(&dev->object_name_lock);

if (final)
drm_gem_object_unreference_unlocked(obj);
drm_gem_object_put_unlocked(obj);
}

/*
@ -262,7 +262,7 @@ drm_gem_object_release_handle(int id, void *ptr, void *data)
if (dev->driver->gem_close_object)
dev->driver->gem_close_object(obj, file_priv);

drm_gem_object_handle_unreference_unlocked(obj);
drm_gem_object_handle_put_unlocked(obj);

return 0;
}
@ -352,7 +352,7 @@ drm_gem_handle_create_tail(struct drm_file *file_priv,

WARN_ON(!mutex_is_locked(&dev->object_name_lock));
if (obj->handle_count++ == 0)
drm_gem_object_reference(obj);
drm_gem_object_get(obj);

/*
* Get the user-visible handle using idr. Preload and perform
@ -392,7 +392,7 @@ err_remove:
idr_remove(&file_priv->object_idr, handle);
spin_unlock(&file_priv->table_lock);
err_unref:
drm_gem_object_handle_unreference_unlocked(obj);
drm_gem_object_handle_put_unlocked(obj);
return ret;
}

@ -606,7 +606,7 @@ drm_gem_object_lookup(struct drm_file *filp, u32 handle)
/* Check if we currently have a reference on the object */
obj = idr_find(&filp->object_idr, handle);
if (obj)
drm_gem_object_reference(obj);
drm_gem_object_get(obj);

spin_unlock(&filp->table_lock);

@ -683,7 +683,7 @@ drm_gem_flink_ioctl(struct drm_device *dev, void *data,

err:
mutex_unlock(&dev->object_name_lock);
drm_gem_object_unreference_unlocked(obj);
drm_gem_object_put_unlocked(obj);
return ret;
}

@ -713,7 +713,7 @@ drm_gem_open_ioctl(struct drm_device *dev, void *data,
mutex_lock(&dev->object_name_lock);
obj = idr_find(&dev->object_name_idr, (int) args->name);
if (obj) {
drm_gem_object_reference(obj);
drm_gem_object_get(obj);
} else {
mutex_unlock(&dev->object_name_lock);
return -ENOENT;
@ -721,7 +721,7 @@ drm_gem_open_ioctl(struct drm_device *dev, void *data,

/* drm_gem_handle_create_tail unlocks dev->object_name_lock. */
ret = drm_gem_handle_create_tail(file_priv, obj, &handle);
drm_gem_object_unreference_unlocked(obj);
drm_gem_object_put_unlocked(obj);
if (ret)
return ret;

@ -809,16 +809,16 @@ drm_gem_object_free(struct kref *kref)
EXPORT_SYMBOL(drm_gem_object_free);

/**
* drm_gem_object_unreference_unlocked - release a GEM BO reference
* drm_gem_object_put_unlocked - drop a GEM buffer object reference
* @obj: GEM buffer object
*
* This releases a reference to @obj. Callers must not hold the
* &drm_device.struct_mutex lock when calling this function.
*
* See also __drm_gem_object_unreference().
* See also __drm_gem_object_put().
*/
void
drm_gem_object_unreference_unlocked(struct drm_gem_object *obj)
drm_gem_object_put_unlocked(struct drm_gem_object *obj)
{
struct drm_device *dev;

@ -834,10 +834,10 @@ drm_gem_object_unreference_unlocked(struct drm_gem_object *obj)
&dev->struct_mutex))
mutex_unlock(&dev->struct_mutex);
}
EXPORT_SYMBOL(drm_gem_object_unreference_unlocked);
EXPORT_SYMBOL(drm_gem_object_put_unlocked);

/**
* drm_gem_object_unreference - release a GEM BO reference
* drm_gem_object_put - release a GEM buffer object reference
* @obj: GEM buffer object
*
* This releases a reference to @obj. Callers must hold the
@ -845,10 +845,10 @@ EXPORT_SYMBOL(drm_gem_object_unreference_unlocked);
* driver doesn't use &drm_device.struct_mutex for anything.
*
* For drivers not encumbered with legacy locking use
* drm_gem_object_unreference_unlocked() instead.
* drm_gem_object_put_unlocked() instead.
*/
void
drm_gem_object_unreference(struct drm_gem_object *obj)
drm_gem_object_put(struct drm_gem_object *obj)
{
if (obj) {
WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
@ -856,7 +856,7 @@ drm_gem_object_unreference(struct drm_gem_object *obj)
kref_put(&obj->refcount, drm_gem_object_free);
}
}
EXPORT_SYMBOL(drm_gem_object_unreference);
EXPORT_SYMBOL(drm_gem_object_put);

/**
* drm_gem_vm_open - vma->ops->open implementation for GEM
@ -869,7 +869,7 @@ void drm_gem_vm_open(struct vm_area_struct *vma)
{
struct drm_gem_object *obj = vma->vm_private_data;

drm_gem_object_reference(obj);
drm_gem_object_get(obj);
}
EXPORT_SYMBOL(drm_gem_vm_open);

@ -884,7 +884,7 @@ void drm_gem_vm_close(struct vm_area_struct *vma)
{
struct drm_gem_object *obj = vma->vm_private_data;

drm_gem_object_unreference_unlocked(obj);
drm_gem_object_put_unlocked(obj);
}
EXPORT_SYMBOL(drm_gem_vm_close);

@ -935,7 +935,7 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
* (which should happen whether the vma was created by this call, or
* by a vm_open due to mremap or partial unmap or whatever).
*/
drm_gem_object_reference(obj);
drm_gem_object_get(obj);

return 0;
}
@ -992,14 +992,14 @@ int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
return -EINVAL;

if (!drm_vma_node_is_allowed(node, priv)) {
drm_gem_object_unreference_unlocked(obj);
drm_gem_object_put_unlocked(obj);
return -EACCES;
}

ret = drm_gem_mmap_obj(obj, drm_vma_node_size(node) << PAGE_SHIFT,
vma);

drm_gem_object_unreference_unlocked(obj);
drm_gem_object_put_unlocked(obj);

return ret;
}

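The drm_gem.c hunks above apply the same scheme to GEM buffer objects: drm_gem_object_reference() becomes drm_gem_object_get(), and drm_gem_object_unreference()/_unreference_unlocked() become drm_gem_object_put()/_put_unlocked(). A minimal caller-side sketch, assuming file_priv and handle come from an ioctl; the helper names are the ones introduced above, the surrounding code is illustrative only:

	struct drm_gem_object *obj;

	obj = drm_gem_object_lookup(file_priv, handle);	/* takes a reference */
	if (!obj)
		return -ENOENT;

	/* ... operate on the buffer object ... */

	drm_gem_object_put_unlocked(obj);	/* caller must not hold struct_mutex */
	return 0;
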
@ -121,7 +121,7 @@ struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
return cma_obj;

error:
drm_gem_object_unreference_unlocked(&cma_obj->base);
drm_gem_object_put_unlocked(&cma_obj->base);
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(drm_gem_cma_create);
@ -163,7 +163,7 @@ drm_gem_cma_create_with_handle(struct drm_file *file_priv,
*/
ret = drm_gem_handle_create(file_priv, gem_obj, handle);
/* drop reference from allocate - handle holds it now. */
drm_gem_object_unreference_unlocked(gem_obj);
drm_gem_object_put_unlocked(gem_obj);
if (ret)
return ERR_PTR(ret);

@ -293,7 +293,7 @@ int drm_gem_cma_dumb_map_offset(struct drm_file *file_priv,

*offset = drm_vma_node_offset_addr(&gem_obj->vma_node);

drm_gem_object_unreference_unlocked(gem_obj);
drm_gem_object_put_unlocked(gem_obj);

return 0;
}
@ -416,13 +416,13 @@ unsigned long drm_gem_cma_get_unmapped_area(struct file *filp,
return -EINVAL;

if (!drm_vma_node_is_allowed(node, priv)) {
drm_gem_object_unreference_unlocked(obj);
drm_gem_object_put_unlocked(obj);
return -EACCES;
}

cma_obj = to_drm_gem_cma_obj(obj);

drm_gem_object_unreference_unlocked(obj);
drm_gem_object_put_unlocked(obj);

return cma_obj->vaddr ? (unsigned long)cma_obj->vaddr : -EINVAL;
}

@ -257,8 +257,7 @@ static int compat_drm_addmap(struct file *file, unsigned int cmd,

m32.handle = (unsigned long)handle;
if (m32.handle != (unsigned long)handle)
printk_ratelimited(KERN_ERR "compat_drm_addmap truncated handle"
" %p for type %d offset %x\n",
pr_err_ratelimited("compat_drm_addmap truncated handle %p for type %d offset %x\n",
handle, m32.type, m32.offset);

if (copy_to_user(argp, &m32, sizeof(m32)))

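The drm_ioc32.c hunk above, like the gma500 and i915 hunks later in this pull, converts raw printk(KERN_*) calls to the pr_*() helpers, which fold the log level into the call and keep the format string on one line. A hedged before/after sketch, not lifted from any one hunk (ret is a placeholder):

	/* before: explicit level, string often split across lines */
	printk(KERN_ERR "operation failed: %d\n", ret);

	/* after: same output, shorter and checkpatch-friendly */
	pr_err("operation failed: %d\n", ret);
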
@ -89,6 +89,31 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,
write_sequnlock(&vblank->seqlock);
}

/*
* "No hw counter" fallback implementation of .get_vblank_counter() hook,
* if there is no useable hardware frame counter available.
*/
static u32 drm_vblank_no_hw_counter(struct drm_device *dev, unsigned int pipe)
{
WARN_ON_ONCE(dev->max_vblank_count != 0);
return 0;
}

static u32 __get_vblank_counter(struct drm_device *dev, unsigned int pipe)
{
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe);

if (crtc->funcs->get_vblank_counter)
return crtc->funcs->get_vblank_counter(crtc);
}

if (dev->driver->get_vblank_counter)
return dev->driver->get_vblank_counter(dev, pipe);

return drm_vblank_no_hw_counter(dev, pipe);
}

/*
* Reset the stored timestamp for the current vblank count to correspond
* to the last vblank occurred.
@ -112,9 +137,9 @@ static void drm_reset_vblank_timestamp(struct drm_device *dev, unsigned int pipe
* when drm_vblank_enable() applies the diff
*/
do {
cur_vblank = dev->driver->get_vblank_counter(dev, pipe);
cur_vblank = __get_vblank_counter(dev, pipe);
rc = drm_get_last_vbltimestamp(dev, pipe, &t_vblank, 0);
} while (cur_vblank != dev->driver->get_vblank_counter(dev, pipe) && --count > 0);
} while (cur_vblank != __get_vblank_counter(dev, pipe) && --count > 0);

/*
* Only reinitialize corresponding vblank timestamp if high-precision query
@ -168,9 +193,9 @@ static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe,
* corresponding vblank timestamp.
*/
do {
cur_vblank = dev->driver->get_vblank_counter(dev, pipe);
cur_vblank = __get_vblank_counter(dev, pipe);
rc = drm_get_last_vbltimestamp(dev, pipe, &t_vblank, flags);
} while (cur_vblank != dev->driver->get_vblank_counter(dev, pipe) && --count > 0);
} while (cur_vblank != __get_vblank_counter(dev, pipe) && --count > 0);

if (dev->max_vblank_count != 0) {
/* trust the hw counter when it's around */
@ -275,6 +300,20 @@ u32 drm_accurate_vblank_count(struct drm_crtc *crtc)
}
EXPORT_SYMBOL(drm_accurate_vblank_count);

static void __disable_vblank(struct drm_device *dev, unsigned int pipe)
{
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe);

if (crtc->funcs->disable_vblank) {
crtc->funcs->disable_vblank(crtc);
return;
}
}

dev->driver->disable_vblank(dev, pipe);
}

/*
* Disable vblank irq's on crtc, make sure that last vblank count
* of hardware and corresponding consistent software vblank counter
@ -298,7 +337,7 @@ static void vblank_disable_and_save(struct drm_device *dev, unsigned int pipe)
* hardware potentially runtime suspended.
*/
if (vblank->enabled) {
dev->driver->disable_vblank(dev, pipe);
__disable_vblank(dev, pipe);
vblank->enabled = false;
}

@ -1027,6 +1066,18 @@ void drm_crtc_send_vblank_event(struct drm_crtc *crtc,
}
EXPORT_SYMBOL(drm_crtc_send_vblank_event);

static int __enable_vblank(struct drm_device *dev, unsigned int pipe)
{
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe);

if (crtc->funcs->enable_vblank)
return crtc->funcs->enable_vblank(crtc);
}

return dev->driver->enable_vblank(dev, pipe);
}

/**
* drm_vblank_enable - enable the vblank interrupt on a CRTC
* @dev: DRM device
@ -1052,7 +1103,7 @@ static int drm_vblank_enable(struct drm_device *dev, unsigned int pipe)
* timestamps. Filtercode in drm_handle_vblank() will
* prevent double-accounting of same vblank interval.
*/
ret = dev->driver->enable_vblank(dev, pipe);
ret = __enable_vblank(dev, pipe);
DRM_DEBUG("enabling vblank on crtc %u, ret: %d\n", pipe, ret);
if (ret)
atomic_dec(&vblank->refcount);
@ -1707,21 +1758,3 @@ bool drm_crtc_handle_vblank(struct drm_crtc *crtc)
return drm_handle_vblank(crtc->dev, drm_crtc_index(crtc));
}
EXPORT_SYMBOL(drm_crtc_handle_vblank);

/**
* drm_vblank_no_hw_counter - "No hw counter" implementation of .get_vblank_counter()
* @dev: DRM device
* @pipe: CRTC for which to read the counter
*
* Drivers can plug this into the .get_vblank_counter() function if
* there is no useable hardware frame counter available.
*
* Returns:
* 0
*/
u32 drm_vblank_no_hw_counter(struct drm_device *dev, unsigned int pipe)
{
WARN_ON_ONCE(dev->max_vblank_count != 0);
return 0;
}
EXPORT_SYMBOL(drm_vblank_no_hw_counter);

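With __enable_vblank(), __disable_vblank() and __get_vblank_counter() above, the vblank core now prefers the new per-CRTC hooks in struct drm_crtc_funcs and only falls back to the legacy struct drm_driver callbacks. A minimal driver-side sketch of wiring the hooks up; foo_crtc_* are placeholder names, while the exynos, fsl-dcu and hibmc hunks later in this series do the same conversion for real hardware:

	static int foo_crtc_enable_vblank(struct drm_crtc *crtc)
	{
		/* unmask the vblank interrupt in hardware */
		return 0;
	}

	static void foo_crtc_disable_vblank(struct drm_crtc *crtc)
	{
		/* mask the vblank interrupt in hardware */
	}

	static const struct drm_crtc_funcs foo_crtc_funcs = {
		.reset = drm_atomic_helper_crtc_reset,
		.set_config = drm_atomic_helper_set_config,
		.page_flip = drm_atomic_helper_page_flip,
		.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
		.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
		.enable_vblank = foo_crtc_enable_vblank,
		.disable_vblank = foo_crtc_disable_vblank,
	};
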
@ -170,7 +170,7 @@ struct drm_mm_node *
__drm_mm_interval_first(const struct drm_mm *mm, u64 start, u64 last)
{
return drm_mm_interval_tree_iter_first((struct rb_root *)&mm->interval_tree,
start, last);
start, last) ?: (struct drm_mm_node *)&mm->head_node;
}
EXPORT_SYMBOL(__drm_mm_interval_first);

@ -139,19 +139,19 @@ int drm_mode_getresources(struct drm_device *dev, void *data,
}
card_res->count_encoders = count;

drm_connector_list_iter_get(dev, &conn_iter);
drm_connector_list_iter_begin(dev, &conn_iter);
count = 0;
connector_id = u64_to_user_ptr(card_res->connector_id_ptr);
drm_for_each_connector_iter(connector, &conn_iter) {
if (count < card_res->count_connectors &&
put_user(connector->base.id, connector_id + count)) {
drm_connector_list_iter_put(&conn_iter);
drm_connector_list_iter_end(&conn_iter);
return -EFAULT;
}
count++;
}
card_res->count_connectors = count;
drm_connector_list_iter_put(&conn_iter);
drm_connector_list_iter_end(&conn_iter);

return ret;
}
@ -184,11 +184,11 @@ void drm_mode_config_reset(struct drm_device *dev)
if (encoder->funcs->reset)
encoder->funcs->reset(encoder);

drm_connector_list_iter_get(dev, &conn_iter);
drm_connector_list_iter_begin(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter)
if (connector->funcs->reset)
connector->funcs->reset(connector);
drm_connector_list_iter_put(&conn_iter);
drm_connector_list_iter_end(&conn_iter);
}
EXPORT_SYMBOL(drm_mode_config_reset);

@ -412,20 +412,20 @@ void drm_mode_config_cleanup(struct drm_device *dev)
encoder->funcs->destroy(encoder);
}

drm_connector_list_iter_get(dev, &conn_iter);
drm_connector_list_iter_begin(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
/* drm_connector_list_iter holds an full reference to the
* current connector itself, which means it is inherently safe
* against unreferencing the current connector - but not against
* deleting it right away. */
drm_connector_unreference(connector);
drm_connector_put(connector);
}
drm_connector_list_iter_put(&conn_iter);
drm_connector_list_iter_end(&conn_iter);
if (WARN_ON(!list_empty(&dev->mode_config.connector_list))) {
drm_connector_list_iter_get(dev, &conn_iter);
drm_connector_list_iter_begin(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter)
DRM_ERROR("connector %s leaked!\n", connector->name);
drm_connector_list_iter_put(&conn_iter);
drm_connector_list_iter_end(&conn_iter);
}

list_for_each_entry_safe(property, pt, &dev->mode_config.property_list,
@ -444,7 +444,7 @@ void drm_mode_config_cleanup(struct drm_device *dev)

list_for_each_entry_safe(blob, bt, &dev->mode_config.property_blob_list,
head_global) {
drm_property_unreference_blob(blob);
drm_property_blob_put(blob);
}

/*

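The drm_mode_config.c hunks above rename the connector iterator entry points from _get()/_put() to _begin()/_end(), freeing the get/put names for reference counting. A short sketch of the iteration pattern as used in those hunks; dev is assumed to be a live drm_device and the declarations are inferred, not quoted from the diff:

	struct drm_connector_list_iter conn_iter;
	struct drm_connector *connector;

	drm_connector_list_iter_begin(dev, &conn_iter);
	drm_for_each_connector_iter(connector, &conn_iter) {
		/* the iterator holds a reference to the current connector */
	}
	drm_connector_list_iter_end(&conn_iter);
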
@ -31,11 +31,9 @@
|
||||
* Internal function to assign a slot in the object idr and optionally
|
||||
* register the object into the idr.
|
||||
*/
|
||||
int drm_mode_object_get_reg(struct drm_device *dev,
|
||||
struct drm_mode_object *obj,
|
||||
uint32_t obj_type,
|
||||
bool register_obj,
|
||||
void (*obj_free_cb)(struct kref *kref))
|
||||
int __drm_mode_object_add(struct drm_device *dev, struct drm_mode_object *obj,
|
||||
uint32_t obj_type, bool register_obj,
|
||||
void (*obj_free_cb)(struct kref *kref))
|
||||
{
|
||||
int ret;
|
||||
|
||||
@ -59,23 +57,21 @@ int drm_mode_object_get_reg(struct drm_device *dev,
|
||||
}
|
||||
|
||||
/**
|
||||
* drm_mode_object_get - allocate a new modeset identifier
|
||||
* drm_mode_object_add - allocate a new modeset identifier
|
||||
* @dev: DRM device
|
||||
* @obj: object pointer, used to generate unique ID
|
||||
* @obj_type: object type
|
||||
*
|
||||
* Create a unique identifier based on @ptr in @dev's identifier space. Used
|
||||
* for tracking modes, CRTCs and connectors. Note that despite the _get postfix
|
||||
* modeset identifiers are _not_ reference counted. Hence don't use this for
|
||||
* reference counted modeset objects like framebuffers.
|
||||
* for tracking modes, CRTCs and connectors.
|
||||
*
|
||||
* Returns:
|
||||
* Zero on success, error code on failure.
|
||||
*/
|
||||
int drm_mode_object_get(struct drm_device *dev,
|
||||
int drm_mode_object_add(struct drm_device *dev,
|
||||
struct drm_mode_object *obj, uint32_t obj_type)
|
||||
{
|
||||
return drm_mode_object_get_reg(dev, obj, obj_type, true, NULL);
|
||||
return __drm_mode_object_add(dev, obj, obj_type, true, NULL);
|
||||
}
|
||||
|
||||
void drm_mode_object_register(struct drm_device *dev,
|
||||
@ -137,7 +133,7 @@ struct drm_mode_object *__drm_mode_object_find(struct drm_device *dev,
|
||||
*
|
||||
* This function is used to look up a modeset object. It will acquire a
|
||||
* reference for reference counted objects. This reference must be dropped again
|
||||
* by callind drm_mode_object_unreference().
|
||||
* by callind drm_mode_object_put().
|
||||
*/
|
||||
struct drm_mode_object *drm_mode_object_find(struct drm_device *dev,
|
||||
uint32_t id, uint32_t type)
|
||||
@ -150,38 +146,38 @@ struct drm_mode_object *drm_mode_object_find(struct drm_device *dev,
|
||||
EXPORT_SYMBOL(drm_mode_object_find);
|
||||
|
||||
/**
|
||||
* drm_mode_object_unreference - decr the object refcnt
|
||||
* @obj: mode_object
|
||||
* drm_mode_object_put - release a mode object reference
|
||||
* @obj: DRM mode object
|
||||
*
|
||||
* This function decrements the object's refcount if it is a refcounted modeset
|
||||
* object. It is a no-op on any other object. This is used to drop references
|
||||
* acquired with drm_mode_object_reference().
|
||||
* acquired with drm_mode_object_get().
|
||||
*/
|
||||
void drm_mode_object_unreference(struct drm_mode_object *obj)
|
||||
void drm_mode_object_put(struct drm_mode_object *obj)
|
||||
{
|
||||
if (obj->free_cb) {
|
||||
DRM_DEBUG("OBJ ID: %d (%d)\n", obj->id, kref_read(&obj->refcount));
|
||||
kref_put(&obj->refcount, obj->free_cb);
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL(drm_mode_object_unreference);
|
||||
EXPORT_SYMBOL(drm_mode_object_put);
|
||||
|
||||
/**
|
||||
* drm_mode_object_reference - incr the object refcnt
|
||||
* @obj: mode_object
|
||||
* drm_mode_object_get - acquire a mode object reference
|
||||
* @obj: DRM mode object
|
||||
*
|
||||
* This function increments the object's refcount if it is a refcounted modeset
|
||||
* object. It is a no-op on any other object. References should be dropped again
|
||||
* by calling drm_mode_object_unreference().
|
||||
* by calling drm_mode_object_put().
|
||||
*/
|
||||
void drm_mode_object_reference(struct drm_mode_object *obj)
|
||||
void drm_mode_object_get(struct drm_mode_object *obj)
|
||||
{
|
||||
if (obj->free_cb) {
|
||||
DRM_DEBUG("OBJ ID: %d (%d)\n", obj->id, kref_read(&obj->refcount));
|
||||
kref_get(&obj->refcount);
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL(drm_mode_object_reference);
|
||||
EXPORT_SYMBOL(drm_mode_object_get);
|
||||
|
||||
/**
|
||||
* drm_object_attach_property - attach a property to a modeset object
|
||||
@ -367,7 +363,7 @@ int drm_mode_obj_get_properties_ioctl(struct drm_device *dev, void *data,
|
||||
&arg->count_props);
|
||||
|
||||
out_unref:
|
||||
drm_mode_object_unreference(obj);
|
||||
drm_mode_object_put(obj);
|
||||
out:
|
||||
drm_modeset_unlock_all(dev);
|
||||
return ret;
|
||||
@ -432,7 +428,7 @@ int drm_mode_obj_set_property_ioctl(struct drm_device *dev, void *data,
|
||||
drm_property_change_valid_put(property, ref);
|
||||
|
||||
out_unref:
|
||||
drm_mode_object_unreference(arg_obj);
|
||||
drm_mode_object_put(arg_obj);
|
||||
out:
|
||||
drm_modeset_unlock_all(dev);
|
||||
return ret;
|
||||
|
@ -71,7 +71,7 @@ struct drm_display_mode *drm_mode_create(struct drm_device *dev)
|
||||
if (!nmode)
|
||||
return NULL;
|
||||
|
||||
if (drm_mode_object_get(dev, &nmode->base, DRM_MODE_OBJECT_MODE)) {
|
||||
if (drm_mode_object_add(dev, &nmode->base, DRM_MODE_OBJECT_MODE)) {
|
||||
kfree(nmode);
|
||||
return NULL;
|
||||
}
|
||||
|
@ -88,7 +88,7 @@ int drm_universal_plane_init(struct drm_device *dev, struct drm_plane *plane,
|
||||
struct drm_mode_config *config = &dev->mode_config;
|
||||
int ret;
|
||||
|
||||
ret = drm_mode_object_get(dev, &plane->base, DRM_MODE_OBJECT_PLANE);
|
||||
ret = drm_mode_object_add(dev, &plane->base, DRM_MODE_OBJECT_PLANE);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@ -293,7 +293,7 @@ void drm_plane_force_disable(struct drm_plane *plane)
|
||||
return;
|
||||
}
|
||||
/* disconnect the plane from the fb and crtc: */
|
||||
drm_framebuffer_unreference(plane->old_fb);
|
||||
drm_framebuffer_put(plane->old_fb);
|
||||
plane->old_fb = NULL;
|
||||
plane->fb = NULL;
|
||||
plane->crtc = NULL;
|
||||
@ -520,9 +520,9 @@ static int __setplane_internal(struct drm_plane *plane,
|
||||
|
||||
out:
|
||||
if (fb)
|
||||
drm_framebuffer_unreference(fb);
|
||||
drm_framebuffer_put(fb);
|
||||
if (plane->old_fb)
|
||||
drm_framebuffer_unreference(plane->old_fb);
|
||||
drm_framebuffer_put(plane->old_fb);
|
||||
plane->old_fb = NULL;
|
||||
|
||||
return ret;
|
||||
@ -638,7 +638,7 @@ static int drm_mode_cursor_universal(struct drm_crtc *crtc,
|
||||
} else {
|
||||
fb = crtc->cursor->fb;
|
||||
if (fb)
|
||||
drm_framebuffer_reference(fb);
|
||||
drm_framebuffer_get(fb);
|
||||
}
|
||||
|
||||
if (req->flags & DRM_MODE_CURSOR_MOVE) {
|
||||
@ -902,9 +902,9 @@ out:
|
||||
if (ret && crtc->funcs->page_flip_target)
|
||||
drm_crtc_vblank_put(crtc);
|
||||
if (fb)
|
||||
drm_framebuffer_unreference(fb);
|
||||
drm_framebuffer_put(fb);
|
||||
if (crtc->primary->old_fb)
|
||||
drm_framebuffer_unreference(crtc->primary->old_fb);
|
||||
drm_framebuffer_put(crtc->primary->old_fb);
|
||||
crtc->primary->old_fb = NULL;
|
||||
drm_modeset_unlock_crtc(crtc);
|
||||
|
||||
|
@ -85,7 +85,7 @@ static int get_connectors_for_crtc(struct drm_crtc *crtc,
|
||||
*/
|
||||
WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex));
|
||||
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter) {
|
||||
if (connector->encoder && connector->encoder->crtc == crtc) {
|
||||
if (connector_list != NULL && count < num_connectors)
|
||||
@ -94,7 +94,7 @@ static int get_connectors_for_crtc(struct drm_crtc *crtc,
|
||||
count++;
|
||||
}
|
||||
}
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
|
||||
return count;
|
||||
}
|
||||
@ -450,8 +450,7 @@ int drm_plane_helper_commit(struct drm_plane *plane,
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (plane_funcs->prepare_fb && plane_state->fb &&
|
||||
plane_state->fb != old_fb) {
|
||||
if (plane_funcs->prepare_fb && plane_state->fb != old_fb) {
|
||||
ret = plane_funcs->prepare_fb(plane,
|
||||
plane_state);
|
||||
if (ret)
|
||||
|
@ -318,7 +318,7 @@ struct dma_buf *drm_gem_dmabuf_export(struct drm_device *dev,
|
||||
return dma_buf;
|
||||
|
||||
drm_dev_ref(dev);
|
||||
drm_gem_object_reference(exp_info->priv);
|
||||
drm_gem_object_get(exp_info->priv);
|
||||
|
||||
return dma_buf;
|
||||
}
|
||||
@ -339,7 +339,7 @@ void drm_gem_dmabuf_release(struct dma_buf *dma_buf)
|
||||
struct drm_device *dev = obj->dev;
|
||||
|
||||
/* drop the reference on the export fd holds */
|
||||
drm_gem_object_unreference_unlocked(obj);
|
||||
drm_gem_object_put_unlocked(obj);
|
||||
|
||||
drm_dev_unref(dev);
|
||||
}
|
||||
@ -585,7 +585,7 @@ out_have_handle:
|
||||
fail_put_dmabuf:
|
||||
dma_buf_put(dmabuf);
|
||||
out:
|
||||
drm_gem_object_unreference_unlocked(obj);
|
||||
drm_gem_object_put_unlocked(obj);
|
||||
out_unlock:
|
||||
mutex_unlock(&file_priv->prime.lock);
|
||||
|
||||
@ -616,7 +616,7 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
|
||||
* Importing dmabuf exported from out own gem increases
|
||||
* refcount on gem itself instead of f_count of dmabuf.
|
||||
*/
|
||||
drm_gem_object_reference(obj);
|
||||
drm_gem_object_get(obj);
|
||||
return obj;
|
||||
}
|
||||
}
|
||||
@ -704,7 +704,7 @@ int drm_gem_prime_fd_to_handle(struct drm_device *dev,
|
||||
|
||||
/* _handle_create_tail unconditionally unlocks dev->object_name_lock. */
|
||||
ret = drm_gem_handle_create_tail(file_priv, obj, handle);
|
||||
drm_gem_object_unreference_unlocked(obj);
|
||||
drm_gem_object_put_unlocked(obj);
|
||||
if (ret)
|
||||
goto out_put;
|
||||
|
||||
|
@ -36,7 +36,7 @@ EXPORT_SYMBOL(__drm_printfn_seq_file);
|
||||
|
||||
void __drm_printfn_info(struct drm_printer *p, struct va_format *vaf)
|
||||
{
|
||||
dev_printk(KERN_INFO, p->arg, "[" DRM_NAME "] %pV", vaf);
|
||||
dev_info(p->arg, "[" DRM_NAME "] %pV", vaf);
|
||||
}
|
||||
EXPORT_SYMBOL(__drm_printfn_info);
|
||||
|
||||
|
@ -140,13 +140,13 @@ void drm_kms_helper_poll_enable(struct drm_device *dev)
|
||||
if (!dev->mode_config.poll_enabled || !drm_kms_helper_poll)
|
||||
return;
|
||||
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter) {
|
||||
if (connector->polled & (DRM_CONNECTOR_POLL_CONNECT |
|
||||
DRM_CONNECTOR_POLL_DISCONNECT))
|
||||
poll = true;
|
||||
}
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
|
||||
if (dev->mode_config.delayed_event) {
|
||||
/*
|
||||
@ -311,7 +311,13 @@ int drm_helper_probe_single_connector_modes(struct drm_connector *connector,
|
||||
count = drm_add_edid_modes(connector, edid);
|
||||
drm_edid_to_eld(connector, edid);
|
||||
} else {
|
||||
count = drm_load_edid_firmware(connector);
|
||||
struct edid *edid = drm_load_edid_firmware(connector);
|
||||
if (!IS_ERR_OR_NULL(edid)) {
|
||||
drm_mode_connector_update_edid_property(connector, edid);
|
||||
count = drm_add_edid_modes(connector, edid);
|
||||
drm_edid_to_eld(connector, edid);
|
||||
kfree(edid);
|
||||
}
|
||||
if (count == 0)
|
||||
count = (*connector_funcs->get_modes)(connector);
|
||||
}
|
||||
@ -414,7 +420,7 @@ static void output_poll_execute(struct work_struct *work)
|
||||
goto out;
|
||||
}
|
||||
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter) {
|
||||
/* Ignore forced connectors. */
|
||||
if (connector->force)
|
||||
@ -468,7 +474,7 @@ static void output_poll_execute(struct work_struct *work)
|
||||
changed = true;
|
||||
}
|
||||
}
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
|
||||
mutex_unlock(&dev->mode_config.mutex);
|
||||
|
||||
@ -574,7 +580,7 @@ bool drm_helper_hpd_irq_event(struct drm_device *dev)
|
||||
return false;
|
||||
|
||||
mutex_lock(&dev->mode_config.mutex);
|
||||
drm_connector_list_iter_get(dev, &conn_iter);
|
||||
drm_connector_list_iter_begin(dev, &conn_iter);
|
||||
drm_for_each_connector_iter(connector, &conn_iter) {
|
||||
/* Only handle HPD capable connectors. */
|
||||
if (!(connector->polled & DRM_CONNECTOR_POLL_HPD))
|
||||
@ -591,7 +597,7 @@ bool drm_helper_hpd_irq_event(struct drm_device *dev)
|
||||
if (old_status != connector->status)
|
||||
changed = true;
|
||||
}
|
||||
drm_connector_list_iter_put(&conn_iter);
|
||||
drm_connector_list_iter_end(&conn_iter);
|
||||
mutex_unlock(&dev->mode_config.mutex);
|
||||
|
||||
if (changed)
|
||||
|
@ -91,7 +91,7 @@ struct drm_property *drm_property_create(struct drm_device *dev, int flags,
|
||||
goto fail;
|
||||
}
|
||||
|
||||
ret = drm_mode_object_get(dev, &property->base, DRM_MODE_OBJECT_PROPERTY);
|
||||
ret = drm_mode_object_add(dev, &property->base, DRM_MODE_OBJECT_PROPERTY);
|
||||
if (ret)
|
||||
goto fail;
|
||||
|
||||
@ -570,8 +570,8 @@ drm_property_create_blob(struct drm_device *dev, size_t length,
|
||||
if (data)
|
||||
memcpy(blob->data, data, length);
|
||||
|
||||
ret = drm_mode_object_get_reg(dev, &blob->base, DRM_MODE_OBJECT_BLOB,
|
||||
true, drm_property_free_blob);
|
||||
ret = __drm_mode_object_add(dev, &blob->base, DRM_MODE_OBJECT_BLOB,
|
||||
true, drm_property_free_blob);
|
||||
if (ret) {
|
||||
kfree(blob);
|
||||
return ERR_PTR(-EINVAL);
|
||||
@ -587,19 +587,19 @@ drm_property_create_blob(struct drm_device *dev, size_t length,
|
||||
EXPORT_SYMBOL(drm_property_create_blob);
|
||||
|
||||
/**
|
||||
* drm_property_unreference_blob - Unreference a blob property
|
||||
* @blob: Pointer to blob property
|
||||
* drm_property_blob_put - release a blob property reference
|
||||
* @blob: DRM blob property
|
||||
*
|
||||
* Drop a reference on a blob property. May free the object.
|
||||
* Releases a reference to a blob property. May free the object.
|
||||
*/
|
||||
void drm_property_unreference_blob(struct drm_property_blob *blob)
|
||||
void drm_property_blob_put(struct drm_property_blob *blob)
|
||||
{
|
||||
if (!blob)
|
||||
return;
|
||||
|
||||
drm_mode_object_unreference(&blob->base);
|
||||
drm_mode_object_put(&blob->base);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_property_unreference_blob);
|
||||
EXPORT_SYMBOL(drm_property_blob_put);
|
||||
|
||||
void drm_property_destroy_user_blobs(struct drm_device *dev,
|
||||
struct drm_file *file_priv)
|
||||
@ -612,23 +612,23 @@ void drm_property_destroy_user_blobs(struct drm_device *dev,
|
||||
*/
|
||||
list_for_each_entry_safe(blob, bt, &file_priv->blobs, head_file) {
|
||||
list_del_init(&blob->head_file);
|
||||
drm_property_unreference_blob(blob);
|
||||
drm_property_blob_put(blob);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* drm_property_reference_blob - Take a reference on an existing property
|
||||
* @blob: Pointer to blob property
|
||||
* drm_property_blob_get - acquire blob property reference
|
||||
* @blob: DRM blob property
|
||||
*
|
||||
* Take a new reference on an existing blob property. Returns @blob, which
|
||||
* Acquires a reference to an existing blob property. Returns @blob, which
|
||||
* allows this to be used as a shorthand in assignments.
|
||||
*/
|
||||
struct drm_property_blob *drm_property_reference_blob(struct drm_property_blob *blob)
|
||||
struct drm_property_blob *drm_property_blob_get(struct drm_property_blob *blob)
|
||||
{
|
||||
drm_mode_object_reference(&blob->base);
|
||||
drm_mode_object_get(&blob->base);
|
||||
return blob;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_property_reference_blob);
|
||||
EXPORT_SYMBOL(drm_property_blob_get);
|
||||
|
||||
/**
|
||||
* drm_property_lookup_blob - look up a blob property and take a reference
|
||||
@ -637,7 +637,7 @@ EXPORT_SYMBOL(drm_property_reference_blob);
|
||||
*
|
||||
* If successful, this takes an additional reference to the blob property.
|
||||
* callers need to make sure to eventually unreference the returned property
|
||||
* again, using @drm_property_unreference_blob.
|
||||
* again, using drm_property_blob_put().
|
||||
*
|
||||
* Return:
|
||||
* NULL on failure, pointer to the blob on success.
|
||||
@ -712,13 +712,13 @@ int drm_property_replace_global_blob(struct drm_device *dev,
|
||||
goto err_created;
|
||||
}
|
||||
|
||||
drm_property_unreference_blob(old_blob);
|
||||
drm_property_blob_put(old_blob);
|
||||
*replace = new_blob;
|
||||
|
||||
return 0;
|
||||
|
||||
err_created:
|
||||
drm_property_unreference_blob(new_blob);
|
||||
drm_property_blob_put(new_blob);
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_property_replace_global_blob);
|
||||
@ -747,7 +747,7 @@ int drm_mode_getblob_ioctl(struct drm_device *dev,
|
||||
}
|
||||
out_resp->length = blob->length;
|
||||
unref:
|
||||
drm_property_unreference_blob(blob);
|
||||
drm_property_blob_put(blob);
|
||||
|
||||
return ret;
|
||||
}
|
||||
@ -784,7 +784,7 @@ int drm_mode_createblob_ioctl(struct drm_device *dev,
|
||||
return 0;
|
||||
|
||||
out_blob:
|
||||
drm_property_unreference_blob(blob);
|
||||
drm_property_blob_put(blob);
|
||||
return ret;
|
||||
}
|
||||
|
||||
@ -823,14 +823,14 @@ int drm_mode_destroyblob_ioctl(struct drm_device *dev,
|
||||
mutex_unlock(&dev->mode_config.blob_lock);
|
||||
|
||||
/* One reference from lookup, and one from the filp. */
|
||||
drm_property_unreference_blob(blob);
|
||||
drm_property_unreference_blob(blob);
|
||||
drm_property_blob_put(blob);
|
||||
drm_property_blob_put(blob);
|
||||
|
||||
return 0;
|
||||
|
||||
err:
|
||||
mutex_unlock(&dev->mode_config.blob_lock);
|
||||
drm_property_unreference_blob(blob);
|
||||
drm_property_blob_put(blob);
|
||||
|
||||
return ret;
|
||||
}
|
||||
@ -906,7 +906,7 @@ void drm_property_change_valid_put(struct drm_property *property,
|
||||
return;
|
||||
|
||||
if (drm_property_type_is(property, DRM_MODE_PROP_OBJECT)) {
|
||||
drm_mode_object_unreference(ref);
|
||||
drm_mode_object_put(ref);
|
||||
} else if (drm_property_type_is(property, DRM_MODE_PROP_BLOB))
|
||||
drm_property_unreference_blob(obj_to_blob(ref));
|
||||
drm_property_blob_put(obj_to_blob(ref));
|
||||
}
|
||||
|
@ -122,6 +122,24 @@ static void exynos_drm_crtc_destroy(struct drm_crtc *crtc)
|
||||
kfree(exynos_crtc);
|
||||
}
|
||||
|
||||
static int exynos_drm_crtc_enable_vblank(struct drm_crtc *crtc)
|
||||
{
|
||||
struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
|
||||
|
||||
if (exynos_crtc->ops->enable_vblank)
|
||||
return exynos_crtc->ops->enable_vblank(exynos_crtc);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void exynos_drm_crtc_disable_vblank(struct drm_crtc *crtc)
|
||||
{
|
||||
struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
|
||||
|
||||
if (exynos_crtc->ops->disable_vblank)
|
||||
exynos_crtc->ops->disable_vblank(exynos_crtc);
|
||||
}
|
||||
|
||||
static const struct drm_crtc_funcs exynos_crtc_funcs = {
|
||||
.set_config = drm_atomic_helper_set_config,
|
||||
.page_flip = drm_atomic_helper_page_flip,
|
||||
@ -129,6 +147,8 @@ static const struct drm_crtc_funcs exynos_crtc_funcs = {
|
||||
.reset = drm_atomic_helper_crtc_reset,
|
||||
.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
|
||||
.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
|
||||
.enable_vblank = exynos_drm_crtc_enable_vblank,
|
||||
.disable_vblank = exynos_drm_crtc_disable_vblank,
|
||||
};
|
||||
|
||||
struct exynos_drm_crtc *exynos_drm_crtc_create(struct drm_device *drm_dev,
|
||||
@ -168,26 +188,6 @@ err_crtc:
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
|
||||
int exynos_drm_crtc_enable_vblank(struct drm_device *dev, unsigned int pipe)
|
||||
{
|
||||
struct exynos_drm_crtc *exynos_crtc = exynos_drm_crtc_from_pipe(dev,
|
||||
pipe);
|
||||
|
||||
if (exynos_crtc->ops->enable_vblank)
|
||||
return exynos_crtc->ops->enable_vblank(exynos_crtc);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
void exynos_drm_crtc_disable_vblank(struct drm_device *dev, unsigned int pipe)
|
||||
{
|
||||
struct exynos_drm_crtc *exynos_crtc = exynos_drm_crtc_from_pipe(dev,
|
||||
pipe);
|
||||
|
||||
if (exynos_crtc->ops->disable_vblank)
|
||||
exynos_crtc->ops->disable_vblank(exynos_crtc);
|
||||
}
|
||||
|
||||
int exynos_drm_crtc_get_pipe_from_type(struct drm_device *drm_dev,
|
||||
enum exynos_drm_output_type out_type)
|
||||
{
|
||||
|
@ -23,8 +23,6 @@ struct exynos_drm_crtc *exynos_drm_crtc_create(struct drm_device *drm_dev,
|
||||
enum exynos_drm_output_type type,
|
||||
const struct exynos_drm_crtc_ops *ops,
|
||||
void *context);
|
||||
int exynos_drm_crtc_enable_vblank(struct drm_device *dev, unsigned int pipe);
|
||||
void exynos_drm_crtc_disable_vblank(struct drm_device *dev, unsigned int pipe);
|
||||
void exynos_drm_crtc_wait_pending_update(struct exynos_drm_crtc *exynos_crtc);
|
||||
void exynos_drm_crtc_finish_update(struct exynos_drm_crtc *exynos_crtc,
|
||||
struct exynos_drm_plane *exynos_plane);
|
||||
|
@ -22,7 +22,6 @@
|
||||
#include <drm/exynos_drm.h>
|
||||
|
||||
#include "exynos_drm_drv.h"
|
||||
#include "exynos_drm_crtc.h"
|
||||
#include "exynos_drm_fbdev.h"
|
||||
#include "exynos_drm_fb.h"
|
||||
#include "exynos_drm_gem.h"
|
||||
@ -263,9 +262,6 @@ static struct drm_driver exynos_drm_driver = {
|
||||
.preclose = exynos_drm_preclose,
|
||||
.lastclose = exynos_drm_lastclose,
|
||||
.postclose = exynos_drm_postclose,
|
||||
.get_vblank_counter = drm_vblank_no_hw_counter,
|
||||
.enable_vblank = exynos_drm_crtc_enable_vblank,
|
||||
.disable_vblank = exynos_drm_crtc_disable_vblank,
|
||||
.gem_free_object_unlocked = exynos_drm_gem_free_object,
|
||||
.gem_vm_ops = &exynos_drm_gem_vm_ops,
|
||||
.dumb_create = exynos_drm_gem_dumb_create,
|
||||
|
@ -222,14 +222,6 @@ struct exynos_drm_private {
|
||||
wait_queue_head_t wait;
|
||||
};
|
||||
|
||||
static inline struct exynos_drm_crtc *
|
||||
exynos_drm_crtc_from_pipe(struct drm_device *dev, int pipe)
|
||||
{
|
||||
struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe);
|
||||
|
||||
return to_exynos_crtc(crtc);
|
||||
}
|
||||
|
||||
static inline struct device *to_dma_dev(struct drm_device *dev)
|
||||
{
|
||||
struct exynos_drm_private *priv = dev->dev_private;
|
||||
|
@ -99,7 +99,6 @@ static int exynos_drm_fbdev_update(struct drm_fb_helper *helper,
|
||||
VM_MAP, pgprot_writecombine(PAGE_KERNEL));
|
||||
if (!exynos_gem->kvaddr) {
|
||||
DRM_ERROR("failed to map pages to kernel space.\n");
|
||||
drm_fb_helper_release_fbi(helper);
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
@ -272,7 +271,6 @@ static void exynos_drm_fbdev_destroy(struct drm_device *dev,
|
||||
}
|
||||
|
||||
drm_fb_helper_unregister_fbi(fb_helper);
|
||||
drm_fb_helper_release_fbi(fb_helper);
|
||||
|
||||
drm_fb_helper_fini(fb_helper);
|
||||
}
|
||||
|
@ -43,7 +43,6 @@
|
||||
|
||||
#include <drm/exynos_drm.h>
|
||||
|
||||
#include "exynos_drm_drv.h"
|
||||
#include "exynos_drm_crtc.h"
|
||||
|
||||
#define HOTPLUG_DEBOUNCE_MS 1100
|
||||
@ -1703,6 +1702,8 @@ static int hdmi_bind(struct device *dev, struct device *master, void *data)
|
||||
struct drm_device *drm_dev = data;
|
||||
struct hdmi_context *hdata = dev_get_drvdata(dev);
|
||||
struct drm_encoder *encoder = &hdata->encoder;
|
||||
struct exynos_drm_crtc *exynos_crtc;
|
||||
struct drm_crtc *crtc;
|
||||
int ret, pipe;
|
||||
|
||||
hdata->drm_dev = drm_dev;
|
||||
@ -1714,7 +1715,9 @@ static int hdmi_bind(struct device *dev, struct device *master, void *data)
|
||||
|
||||
hdata->phy_clk.enable = hdmiphy_clk_enable;
|
||||
|
||||
exynos_drm_crtc_from_pipe(drm_dev, pipe)->pipe_clk = &hdata->phy_clk;
|
||||
crtc = drm_crtc_from_index(drm_dev, pipe);
|
||||
exynos_crtc = to_exynos_crtc(crtc);
|
||||
exynos_crtc->pipe_clk = &hdata->phy_clk;
|
||||
|
||||
encoder->possible_crtcs = 1 << pipe;
|
||||
|
||||
|
@ -137,6 +137,30 @@ static const struct drm_crtc_helper_funcs fsl_dcu_drm_crtc_helper_funcs = {
|
||||
.mode_set_nofb = fsl_dcu_drm_crtc_mode_set_nofb,
|
||||
};
|
||||
|
||||
static int fsl_dcu_drm_crtc_enable_vblank(struct drm_crtc *crtc)
|
||||
{
|
||||
struct drm_device *dev = crtc->dev;
|
||||
struct fsl_dcu_drm_device *fsl_dev = dev->dev_private;
|
||||
unsigned int value;
|
||||
|
||||
regmap_read(fsl_dev->regmap, DCU_INT_MASK, &value);
|
||||
value &= ~DCU_INT_MASK_VBLANK;
|
||||
regmap_write(fsl_dev->regmap, DCU_INT_MASK, value);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void fsl_dcu_drm_crtc_disable_vblank(struct drm_crtc *crtc)
|
||||
{
|
||||
struct drm_device *dev = crtc->dev;
|
||||
struct fsl_dcu_drm_device *fsl_dev = dev->dev_private;
|
||||
unsigned int value;
|
||||
|
||||
regmap_read(fsl_dev->regmap, DCU_INT_MASK, &value);
|
||||
value |= DCU_INT_MASK_VBLANK;
|
||||
regmap_write(fsl_dev->regmap, DCU_INT_MASK, value);
|
||||
}
|
||||
|
||||
static const struct drm_crtc_funcs fsl_dcu_drm_crtc_funcs = {
|
||||
.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
|
||||
.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
|
||||
@ -144,6 +168,8 @@ static const struct drm_crtc_funcs fsl_dcu_drm_crtc_funcs = {
|
||||
.page_flip = drm_atomic_helper_page_flip,
|
||||
.reset = drm_atomic_helper_crtc_reset,
|
||||
.set_config = drm_atomic_helper_set_config,
|
||||
.enable_vblank = fsl_dcu_drm_crtc_enable_vblank,
|
||||
.disable_vblank = fsl_dcu_drm_crtc_disable_vblank,
|
||||
};
|
||||
|
||||
int fsl_dcu_drm_crtc_create(struct fsl_dcu_drm_device *fsl_dev)
|
||||
|
@ -154,29 +154,6 @@ static irqreturn_t fsl_dcu_drm_irq(int irq, void *arg)
|
||||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
static int fsl_dcu_drm_enable_vblank(struct drm_device *dev, unsigned int pipe)
|
||||
{
|
||||
struct fsl_dcu_drm_device *fsl_dev = dev->dev_private;
|
||||
unsigned int value;
|
||||
|
||||
regmap_read(fsl_dev->regmap, DCU_INT_MASK, &value);
|
||||
value &= ~DCU_INT_MASK_VBLANK;
|
||||
regmap_write(fsl_dev->regmap, DCU_INT_MASK, value);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void fsl_dcu_drm_disable_vblank(struct drm_device *dev,
|
||||
unsigned int pipe)
|
||||
{
|
||||
struct fsl_dcu_drm_device *fsl_dev = dev->dev_private;
|
||||
unsigned int value;
|
||||
|
||||
regmap_read(fsl_dev->regmap, DCU_INT_MASK, &value);
|
||||
value |= DCU_INT_MASK_VBLANK;
|
||||
regmap_write(fsl_dev->regmap, DCU_INT_MASK, value);
|
||||
}
|
||||
|
||||
static void fsl_dcu_drm_lastclose(struct drm_device *dev)
|
||||
{
|
||||
struct fsl_dcu_drm_device *fsl_dev = dev->dev_private;
|
||||
@ -203,9 +180,6 @@ static struct drm_driver fsl_dcu_drm_driver = {
|
||||
.load = fsl_dcu_load,
|
||||
.unload = fsl_dcu_unload,
|
||||
.irq_handler = fsl_dcu_drm_irq,
|
||||
.get_vblank_counter = drm_vblank_no_hw_counter,
|
||||
.enable_vblank = fsl_dcu_drm_enable_vblank,
|
||||
.disable_vblank = fsl_dcu_drm_disable_vblank,
|
||||
.gem_free_object_unlocked = drm_gem_cma_free_object,
|
||||
.gem_vm_ops = &drm_gem_cma_vm_ops,
|
||||
.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
|
||||
|
@ -284,8 +284,7 @@ static bool cdv_intel_lvds_mode_fixup(struct drm_encoder *encoder,
|
||||
head) {
|
||||
if (tmp_encoder != encoder
|
||||
&& tmp_encoder->crtc == encoder->crtc) {
|
||||
printk(KERN_ERR "Can't enable LVDS and another "
|
||||
"encoder on the same pipe\n");
|
||||
pr_err("Can't enable LVDS and another encoder on the same pipe\n");
|
||||
return false;
|
||||
}
|
||||
}
|
||||
@ -756,13 +755,13 @@ out:
|
||||
|
||||
failed_find:
|
||||
mutex_unlock(&dev->mode_config.mutex);
|
||||
printk(KERN_ERR "Failed find\n");
|
||||
pr_err("Failed find\n");
|
||||
psb_intel_i2c_destroy(gma_encoder->ddc_bus);
|
||||
failed_ddc:
|
||||
printk(KERN_ERR "Failed DDC\n");
|
||||
pr_err("Failed DDC\n");
|
||||
psb_intel_i2c_destroy(gma_encoder->i2c_bus);
|
||||
failed_blc_i2c:
|
||||
printk(KERN_ERR "Failed BLC\n");
|
||||
pr_err("Failed BLC\n");
|
||||
drm_encoder_cleanup(encoder);
|
||||
drm_connector_cleanup(connector);
|
||||
kfree(lvds_priv);
|
||||
|
@ -393,7 +393,7 @@ static int psbfb_create(struct psb_fbdev *fbdev,
|
||||
info = drm_fb_helper_alloc_fbi(&fbdev->psb_fb_helper);
|
||||
if (IS_ERR(info)) {
|
||||
ret = PTR_ERR(info);
|
||||
goto err_free_range;
|
||||
goto out;
|
||||
}
|
||||
info->par = fbdev;
|
||||
|
||||
@ -401,7 +401,7 @@ static int psbfb_create(struct psb_fbdev *fbdev,
|
||||
|
||||
ret = psb_framebuffer_init(dev, psbfb, &mode_cmd, backing);
|
||||
if (ret)
|
||||
goto err_release;
|
||||
goto out;
|
||||
|
||||
fb = &psbfb->base;
|
||||
psbfb->fbdev = info;
|
||||
@ -446,9 +446,7 @@ static int psbfb_create(struct psb_fbdev *fbdev,
|
||||
psbfb->base.width, psbfb->base.height);
|
||||
|
||||
return 0;
|
||||
err_release:
|
||||
drm_fb_helper_release_fbi(&fbdev->psb_fb_helper);
|
||||
err_free_range:
|
||||
out:
|
||||
psb_gtt_free_range(dev, backing);
|
||||
return ret;
|
||||
}
|
||||
@ -537,7 +535,6 @@ static int psb_fbdev_destroy(struct drm_device *dev, struct psb_fbdev *fbdev)
|
||||
struct psb_framebuffer *psbfb = &fbdev->pfb;
|
||||
|
||||
drm_fb_helper_unregister_fbi(&fbdev->psb_fb_helper);
|
||||
drm_fb_helper_release_fbi(&fbdev->psb_fb_helper);
|
||||
|
||||
drm_fb_helper_fini(&fbdev->psb_fb_helper);
|
||||
drm_framebuffer_unregister_private(&psbfb->base);
|
||||
|
@ -255,15 +255,15 @@ static void oaktrail_lvds_get_configuration_mode(struct drm_device *dev,
|
||||
((ti->vblank_hi << 8) | ti->vblank_lo);
|
||||
mode->clock = ti->pixel_clock * 10;
|
||||
#if 0
|
||||
printk(KERN_INFO "hdisplay is %d\n", mode->hdisplay);
|
||||
printk(KERN_INFO "vdisplay is %d\n", mode->vdisplay);
|
||||
printk(KERN_INFO "HSS is %d\n", mode->hsync_start);
|
||||
printk(KERN_INFO "HSE is %d\n", mode->hsync_end);
|
||||
printk(KERN_INFO "htotal is %d\n", mode->htotal);
|
||||
printk(KERN_INFO "VSS is %d\n", mode->vsync_start);
|
||||
printk(KERN_INFO "VSE is %d\n", mode->vsync_end);
|
||||
printk(KERN_INFO "vtotal is %d\n", mode->vtotal);
|
||||
printk(KERN_INFO "clock is %d\n", mode->clock);
|
||||
pr_info("hdisplay is %d\n", mode->hdisplay);
|
||||
pr_info("vdisplay is %d\n", mode->vdisplay);
|
||||
pr_info("HSS is %d\n", mode->hsync_start);
|
||||
pr_info("HSE is %d\n", mode->hsync_end);
|
||||
pr_info("htotal is %d\n", mode->htotal);
|
||||
pr_info("VSS is %d\n", mode->vsync_start);
|
||||
pr_info("VSE is %d\n", mode->vsync_end);
|
||||
pr_info("vtotal is %d\n", mode->vtotal);
|
||||
pr_info("clock is %d\n", mode->clock);
|
||||
#endif
|
||||
mode_dev->panel_fixed_mode = mode;
|
||||
}
|
||||
|
@ -905,9 +905,8 @@ static inline void REGISTER_WRITE8(struct drm_device *dev,
|
||||
#define PSB_RSGX32(_offs) \
|
||||
({ \
|
||||
if (inl(dev_priv->apm_base + PSB_APM_STS) & 0x3) { \
|
||||
printk(KERN_ERR \
|
||||
"access sgx when it's off!! (READ) %s, %d\n", \
|
||||
__FILE__, __LINE__); \
|
||||
pr_err("access sgx when it's off!! (READ) %s, %d\n", \
|
||||
__FILE__, __LINE__); \
|
||||
melay(1000); \
|
||||
} \
|
||||
ioread32(dev_priv->sgx_reg + (_offs)); \
|
||||
|
@ -388,11 +388,11 @@ bool psb_intel_lvds_mode_fixup(struct drm_encoder *encoder,
|
||||
|
||||
/* PSB requires the LVDS is on pipe B, MRST has only one pipe anyway */
|
||||
if (!IS_MRST(dev) && gma_crtc->pipe == 0) {
|
||||
printk(KERN_ERR "Can't support LVDS on pipe A\n");
|
||||
pr_err("Can't support LVDS on pipe A\n");
|
||||
return false;
|
||||
}
|
||||
if (IS_MRST(dev) && gma_crtc->pipe != 0) {
|
||||
printk(KERN_ERR "Must use PIPE A\n");
|
||||
pr_err("Must use PIPE A\n");
|
||||
return false;
|
||||
}
|
||||
/* Should never happen!! */
|
||||
@ -400,8 +400,7 @@ bool psb_intel_lvds_mode_fixup(struct drm_encoder *encoder,
|
||||
head) {
|
||||
if (tmp_encoder != encoder
|
||||
&& tmp_encoder->crtc == encoder->crtc) {
|
||||
printk(KERN_ERR "Can't enable LVDS and another "
|
||||
"encoder on the same pipe\n");
|
||||
pr_err("Can't enable LVDS and another encoder on the same pipe\n");
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
@ -423,6 +423,24 @@ static void hibmc_crtc_atomic_flush(struct drm_crtc *crtc,
|
||||
spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
|
||||
}
|
||||
|
||||
static int hibmc_crtc_enable_vblank(struct drm_crtc *crtc)
|
||||
{
|
||||
struct hibmc_drm_private *priv = crtc->dev->dev_private;
|
||||
|
||||
writel(HIBMC_RAW_INTERRUPT_EN_VBLANK(1),
|
||||
priv->mmio + HIBMC_RAW_INTERRUPT_EN);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void hibmc_crtc_disable_vblank(struct drm_crtc *crtc)
|
||||
{
|
||||
struct hibmc_drm_private *priv = crtc->dev->dev_private;
|
||||
|
||||
writel(HIBMC_RAW_INTERRUPT_EN_VBLANK(0),
|
||||
priv->mmio + HIBMC_RAW_INTERRUPT_EN);
|
||||
}
|
||||
|
||||
static const struct drm_crtc_funcs hibmc_crtc_funcs = {
|
||||
.page_flip = drm_atomic_helper_page_flip,
|
||||
.set_config = drm_atomic_helper_set_config,
|
||||
@ -430,6 +448,8 @@ static const struct drm_crtc_funcs hibmc_crtc_funcs = {
|
||||
.reset = drm_atomic_helper_crtc_reset,
|
||||
.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
|
||||
.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
|
||||
.enable_vblank = hibmc_crtc_enable_vblank,
|
||||
.disable_vblank = hibmc_crtc_disable_vblank,
|
||||
};
|
||||
|
||||
static const struct drm_crtc_helper_funcs hibmc_crtc_helper_funcs = {
|
||||
|
@ -37,26 +37,6 @@ static const struct file_operations hibmc_fops = {
|
||||
.llseek = no_llseek,
|
||||
};
|
||||
|
||||
static int hibmc_enable_vblank(struct drm_device *dev, unsigned int pipe)
|
||||
{
|
||||
struct hibmc_drm_private *priv =
|
||||
(struct hibmc_drm_private *)dev->dev_private;
|
||||
|
||||
writel(HIBMC_RAW_INTERRUPT_EN_VBLANK(1),
|
||||
priv->mmio + HIBMC_RAW_INTERRUPT_EN);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void hibmc_disable_vblank(struct drm_device *dev, unsigned int pipe)
|
||||
{
|
||||
struct hibmc_drm_private *priv =
|
||||
(struct hibmc_drm_private *)dev->dev_private;
|
||||
|
||||
writel(HIBMC_RAW_INTERRUPT_EN_VBLANK(0),
|
||||
priv->mmio + HIBMC_RAW_INTERRUPT_EN);
|
||||
}
|
||||
|
||||
irqreturn_t hibmc_drm_interrupt(int irq, void *arg)
|
||||
{
|
||||
struct drm_device *dev = (struct drm_device *)arg;
|
||||
@ -84,9 +64,6 @@ static struct drm_driver hibmc_driver = {
|
||||
.desc = "hibmc drm driver",
|
||||
.major = 1,
|
||||
.minor = 0,
|
||||
.get_vblank_counter = drm_vblank_no_hw_counter,
|
||||
.enable_vblank = hibmc_enable_vblank,
|
||||
.disable_vblank = hibmc_disable_vblank,
|
||||
.gem_free_object_unlocked = hibmc_gem_free_object,
|
||||
.dumb_create = hibmc_dumb_create,
|
||||
.dumb_map_offset = hibmc_dumb_mmap_offset,
|
||||
|
@ -147,7 +147,6 @@ static int hibmc_drm_fb_create(struct drm_fb_helper *helper,
|
||||
return 0;
|
||||
|
||||
out_release_fbi:
|
||||
drm_fb_helper_release_fbi(helper);
|
||||
ret1 = ttm_bo_reserve(&bo->bo, true, false, NULL);
|
||||
if (ret1) {
|
||||
DRM_ERROR("failed to rsv ttm_bo when release fbi: %d\n", ret1);
|
||||
@ -170,7 +169,6 @@ static void hibmc_fbdev_destroy(struct hibmc_fbdev *fbdev)
|
||||
struct drm_fb_helper *fbh = &fbdev->helper;
|
||||
|
||||
drm_fb_helper_unregister_fbi(fbh);
|
||||
drm_fb_helper_release_fbi(fbh);
|
||||
|
||||
drm_fb_helper_fini(fbh);
|
||||
|
||||
|
@ -302,9 +302,8 @@ static void ade_set_medianoc_qos(struct ade_crtc *acrtc)
|
||||
SOCKET_QOS_EN, SOCKET_QOS_EN);
|
||||
}
|
||||
|
||||
static int ade_enable_vblank(struct drm_device *dev, unsigned int pipe)
|
||||
static int ade_crtc_enable_vblank(struct drm_crtc *crtc)
|
||||
{
|
||||
struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe);
|
||||
struct ade_crtc *acrtc = to_ade_crtc(crtc);
|
||||
struct ade_hw_ctx *ctx = acrtc->ctx;
|
||||
void __iomem *base = ctx->base;
|
||||
@ -318,9 +317,8 @@ static int ade_enable_vblank(struct drm_device *dev, unsigned int pipe)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void ade_disable_vblank(struct drm_device *dev, unsigned int pipe)
|
||||
static void ade_crtc_disable_vblank(struct drm_crtc *crtc)
|
||||
{
|
||||
struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe);
|
||||
struct ade_crtc *acrtc = to_ade_crtc(crtc);
|
||||
struct ade_hw_ctx *ctx = acrtc->ctx;
|
||||
void __iomem *base = ctx->base;
|
||||
@ -570,6 +568,8 @@ static const struct drm_crtc_funcs ade_crtc_funcs = {
|
||||
.set_property = drm_atomic_helper_crtc_set_property,
|
||||
.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
|
||||
.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
|
||||
.enable_vblank = ade_crtc_enable_vblank,
|
||||
.disable_vblank = ade_crtc_disable_vblank,
|
||||
};
|
||||
|
||||
static int ade_crtc_init(struct drm_device *dev, struct drm_crtc *crtc,
|
||||
@ -1025,9 +1025,6 @@ static int ade_drm_init(struct platform_device *pdev)
|
||||
IRQF_SHARED, dev->driver->name, acrtc);
|
||||
if (ret)
|
||||
return ret;
|
||||
dev->driver->get_vblank_counter = drm_vblank_no_hw_counter;
|
||||
dev->driver->enable_vblank = ade_enable_vblank;
|
||||
dev->driver->disable_vblank = ade_disable_vblank;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -35,32 +35,6 @@ static inline struct drm_i915_private *node_to_i915(struct drm_info_node *node)
|
||||
return to_i915(node->minor->dev);
|
||||
}
|
||||
|
||||
/* As the drm_debugfs_init() routines are called before dev->dev_private is
|
||||
* allocated we need to hook into the minor for release. */
|
||||
static int
|
||||
drm_add_fake_info_node(struct drm_minor *minor,
|
||||
struct dentry *ent,
|
||||
const void *key)
|
||||
{
|
||||
struct drm_info_node *node;
|
||||
|
||||
node = kmalloc(sizeof(*node), GFP_KERNEL);
|
||||
if (node == NULL) {
|
||||
debugfs_remove(ent);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
node->minor = minor;
|
||||
node->dent = ent;
|
||||
node->info_ent = (void *)key;
|
||||
|
||||
mutex_lock(&minor->debugfs_lock);
|
||||
list_add(&node->list, &minor->debugfs_list);
|
||||
mutex_unlock(&minor->debugfs_lock);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int i915_capabilities(struct seq_file *m, void *data)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = node_to_i915(m->private);
|
||||
@ -4593,37 +4567,6 @@ static const struct file_operations i915_forcewake_fops = {
|
||||
.release = i915_forcewake_release,
|
||||
};
|
||||
|
||||
static int i915_forcewake_create(struct dentry *root, struct drm_minor *minor)
|
||||
{
|
||||
struct dentry *ent;
|
||||
|
||||
ent = debugfs_create_file("i915_forcewake_user",
|
||||
S_IRUSR,
|
||||
root, to_i915(minor->dev),
|
||||
&i915_forcewake_fops);
|
||||
if (!ent)
|
||||
return -ENOMEM;
|
||||
|
||||
return drm_add_fake_info_node(minor, ent, &i915_forcewake_fops);
|
||||
}
|
||||
|
||||
static int i915_debugfs_create(struct dentry *root,
|
||||
struct drm_minor *minor,
|
||||
const char *name,
|
||||
const struct file_operations *fops)
|
||||
{
|
||||
struct dentry *ent;
|
||||
|
||||
ent = debugfs_create_file(name,
|
||||
S_IRUGO | S_IWUSR,
|
||||
root, to_i915(minor->dev),
|
||||
fops);
|
||||
if (!ent)
|
||||
return -ENOMEM;
|
||||
|
||||
return drm_add_fake_info_node(minor, ent, fops);
|
||||
}
|
||||
|
||||
static const struct drm_info_list i915_debugfs_list[] = {
|
||||
{"i915_capabilities", i915_capabilities, 0},
|
||||
{"i915_gem_objects", i915_gem_object_info, 0},
|
||||
@@ -4706,22 +4649,27 @@ static const struct i915_debugfs_files {
int i915_debugfs_register(struct drm_i915_private *dev_priv)
{
struct drm_minor *minor = dev_priv->drm.primary;
struct dentry *ent;
int ret, i;

ret = i915_forcewake_create(minor->debugfs_root, minor);
if (ret)
return ret;
ent = debugfs_create_file("i915_forcewake_user", S_IRUSR,
minor->debugfs_root, to_i915(minor->dev),
&i915_forcewake_fops);
if (!ent)
return -ENOMEM;

ret = intel_pipe_crc_create(minor);
if (ret)
return ret;

for (i = 0; i < ARRAY_SIZE(i915_debugfs_files); i++) {
ret = i915_debugfs_create(minor->debugfs_root, minor,
i915_debugfs_files[i].name,
ent = debugfs_create_file(i915_debugfs_files[i].name,
S_IRUGO | S_IWUSR,
minor->debugfs_root,
to_i915(minor->dev),
i915_debugfs_files[i].fops);
if (ret)
return ret;
if (!ent)
return -ENOMEM;
}

return drm_debugfs_create_files(i915_debugfs_list,
@@ -4729,27 +4677,6 @@ int i915_debugfs_register(struct drm_i915_private *dev_priv)
minor->debugfs_root, minor);
}

void i915_debugfs_unregister(struct drm_i915_private *dev_priv)
{
struct drm_minor *minor = dev_priv->drm.primary;
int i;

drm_debugfs_remove_files(i915_debugfs_list,
I915_DEBUGFS_ENTRIES, minor);

drm_debugfs_remove_files((struct drm_info_list *)&i915_forcewake_fops,
1, minor);

intel_pipe_crc_cleanup(minor);

for (i = 0; i < ARRAY_SIZE(i915_debugfs_files); i++) {
struct drm_info_list *info_list =
(struct drm_info_list *)i915_debugfs_files[i].fops;

drm_debugfs_remove_files(info_list, 1, minor);
}
}

struct dpcd_block {
/* DPCD dump start address. */
unsigned int offset;

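The i915 hunks above drop the drm_add_fake_info_node() bookkeeping and register debugfs files straight against the DRM minor's debugfs root. Below is a minimal sketch of that pattern only, not the i915 code itself; "example_file", example_debugfs_register and the file_operations argument are illustrative placeholders.

#include <linux/debugfs.h>
#include <linux/stat.h>
#include <drm/drmP.h>

/* Sketch: register one debugfs file under the DRM minor's root. */
static int example_debugfs_register(struct drm_minor *minor,
				    const struct file_operations *example_fops,
				    void *driver_data)
{
	struct dentry *ent;

	ent = debugfs_create_file("example_file", S_IRUGO,
				  minor->debugfs_root, driver_data,
				  example_fops);
	if (!ent)
		return -ENOMEM;

	/* No per-file node list is kept: removing the minor's debugfs
	 * directory on teardown also removes this file. */
	return 0;
}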
@@ -1167,7 +1167,6 @@ static void i915_driver_unregister(struct drm_i915_private *dev_priv)

i915_teardown_sysfs(dev_priv);
i915_guc_log_unregister(dev_priv);
i915_debugfs_unregister(dev_priv);
drm_dev_unregister(&dev_priv->drm);

i915_gem_shrinker_cleanup(dev_priv);

@@ -3528,12 +3528,10 @@ u32 i915_gem_fence_alignment(struct drm_i915_private *dev_priv, u32 size,
/* i915_debugfs.c */
#ifdef CONFIG_DEBUG_FS
int i915_debugfs_register(struct drm_i915_private *dev_priv);
void i915_debugfs_unregister(struct drm_i915_private *dev_priv);
int i915_debugfs_connector_add(struct drm_connector *connector);
void intel_display_crc_init(struct drm_i915_private *dev_priv);
#else
static inline int i915_debugfs_register(struct drm_i915_private *dev_priv) {return 0;}
static inline void i915_debugfs_unregister(struct drm_i915_private *dev_priv) {}
static inline int i915_debugfs_connector_add(struct drm_connector *connector)
{ return 0; }
static inline void intel_display_crc_init(struct drm_i915_private *dev_priv) {}

@@ -4249,7 +4249,6 @@ void intel_irq_init(struct drm_i915_private *dev_priv)
if (IS_GEN2(dev_priv)) {
/* Gen2 doesn't have a hardware frame counter */
dev->max_vblank_count = 0;
dev->driver->get_vblank_counter = drm_vblank_no_hw_counter;
} else if (IS_G4X(dev_priv) || INTEL_INFO(dev_priv)->gen >= 5) {
dev->max_vblank_count = 0xffffffff; /* full 32 bit counter */
dev->driver->get_vblank_counter = g4x_get_vblank_counter;

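For context, the max_vblank_count / get_vblank_counter pairing shown above is the generic way a driver tells the vblank core whether the hardware has a usable frame counter. A hedged sketch under the assumption of a driver-local init helper; the example_* names are placeholders, not i915 functions.

#include <drm/drmP.h>
#include <drm/drm_irq.h>

/* Sketch: pick a vblank counter source at init time. */
static void example_init_vblank_counter(struct drm_device *dev,
					bool has_hw_frame_counter)
{
	if (!has_hw_frame_counter) {
		/* No hardware frame counter: a max_vblank_count of 0 makes
		 * the core track vblanks by timestamps instead. */
		dev->max_vblank_count = 0;
		dev->driver->get_vblank_counter = drm_vblank_no_hw_counter;
	} else {
		dev->max_vblank_count = 0xffffffff; /* full 32 bit counter */
	}
}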
@@ -395,10 +395,10 @@ static void timer_i915_sw_fence_wake(unsigned long data)
{
struct i915_sw_dma_fence_cb *cb = (struct i915_sw_dma_fence_cb *)data;

printk(KERN_WARNING "asynchronous wait on fence %s:%s:%x timed out\n",
cb->dma->ops->get_driver_name(cb->dma),
cb->dma->ops->get_timeline_name(cb->dma),
cb->dma->seqno);
pr_warn("asynchronous wait on fence %s:%s:%x timed out\n",
cb->dma->ops->get_driver_name(cb->dma),
cb->dma->ops->get_timeline_name(cb->dma),
cb->dma->seqno);
dma_fence_put(cb->dma);
cb->dma = NULL;

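The change above is a straight printk(KERN_WARNING ...) to pr_warn() conversion. A self-contained illustration of the same call shape follows; the parameters are stand-ins rather than the i915 types.

#include <linux/printk.h>
#include <linux/types.h>

/* Sketch: pr_warn() takes the same printf-style arguments as printk() and
 * picks up any pr_fmt() prefix defined by the file. */
static void example_report_fence_timeout(const char *driver_name,
					 const char *timeline_name, u32 seqno)
{
	pr_warn("asynchronous wait on fence %s:%s:%x timed out\n",
		driver_name, timeline_name, seqno);
}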
@@ -3482,7 +3482,8 @@ static void intel_update_primary_planes(struct drm_device *dev)

static int
__intel_display_resume(struct drm_device *dev,
struct drm_atomic_state *state)
struct drm_atomic_state *state,
struct drm_modeset_acquire_ctx *ctx)
{
struct drm_crtc_state *crtc_state;
struct drm_crtc *crtc;
@@ -3506,7 +3507,7 @@ __intel_display_resume(struct drm_device *dev,
/* ignore any reset values/BIOS leftovers in the WM registers */
to_intel_atomic_state(state)->skip_intermediate_wm = true;

ret = drm_atomic_commit(state);
ret = drm_atomic_helper_commit_duplicated_state(state, ctx);

WARN_ON(ret == -EDEADLK);
return ret;
@@ -3596,7 +3597,7 @@ void intel_finish_reset(struct drm_i915_private *dev_priv)
*/
intel_update_primary_planes(dev);
} else {
ret = __intel_display_resume(dev, state);
ret = __intel_display_resume(dev, state, ctx);
if (ret)
DRM_ERROR("Restoring old state failed with %i\n", ret);
}
@@ -3616,7 +3617,7 @@ void intel_finish_reset(struct drm_i915_private *dev_priv)
dev_priv->display.hpd_irq_setup(dev_priv);
spin_unlock_irq(&dev_priv->irq_lock);

ret = __intel_display_resume(dev, state);
ret = __intel_display_resume(dev, state, ctx);
if (ret)
DRM_ERROR("Restoring old state failed with %i\n", ret);

@@ -11323,7 +11324,7 @@ void intel_release_load_detect_pipe(struct drm_connector *connector,
if (!state)
return;

ret = drm_atomic_commit(state);
ret = drm_atomic_helper_commit_duplicated_state(state, ctx);
if (ret)
DRM_DEBUG_KMS("Couldn't release load detect pipe: %i\n", ret);
drm_atomic_state_put(state);
@@ -17240,7 +17241,7 @@ void intel_display_resume(struct drm_device *dev)
}

if (!ret)
ret = __intel_display_resume(dev, state);
ret = __intel_display_resume(dev, state, &ctx);

drm_modeset_drop_locks(&ctx);
drm_modeset_acquire_fini(&ctx);

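These hunks switch from drm_atomic_commit() on a duplicated state to drm_atomic_helper_commit_duplicated_state(), which needs the modeset acquire context threaded through. A rough sketch of that save/restore flow under simplified assumptions (no -EDEADLK backoff loop, illustrative function name; this is not the i915 code):

#include <linux/err.h>
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_modeset_lock.h>

static int example_save_and_restore(struct drm_device *dev)
{
	struct drm_modeset_acquire_ctx ctx;
	struct drm_atomic_state *state;
	int ret;

	drm_modeset_acquire_init(&ctx, 0);
	ret = drm_modeset_lock_all_ctx(dev, &ctx);
	if (ret)
		goto out;

	/* Snapshot the current atomic state... */
	state = drm_atomic_helper_duplicate_state(dev, &ctx);
	if (IS_ERR(state)) {
		ret = PTR_ERR(state);
		goto out;
	}

	/* ...and re-commit it later, reusing the same acquire context so the
	 * helper can take whatever locks the commit still needs. */
	ret = drm_atomic_helper_commit_duplicated_state(state, &ctx);
	drm_atomic_state_put(state);

out:
	drm_modeset_drop_locks(&ctx);
	drm_modeset_acquire_fini(&ctx);
	return ret;
}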
@@ -1891,7 +1891,6 @@ void lspcon_wait_pcon_mode(struct intel_lspcon *lspcon);

/* intel_pipe_crc.c */
int intel_pipe_crc_create(struct drm_minor *minor);
void intel_pipe_crc_cleanup(struct drm_minor *minor);
#ifdef CONFIG_DEBUG_FS
int intel_crtc_set_crc_source(struct drm_crtc *crtc, const char *source_name,
size_t *values_cnt);

@@ -253,7 +253,7 @@ static int intelfb_create(struct drm_fb_helper *helper,
if (IS_ERR(vaddr)) {
DRM_ERROR("Failed to remap framebuffer into virtual memory\n");
ret = PTR_ERR(vaddr);
goto out_destroy_fbi;
goto out_unpin;
}
info->screen_base = vaddr;
info->screen_size = vma->node.size;
@@ -281,8 +281,6 @@ static int intelfb_create(struct drm_fb_helper *helper,
vga_switcheroo_client_fb_set(pdev, info);
return 0;

out_destroy_fbi:
drm_fb_helper_release_fbi(helper);
out_unpin:
intel_unpin_fb_vma(vma);
out_unlock:
@@ -543,7 +541,6 @@ static void intel_fbdev_destroy(struct intel_fbdev *ifbdev)
*/

drm_fb_helper_unregister_fbi(&ifbdev->helper);
drm_fb_helper_release_fbi(&ifbdev->helper);

drm_fb_helper_fini(&ifbdev->helper);

@@ -36,31 +36,6 @@ struct pipe_crc_info {
enum pipe pipe;
};

/* As the drm_debugfs_init() routines are called before dev->dev_private is
* allocated we need to hook into the minor for release.
*/
static int drm_add_fake_info_node(struct drm_minor *minor,
struct dentry *ent, const void *key)
{
struct drm_info_node *node;

node = kmalloc(sizeof(*node), GFP_KERNEL);
if (node == NULL) {
debugfs_remove(ent);
return -ENOMEM;
}

node->minor = minor;
node->dent = ent;
node->info_ent = (void *) key;

mutex_lock(&minor->debugfs_lock);
list_add(&node->list, &minor->debugfs_list);
mutex_unlock(&minor->debugfs_lock);

return 0;
}

static int i915_pipe_crc_open(struct inode *inode, struct file *filep)
{
struct pipe_crc_info *info = inode->i_private;
@@ -209,22 +184,6 @@ static struct pipe_crc_info i915_pipe_crc_data[I915_MAX_PIPES] = {
},
};

static int i915_pipe_crc_create(struct dentry *root, struct drm_minor *minor,
enum pipe pipe)
{
struct drm_i915_private *dev_priv = to_i915(minor->dev);
struct dentry *ent;
struct pipe_crc_info *info = &i915_pipe_crc_data[pipe];

info->dev_priv = dev_priv;
ent = debugfs_create_file(info->name, S_IRUGO, root, info,
&i915_pipe_crc_fops);
if (!ent)
return -ENOMEM;

return drm_add_fake_info_node(minor, ent, info);
}

static const char * const pipe_crc_sources[] = {
"none",
"plane1",
@@ -928,27 +887,22 @@ void intel_display_crc_init(struct drm_i915_private *dev_priv)

int intel_pipe_crc_create(struct drm_minor *minor)
{
int ret, i;

for (i = 0; i < ARRAY_SIZE(i915_pipe_crc_data); i++) {
ret = i915_pipe_crc_create(minor->debugfs_root, minor, i);
if (ret)
return ret;
}

return 0;
}

void intel_pipe_crc_cleanup(struct drm_minor *minor)
{
struct drm_i915_private *dev_priv = to_i915(minor->dev);
struct dentry *ent;
int i;

for (i = 0; i < ARRAY_SIZE(i915_pipe_crc_data); i++) {
struct drm_info_list *info_list =
(struct drm_info_list *)&i915_pipe_crc_data[i];
struct pipe_crc_info *info = &i915_pipe_crc_data[i];

drm_debugfs_remove_files(info_list, 1, minor);
info->dev_priv = dev_priv;
ent = debugfs_create_file(info->name, S_IRUGO,
minor->debugfs_root, info,
&i915_pipe_crc_fops);
if (!ent)
return -ENOMEM;
}

return 0;
}

int intel_crtc_set_crc_source(struct drm_crtc *crtc, const char *source_name,

@@ -40,17 +40,11 @@ struct imx_drm_component {

struct imx_drm_device {
struct drm_device *drm;
struct imx_drm_crtc *crtc[MAX_CRTC];
unsigned int pipes;
struct drm_fbdev_cma *fbhelper;
struct drm_atomic_state *state;
};

struct imx_drm_crtc {
struct drm_crtc *crtc;
struct imx_drm_crtc_helper_funcs imx_drm_helper_funcs;
};

#if IS_ENABLED(CONFIG_DRM_FBDEV_EMULATION)
static int legacyfb_depth = 16;
module_param(legacyfb_depth, int, 0444);
@@ -63,38 +57,6 @@ static void imx_drm_driver_lastclose(struct drm_device *drm)
drm_fbdev_cma_restore_mode(imxdrm->fbhelper);
}

static int imx_drm_enable_vblank(struct drm_device *drm, unsigned int pipe)
{
struct imx_drm_device *imxdrm = drm->dev_private;
struct imx_drm_crtc *imx_drm_crtc = imxdrm->crtc[pipe];
int ret;

if (!imx_drm_crtc)
return -EINVAL;

if (!imx_drm_crtc->imx_drm_helper_funcs.enable_vblank)
return -ENOSYS;

ret = imx_drm_crtc->imx_drm_helper_funcs.enable_vblank(
imx_drm_crtc->crtc);

return ret;
}

static void imx_drm_disable_vblank(struct drm_device *drm, unsigned int pipe)
{
struct imx_drm_device *imxdrm = drm->dev_private;
struct imx_drm_crtc *imx_drm_crtc = imxdrm->crtc[pipe];

if (!imx_drm_crtc)
return;

if (!imx_drm_crtc->imx_drm_helper_funcs.disable_vblank)
return;

imx_drm_crtc->imx_drm_helper_funcs.disable_vblank(imx_drm_crtc->crtc);
}

static const struct file_operations imx_drm_driver_fops = {
.owner = THIS_MODULE,
.open = drm_open,
@@ -176,71 +138,10 @@ static void imx_drm_atomic_commit_tail(struct drm_atomic_state *state)
drm_atomic_helper_cleanup_planes(dev, state);
}

static struct drm_mode_config_helper_funcs imx_drm_mode_config_helpers = {
static const struct drm_mode_config_helper_funcs imx_drm_mode_config_helpers = {
.atomic_commit_tail = imx_drm_atomic_commit_tail,
};

/*
* imx_drm_add_crtc - add a new crtc
*/
int imx_drm_add_crtc(struct drm_device *drm, struct drm_crtc *crtc,
struct imx_drm_crtc **new_crtc, struct drm_plane *primary_plane,
const struct imx_drm_crtc_helper_funcs *imx_drm_helper_funcs,
struct device_node *port)
{
struct imx_drm_device *imxdrm = drm->dev_private;
struct imx_drm_crtc *imx_drm_crtc;

/*
* The vblank arrays are dimensioned by MAX_CRTC - we can't
* pass IDs greater than this to those functions.
*/
if (imxdrm->pipes >= MAX_CRTC)
return -EINVAL;

if (imxdrm->drm->open_count)
return -EBUSY;

imx_drm_crtc = kzalloc(sizeof(*imx_drm_crtc), GFP_KERNEL);
if (!imx_drm_crtc)
return -ENOMEM;

imx_drm_crtc->imx_drm_helper_funcs = *imx_drm_helper_funcs;
imx_drm_crtc->crtc = crtc;

crtc->port = port;

imxdrm->crtc[imxdrm->pipes++] = imx_drm_crtc;

*new_crtc = imx_drm_crtc;

drm_crtc_helper_add(crtc,
imx_drm_crtc->imx_drm_helper_funcs.crtc_helper_funcs);

drm_crtc_init_with_planes(drm, crtc, primary_plane, NULL,
imx_drm_crtc->imx_drm_helper_funcs.crtc_funcs, NULL);

return 0;
}
EXPORT_SYMBOL_GPL(imx_drm_add_crtc);

/*
* imx_drm_remove_crtc - remove a crtc
*/
int imx_drm_remove_crtc(struct imx_drm_crtc *imx_drm_crtc)
{
struct imx_drm_device *imxdrm = imx_drm_crtc->crtc->dev->dev_private;
unsigned int pipe = drm_crtc_index(imx_drm_crtc->crtc);

drm_crtc_cleanup(imx_drm_crtc->crtc);

imxdrm->crtc[pipe] = NULL;

kfree(imx_drm_crtc);

return 0;
}
EXPORT_SYMBOL_GPL(imx_drm_remove_crtc);

int imx_drm_encoder_parse_of(struct drm_device *drm,
struct drm_encoder *encoder, struct device_node *np)
@@ -288,9 +189,6 @@ static struct drm_driver imx_drm_driver = {
.gem_prime_vmap = drm_gem_cma_prime_vmap,
.gem_prime_vunmap = drm_gem_cma_prime_vunmap,
.gem_prime_mmap = drm_gem_cma_prime_mmap,
.get_vblank_counter = drm_vblank_no_hw_counter,
.enable_vblank = imx_drm_enable_vblank,
.disable_vblank = imx_drm_disable_vblank,
.ioctls = imx_drm_ioctls,
.num_ioctls = ARRAY_SIZE(imx_drm_ioctls),
.fops = &imx_drm_driver_fops,

@@ -25,19 +25,6 @@ static inline struct imx_crtc_state *to_imx_crtc_state(struct drm_crtc_state *s)
{
return container_of(s, struct imx_crtc_state, base);
}

struct imx_drm_crtc_helper_funcs {
int (*enable_vblank)(struct drm_crtc *crtc);
void (*disable_vblank)(struct drm_crtc *crtc);
const struct drm_crtc_helper_funcs *crtc_helper_funcs;
const struct drm_crtc_funcs *crtc_funcs;
};

int imx_drm_add_crtc(struct drm_device *drm, struct drm_crtc *crtc,
struct imx_drm_crtc **new_crtc, struct drm_plane *primary_plane,
const struct imx_drm_crtc_helper_funcs *imx_helper_funcs,
struct device_node *port);
int imx_drm_remove_crtc(struct imx_drm_crtc *);
int imx_drm_init_drm(struct platform_device *pdev,
int preferred_bpp);
int imx_drm_exit_drm(void);

@@ -129,18 +129,31 @@ static void imx_drm_crtc_destroy_state(struct drm_crtc *crtc,
kfree(to_imx_crtc_state(state));
}

static void imx_drm_crtc_destroy(struct drm_crtc *crtc)
static int ipu_enable_vblank(struct drm_crtc *crtc)
{
imx_drm_remove_crtc(to_ipu_crtc(crtc)->imx_crtc);
struct ipu_crtc *ipu_crtc = to_ipu_crtc(crtc);

enable_irq(ipu_crtc->irq);

return 0;
}

static void ipu_disable_vblank(struct drm_crtc *crtc)
{
struct ipu_crtc *ipu_crtc = to_ipu_crtc(crtc);

disable_irq_nosync(ipu_crtc->irq);
}

static const struct drm_crtc_funcs ipu_crtc_funcs = {
.set_config = drm_atomic_helper_set_config,
.destroy = imx_drm_crtc_destroy,
.destroy = drm_crtc_cleanup,
.page_flip = drm_atomic_helper_page_flip,
.reset = imx_drm_crtc_reset,
.atomic_duplicate_state = imx_drm_crtc_duplicate_state,
.atomic_destroy_state = imx_drm_crtc_destroy_state,
.enable_vblank = ipu_enable_vblank,
.disable_vblank = ipu_disable_vblank,
};

static irqreturn_t ipu_irq_handler(int irq, void *dev_id)
@@ -261,29 +274,6 @@ static const struct drm_crtc_helper_funcs ipu_helper_funcs = {
.enable = ipu_crtc_enable,
};

static int ipu_enable_vblank(struct drm_crtc *crtc)
{
struct ipu_crtc *ipu_crtc = to_ipu_crtc(crtc);

enable_irq(ipu_crtc->irq);

return 0;
}

static void ipu_disable_vblank(struct drm_crtc *crtc)
{
struct ipu_crtc *ipu_crtc = to_ipu_crtc(crtc);

disable_irq_nosync(ipu_crtc->irq);
}

static const struct imx_drm_crtc_helper_funcs ipu_crtc_helper_funcs = {
.enable_vblank = ipu_enable_vblank,
.disable_vblank = ipu_disable_vblank,
.crtc_funcs = &ipu_crtc_funcs,
.crtc_helper_funcs = &ipu_helper_funcs,
};

static void ipu_put_resources(struct ipu_crtc *ipu_crtc)
{
if (!IS_ERR_OR_NULL(ipu_crtc->dc))
@@ -321,6 +311,7 @@ static int ipu_crtc_init(struct ipu_crtc *ipu_crtc,
struct ipu_client_platformdata *pdata, struct drm_device *drm)
{
struct ipu_soc *ipu = dev_get_drvdata(ipu_crtc->dev->parent);
struct drm_crtc *crtc = &ipu_crtc->base;
int dp = -EINVAL;
int ret;

@@ -340,19 +331,16 @@ static int ipu_crtc_init(struct ipu_crtc *ipu_crtc,
goto err_put_resources;
}

ret = imx_drm_add_crtc(drm, &ipu_crtc->base, &ipu_crtc->imx_crtc,
&ipu_crtc->plane[0]->base, &ipu_crtc_helper_funcs,
pdata->of_node);
if (ret) {
dev_err(ipu_crtc->dev, "adding crtc failed with %d.\n", ret);
goto err_put_resources;
}
crtc->port = pdata->of_node;
drm_crtc_helper_add(crtc, &ipu_helper_funcs);
drm_crtc_init_with_planes(drm, crtc, &ipu_crtc->plane[0]->base, NULL,
&ipu_crtc_funcs, NULL);

ret = ipu_plane_get_resources(ipu_crtc->plane[0]);
if (ret) {
dev_err(ipu_crtc->dev, "getting plane 0 resources failed with %d.\n",
ret);
goto err_remove_crtc;
goto err_put_resources;
}

/* If this crtc is using the DP, add an overlay plane */
@@ -390,8 +378,6 @@ err_put_plane1_res:
ipu_plane_put_resources(ipu_crtc->plane[1]);
err_put_plane0_res:
ipu_plane_put_resources(ipu_crtc->plane[0]);
err_remove_crtc:
imx_drm_remove_crtc(ipu_crtc->imx_crtc);
err_put_resources:
ipu_put_resources(ipu_crtc);

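Several drivers in this pull (imx/ipu above, mtk and meson below) follow the same conversion: vblank on/off moves out of struct drm_driver and into per-CRTC callbacks in drm_crtc_funcs. A generic sketch of the destination shape, with placeholder example_* names and an assumed per-CRTC interrupt line, is shown here; it is not any one of these drivers.

#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc.h>

struct example_crtc {
	struct drm_crtc base;
	int irq;		/* vblank interrupt line, assumed */
};

#define to_example_crtc(c) container_of(c, struct example_crtc, base)

static int example_crtc_enable_vblank(struct drm_crtc *crtc)
{
	enable_irq(to_example_crtc(crtc)->irq);
	return 0;
}

static void example_crtc_disable_vblank(struct drm_crtc *crtc)
{
	disable_irq_nosync(to_example_crtc(crtc)->irq);
}

/* The vblank hooks sit next to the other per-CRTC callbacks. */
static const struct drm_crtc_funcs example_crtc_funcs = {
	.set_config		= drm_atomic_helper_set_config,
	.page_flip		= drm_atomic_helper_page_flip,
	.reset			= drm_atomic_helper_crtc_reset,
	.atomic_duplicate_state	= drm_atomic_helper_crtc_duplicate_state,
	.atomic_destroy_state	= drm_atomic_helper_crtc_destroy_state,
	.destroy		= drm_crtc_cleanup,
	.enable_vblank		= example_crtc_enable_vblank,
	.disable_vblank		= example_crtc_disable_vblank,
};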
@@ -168,9 +168,8 @@ static void mtk_drm_crtc_mode_set_nofb(struct drm_crtc *crtc)
state->pending_config = true;
}

int mtk_drm_crtc_enable_vblank(struct drm_device *drm, unsigned int pipe)
static int mtk_drm_crtc_enable_vblank(struct drm_crtc *crtc)
{
struct drm_crtc *crtc = drm_crtc_from_index(drm, pipe);
struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
struct mtk_ddp_comp *ovl = mtk_crtc->ddp_comp[0];

@@ -179,9 +178,8 @@ int mtk_drm_crtc_enable_vblank(struct drm_device *drm, unsigned int pipe)
return 0;
}

void mtk_drm_crtc_disable_vblank(struct drm_device *drm, unsigned int pipe)
static void mtk_drm_crtc_disable_vblank(struct drm_crtc *crtc)
{
struct drm_crtc *crtc = drm_crtc_from_index(drm, pipe);
struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
struct mtk_ddp_comp *ovl = mtk_crtc->ddp_comp[0];

@@ -436,6 +434,8 @@ static const struct drm_crtc_funcs mtk_crtc_funcs = {
.atomic_duplicate_state = mtk_drm_crtc_duplicate_state,
.atomic_destroy_state = mtk_drm_crtc_destroy_state,
.gamma_set = drm_atomic_helper_legacy_gamma_set,
.enable_vblank = mtk_drm_crtc_enable_vblank,
.disable_vblank = mtk_drm_crtc_disable_vblank,
};

static const struct drm_crtc_helper_funcs mtk_crtc_helper_funcs = {

@@ -23,8 +23,6 @@
#define MTK_MAX_BPC 10
#define MTK_MIN_BPC 3

int mtk_drm_crtc_enable_vblank(struct drm_device *drm, unsigned int pipe);
void mtk_drm_crtc_disable_vblank(struct drm_device *drm, unsigned int pipe);
void mtk_drm_crtc_commit(struct drm_crtc *crtc);
void mtk_crtc_ddp_irq(struct drm_crtc *crtc, struct mtk_ddp_comp *ovl);
int mtk_drm_crtc_create(struct drm_device *drm_dev,

@@ -256,10 +256,6 @@ static struct drm_driver mtk_drm_driver = {
.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME |
DRIVER_ATOMIC,

.get_vblank_counter = drm_vblank_no_hw_counter,
.enable_vblank = mtk_drm_crtc_enable_vblank,
.disable_vblank = mtk_drm_crtc_disable_vblank,

.gem_free_object_unlocked = mtk_drm_gem_free_object,
.gem_vm_ops = &drm_gem_cma_vm_ops,
.dumb_create = mtk_drm_gem_dumb_create,

@@ -33,6 +33,7 @@

#include "meson_crtc.h"
#include "meson_plane.h"
#include "meson_venc.h"
#include "meson_vpp.h"
#include "meson_viu.h"
#include "meson_registers.h"
@@ -48,6 +49,24 @@ struct meson_crtc {

/* CRTC */

static int meson_crtc_enable_vblank(struct drm_crtc *crtc)
{
struct meson_crtc *meson_crtc = to_meson_crtc(crtc);
struct meson_drm *priv = meson_crtc->priv;

meson_venc_enable_vsync(priv);

return 0;
}

static void meson_crtc_disable_vblank(struct drm_crtc *crtc)
{
struct meson_crtc *meson_crtc = to_meson_crtc(crtc);
struct meson_drm *priv = meson_crtc->priv;

meson_venc_disable_vsync(priv);
}

static const struct drm_crtc_funcs meson_crtc_funcs = {
.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
@@ -55,6 +74,9 @@ static const struct drm_crtc_funcs meson_crtc_funcs = {
.page_flip = drm_atomic_helper_page_flip,
.reset = drm_atomic_helper_crtc_reset,
.set_config = drm_atomic_helper_set_config,
.enable_vblank = meson_crtc_enable_vblank,
.disable_vblank = meson_crtc_disable_vblank,

};

static void meson_crtc_enable(struct drm_crtc *crtc)

@@ -79,22 +79,6 @@ static const struct drm_mode_config_funcs meson_mode_config_funcs = {
.fb_create = drm_fb_cma_create,
};

static int meson_enable_vblank(struct drm_device *dev, unsigned int crtc)
{
struct meson_drm *priv = dev->dev_private;

meson_venc_enable_vsync(priv);

return 0;
}

static void meson_disable_vblank(struct drm_device *dev, unsigned int crtc)
{
struct meson_drm *priv = dev->dev_private;

meson_venc_disable_vsync(priv);
}

static irqreturn_t meson_irq(int irq, void *arg)
{
struct drm_device *dev = arg;
@@ -126,11 +110,6 @@ static struct drm_driver meson_driver = {
DRIVER_MODESET | DRIVER_PRIME |
DRIVER_ATOMIC,

/* Vblank */
.enable_vblank = meson_enable_vblank,
.disable_vblank = meson_disable_vblank,
.get_vblank_counter = drm_vblank_no_hw_counter,

/* IRQ */
.irq_handler = meson_irq,

@@ -198,7 +198,7 @@ static int mgag200fb_create(struct drm_fb_helper *helper,

ret = mgag200_framebuffer_init(dev, &mfbdev->mfb, &mode_cmd, gobj);
if (ret)
goto err_framebuffer_init;
goto err_alloc_fbi;

mfbdev->sysram = sysram;
mfbdev->size = size;
@@ -230,8 +230,6 @@ static int mgag200fb_create(struct drm_fb_helper *helper,

return 0;

err_framebuffer_init:
drm_fb_helper_release_fbi(helper);
err_alloc_fbi:
vfree(sysram);
err_sysram:
@@ -246,7 +244,6 @@ static int mga_fbdev_destroy(struct drm_device *dev,
struct mga_framebuffer *mfb = &mfbdev->mfb;

drm_fb_helper_unregister_fbi(&mfbdev->helper);
drm_fb_helper_release_fbi(&mfbdev->helper);

if (mfb->obj) {
drm_gem_object_unreference_unlocked(mfb->obj);

@@ -195,7 +195,7 @@ static int mga_g200se_set_plls(struct mga_device *mdev, long clock)
}

if (delta > permitteddelta) {
printk(KERN_WARNING "PLL delta too large\n");
pr_warn("PLL delta too large\n");
return 1;
}

@@ -1740,6 +1740,7 @@ int msm_dsi_host_init(struct msm_dsi *msm_dsi)

msm_host->rx_buf = devm_kzalloc(&pdev->dev, SZ_4K, GFP_KERNEL);
if (!msm_host->rx_buf) {
ret = -ENOMEM;
pr_err("%s: alloc rx temp buf failed\n", __func__);
goto fail;
}

@@ -214,12 +214,6 @@ static int mdp5_kms_debugfs_init(struct msm_kms *kms, struct drm_minor *minor)

return 0;
}

static void mdp5_kms_debugfs_cleanup(struct msm_kms *kms, struct drm_minor *minor)
{
drm_debugfs_remove_files(mdp5_debugfs_list,
ARRAY_SIZE(mdp5_debugfs_list), minor);
}
#endif

static const struct mdp_kms_funcs kms_funcs = {
@@ -242,7 +236,6 @@ static const struct mdp_kms_funcs kms_funcs = {
.destroy = mdp5_kms_destroy,
#ifdef CONFIG_DEBUG_FS
.debugfs_init = mdp5_kms_debugfs_init,
.debugfs_cleanup = mdp5_kms_debugfs_cleanup,
#endif
},
.set_irqmask = mdp5_set_irqmask,

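The mdp5 hunk drops the debugfs_cleanup counterpart on the assumption that the DRM core now tears down the whole minor's debugfs directory itself, so a KMS debugfs hook only needs the init side. A minimal sketch of that shape (example_kms_debugfs_init is a hypothetical name; the drm_info_list table is assumed to already exist in the driver):

#include <drm/drmP.h>

/* Sketch only: register a driver's existing info-file table and rely on the
 * core to remove the directory on teardown. */
static int example_kms_debugfs_init(struct drm_minor *minor,
				    const struct drm_info_list *files,
				    int num_files)
{
	return drm_debugfs_create_files(files, num_files,
					minor->debugfs_root, minor);
}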
Some files were not shown because too many files have changed in this diff.