Merge tag 'drm-misc-next-2021-05-12' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 5.14:

UAPI Changes:

 * drm: Disable connector force-probing for non-master clients
 * drm: Enforce consistency between IN_FORMATS property and cap + related
   driver cleanups
 * drm/amdgpu: Track devices, process info and fence info via
   /proc/<pid>/fdinfo
 * drm/ioctl: Mark AGP-related ioctls as legacy
 * drm/ttm: Provide tt_shrink file to trigger shrinker via debugfs
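
   The new amdgpu fdinfo interface exposes per-client stats as key/value
   lines in /proc/<pid>/fdinfo/<fd>. A minimal parsing sketch, assuming
   the "key:\tvalue" layout emitted by amdgpu_show_fdinfo() in this
   series; the sample text below is illustrative, not captured output:

   ```python
   # Parse the key/value pairs that amdgpu_show_fdinfo() emits into
   # /proc/<pid>/fdinfo/<fd>. Keys and values are separated by ":\t".
   def parse_amdgpu_fdinfo(text):
       stats = {}
       for line in text.splitlines():
           if ":\t" not in line:
               continue  # skip generic fdinfo fields (pos, flags, ...)
           key, value = line.split(":\t", 1)
           stats[key] = value
       return stats

   # Illustrative sample in the format printed by the driver.
   sample = (
       "pdev:\t0000:04:00.0\n"
       "pasid:\t32771\n"
       "vram mem:\t12345 kB\n"
       "gtt mem:\t678 kB\n"
       "cpu mem:\t90 kB\n"
       "gfx0:\t12.34%\n"
   )

   stats = parse_amdgpu_fdinfo(sample)
   print(stats["vram mem"])  # "12345 kB"
   ```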

Cross-subsystem Changes:

 * fbdev/efifb: Special handling of non-PCI devices
 * fbdev/imxfb: Fix error message

Core Changes:

 * drm: Add connector helper to attach HDR-metadata property and convert
   drivers
 * drm: Add connector helper to compare HDR-metadata and convert drivers
 * drm: Add connector helper to attach colorspace property
 * drm: Signal colorimetry in HDMI infoframe
 * drm: Support pitch for destination buffers; Add blitter function
   with generic format conversion
 * drm: Remove struct drm_device.pdev and update legacy drivers
 * drm: Remove obsolete DRM_KMS_FB_HELPER config option in core and drivers
 * drm: Remove obsolete drm_pci_alloc/drm_pci_free

 * drm/aperture: Add helpers for aperture ownership and convert drivers, replacing the respective fbdev helpers

 * drm/agp: Mark DRM AGP code as legacy and convert legacy drivers

 * drm/atomic-helpers: Cleanups

 * drm/dp: Handle downstream port counts of 0 correctly; AUX channel fixes; Use
   drm_err_*/drm_dbg_*(); Cleanups

 * drm/dp_dual_mode: Use drm_err_*/drm_dbg_*()

 * drm/dp_mst: Use drm_err_*/drm_dbg_*(); Use Extended Base Receiver Capability DPCD space

 * drm/gem-ttm-helper: Provide helper for dumb_map_offset and convert drivers

 * drm/panel: Use sysfs_emit; panel-simple: Use runtime PM, Power up panel
              when reading EDID, Cache EDID, Cleanups;
              LMS397KF04: DT bindings

 * drm/pci: Mark AGP helpers as legacy

 * drm/print: Handle NULL for DRM devices gracefully

 * drm/scheduler: Change scheduled fence track

 * drm/ttm: Don't count SG BOs against pages_limit; Warn about freeing pinned
            BOs; Fix error handling if no BO can be swapped out; Move special
            handling of non-GEM drivers into vmwgfx; Move page_alignment into
            the BO; Set drm-misc as TTM tree in MAINTAINERS; Cleanup
	    ttm_agp_backend; Add ttm_sys_manager for system domain; Cleanups

Driver Changes:

 * drm: Don't set allow_fb_modifiers explicitly in drivers

 * drm/amdgpu: Pin/unpin fixes wrt TTM; Use bo->base.size instead of
   mem->num_pages

 * drm/ast: Use managed pcim_iomap(); Fix EDID retrieval with DP501

 * drm/bridge: MHDP8546: HDCP support + DT bindings, Register DP AUX channel
   with userspace; Sil8620: Fix module dependencies; dw-hdmi: Add option to
   not load CEC driver; Fix stopping in drm_bridge_chain_pre_enable();
   Ti-sn65dsi86: Fix refclk handling, Break GPIO and MIPI-to-eDP into
   subdrivers, Use pm_runtime autosuspend, cleanups; It66121: Add
   driver + DT bindings; Adv7511: Support I2S IEC958 encoding; Anx7625: fix
   power-on delay; Nwi-dsi: Modesetting fixes; Cleanups

 * drm/bochs: Support screen blanking

 * drm/gma500: Cleanups

 * drm/gud: Cleanups

 * drm/i915: Use correct max source link rate for MST

 * drm/kmb: Cleanups

 * drm/meson: Disable dw-hdmi CEC driver

 * drm/nouveau: Pin/unpin fixes wrt TTM; Use bo->base.size instead of
   mem->num_pages; Register AUX adapters after their connectors

 * drm/qxl: Fix shadow BO unpin

 * drm/radeon: Duplicate some DRM AGP code to uncouple from legacy drivers

 * drm/simpledrm: Add a generic DRM driver for simple-framebuffer devices

 * drm/tiny: Fix log spam if probe function gets deferred

 * drm/vc4: Add support for HDR-metadata property; Cleanups

 * drm/virtio: Create dumb BOs as guest blobs

 * drm/vkms: Use managed drmm_universal_plane_alloc(); Add XRGB plane
   composition; Add overlay support

 * drm/vmwgfx: Enable console with DRM_FBDEV_EMULATION; Fix CPU updates
   of coherent multisample surfaces; Remove reservation semaphore; Add
   initial SVGA3 support; Support amd64; Use 1-based IDR; Use min_t();
   Cleanups

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/YJvkD523evviED01@linux-uq9g.fritz.box
Committed by Dave Airlie on 2021-05-19 09:20:49 +10:00.
269 changed files with 7168 additions and 3005 deletions.


@@ -18,7 +18,7 @@ properties:
   reg:
     minItems: 1
-    maxItems: 2
+    maxItems: 3
     items:
       - description:
           Register block of mhdptx apb registers up to PHY mapped area (AUX_CONFIG_P).
@@ -26,13 +26,16 @@ properties:
           included in the associated PHY.
       - description:
           Register block for DSS_EDP0_INTG_CFG_VP registers in case of TI J7 SoCs.
+      - description:
+          Register block of mhdptx sapb registers.

   reg-names:
     minItems: 1
-    maxItems: 2
+    maxItems: 3
     items:
       - const: mhdptx
       - const: j721e-intg
+      - const: mhdptx-sapb

   clocks:
     maxItems: 1
@@ -99,14 +102,18 @@ allOf:
       properties:
         reg:
           minItems: 2
+          maxItems: 3
         reg-names:
           minItems: 2
+          maxItems: 3
     else:
       properties:
         reg:
-          maxItems: 1
+          minItems: 1
+          maxItems: 2
         reg-names:
-          maxItems: 1
+          minItems: 1
+          maxItems: 2

 required:
   - compatible


@@ -0,0 +1,124 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/bridge/ite,it66121.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: ITE it66121 HDMI bridge Device Tree Bindings

maintainers:
  - Phong LE <ple@baylibre.com>
  - Neil Armstrong <narmstrong@baylibre.com>

description: |
  The IT66121 is a high-performance and low-power single channel HDMI
  transmitter, fully compliant with HDMI 1.3a, HDCP 1.2 and backward compatible
  to DVI 1.0 specifications.

properties:
  compatible:
    const: ite,it66121

  reg:
    maxItems: 1

  reset-gpios:
    maxItems: 1
    description: GPIO connected to active low reset

  vrf12-supply:
    description: Regulator for 1.2V analog core power.

  vcn33-supply:
    description: Regulator for 3.3V digital core power.

  vcn18-supply:
    description: Regulator for 1.8V IO core power.

  interrupts:
    maxItems: 1

  ports:
    $ref: /schemas/graph.yaml#/properties/ports

    properties:
      port@0:
        $ref: /schemas/graph.yaml#/$defs/port-base
        unevaluatedProperties: false
        description: DPI input port.

        properties:
          endpoint:
            $ref: /schemas/graph.yaml#/$defs/endpoint-base
            unevaluatedProperties: false

            properties:
              bus-width:
                description:
                  Endpoint bus width.
                enum:
                  - 12  # 12 data lines connected and dual-edge mode
                  - 24  # 24 data lines connected and single-edge mode
                default: 24

      port@1:
        $ref: /schemas/graph.yaml#/properties/port
        description: HDMI Connector port.

    required:
      - port@0
      - port@1

required:
  - compatible
  - reg
  - reset-gpios
  - vrf12-supply
  - vcn33-supply
  - vcn18-supply
  - interrupts
  - ports

additionalProperties: false

examples:
  - |
    #include <dt-bindings/interrupt-controller/irq.h>
    #include <dt-bindings/gpio/gpio.h>
    i2c {
        #address-cells = <1>;
        #size-cells = <0>;

        it66121hdmitx: hdmitx@4c {
            compatible = "ite,it66121";
            pinctrl-names = "default";
            pinctrl-0 = <&ite_pins_default>;
            vcn33-supply = <&mt6358_vcn33_wifi_reg>;
            vcn18-supply = <&mt6358_vcn18_reg>;
            vrf12-supply = <&mt6358_vrf12_reg>;
            reset-gpios = <&pio 160 GPIO_ACTIVE_LOW>;
            interrupt-parent = <&pio>;
            interrupts = <4 IRQ_TYPE_LEVEL_LOW>;
            reg = <0x4c>;

            ports {
                #address-cells = <1>;
                #size-cells = <0>;

                port@0 {
                    reg = <0>;
                    it66121_in: endpoint {
                        bus-width = <12>;
                        remote-endpoint = <&display_out>;
                    };
                };

                port@1 {
                    reg = <1>;
                    hdmi_conn_out: endpoint {
                        remote-endpoint = <&hdmi_conn_in>;
                    };
                };
            };
        };
    };


@@ -0,0 +1,74 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/samsung,lms397kf04.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Samsung LMS397KF04 display panel

description: The datasheet claims this is based around a display controller
  named DB7430 with a separate backlight controller.

maintainers:
  - Linus Walleij <linus.walleij@linaro.org>

allOf:
  - $ref: panel-common.yaml#

properties:
  compatible:
    const: samsung,lms397kf04

  reg: true

  reset-gpios: true

  vci-supply:
    description: regulator that supplies the VCI analog voltage
      usually around 3.0 V

  vccio-supply:
    description: regulator that supplies the VCCIO voltage usually
      around 1.8 V

  backlight: true

  spi-max-frequency:
    $ref: /schemas/types.yaml#/definitions/uint32
    description: inherited as a SPI client node, the datasheet specifies
      maximum 300 ns minimum cycle which gives around 3 MHz max frequency
    maximum: 3000000

  port: true

required:
  - compatible
  - reg

additionalProperties: false

examples:
  - |
    #include <dt-bindings/gpio/gpio.h>

    spi {
        #address-cells = <1>;
        #size-cells = <0>;
        panel@0 {
            compatible = "samsung,lms397kf04";
            spi-max-frequency = <3000000>;
            reg = <0>;
            vci-supply = <&lcd_3v0_reg>;
            vccio-supply = <&lcd_1v8_reg>;
            reset-gpios = <&gpio 1 GPIO_ACTIVE_LOW>;
            backlight = <&ktd259>;

            port {
                panel_in: endpoint {
                    remote-endpoint = <&display_out>;
                };
            };
        };
    };
...
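
The 3 MHz cap on spi-max-frequency follows directly from the 300 ns minimum
cycle the binding cites: f_max = 1 / t_cycle. A quick check of that arithmetic
(the schema rounds the result down to a 3000000 Hz limit):

```python
# Derive the maximum SPI clock from the datasheet's 300 ns minimum cycle.
t_cycle_ns = 300
f_max_hz = 1 / (t_cycle_ns * 1e-9)  # 1 / 300 ns ~= 3.33 MHz
print(round(f_max_hz))  # 3333333
```

The schema's `maximum: 3000000` is a conservative round-down of this value.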


@@ -75,6 +75,18 @@ update it, its value is mostly useless. The DRM core prints it to the
 kernel log at initialization time and passes it to userspace through the
 DRM_IOCTL_VERSION ioctl.

+Managing Ownership of the Framebuffer Aperture
+----------------------------------------------
+
+.. kernel-doc:: drivers/gpu/drm/drm_aperture.c
+   :doc: overview
+
+.. kernel-doc:: include/drm/drm_aperture.h
+   :internal:
+
+.. kernel-doc:: drivers/gpu/drm/drm_aperture.c
+   :export:
+
 Device Instance and Driver Handling
 -----------------------------------


@@ -546,6 +546,8 @@ There's a bunch of issues with it:
   this (together with the drm_minor->drm_device move) would allow us to remove
   debugfs_init.

+Previous RFC that hasn't landed yet: https://lore.kernel.org/dri-devel/20200513114130.28641-2-wambui.karugax@gmail.com/
+
 Contact: Daniel Vetter

 Level: Intermediate


@@ -5870,6 +5870,13 @@ S: Orphan / Obsolete
 F:	drivers/gpu/drm/savage/
 F:	include/uapi/drm/savage_drm.h

+DRM DRIVER FOR SIMPLE FRAMEBUFFERS
+M:	Thomas Zimmermann <tzimmermann@suse.de>
+L:	dri-devel@lists.freedesktop.org
+S:	Maintained
+T:	git git://anongit.freedesktop.org/drm/drm-misc
+F:	drivers/gpu/drm/tiny/simplekms.c
+
 DRM DRIVER FOR SIS VIDEO CARDS
 S:	Orphan / Obsolete
 F:	drivers/gpu/drm/sis/
@@ -6239,7 +6246,7 @@ M: Christian Koenig <christian.koenig@amd.com>
 M:	Huang Rui <ray.huang@amd.com>
 L:	dri-devel@lists.freedesktop.org
 S:	Maintained
-T:	git git://people.freedesktop.org/~agd5f/linux
+T:	git git://anongit.freedesktop.org/drm/drm-misc
 F:	drivers/gpu/drm/ttm/
 F:	include/drm/ttm/
@@ -9719,6 +9726,14 @@ Q: http://patchwork.linuxtv.org/project/linux-media/list/
 T:	git git://linuxtv.org/anttip/media_tree.git
 F:	drivers/media/tuners/it913x*

+ITE IT66121 HDMI BRIDGE DRIVER
+M:	Phong LE <ple@baylibre.com>
+M:	Neil Armstrong <narmstrong@baylibre.com>
+S:	Maintained
+T:	git git://anongit.freedesktop.org/drm/drm-misc
+F:	Documentation/devicetree/bindings/display/bridge/ite,it66121.yaml
+F:	drivers/gpu/drm/bridge/ite-it66121.c
+
 IVTV VIDEO4LINUX DRIVER
 M:	Andy Walls <awalls@md.metrocast.net>
 L:	linux-media@vger.kernel.org
@@ -15257,6 +15272,7 @@ F: drivers/net/wireless/quantenna
 RADEON and AMDGPU DRM DRIVERS
 M:	Alex Deucher <alexander.deucher@amd.com>
 M:	Christian König <christian.koenig@amd.com>
+M:	Pan, Xinhui <Xinhui.Pan@amd.com>
 L:	amd-gfx@lists.freedesktop.org
 S:	Supported
 T:	git https://gitlab.freedesktop.org/agd5f/linux.git


@@ -80,23 +80,6 @@ config DRM_KMS_HELPER
 	help
 	  CRTC helpers for KMS drivers.

-config DRM_KMS_FB_HELPER
-	bool
-	depends on DRM_KMS_HELPER
-	select FB
-	select FRAMEBUFFER_CONSOLE if !EXPERT
-	select FRAMEBUFFER_CONSOLE_DETECT_PRIMARY if FRAMEBUFFER_CONSOLE
-	select FB_SYS_FOPS
-	select FB_SYS_FILLRECT
-	select FB_SYS_COPYAREA
-	select FB_SYS_IMAGEBLIT
-	select FB_CFB_FILLRECT
-	select FB_CFB_COPYAREA
-	select FB_CFB_IMAGEBLIT
-	select FB_DEFERRED_IO
-	help
-	  FBDEV helpers for KMS drivers.
-
 config DRM_DEBUG_DP_MST_TOPOLOGY_REFS
 	bool "Enable refcount backtrace history in the DP MST helpers"
 	depends on STACKTRACE_SUPPORT
@@ -117,6 +100,17 @@ config DRM_FBDEV_EMULATION
 	depends on DRM
 	select DRM_KMS_HELPER
 	select DRM_KMS_FB_HELPER
+	select FB
+	select FB_CFB_FILLRECT
+	select FB_CFB_COPYAREA
+	select FB_CFB_IMAGEBLIT
+	select FB_DEFERRED_IO
+	select FB_SYS_FOPS
+	select FB_SYS_FILLRECT
+	select FB_SYS_COPYAREA
+	select FB_SYS_IMAGEBLIT
+	select FRAMEBUFFER_CONSOLE if !EXPERT
+	select FRAMEBUFFER_CONSOLE_DETECT_PRIMARY if FRAMEBUFFER_CONSOLE
 	default y
 	help
 	  Choose this option if you have a need for the legacy fbdev


@@ -3,7 +3,7 @@
 # Makefile for the drm device driver.  This driver provides support for the
 # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.

-drm-y := drm_auth.o drm_cache.o \
+drm-y := drm_aperture.o drm_auth.o drm_cache.o \
 	drm_file.o drm_gem.o drm_ioctl.o drm_irq.o \
 	drm_drv.o \
 	drm_sysfs.o drm_hashtab.o drm_mm.o \
@@ -20,15 +20,15 @@ drm-y := drm_auth.o drm_cache.o \
 	drm_client_modeset.o drm_atomic_uapi.o drm_hdcp.o \
 	drm_managed.o drm_vblank_work.o

-drm-$(CONFIG_DRM_LEGACY) += drm_bufs.o drm_context.o drm_dma.o drm_legacy_misc.o drm_lock.o \
-	drm_memory.o drm_scatter.o drm_vm.o
+drm-$(CONFIG_DRM_LEGACY) += drm_agpsupport.o drm_bufs.o drm_context.o drm_dma.o \
+	drm_legacy_misc.o drm_lock.o drm_memory.o drm_scatter.o \
+	drm_vm.o
 drm-$(CONFIG_DRM_LIB_RANDOM) += lib/drm_random.o
 drm-$(CONFIG_COMPAT) += drm_ioc32.o
 drm-$(CONFIG_DRM_GEM_CMA_HELPER) += drm_gem_cma_helper.o
 drm-$(CONFIG_DRM_GEM_SHMEM_HELPER) += drm_gem_shmem_helper.o
 drm-$(CONFIG_DRM_PANEL) += drm_panel.o
 drm-$(CONFIG_OF) += drm_of.o
-drm-$(CONFIG_AGP) += drm_agpsupport.o
 drm-$(CONFIG_PCI) += drm_pci.o
 drm-$(CONFIG_DEBUG_FS) += drm_debugfs.o drm_debugfs_crc.o
 drm-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o


@@ -58,6 +58,8 @@ amdgpu-y += amdgpu_device.o amdgpu_kms.o \
 	amdgpu_umc.o smu_v11_0_i2c.o amdgpu_fru_eeprom.o amdgpu_rap.o \
 	amdgpu_fw_attestation.o amdgpu_securedisplay.o

+amdgpu-$(CONFIG_PROC_FS) += amdgpu_fdinfo.o
+
 amdgpu-$(CONFIG_PERF_EVENTS) += amdgpu_pmu.o

 # add asic specific block


@@ -107,6 +107,7 @@
 #include "amdgpu_gfxhub.h"
 #include "amdgpu_df.h"
 #include "amdgpu_smuio.h"
+#include "amdgpu_fdinfo.h"

 #define MAX_GPU_INSTANCE		16


@@ -651,3 +651,64 @@ void amdgpu_ctx_mgr_fini(struct amdgpu_ctx_mgr *mgr)
 	idr_destroy(&mgr->ctx_handles);
 	mutex_destroy(&mgr->lock);
 }
+
+void amdgpu_ctx_fence_time(struct amdgpu_ctx *ctx, struct amdgpu_ctx_entity *centity,
+		ktime_t *total, ktime_t *max)
+{
+	ktime_t now, t1;
+	uint32_t i;
+
+	now = ktime_get();
+	for (i = 0; i < amdgpu_sched_jobs; i++) {
+		struct dma_fence *fence;
+		struct drm_sched_fence *s_fence;
+
+		spin_lock(&ctx->ring_lock);
+		fence = dma_fence_get(centity->fences[i]);
+		spin_unlock(&ctx->ring_lock);
+		if (!fence)
+			continue;
+		s_fence = to_drm_sched_fence(fence);
+		if (!dma_fence_is_signaled(&s_fence->scheduled))
+			continue;
+		t1 = s_fence->scheduled.timestamp;
+		if (t1 >= now)
+			continue;
+		if (dma_fence_is_signaled(&s_fence->finished) &&
+			s_fence->finished.timestamp < now)
+			*total += ktime_sub(s_fence->finished.timestamp, t1);
+		else
+			*total += ktime_sub(now, t1);
+		t1 = ktime_sub(now, t1);
+		dma_fence_put(fence);
+		*max = max(t1, *max);
+	}
+}
+
+ktime_t amdgpu_ctx_mgr_fence_usage(struct amdgpu_ctx_mgr *mgr, uint32_t hwip,
+		uint32_t idx, uint64_t *elapsed)
+{
+	struct idr *idp;
+	struct amdgpu_ctx *ctx;
+	uint32_t id;
+	struct amdgpu_ctx_entity *centity;
+	ktime_t total = 0, max = 0;
+
+	if (idx >= AMDGPU_MAX_ENTITY_NUM)
+		return 0;
+	idp = &mgr->ctx_handles;
+	mutex_lock(&mgr->lock);
+	idr_for_each_entry(idp, ctx, id) {
+		if (!ctx->entities[hwip][idx])
+			continue;
+
+		centity = ctx->entities[hwip][idx];
+		amdgpu_ctx_fence_time(ctx, centity, &total, &max);
+	}
+
+	mutex_unlock(&mgr->lock);
+	if (elapsed)
+		*elapsed = max;
+
+	return total;
+}


@@ -87,5 +87,8 @@ void amdgpu_ctx_mgr_init(struct amdgpu_ctx_mgr *mgr);
 void amdgpu_ctx_mgr_entity_fini(struct amdgpu_ctx_mgr *mgr);
 long amdgpu_ctx_mgr_entity_flush(struct amdgpu_ctx_mgr *mgr, long timeout);
 void amdgpu_ctx_mgr_fini(struct amdgpu_ctx_mgr *mgr);
+ktime_t amdgpu_ctx_mgr_fence_usage(struct amdgpu_ctx_mgr *mgr, uint32_t hwip,
+		uint32_t idx, uint64_t *elapsed);
+void amdgpu_ctx_fence_time(struct amdgpu_ctx *ctx, struct amdgpu_ctx_entity *centity,
+		ktime_t *total, ktime_t *max);

 #endif


@@ -23,6 +23,7 @@
  */

 #include <drm/amdgpu_drm.h>
+#include <drm/drm_aperture.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_gem.h>
 #include <drm/drm_vblank.h>
@@ -42,7 +43,7 @@
 #include "amdgpu_irq.h"
 #include "amdgpu_dma_buf.h"
 #include "amdgpu_sched.h"
+#include "amdgpu_fdinfo.h"
 #include "amdgpu_amdkfd.h"

 #include "amdgpu_ras.h"
@@ -1258,7 +1259,7 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
 #endif

 	/* Get rid of things like offb */
-	ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "amdgpudrmfb");
+	ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, "amdgpudrmfb");
 	if (ret)
 		return ret;
@@ -1694,6 +1695,9 @@ static const struct file_operations amdgpu_driver_kms_fops = {
 #ifdef CONFIG_COMPAT
 	.compat_ioctl = amdgpu_kms_compat_ioctl,
 #endif
+#ifdef CONFIG_PROC_FS
+	.show_fdinfo = amdgpu_show_fdinfo
+#endif
 };

 int amdgpu_file_to_fpriv(struct file *filp, struct amdgpu_fpriv **fpriv)


@@ -0,0 +1,104 @@
// SPDX-License-Identifier: MIT
/* Copyright 2021 Advanced Micro Devices, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
 * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 *
 * Authors: David Nieto
 *          Roy Sun
 */

#include <linux/debugfs.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/uaccess.h>
#include <linux/reboot.h>
#include <linux/syscalls.h>

#include <drm/amdgpu_drm.h>
#include <drm/drm_debugfs.h>

#include "amdgpu.h"
#include "amdgpu_vm.h"
#include "amdgpu_gem.h"
#include "amdgpu_ctx.h"
#include "amdgpu_fdinfo.h"

static const char *amdgpu_ip_name[AMDGPU_HW_IP_NUM] = {
	[AMDGPU_HW_IP_GFX]	=	"gfx",
	[AMDGPU_HW_IP_COMPUTE]	=	"compute",
	[AMDGPU_HW_IP_DMA]	=	"dma",
	[AMDGPU_HW_IP_UVD]	=	"dec",
	[AMDGPU_HW_IP_VCE]	=	"enc",
	[AMDGPU_HW_IP_UVD_ENC]	=	"enc_1",
	[AMDGPU_HW_IP_VCN_DEC]	=	"dec",
	[AMDGPU_HW_IP_VCN_ENC]	=	"enc",
	[AMDGPU_HW_IP_VCN_JPEG]	=	"jpeg",
};

void amdgpu_show_fdinfo(struct seq_file *m, struct file *f)
{
	struct amdgpu_fpriv *fpriv;
	uint32_t bus, dev, fn, i, domain;
	uint64_t vram_mem = 0, gtt_mem = 0, cpu_mem = 0;
	struct drm_file *file = f->private_data;
	struct amdgpu_device *adev = drm_to_adev(file->minor->dev);
	int ret;

	ret = amdgpu_file_to_fpriv(f, &fpriv);
	if (ret)
		return;
	bus = adev->pdev->bus->number;
	domain = pci_domain_nr(adev->pdev->bus);
	dev = PCI_SLOT(adev->pdev->devfn);
	fn = PCI_FUNC(adev->pdev->devfn);

	ret = amdgpu_bo_reserve(fpriv->vm.root.base.bo, false);
	if (ret) {
		DRM_ERROR("Fail to reserve bo\n");
		return;
	}
	amdgpu_vm_get_memory(&fpriv->vm, &vram_mem, &gtt_mem, &cpu_mem);
	amdgpu_bo_unreserve(fpriv->vm.root.base.bo);
	seq_printf(m, "pdev:\t%04x:%02x:%02x.%d\npasid:\t%u\n", domain, bus,
			dev, fn, fpriv->vm.pasid);
	seq_printf(m, "vram mem:\t%llu kB\n", vram_mem/1024UL);
	seq_printf(m, "gtt mem:\t%llu kB\n", gtt_mem/1024UL);
	seq_printf(m, "cpu mem:\t%llu kB\n", cpu_mem/1024UL);
	for (i = 0; i < AMDGPU_HW_IP_NUM; i++) {
		uint32_t count = amdgpu_ctx_num_entities[i];
		int idx = 0;
		uint64_t total = 0, min = 0;
		uint32_t perc, frac;

		for (idx = 0; idx < count; idx++) {
			total = amdgpu_ctx_mgr_fence_usage(&fpriv->ctx_mgr,
				i, idx, &min);
			if ((total == 0) || (min == 0))
				continue;

			perc = div64_u64(10000 * total, min);
			frac = perc % 100;

			seq_printf(m, "%s%d:\t%d.%d%%\n",
					amdgpu_ip_name[i],
					idx, perc/100, frac);
		}
	}
}
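
The busy-percentage math in amdgpu_show_fdinfo() above is fixed-point: it
scales by 10000 to keep two decimal places without floating point, then
splits the result into integer and fractional parts for printing. A small
sketch of the same arithmetic (the sample inputs are illustrative):

```python
# Mirror of the fixed-point "busy %" arithmetic in amdgpu_show_fdinfo():
# perc = 10000 * total / elapsed, printed as perc/100 '.' perc%100.
def busy_percent(total_ns, elapsed_ns):
    perc = (10000 * total_ns) // elapsed_ns  # two implied decimal places
    return "%d.%d%%" % (perc // 100, perc % 100)

print(busy_percent(123456, 1000000))  # 12.34%
```

Note that, like the kernel code, this prints "50.0%" rather than "50.00%"
because the fraction is formatted with %d, not zero-padded.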


@@ -0,0 +1,43 @@
/* SPDX-License-Identifier: MIT
* Copyright 2021 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: David Nieto
* Roy Sun
*/
#ifndef __AMDGPU_SMI_H__
#define __AMDGPU_SMI_H__
#include <linux/idr.h>
#include <linux/kfifo.h>
#include <linux/rbtree.h>
#include <drm/gpu_scheduler.h>
#include <drm/drm_file.h>
#include <drm/ttm/ttm_bo_driver.h>
#include <linux/sched/mm.h>
#include "amdgpu_sync.h"
#include "amdgpu_ring.h"
#include "amdgpu_ids.h"
uint32_t amdgpu_get_ip_count(struct amdgpu_device *adev, int id);
void amdgpu_show_fdinfo(struct seq_file *m, struct file *f);
#endif


@@ -766,7 +766,7 @@ int amdgpu_gem_op_ioctl(struct drm_device *dev, void *data,
 		void __user *out = u64_to_user_ptr(args->value);

 		info.bo_size = robj->tbo.base.size;
-		info.alignment = robj->tbo.mem.page_alignment << PAGE_SHIFT;
+		info.alignment = robj->tbo.page_alignment << PAGE_SHIFT;
 		info.domains = robj->preferred_domains;
 		info.domain_flags = robj->flags;
 		amdgpu_bo_unreserve(robj);


@@ -205,7 +205,7 @@ static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man,
 	spin_lock(&mgr->lock);
 	r = drm_mm_insert_node_in_range(&mgr->mm, &node->node, mem->num_pages,
-					mem->page_alignment, 0, place->fpfn,
+					tbo->page_alignment, 0, place->fpfn,
 					place->lpfn, DRM_MM_INSERT_BEST);
 	spin_unlock(&mgr->lock);


@@ -52,36 +52,12 @@
  *
  */

-/**
- * amdgpu_bo_subtract_pin_size - Remove BO from pin_size accounting
- *
- * @bo: &amdgpu_bo buffer object
- *
- * This function is called when a BO stops being pinned, and updates the
- * &amdgpu_device pin_size values accordingly.
- */
-static void amdgpu_bo_subtract_pin_size(struct amdgpu_bo *bo)
-{
-	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
-
-	if (bo->tbo.mem.mem_type == TTM_PL_VRAM) {
-		atomic64_sub(amdgpu_bo_size(bo), &adev->vram_pin_size);
-		atomic64_sub(amdgpu_vram_mgr_bo_visible_size(bo),
-			     &adev->visible_pin_size);
-	} else if (bo->tbo.mem.mem_type == TTM_PL_TT) {
-		atomic64_sub(amdgpu_bo_size(bo), &adev->gart_pin_size);
-	}
-}
-
 static void amdgpu_bo_destroy(struct ttm_buffer_object *tbo)
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(tbo->bdev);
 	struct amdgpu_bo *bo = ttm_to_amdgpu_bo(tbo);
 	struct amdgpu_bo_user *ubo;

-	if (bo->tbo.pin_count > 0)
-		amdgpu_bo_subtract_pin_size(bo);
-
 	amdgpu_bo_kunmap(bo);

 	if (bo->tbo.base.import_attach)
@@ -1037,14 +1013,22 @@ int amdgpu_bo_pin(struct amdgpu_bo *bo, u32 domain)
  */
 void amdgpu_bo_unpin(struct amdgpu_bo *bo)
 {
+	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
+
 	ttm_bo_unpin(&bo->tbo);
 	if (bo->tbo.pin_count)
 		return;

-	amdgpu_bo_subtract_pin_size(bo);
-
 	if (bo->tbo.base.import_attach)
 		dma_buf_unpin(bo->tbo.base.import_attach);
+
+	if (bo->tbo.mem.mem_type == TTM_PL_VRAM) {
+		atomic64_sub(amdgpu_bo_size(bo), &adev->vram_pin_size);
+		atomic64_sub(amdgpu_vram_mgr_bo_visible_size(bo),
+			     &adev->visible_pin_size);
+	} else if (bo->tbo.mem.mem_type == TTM_PL_TT) {
+		atomic64_sub(amdgpu_bo_size(bo), &adev->gart_pin_size);
+	}
 }

 /**
@@ -1304,6 +1288,26 @@ void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
 	trace_amdgpu_bo_move(abo, new_mem->mem_type, old_mem->mem_type);
 }

+void amdgpu_bo_get_memory(struct amdgpu_bo *bo, uint64_t *vram_mem,
+				uint64_t *gtt_mem, uint64_t *cpu_mem)
+{
+	unsigned int domain;
+
+	domain = amdgpu_mem_type_to_domain(bo->tbo.mem.mem_type);
+	switch (domain) {
+	case AMDGPU_GEM_DOMAIN_VRAM:
+		*vram_mem += amdgpu_bo_size(bo);
+		break;
+	case AMDGPU_GEM_DOMAIN_GTT:
+		*gtt_mem += amdgpu_bo_size(bo);
+		break;
+	case AMDGPU_GEM_DOMAIN_CPU:
+	default:
+		*cpu_mem += amdgpu_bo_size(bo);
+		break;
+	}
+}
+
 /**
  * amdgpu_bo_release_notify - notification about a BO being released
  * @bo: pointer to a buffer object
@@ -1362,7 +1366,7 @@ vm_fault_t amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
 	struct ttm_operation_ctx ctx = { false, false };
 	struct amdgpu_bo *abo = ttm_to_amdgpu_bo(bo);
-	unsigned long offset, size;
+	unsigned long offset;
 	int r;

 	/* Remember that this BO was accessed by the CPU */
@@ -1371,9 +1375,8 @@ vm_fault_t amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
 	if (bo->mem.mem_type != TTM_PL_VRAM)
 		return 0;

-	size = bo->mem.num_pages << PAGE_SHIFT;
 	offset = bo->mem.start << PAGE_SHIFT;
-	if ((offset + size) <= adev->gmc.visible_vram_size)
+	if ((offset + bo->base.size) <= adev->gmc.visible_vram_size)
 		return 0;

 	/* Can't move a pinned BO to visible VRAM */
@@ -1398,7 +1401,7 @@ vm_fault_t amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
 	offset = bo->mem.start << PAGE_SHIFT;
 	/* this should never happen */
 	if (bo->mem.mem_type == TTM_PL_VRAM &&
-	    (offset + size) > adev->gmc.visible_vram_size)
+	    (offset + bo->base.size) > adev->gmc.visible_vram_size)
 		return VM_FAULT_SIGBUS;

 	ttm_bo_move_to_lru_tail_unlocked(bo);
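
The fault handler above checks whether a BO still fits inside the CPU-visible
part of VRAM; after this series the size comes from bo->base.size instead of
mem->num_pages << PAGE_SHIFT. A sketch of that bounds check (PAGE_SHIFT = 12
assumes 4 KiB pages; the sample values are illustrative):

```python
# Sketch of the CPU-visible VRAM bounds check used in
# amdgpu_bo_fault_reserve_notify(): a BO is accessible only if its whole
# extent lies below visible_vram_size.
PAGE_SHIFT = 12  # assumes 4 KiB pages

def in_visible_vram(start_pfn, size_bytes, visible_vram_size):
    offset = start_pfn << PAGE_SHIFT  # byte offset of the BO in VRAM
    return (offset + size_bytes) <= visible_vram_size

print(in_visible_vram(0, 4096, 256 * 1024 * 1024))  # True
```

If the check fails for a pinned BO, the handler returns VM_FAULT_SIGBUS
rather than migrating the buffer.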


@@ -191,7 +191,7 @@ static inline unsigned amdgpu_bo_ngpu_pages(struct amdgpu_bo *bo)

 static inline unsigned amdgpu_bo_gpu_page_alignment(struct amdgpu_bo *bo)
 {
-	return (bo->tbo.mem.page_alignment << PAGE_SHIFT) / AMDGPU_GPU_PAGE_SIZE;
+	return (bo->tbo.page_alignment << PAGE_SHIFT) / AMDGPU_GPU_PAGE_SIZE;
 }

 /**
@@ -300,6 +300,8 @@ int amdgpu_bo_sync_wait(struct amdgpu_bo *bo, void *owner, bool intr);
 u64 amdgpu_bo_gpu_offset(struct amdgpu_bo *bo);
 u64 amdgpu_bo_gpu_offset_no_check(struct amdgpu_bo *bo);
 int amdgpu_bo_validate(struct amdgpu_bo *bo);
+void amdgpu_bo_get_memory(struct amdgpu_bo *bo, uint64_t *vram_mem,
+				uint64_t *gtt_mem, uint64_t *cpu_mem);
 int amdgpu_bo_restore_shadow(struct amdgpu_bo *shadow,
 			     struct dma_fence **fence);
 uint32_t amdgpu_bo_get_preferred_pin_domain(struct amdgpu_device *adev,


@@ -1018,8 +1018,6 @@ int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo)
 	} else {
 		/* allocate GART space */
-		tmp = bo->mem;
-		tmp.mm_node = NULL;
 		placement.num_placement = 1;
 		placement.placement = &placements;
 		placement.num_busy_placement = 1;


@@ -25,6 +25,7 @@
  *          Alex Deucher
  *          Jerome Glisse
  */
 #include <linux/dma-fence-array.h>
 #include <linux/interval_tree_generic.h>
 #include <linux/idr.h>
@@ -1717,6 +1718,50 @@ error_unlock:
 	return r;
 }
+void amdgpu_vm_get_memory(struct amdgpu_vm *vm, uint64_t *vram_mem,
+			  uint64_t *gtt_mem, uint64_t *cpu_mem)
+{
+	struct amdgpu_bo_va *bo_va, *tmp;
+
+	list_for_each_entry_safe(bo_va, tmp, &vm->idle, base.vm_status) {
+		if (!bo_va->base.bo)
+			continue;
+		amdgpu_bo_get_memory(bo_va->base.bo, vram_mem,
+				     gtt_mem, cpu_mem);
+	}
+	list_for_each_entry_safe(bo_va, tmp, &vm->evicted, base.vm_status) {
+		if (!bo_va->base.bo)
+			continue;
+		amdgpu_bo_get_memory(bo_va->base.bo, vram_mem,
+				     gtt_mem, cpu_mem);
+	}
+	list_for_each_entry_safe(bo_va, tmp, &vm->relocated, base.vm_status) {
+		if (!bo_va->base.bo)
+			continue;
+		amdgpu_bo_get_memory(bo_va->base.bo, vram_mem,
+				     gtt_mem, cpu_mem);
+	}
+	list_for_each_entry_safe(bo_va, tmp, &vm->moved, base.vm_status) {
+		if (!bo_va->base.bo)
+			continue;
+		amdgpu_bo_get_memory(bo_va->base.bo, vram_mem,
+				     gtt_mem, cpu_mem);
+	}
+	spin_lock(&vm->invalidated_lock);
+	list_for_each_entry_safe(bo_va, tmp, &vm->invalidated, base.vm_status) {
+		if (!bo_va->base.bo)
+			continue;
+		amdgpu_bo_get_memory(bo_va->base.bo, vram_mem,
+				     gtt_mem, cpu_mem);
+	}
+	list_for_each_entry_safe(bo_va, tmp, &vm->done, base.vm_status) {
+		if (!bo_va->base.bo)
+			continue;
+		amdgpu_bo_get_memory(bo_va->base.bo, vram_mem,
+				     gtt_mem, cpu_mem);
+	}
+	spin_unlock(&vm->invalidated_lock);
+}
 /**
  * amdgpu_vm_bo_update - update all BO mappings in the vm page table
  *


@@ -447,6 +447,8 @@ void amdgpu_vm_set_task_info(struct amdgpu_vm *vm);
 void amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev,
 				struct amdgpu_vm *vm);
 void amdgpu_vm_del_from_lru_notify(struct ttm_buffer_object *bo);
+void amdgpu_vm_get_memory(struct amdgpu_vm *vm, uint64_t *vram_mem,
+			  uint64_t *gtt_mem, uint64_t *cpu_mem);
 #if defined(CONFIG_DEBUG_FS)
 void amdgpu_debugfs_vm_bo_info(struct amdgpu_vm *vm, struct seq_file *m);


@@ -450,7 +450,8 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 		/* default to 2MB */
 		pages_per_node = (2UL << (20UL - PAGE_SHIFT));
 #endif
-		pages_per_node = max((uint32_t)pages_per_node, mem->page_alignment);
+		pages_per_node = max((uint32_t)pages_per_node,
+				     tbo->page_alignment);
 		num_nodes = DIV_ROUND_UP(mem->num_pages, pages_per_node);
 	}
@@ -489,7 +490,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 	for (; pages_left; ++i) {
 		unsigned long pages = min(pages_left, pages_per_node);
-		uint32_t alignment = mem->page_alignment;
+		uint32_t alignment = tbo->page_alignment;
 		if (pages == pages_per_node)
 			alignment = pages_per_node;


@@ -188,6 +188,8 @@ void amdgpu_atombios_dp_aux_init(struct amdgpu_connector *amdgpu_connector)
 {
 	amdgpu_connector->ddc_bus->rec.hpd = amdgpu_connector->hpd.hpd;
 	amdgpu_connector->ddc_bus->aux.transfer = amdgpu_atombios_dp_aux_transfer;
+	amdgpu_connector->ddc_bus->aux.drm_dev = amdgpu_connector->base.dev;
 	drm_dp_aux_init(&amdgpu_connector->ddc_bus->aux);
 	amdgpu_connector->ddc_bus->has_aux = true;
 }
@@ -610,7 +612,7 @@ amdgpu_atombios_dp_link_train_cr(struct amdgpu_atombios_dp_link_train_info *dp_i
 	dp_info->tries = 0;
 	voltage = 0xff;
 	while (1) {
-		drm_dp_link_train_clock_recovery_delay(dp_info->dpcd);
+		drm_dp_link_train_clock_recovery_delay(dp_info->aux, dp_info->dpcd);
 		if (drm_dp_dpcd_read_link_status(dp_info->aux,
 						 dp_info->link_status) <= 0) {
@@ -675,7 +677,7 @@ amdgpu_atombios_dp_link_train_ce(struct amdgpu_atombios_dp_link_train_info *dp_i
 	dp_info->tries = 0;
 	channel_eq = false;
 	while (1) {
-		drm_dp_link_train_channel_eq_delay(dp_info->dpcd);
+		drm_dp_link_train_channel_eq_delay(dp_info->aux, dp_info->dpcd);
 		if (drm_dp_dpcd_read_link_status(dp_info->aux,
 						 dp_info->link_status) <= 0) {


@@ -363,6 +363,7 @@ static int uvd_v7_0_enc_ring_test_ib(struct amdgpu_ring *ring, long timeout)
 error:
 	dma_fence_put(fence);
+	amdgpu_bo_unpin(bo);
 	amdgpu_bo_unreserve(bo);
 	amdgpu_bo_unref(&bo);
 	return r;


@@ -6308,25 +6308,6 @@ static int fill_hdr_info_packet(const struct drm_connector_state *state,
 	return 0;
 }
-static bool
-is_hdr_metadata_different(const struct drm_connector_state *old_state,
-			  const struct drm_connector_state *new_state)
-{
-	struct drm_property_blob *old_blob = old_state->hdr_output_metadata;
-	struct drm_property_blob *new_blob = new_state->hdr_output_metadata;
-	if (old_blob != new_blob) {
-		if (old_blob && new_blob &&
-		    old_blob->length == new_blob->length)
-			return memcmp(old_blob->data, new_blob->data,
-				      old_blob->length);
-		return true;
-	}
-	return false;
-}
 static int
 amdgpu_dm_connector_atomic_check(struct drm_connector *conn,
 				 struct drm_atomic_state *state)
@@ -6344,7 +6325,7 @@ amdgpu_dm_connector_atomic_check(struct drm_connector *conn,
 	if (!crtc)
 		return 0;
-	if (is_hdr_metadata_different(old_con_state, new_con_state)) {
+	if (!drm_connector_atomic_hdr_metadata_equal(old_con_state, new_con_state)) {
 		struct dc_info_packet hdr_infopacket;
 		ret = fill_hdr_info_packet(new_con_state, &hdr_infopacket);
@@ -7531,9 +7512,7 @@ void amdgpu_dm_connector_init_helper(struct amdgpu_display_manager *dm,
 	if (connector_type == DRM_MODE_CONNECTOR_HDMIA ||
 	    connector_type == DRM_MODE_CONNECTOR_DisplayPort ||
 	    connector_type == DRM_MODE_CONNECTOR_eDP) {
-		drm_object_attach_property(
-			&aconnector->base.base,
-			dm->ddev->mode_config.hdr_output_metadata_property, 0);
+		drm_connector_attach_hdr_output_metadata_property(&aconnector->base);
 		if (!aconnector->mst_port)
 			drm_connector_attach_vrr_capable_property(&aconnector->base);
@@ -8838,7 +8817,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
 			dm_old_crtc_state->abm_level;
 		hdr_changed =
-			is_hdr_metadata_different(old_con_state, new_con_state);
+			!drm_connector_atomic_hdr_metadata_equal(old_con_state, new_con_state);
 		if (!scaling_changed && !abm_changed && !hdr_changed)
 			continue;


@@ -434,10 +434,13 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
 				       struct amdgpu_dm_connector *aconnector,
 				       int link_index)
 {
+	struct dc_link_settings max_link_enc_cap = {0};
 	aconnector->dm_dp_aux.aux.name =
 		kasprintf(GFP_KERNEL, "AMDGPU DM aux hw bus %d",
 			  link_index);
 	aconnector->dm_dp_aux.aux.transfer = dm_dp_aux_transfer;
+	aconnector->dm_dp_aux.aux.drm_dev = dm->ddev;
 	aconnector->dm_dp_aux.ddc_service = aconnector->dc_link->ddc;
 	drm_dp_aux_init(&aconnector->dm_dp_aux.aux);
@@ -447,6 +450,7 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
 	if (aconnector->base.connector_type == DRM_MODE_CONNECTOR_eDP)
 		return;
+	dc_link_dp_get_max_link_enc_cap(aconnector->dc_link, &max_link_enc_cap);
 	aconnector->mst_mgr.cbs = &dm_mst_cbs;
 	drm_dp_mst_topology_mgr_init(
 		&aconnector->mst_mgr,
@@ -454,6 +458,8 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
 		&aconnector->dm_dp_aux.aux,
 		16,
 		4,
+		(u8)max_link_enc_cap.lane_count,
+		(u8)max_link_enc_cap.link_rate,
 		aconnector->connector_id);
 	drm_connector_attach_dp_subconnector_property(&aconnector->base);


@@ -1893,6 +1893,24 @@ bool dc_link_dp_sync_lt_end(struct dc_link *link, bool link_down)
 	return true;
 }
+bool dc_link_dp_get_max_link_enc_cap(const struct dc_link *link, struct dc_link_settings *max_link_enc_cap)
+{
+	if (!max_link_enc_cap) {
+		DC_LOG_ERROR("%s: Could not return max link encoder caps", __func__);
+		return false;
+	}
+
+	if (link->link_enc->funcs->get_max_link_cap) {
+		link->link_enc->funcs->get_max_link_cap(link->link_enc, max_link_enc_cap);
+		return true;
+	}
+
+	DC_LOG_ERROR("%s: Max link encoder caps unknown", __func__);
+	max_link_enc_cap->lane_count = 1;
+	max_link_enc_cap->link_rate = 6;
+	return false;
+}
 static struct dc_link_settings get_max_link_cap(struct dc_link *link)
 {
 	struct dc_link_settings max_link_cap = {0};


@@ -345,6 +345,8 @@ bool dc_link_dp_set_test_pattern(
 	const unsigned char *p_custom_pattern,
 	unsigned int cust_pattern_size);
+bool dc_link_dp_get_max_link_enc_cap(const struct dc_link *link, struct dc_link_settings *max_link_enc_cap);
 void dc_link_enable_hpd_filter(struct dc_link *link, bool enable);
 bool dc_link_is_dp_sink_present(struct dc_link *link);


@@ -247,7 +247,6 @@ static void komeda_kms_mode_config_init(struct komeda_kms_dev *kms,
 	config->min_height = 0;
 	config->max_width = 4096;
 	config->max_height = 4096;
-	config->allow_fb_modifiers = true;
 	config->funcs = &komeda_mode_config_funcs;
 	config->helper_private = &komeda_mode_config_helpers;


@@ -403,7 +403,6 @@ static int malidp_init(struct drm_device *drm)
 	drm->mode_config.max_height = hwdev->max_line_size;
 	drm->mode_config.funcs = &malidp_mode_config_funcs;
 	drm->mode_config.helper_private = &malidp_mode_config_helpers;
-	drm->mode_config.allow_fb_modifiers = true;
 	ret = malidp_crtc_init(drm);
 	if (ret)


@@ -927,6 +927,11 @@ static const struct drm_plane_helper_funcs malidp_de_plane_helper_funcs = {
 	.atomic_disable = malidp_de_plane_disable,
 };
+static const uint64_t linear_only_modifiers[] = {
+	DRM_FORMAT_MOD_LINEAR,
+	DRM_FORMAT_MOD_INVALID
+};
 int malidp_de_planes_init(struct drm_device *drm)
 {
 	struct malidp_drm *malidp = drm->dev_private;
@@ -990,8 +995,8 @@ int malidp_de_planes_init(struct drm_device *drm)
 	 */
 	ret = drm_universal_plane_init(drm, &plane->base, crtcs,
 				       &malidp_de_plane_funcs, formats, n,
-				       (id == DE_SMART) ? NULL : modifiers, plane_type,
-				       NULL);
+				       (id == DE_SMART) ? linear_only_modifiers : modifiers,
+				       plane_type, NULL);
 	if (ret < 0)
 		goto cleanup;


@@ -9,6 +9,7 @@
 #include <linux/of_graph.h>
 #include <linux/platform_device.h>
+#include <drm/drm_aperture.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_ioctl.h>
@@ -94,9 +95,7 @@ static int armada_drm_bind(struct device *dev)
 	}
 	/* Remove early framebuffers */
-	ret = drm_fb_helper_remove_conflicting_framebuffers(NULL,
-							    "armada-drm-fb",
-							    false);
+	ret = drm_aperture_remove_framebuffers(false, "armada-drm-fb");
 	if (ret) {
 		dev_err(dev, "[" DRM_NAME ":%s] can't kick out simple-fb: %d\n",
 			__func__, ret);


@@ -189,6 +189,9 @@ bool ast_backup_fw(struct drm_device *dev, u8 *addr, u32 size)
 	u32 i, data;
 	u32 boot_address;
+	if (ast->config_mode != ast_use_p2a)
+		return false;
 	data = ast_mindwm(ast, 0x1e6e2100) & 0x01;
 	if (data) {
 		boot_address = get_fw_base(ast);
@@ -207,6 +210,9 @@ static bool ast_launch_m68k(struct drm_device *dev)
 	u8 *fw_addr = NULL;
 	u8 jreg;
+	if (ast->config_mode != ast_use_p2a)
+		return false;
 	data = ast_mindwm(ast, 0x1e6e2100) & 0x01;
 	if (!data) {
@@ -271,18 +277,21 @@ u8 ast_get_dp501_max_clk(struct drm_device *dev)
 	struct ast_private *ast = to_ast_private(dev);
 	u32 boot_address, offset, data;
 	u8 linkcap[4], linkrate, linklanes, maxclk = 0xff;
+	u32 *plinkcap;
+	if (ast->config_mode == ast_use_p2a) {
 		boot_address = get_fw_base(ast);
 		/* validate FW version */
-		offset = 0xf000;
+		offset = AST_DP501_GBL_VERSION;
 		data = ast_mindwm(ast, boot_address + offset);
-		if ((data & 0xf0) != 0x10) /* version: 1x */
+		if ((data & AST_DP501_FW_VERSION_MASK) != AST_DP501_FW_VERSION_1) /* version: 1x */
 			return maxclk;
 		/* Read Link Capability */
-		offset = 0xf014;
-		*(u32 *)linkcap = ast_mindwm(ast, boot_address + offset);
+		offset = AST_DP501_LINKRATE;
+		plinkcap = (u32 *)linkcap;
+		*plinkcap = ast_mindwm(ast, boot_address + offset);
 		if (linkcap[2] == 0) {
 			linkrate = linkcap[0];
 			linklanes = linkcap[1];
@@ -291,6 +300,33 @@ u8 ast_get_dp501_max_clk(struct drm_device *dev)
 				data = 0xff;
 			maxclk = (u8)data;
 		}
+	} else {
+		if (!ast->dp501_fw_buf)
+			return AST_DP501_DEFAULT_DCLK; /* 1024x768 as default */
+
+		/* dummy read */
+		offset = 0x0000;
+		data = readl(ast->dp501_fw_buf + offset);
+
+		/* validate FW version */
+		offset = AST_DP501_GBL_VERSION;
+		data = readl(ast->dp501_fw_buf + offset);
+		if ((data & AST_DP501_FW_VERSION_MASK) != AST_DP501_FW_VERSION_1) /* version: 1x */
+			return maxclk;
+
+		/* Read Link Capability */
+		offset = AST_DP501_LINKRATE;
+		plinkcap = (u32 *)linkcap;
+		*plinkcap = readl(ast->dp501_fw_buf + offset);
+		if (linkcap[2] == 0) {
+			linkrate = linkcap[0];
+			linklanes = linkcap[1];
+			data = (linkrate == 0x0a) ? (90 * linklanes) : (54 * linklanes);
+			if (data > 0xff)
+				data = 0xff;
+			maxclk = (u8)data;
+		}
+	}
 	return maxclk;
 }
@@ -298,26 +334,57 @@ bool ast_dp501_read_edid(struct drm_device *dev, u8 *ediddata)
 {
 	struct ast_private *ast = to_ast_private(dev);
 	u32 i, boot_address, offset, data;
+	u32 *pEDIDidx;
+	if (ast->config_mode == ast_use_p2a) {
 		boot_address = get_fw_base(ast);
 		/* validate FW version */
-		offset = 0xf000;
+		offset = AST_DP501_GBL_VERSION;
 		data = ast_mindwm(ast, boot_address + offset);
-		if ((data & 0xf0) != 0x10)
+		if ((data & AST_DP501_FW_VERSION_MASK) != AST_DP501_FW_VERSION_1)
 			return false;
 		/* validate PnP Monitor */
-		offset = 0xf010;
+		offset = AST_DP501_PNPMONITOR;
 		data = ast_mindwm(ast, boot_address + offset);
-		if (!(data & 0x01))
+		if (!(data & AST_DP501_PNP_CONNECTED))
 			return false;
 		/* Read EDID */
-		offset = 0xf020;
+		offset = AST_DP501_EDID_DATA;
 		for (i = 0; i < 128; i += 4) {
 			data = ast_mindwm(ast, boot_address + offset + i);
-			*(u32 *)(ediddata + i) = data;
+			pEDIDidx = (u32 *)(ediddata + i);
+			*pEDIDidx = data;
+		}
+	} else {
+		if (!ast->dp501_fw_buf)
+			return false;
+
+		/* dummy read */
+		offset = 0x0000;
+		data = readl(ast->dp501_fw_buf + offset);
+
+		/* validate FW version */
+		offset = AST_DP501_GBL_VERSION;
+		data = readl(ast->dp501_fw_buf + offset);
+		if ((data & AST_DP501_FW_VERSION_MASK) != AST_DP501_FW_VERSION_1)
+			return false;
+
+		/* validate PnP Monitor */
+		offset = AST_DP501_PNPMONITOR;
+		data = readl(ast->dp501_fw_buf + offset);
+		if (!(data & AST_DP501_PNP_CONNECTED))
+			return false;
+
+		/* Read EDID */
+		offset = AST_DP501_EDID_DATA;
+		for (i = 0; i < 128; i += 4) {
+			data = readl(ast->dp501_fw_buf + offset + i);
+			pEDIDidx = (u32 *)(ediddata + i);
+			*pEDIDidx = data;
+		}
 	}
 	return true;


@@ -30,10 +30,10 @@
 #include <linux/module.h>
 #include <linux/pci.h>
+#include <drm/drm_aperture.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc_helper.h>
 #include <drm/drm_drv.h>
-#include <drm/drm_fb_helper.h>
 #include <drm/drm_gem_vram_helper.h>
 #include <drm/drm_probe_helper.h>
@@ -89,23 +89,18 @@ static const struct pci_device_id ast_pciidlist[] = {
 MODULE_DEVICE_TABLE(pci, ast_pciidlist);
-static void ast_kick_out_firmware_fb(struct pci_dev *pdev)
+static int ast_remove_conflicting_framebuffers(struct pci_dev *pdev)
 {
-	struct apertures_struct *ap;
 	bool primary = false;
+	resource_size_t base, size;
-	ap = alloc_apertures(1);
-	if (!ap)
-		return;
-	ap->ranges[0].base = pci_resource_start(pdev, 0);
-	ap->ranges[0].size = pci_resource_len(pdev, 0);
+	base = pci_resource_start(pdev, 0);
+	size = pci_resource_len(pdev, 0);
 #ifdef CONFIG_X86
 	primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW;
 #endif
-	drm_fb_helper_remove_conflicting_framebuffers(ap, "astdrmfb", primary);
-	kfree(ap);
+	return drm_aperture_remove_conflicting_framebuffers(base, size, primary, "astdrmfb");
 }
@@ -114,7 +109,9 @@ static int ast_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	struct drm_device *dev;
 	int ret;
-	ast_kick_out_firmware_fb(pdev);
+	ret = ast_remove_conflicting_framebuffers(pdev);
+	if (ret)
+		return ret;
 	ret = pcim_enable_device(pdev);
 	if (ret)


@@ -150,6 +150,7 @@ struct ast_private {
 	void __iomem *regs;
 	void __iomem *ioregs;
+	void __iomem *dp501_fw_buf;
 	enum ast_chip chip;
 	bool vga2_clone;
@@ -325,6 +326,17 @@ int ast_mode_config_init(struct ast_private *ast);
 #define AST_MM_ALIGN_SHIFT 4
 #define AST_MM_ALIGN_MASK ((1 << AST_MM_ALIGN_SHIFT) - 1)
+#define AST_DP501_FW_VERSION_MASK	GENMASK(7, 4)
+#define AST_DP501_FW_VERSION_1		BIT(4)
+#define AST_DP501_PNP_CONNECTED		BIT(1)
+
+#define AST_DP501_DEFAULT_DCLK	65
+
+#define AST_DP501_GBL_VERSION	0xf000
+#define AST_DP501_PNPMONITOR	0xf010
+#define AST_DP501_LINKRATE	0xf014
+#define AST_DP501_EDID_DATA	0xf020
 int ast_mm_init(struct ast_private *ast);
 /* ast post */


@@ -99,7 +99,7 @@ static void ast_detect_config_mode(struct drm_device *dev, u32 *scu_rev)
 	if (!(jregd0 & 0x80) || !(jregd1 & 0x10)) {
 		/* Double check it's actually working */
 		data = ast_read32(ast, 0xf004);
-		if (data != 0xFFFFFFFF) {
+		if ((data != 0xFFFFFFFF) && (data != 0x00)) {
 			/* P2A works, grab silicon revision */
 			ast->config_mode = ast_use_p2a;
@@ -413,7 +413,7 @@ struct ast_private *ast_device_create(const struct drm_driver *drv,
 	pci_set_drvdata(pdev, dev);
-	ast->regs = pci_iomap(pdev, 1, 0);
+	ast->regs = pcim_iomap(pdev, 1, 0);
 	if (!ast->regs)
 		return ERR_PTR(-EIO);
@@ -429,7 +429,7 @@ struct ast_private *ast_device_create(const struct drm_driver *drv,
 	/* "map" IO regs if the above hasn't done so already */
 	if (!ast->ioregs) {
-		ast->ioregs = pci_iomap(pdev, 2, 0);
+		ast->ioregs = pcim_iomap(pdev, 2, 0);
 		if (!ast->ioregs)
 			return ERR_PTR(-EIO);
 	}
@@ -450,6 +450,14 @@ struct ast_private *ast_device_create(const struct drm_driver *drv,
 	if (ret)
 		return ERR_PTR(ret);
+	/* map reserved buffer */
+	ast->dp501_fw_buf = NULL;
+	if (dev->vram_mm->vram_size < pci_resource_len(pdev, 0)) {
+		ast->dp501_fw_buf = pci_iomap_range(pdev, 0, dev->vram_mm->vram_size, 0);
+		if (!ast->dp501_fw_buf)
+			drm_info(dev, "failed to map reserved buffer!\n");
+	}
 	ret = ast_mode_config_init(ast);
 	if (ret)
 		return ERR_PTR(ret);


@@ -78,6 +78,7 @@ struct bochs_device {
 int bochs_hw_init(struct drm_device *dev);
 void bochs_hw_fini(struct drm_device *dev);
+void bochs_hw_blank(struct bochs_device *bochs, bool blank);
 void bochs_hw_setmode(struct bochs_device *bochs,
 		      struct drm_display_mode *mode);
 void bochs_hw_setformat(struct bochs_device *bochs,


@@ -6,6 +6,7 @@
 #include <linux/pci.h>
 #include <drm/drm_drv.h>
+#include <drm/drm_aperture.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_managed.h>
@@ -109,7 +110,7 @@ static int bochs_pci_probe(struct pci_dev *pdev,
 		return -ENOMEM;
 	}
-	ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "bochsdrmfb");
+	ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, "bochsdrmfb");
 	if (ret)
 		return ret;


@@ -7,6 +7,7 @@
 #include <drm/drm_drv.h>
 #include <drm/drm_fourcc.h>
+#include <video/vga.h>
 #include "bochs.h"
 /* ---------------------------------------------------------------------- */
@@ -24,6 +25,19 @@ static void bochs_vga_writeb(struct bochs_device *bochs, u16 ioport, u8 val)
 	}
 }
+static u8 bochs_vga_readb(struct bochs_device *bochs, u16 ioport)
+{
+	if (WARN_ON(ioport < 0x3c0 || ioport > 0x3df))
+		return 0xff;
+
+	if (bochs->mmio) {
+		int offset = ioport - 0x3c0 + 0x400;
+		return readb(bochs->mmio + offset);
+	} else {
+		return inb(ioport);
+	}
+}
 static u16 bochs_dispi_read(struct bochs_device *bochs, u16 reg)
 {
 	u16 ret = 0;
@@ -205,6 +219,15 @@ void bochs_hw_fini(struct drm_device *dev)
 	kfree(bochs->edid);
 }
+void bochs_hw_blank(struct bochs_device *bochs, bool blank)
+{
+	DRM_DEBUG_DRIVER("hw_blank %d\n", blank);
+	/* discard ar_flip_flop */
+	(void)bochs_vga_readb(bochs, VGA_IS1_RC);
+	/* blank or unblank; we need only update index and set 0x20 */
+	bochs_vga_writeb(bochs, VGA_ATT_W, blank ? 0 : 0x20);
+}
 void bochs_hw_setmode(struct bochs_device *bochs,
 		      struct drm_display_mode *mode)
 {
@@ -223,7 +246,7 @@ void bochs_hw_setmode(struct bochs_device *bochs,
 		  bochs->xres, bochs->yres, bochs->bpp,
 		  bochs->yres_virtual);
-	bochs_vga_writeb(bochs, 0x3c0, 0x20); /* unblank */
+	bochs_hw_blank(bochs, false);
 	bochs_dispi_write(bochs, VBE_DISPI_INDEX_ENABLE, 0);
 	bochs_dispi_write(bochs, VBE_DISPI_INDEX_BPP, bochs->bpp);


@@ -57,6 +57,13 @@ static void bochs_pipe_enable(struct drm_simple_display_pipe *pipe,
 	bochs_plane_update(bochs, plane_state);
 }
+static void bochs_pipe_disable(struct drm_simple_display_pipe *pipe)
+{
+	struct bochs_device *bochs = pipe->crtc.dev->dev_private;
+
+	bochs_hw_blank(bochs, true);
+}
 static void bochs_pipe_update(struct drm_simple_display_pipe *pipe,
 			      struct drm_plane_state *old_state)
 {
@@ -67,6 +74,7 @@ static void bochs_pipe_update(struct drm_simple_display_pipe *pipe,
 static const struct drm_simple_display_pipe_funcs bochs_pipe_funcs = {
 	.enable	    = bochs_pipe_enable,
+	.disable    = bochs_pipe_disable,
 	.update	    = bochs_pipe_update,
 	.prepare_fb = drm_gem_vram_simple_display_pipe_prepare_fb,
 	.cleanup_fb = drm_gem_vram_simple_display_pipe_cleanup_fb,


@@ -68,6 +68,7 @@ config DRM_LONTIUM_LT8912B
 	select DRM_KMS_HELPER
 	select DRM_MIPI_DSI
 	select REGMAP_I2C
+	select VIDEOMODE_HELPERS
 	help
 	  Driver for Lontium LT8912B DSI to HDMI bridge
 	  chip driver.
@@ -104,6 +105,14 @@ config DRM_LONTIUM_LT9611UXC
 	  HDMI signals
 	  Please say Y if you have such hardware.
+config DRM_ITE_IT66121
+	tristate "ITE IT66121 HDMI bridge"
+	depends on OF
+	select DRM_KMS_HELPER
+	select REGMAP_I2C
+	help
+	  Support for ITE IT66121 HDMI bridge.
+
 config DRM_LVDS_CODEC
 	tristate "Transparent LVDS encoders and decoders support"
 	depends on OF
@@ -172,7 +181,7 @@ config DRM_SIL_SII8620
 	tristate "Silicon Image SII8620 HDMI/MHL bridge"
 	depends on OF
 	select DRM_KMS_HELPER
-	imply EXTCON
+	select EXTCON
 	depends on RC_CORE || !RC_CORE
 	help
 	  Silicon Image SII8620 HDMI/MHL bridge chip driver.
@@ -270,6 +279,7 @@ config DRM_TI_SN65DSI86
 	select REGMAP_I2C
 	select DRM_PANEL
 	select DRM_MIPI_DSI
+	select AUXILIARY_BUS
 	help
 	  Texas Instruments SN65DSI86 DSI to eDP Bridge driver


@@ -26,6 +26,7 @@ obj-$(CONFIG_DRM_TI_SN65DSI86) += ti-sn65dsi86.o
 obj-$(CONFIG_DRM_TI_TFP410) += ti-tfp410.o
 obj-$(CONFIG_DRM_TI_TPD12S015) += ti-tpd12s015.o
 obj-$(CONFIG_DRM_NWL_MIPI_DSI) += nwl-dsi.o
+obj-$(CONFIG_DRM_ITE_IT66121) += ite-it66121.o
 obj-y += analogix/
 obj-y += cadence/


@@ -191,6 +191,7 @@
 #define ADV7511_I2S_FORMAT_I2S 0
 #define ADV7511_I2S_FORMAT_RIGHT_J 1
 #define ADV7511_I2S_FORMAT_LEFT_J 2
+#define ADV7511_I2S_IEC958_DIRECT 3
 #define ADV7511_PACKET(p, x) ((p) * 0x20 + (x))
 #define ADV7511_PACKET_SDP(x) ADV7511_PACKET(0, x)


@@ -101,6 +101,10 @@ static int adv7511_hdmi_hw_params(struct device *dev, void *data,
 	case 20:
 		len = ADV7511_I2S_SAMPLE_LEN_20;
 		break;
+	case 32:
+		if (fmt->bit_fmt != SNDRV_PCM_FORMAT_IEC958_SUBFRAME_LE)
+			return -EINVAL;
+		fallthrough;
 	case 24:
 		len = ADV7511_I2S_SAMPLE_LEN_24;
 		break;
@@ -112,6 +116,8 @@ static int adv7511_hdmi_hw_params(struct device *dev, void *data,
 	case HDMI_I2S:
 		audio_source = ADV7511_AUDIO_SOURCE_I2S;
 		i2s_format = ADV7511_I2S_FORMAT_I2S;
+		if (fmt->bit_fmt == SNDRV_PCM_FORMAT_IEC958_SUBFRAME_LE)
+			i2s_format = ADV7511_I2S_IEC958_DIRECT;
 		break;
 	case HDMI_RIGHT_J:
 		audio_source = ADV7511_AUDIO_SOURCE_I2S;


@@ -6,7 +6,7 @@ config DRM_ANALOGIX_ANX6345
 	select DRM_KMS_HELPER
 	select REGMAP_I2C
 	help
-	  ANX6345 is an ultra-low Full-HD DisplayPort/eDP
+	  ANX6345 is an ultra-low power Full-HD DisplayPort/eDP
 	  transmitter designed for portable devices. The
 	  ANX6345 transforms the LVTTL RGB output of an
 	  application processor to eDP or DisplayPort.


@@ -537,6 +537,7 @@ static int anx6345_bridge_attach(struct drm_bridge *bridge,
/* Register aux channel */ /* Register aux channel */
anx6345->aux.name = "DP-AUX"; anx6345->aux.name = "DP-AUX";
anx6345->aux.dev = &anx6345->client->dev; anx6345->aux.dev = &anx6345->client->dev;
anx6345->aux.drm_dev = bridge->dev;
anx6345->aux.transfer = anx6345_aux_transfer;
err = drm_dp_aux_register(&anx6345->aux);


@@ -905,6 +905,7 @@ static int anx78xx_bridge_attach(struct drm_bridge *bridge,
/* Register aux channel */
anx78xx->aux.name = "DP-AUX";
anx78xx->aux.dev = &anx78xx->client->dev;
anx78xx->aux.drm_dev = bridge->dev;
anx78xx->aux.transfer = anx78xx_aux_transfer;
err = drm_dp_aux_register(&anx78xx->aux);


@@ -1765,6 +1765,7 @@ int analogix_dp_bind(struct analogix_dp_device *dp, struct drm_device *drm_dev)
dp->aux.name = "DP-AUX";
dp->aux.transfer = analogix_dpaux_transfer;
dp->aux.dev = dp->dev;
dp->aux.drm_dev = drm_dev;
ret = drm_dp_aux_register(&dp->aux);
if (ret)


@@ -893,7 +893,7 @@ static void anx7625_power_on(struct anx7625_data *ctx)
usleep_range(2000, 2100);
}
-usleep_range(4000, 4100);
+usleep_range(11000, 12000);
/* Power on pin enable */
gpiod_set_value(ctx->pdata.gpio_p_on, 1);


@@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-$(CONFIG_DRM_CDNS_MHDP8546) += cdns-mhdp8546.o
-cdns-mhdp8546-y := cdns-mhdp8546-core.o
+cdns-mhdp8546-y := cdns-mhdp8546-core.o cdns-mhdp8546-hdcp.o
cdns-mhdp8546-$(CONFIG_DRM_CDNS_MHDP8546_J721E) += cdns-mhdp8546-j721e.o


@@ -42,6 +42,7 @@
#include <drm/drm_connector.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_dp_helper.h>
#include <drm/drm_hdcp.h>
#include <drm/drm_modeset_helper_vtables.h>
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>
@@ -49,7 +50,7 @@
#include <asm/unaligned.h>
#include "cdns-mhdp8546-core.h"
#include "cdns-mhdp8546-hdcp.h"
#include "cdns-mhdp8546-j721e.h"
static int cdns_mhdp_mailbox_read(struct cdns_mhdp_device *mhdp)
@@ -1614,10 +1615,51 @@ enum drm_mode_status cdns_mhdp_mode_valid(struct drm_connector *conn,
return MODE_OK;
}
static int cdns_mhdp_connector_atomic_check(struct drm_connector *conn,
struct drm_atomic_state *state)
{
struct cdns_mhdp_device *mhdp = connector_to_mhdp(conn);
struct drm_connector_state *old_state, *new_state;
struct drm_crtc_state *crtc_state;
u64 old_cp, new_cp;
if (!mhdp->hdcp_supported)
return 0;
old_state = drm_atomic_get_old_connector_state(state, conn);
new_state = drm_atomic_get_new_connector_state(state, conn);
old_cp = old_state->content_protection;
new_cp = new_state->content_protection;
if (old_state->hdcp_content_type != new_state->hdcp_content_type &&
new_cp != DRM_MODE_CONTENT_PROTECTION_UNDESIRED) {
new_state->content_protection = DRM_MODE_CONTENT_PROTECTION_DESIRED;
goto mode_changed;
}
if (!new_state->crtc) {
if (old_cp == DRM_MODE_CONTENT_PROTECTION_ENABLED)
new_state->content_protection = DRM_MODE_CONTENT_PROTECTION_DESIRED;
return 0;
}
if (old_cp == new_cp ||
(old_cp == DRM_MODE_CONTENT_PROTECTION_DESIRED &&
new_cp == DRM_MODE_CONTENT_PROTECTION_ENABLED))
return 0;
mode_changed:
crtc_state = drm_atomic_get_new_crtc_state(state, new_state->crtc);
crtc_state->mode_changed = true;
return 0;
}
static const struct drm_connector_helper_funcs cdns_mhdp_conn_helper_funcs = {
.detect_ctx = cdns_mhdp_connector_detect,
.get_modes = cdns_mhdp_get_modes,
.mode_valid = cdns_mhdp_mode_valid,
.atomic_check = cdns_mhdp_connector_atomic_check,
};
static const struct drm_connector_funcs cdns_mhdp_conn_funcs = {
@@ -1662,7 +1704,10 @@ static int cdns_mhdp_connector_init(struct cdns_mhdp_device *mhdp)
return ret;
}
-return 0;
+if (mhdp->hdcp_supported)
+ret = drm_connector_attach_content_protection_property(conn, true);
+return ret;
}
static int cdns_mhdp_attach(struct drm_bridge *bridge,
@@ -1674,10 +1719,15 @@ static int cdns_mhdp_attach(struct drm_bridge *bridge,
dev_dbg(mhdp->dev, "%s\n", __func__);
mhdp->aux.drm_dev = bridge->dev;
ret = drm_dp_aux_register(&mhdp->aux);
if (ret < 0)
return ret;
if (!(flags & DRM_BRIDGE_ATTACH_NO_CONNECTOR)) {
ret = cdns_mhdp_connector_init(mhdp);
if (ret)
-return ret;
+goto aux_unregister;
}
spin_lock(&mhdp->start_lock);
@@ -1693,6 +1743,9 @@ static int cdns_mhdp_attach(struct drm_bridge *bridge,
mhdp->regs + CDNS_APB_INT_MASK);
return 0;
aux_unregister:
drm_dp_aux_unregister(&mhdp->aux);
return ret;
}
static void cdns_mhdp_configure_video(struct cdns_mhdp_device *mhdp,
@@ -1957,6 +2010,15 @@ static void cdns_mhdp_atomic_enable(struct drm_bridge *bridge,
if (WARN_ON(!conn_state))
goto out;
if (mhdp->hdcp_supported &&
mhdp->hw_state == MHDP_HW_READY &&
conn_state->content_protection ==
DRM_MODE_CONTENT_PROTECTION_DESIRED) {
mutex_unlock(&mhdp->link_mutex);
cdns_mhdp_hdcp_enable(mhdp, conn_state->hdcp_content_type);
mutex_lock(&mhdp->link_mutex);
}
crtc_state = drm_atomic_get_new_crtc_state(state, conn_state->crtc);
if (WARN_ON(!crtc_state))
goto out;
@@ -2000,6 +2062,9 @@ static void cdns_mhdp_atomic_disable(struct drm_bridge *bridge,
mutex_lock(&mhdp->link_mutex);
if (mhdp->hdcp_supported)
cdns_mhdp_hdcp_disable(mhdp);
mhdp->bridge_enabled = false;
cdns_mhdp_reg_read(mhdp, CDNS_DP_FRAMER_GLOBAL_CONFIG, &resp);
resp &= ~CDNS_DP_FRAMER_EN;
@@ -2025,6 +2090,8 @@ static void cdns_mhdp_detach(struct drm_bridge *bridge)
dev_dbg(mhdp->dev, "%s\n", __func__);
drm_dp_aux_unregister(&mhdp->aux);
spin_lock(&mhdp->start_lock);
mhdp->bridge_attached = false;
@@ -2288,7 +2355,6 @@ static irqreturn_t cdns_mhdp_irq_handler(int irq, void *data)
struct cdns_mhdp_device *mhdp = data;
u32 apb_stat, sw_ev0;
bool bridge_attached;
-int ret;
apb_stat = readl(mhdp->regs + CDNS_APB_INT_STATUS);
if (!(apb_stat & CDNS_APB_INT_MASK_SW_EVENT_INT))
@@ -2307,6 +2373,43 @@ static irqreturn_t cdns_mhdp_irq_handler(int irq, void *data)
spin_unlock(&mhdp->start_lock);
if (bridge_attached && (sw_ev0 & CDNS_DPTX_HPD)) {
schedule_work(&mhdp->hpd_work);
}
if (sw_ev0 & ~CDNS_DPTX_HPD) {
mhdp->sw_events |= (sw_ev0 & ~CDNS_DPTX_HPD);
wake_up(&mhdp->sw_events_wq);
}
return IRQ_HANDLED;
}
u32 cdns_mhdp_wait_for_sw_event(struct cdns_mhdp_device *mhdp, u32 event)
{
u32 ret;
ret = wait_event_timeout(mhdp->sw_events_wq,
mhdp->sw_events & event,
msecs_to_jiffies(500));
if (!ret) {
dev_dbg(mhdp->dev, "SW event 0x%x timeout\n", event);
goto sw_event_out;
}
ret = mhdp->sw_events;
mhdp->sw_events &= ~event;
sw_event_out:
return ret;
}
static void cdns_mhdp_hpd_work(struct work_struct *work)
{
struct cdns_mhdp_device *mhdp = container_of(work,
struct cdns_mhdp_device,
hpd_work);
int ret;
ret = cdns_mhdp_update_link_status(mhdp);
if (mhdp->connector.dev) {
if (ret < 0)
@@ -2316,9 +2419,6 @@ static irqreturn_t cdns_mhdp_irq_handler(int irq, void *data)
} else {
drm_bridge_hpd_notify(&mhdp->bridge, cdns_mhdp_detect(mhdp));
}
-}
-return IRQ_HANDLED;
}
static int cdns_mhdp_probe(struct platform_device *pdev)
@@ -2356,6 +2456,15 @@ static int cdns_mhdp_probe(struct platform_device *pdev)
return PTR_ERR(mhdp->regs);
}
mhdp->sapb_regs = devm_platform_ioremap_resource_byname(pdev, "mhdptx-sapb");
if (IS_ERR(mhdp->sapb_regs)) {
mhdp->hdcp_supported = false;
dev_warn(dev,
"Failed to get SAPB memory resource, HDCP not supported\n");
} else {
mhdp->hdcp_supported = true;
}
mhdp->phy = devm_of_phy_get_by_index(dev, pdev->dev.of_node, 0);
if (IS_ERR(mhdp->phy)) {
dev_err(dev, "no PHY configured\n");
@@ -2430,13 +2539,18 @@ static int cdns_mhdp_probe(struct platform_device *pdev)
/* Initialize the work for modeset in case of link train failure */
INIT_WORK(&mhdp->modeset_retry_work, cdns_mhdp_modeset_retry_fn);
INIT_WORK(&mhdp->hpd_work, cdns_mhdp_hpd_work);
init_waitqueue_head(&mhdp->fw_load_wq);
init_waitqueue_head(&mhdp->sw_events_wq);
ret = cdns_mhdp_load_firmware(mhdp);
if (ret)
goto phy_exit;
if (mhdp->hdcp_supported)
cdns_mhdp_hdcp_init(mhdp);
drm_bridge_add(&mhdp->bridge);
return 0;


@@ -47,6 +47,10 @@ struct phy;
#define CDNS_SW_EVENT0 0x00044
#define CDNS_DPTX_HPD BIT(0)
#define CDNS_HDCP_TX_STATUS BIT(4)
#define CDNS_HDCP2_TX_IS_KM_STORED BIT(5)
#define CDNS_HDCP2_TX_STORE_KM BIT(6)
#define CDNS_HDCP_TX_IS_RCVR_ID_VALID BIT(7)
#define CDNS_SW_EVENT1 0x00048
#define CDNS_SW_EVENT2 0x0004c
@@ -339,8 +343,17 @@ struct cdns_mhdp_platform_info {
#define to_cdns_mhdp_bridge_state(s) \
container_of(s, struct cdns_mhdp_bridge_state, base)
struct cdns_mhdp_hdcp {
struct delayed_work check_work;
struct work_struct prop_work;
struct mutex mutex; /* mutex to protect hdcp.value */
u32 value;
u8 hdcp_content_type;
};
struct cdns_mhdp_device {
void __iomem *regs;
void __iomem *sapb_regs;
void __iomem *j721e_regs;
struct device *dev;
@@ -392,9 +405,18 @@ struct cdns_mhdp_device {
/* Work struct to schedule a uevent on link train failure */
struct work_struct modeset_retry_work;
struct work_struct hpd_work;
wait_queue_head_t sw_events_wq;
u32 sw_events;
struct cdns_mhdp_hdcp hdcp;
bool hdcp_supported;
};
#define connector_to_mhdp(x) container_of(x, struct cdns_mhdp_device, connector)
#define bridge_to_mhdp(x) container_of(x, struct cdns_mhdp_device, bridge)
u32 cdns_mhdp_wait_for_sw_event(struct cdns_mhdp_device *mhdp, u32 event);
#endif


@@ -0,0 +1,570 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Cadence MHDP8546 DP bridge driver.
*
* Copyright (C) 2020 Cadence Design Systems, Inc.
*
*/
#include <linux/io.h>
#include <linux/iopoll.h>
#include <asm/unaligned.h>
#include <drm/drm_hdcp.h>
#include "cdns-mhdp8546-hdcp.h"
static int cdns_mhdp_secure_mailbox_read(struct cdns_mhdp_device *mhdp)
{
int ret, empty;
WARN_ON(!mutex_is_locked(&mhdp->mbox_mutex));
ret = readx_poll_timeout(readl, mhdp->sapb_regs + CDNS_MAILBOX_EMPTY,
empty, !empty, MAILBOX_RETRY_US,
MAILBOX_TIMEOUT_US);
if (ret < 0)
return ret;
return readl(mhdp->sapb_regs + CDNS_MAILBOX_RX_DATA) & 0xff;
}
static int cdns_mhdp_secure_mailbox_write(struct cdns_mhdp_device *mhdp,
u8 val)
{
int ret, full;
WARN_ON(!mutex_is_locked(&mhdp->mbox_mutex));
ret = readx_poll_timeout(readl, mhdp->sapb_regs + CDNS_MAILBOX_FULL,
full, !full, MAILBOX_RETRY_US,
MAILBOX_TIMEOUT_US);
if (ret < 0)
return ret;
writel(val, mhdp->sapb_regs + CDNS_MAILBOX_TX_DATA);
return 0;
}
static int cdns_mhdp_secure_mailbox_recv_header(struct cdns_mhdp_device *mhdp,
u8 module_id,
u8 opcode,
u16 req_size)
{
u32 mbox_size, i;
u8 header[4];
int ret;
/* read the header of the message */
for (i = 0; i < sizeof(header); i++) {
ret = cdns_mhdp_secure_mailbox_read(mhdp);
if (ret < 0)
return ret;
header[i] = ret;
}
mbox_size = get_unaligned_be16(header + 2);
if (opcode != header[0] || module_id != header[1] ||
(opcode != HDCP_TRAN_IS_REC_ID_VALID && req_size != mbox_size)) {
for (i = 0; i < mbox_size; i++)
if (cdns_mhdp_secure_mailbox_read(mhdp) < 0)
break;
return -EINVAL;
}
return 0;
}
static int cdns_mhdp_secure_mailbox_recv_data(struct cdns_mhdp_device *mhdp,
u8 *buff, u16 buff_size)
{
int ret;
u32 i;
for (i = 0; i < buff_size; i++) {
ret = cdns_mhdp_secure_mailbox_read(mhdp);
if (ret < 0)
return ret;
buff[i] = ret;
}
return 0;
}
static int cdns_mhdp_secure_mailbox_send(struct cdns_mhdp_device *mhdp,
u8 module_id,
u8 opcode,
u16 size,
u8 *message)
{
u8 header[4];
int ret;
u32 i;
header[0] = opcode;
header[1] = module_id;
put_unaligned_be16(size, header + 2);
for (i = 0; i < sizeof(header); i++) {
ret = cdns_mhdp_secure_mailbox_write(mhdp, header[i]);
if (ret)
return ret;
}
for (i = 0; i < size; i++) {
ret = cdns_mhdp_secure_mailbox_write(mhdp, message[i]);
if (ret)
return ret;
}
return 0;
}
static int cdns_mhdp_hdcp_get_status(struct cdns_mhdp_device *mhdp,
u16 *hdcp_port_status)
{
u8 hdcp_status[HDCP_STATUS_SIZE];
int ret;
mutex_lock(&mhdp->mbox_mutex);
ret = cdns_mhdp_secure_mailbox_send(mhdp, MB_MODULE_ID_HDCP_TX,
HDCP_TRAN_STATUS_CHANGE, 0, NULL);
if (ret)
goto err_get_hdcp_status;
ret = cdns_mhdp_secure_mailbox_recv_header(mhdp, MB_MODULE_ID_HDCP_TX,
HDCP_TRAN_STATUS_CHANGE,
sizeof(hdcp_status));
if (ret)
goto err_get_hdcp_status;
ret = cdns_mhdp_secure_mailbox_recv_data(mhdp, hdcp_status,
sizeof(hdcp_status));
if (ret)
goto err_get_hdcp_status;
*hdcp_port_status = ((u16)(hdcp_status[0] << 8) | hdcp_status[1]);
err_get_hdcp_status:
mutex_unlock(&mhdp->mbox_mutex);
return ret;
}
static u8 cdns_mhdp_hdcp_handle_status(struct cdns_mhdp_device *mhdp,
u16 status)
{
u8 err = GET_HDCP_PORT_STS_LAST_ERR(status);
if (err)
dev_dbg(mhdp->dev, "HDCP Error = %d", err);
return err;
}
static int cdns_mhdp_hdcp_rx_id_valid_response(struct cdns_mhdp_device *mhdp,
u8 valid)
{
int ret;
mutex_lock(&mhdp->mbox_mutex);
ret = cdns_mhdp_secure_mailbox_send(mhdp, MB_MODULE_ID_HDCP_TX,
HDCP_TRAN_RESPOND_RECEIVER_ID_VALID,
1, &valid);
mutex_unlock(&mhdp->mbox_mutex);
return ret;
}
static int cdns_mhdp_hdcp_rx_id_valid(struct cdns_mhdp_device *mhdp,
u8 *recv_num, u8 *hdcp_rx_id)
{
u8 rec_id_hdr[2];
u8 status;
int ret;
mutex_lock(&mhdp->mbox_mutex);
ret = cdns_mhdp_secure_mailbox_send(mhdp, MB_MODULE_ID_HDCP_TX,
HDCP_TRAN_IS_REC_ID_VALID, 0, NULL);
if (ret)
goto err_rx_id_valid;
ret = cdns_mhdp_secure_mailbox_recv_header(mhdp, MB_MODULE_ID_HDCP_TX,
HDCP_TRAN_IS_REC_ID_VALID,
sizeof(status));
if (ret)
goto err_rx_id_valid;
ret = cdns_mhdp_secure_mailbox_recv_data(mhdp, rec_id_hdr, 2);
if (ret)
goto err_rx_id_valid;
*recv_num = rec_id_hdr[0];
ret = cdns_mhdp_secure_mailbox_recv_data(mhdp, hdcp_rx_id, 5 * *recv_num);
err_rx_id_valid:
mutex_unlock(&mhdp->mbox_mutex);
return ret;
}
static int cdns_mhdp_hdcp_km_stored_resp(struct cdns_mhdp_device *mhdp,
u32 size, u8 *km)
{
int ret;
mutex_lock(&mhdp->mbox_mutex);
ret = cdns_mhdp_secure_mailbox_send(mhdp, MB_MODULE_ID_HDCP_TX,
HDCP2X_TX_RESPOND_KM, size, km);
mutex_unlock(&mhdp->mbox_mutex);
return ret;
}
static int cdns_mhdp_hdcp_tx_is_km_stored(struct cdns_mhdp_device *mhdp,
u8 *resp, u32 size)
{
int ret;
mutex_lock(&mhdp->mbox_mutex);
ret = cdns_mhdp_secure_mailbox_send(mhdp, MB_MODULE_ID_HDCP_TX,
HDCP2X_TX_IS_KM_STORED, 0, NULL);
if (ret)
goto err_is_km_stored;
ret = cdns_mhdp_secure_mailbox_recv_header(mhdp, MB_MODULE_ID_HDCP_TX,
HDCP2X_TX_IS_KM_STORED,
size);
if (ret)
goto err_is_km_stored;
ret = cdns_mhdp_secure_mailbox_recv_data(mhdp, resp, size);
err_is_km_stored:
mutex_unlock(&mhdp->mbox_mutex);
return ret;
}
static int cdns_mhdp_hdcp_tx_config(struct cdns_mhdp_device *mhdp,
u8 hdcp_cfg)
{
int ret;
mutex_lock(&mhdp->mbox_mutex);
ret = cdns_mhdp_secure_mailbox_send(mhdp, MB_MODULE_ID_HDCP_TX,
HDCP_TRAN_CONFIGURATION, 1, &hdcp_cfg);
mutex_unlock(&mhdp->mbox_mutex);
return ret;
}
static int cdns_mhdp_hdcp_set_config(struct cdns_mhdp_device *mhdp,
u8 hdcp_config, bool enable)
{
u16 hdcp_port_status;
u32 ret_event;
u8 hdcp_cfg;
int ret;
hdcp_cfg = hdcp_config | (enable ? 0x04 : 0) |
(HDCP_CONTENT_TYPE_0 << 3);
cdns_mhdp_hdcp_tx_config(mhdp, hdcp_cfg);
ret_event = cdns_mhdp_wait_for_sw_event(mhdp, CDNS_HDCP_TX_STATUS);
if (!ret_event)
return -1;
ret = cdns_mhdp_hdcp_get_status(mhdp, &hdcp_port_status);
if (ret || cdns_mhdp_hdcp_handle_status(mhdp, hdcp_port_status))
return -1;
return 0;
}
static int cdns_mhdp_hdcp_auth_check(struct cdns_mhdp_device *mhdp)
{
u16 hdcp_port_status;
u32 ret_event;
int ret;
ret_event = cdns_mhdp_wait_for_sw_event(mhdp, CDNS_HDCP_TX_STATUS);
if (!ret_event)
return -1;
ret = cdns_mhdp_hdcp_get_status(mhdp, &hdcp_port_status);
if (ret || cdns_mhdp_hdcp_handle_status(mhdp, hdcp_port_status))
return -1;
if (hdcp_port_status & 1) {
dev_dbg(mhdp->dev, "Authentication completed successfully!\n");
return 0;
}
dev_dbg(mhdp->dev, "Authentication failed\n");
return -1;
}
static int cdns_mhdp_hdcp_check_receviers(struct cdns_mhdp_device *mhdp)
{
u8 hdcp_rec_id[HDCP_MAX_RECEIVERS][HDCP_RECEIVER_ID_SIZE_BYTES];
u8 hdcp_num_rec;
u32 ret_event;
ret_event = cdns_mhdp_wait_for_sw_event(mhdp,
CDNS_HDCP_TX_IS_RCVR_ID_VALID);
if (!ret_event)
return -1;
hdcp_num_rec = 0;
memset(&hdcp_rec_id, 0, sizeof(hdcp_rec_id));
cdns_mhdp_hdcp_rx_id_valid(mhdp, &hdcp_num_rec, (u8 *)hdcp_rec_id);
cdns_mhdp_hdcp_rx_id_valid_response(mhdp, 1);
return 0;
}
static int cdns_mhdp_hdcp_auth_22(struct cdns_mhdp_device *mhdp)
{
u8 resp[HDCP_STATUS_SIZE];
u16 hdcp_port_status;
u32 ret_event;
int ret;
dev_dbg(mhdp->dev, "HDCP: Start 2.2 Authentication\n");
ret_event = cdns_mhdp_wait_for_sw_event(mhdp,
CDNS_HDCP2_TX_IS_KM_STORED);
if (!ret_event)
return -1;
if (ret_event & CDNS_HDCP_TX_STATUS) {
mhdp->sw_events &= ~CDNS_HDCP_TX_STATUS;
ret = cdns_mhdp_hdcp_get_status(mhdp, &hdcp_port_status);
if (ret || cdns_mhdp_hdcp_handle_status(mhdp, hdcp_port_status))
return -1;
}
cdns_mhdp_hdcp_tx_is_km_stored(mhdp, resp, sizeof(resp));
cdns_mhdp_hdcp_km_stored_resp(mhdp, 0, NULL);
if (cdns_mhdp_hdcp_check_receviers(mhdp))
return -1;
return 0;
}
static inline int cdns_mhdp_hdcp_auth_14(struct cdns_mhdp_device *mhdp)
{
dev_dbg(mhdp->dev, "HDCP: Starting 1.4 Authentication\n");
return cdns_mhdp_hdcp_check_receviers(mhdp);
}
static int cdns_mhdp_hdcp_auth(struct cdns_mhdp_device *mhdp,
u8 hdcp_config)
{
int ret;
ret = cdns_mhdp_hdcp_set_config(mhdp, hdcp_config, true);
if (ret)
goto auth_failed;
if (hdcp_config == HDCP_TX_1)
ret = cdns_mhdp_hdcp_auth_14(mhdp);
else
ret = cdns_mhdp_hdcp_auth_22(mhdp);
if (ret)
goto auth_failed;
ret = cdns_mhdp_hdcp_auth_check(mhdp);
if (ret)
ret = cdns_mhdp_hdcp_auth_check(mhdp);
auth_failed:
return ret;
}
static int _cdns_mhdp_hdcp_disable(struct cdns_mhdp_device *mhdp)
{
int ret;
dev_dbg(mhdp->dev, "[%s:%d] HDCP is being disabled...\n",
mhdp->connector.name, mhdp->connector.base.id);
ret = cdns_mhdp_hdcp_set_config(mhdp, 0, false);
return ret;
}
static int _cdns_mhdp_hdcp_enable(struct cdns_mhdp_device *mhdp, u8 content_type)
{
int ret, tries = 3;
u32 i;
for (i = 0; i < tries; i++) {
if (content_type == DRM_MODE_HDCP_CONTENT_TYPE0 ||
content_type == DRM_MODE_HDCP_CONTENT_TYPE1) {
ret = cdns_mhdp_hdcp_auth(mhdp, HDCP_TX_2);
if (!ret)
return 0;
_cdns_mhdp_hdcp_disable(mhdp);
}
if (content_type == DRM_MODE_HDCP_CONTENT_TYPE0) {
ret = cdns_mhdp_hdcp_auth(mhdp, HDCP_TX_1);
if (!ret)
return 0;
_cdns_mhdp_hdcp_disable(mhdp);
}
}
dev_err(mhdp->dev, "HDCP authentication failed (%d tries/%d)\n",
tries, ret);
return ret;
}
static int cdns_mhdp_hdcp_check_link(struct cdns_mhdp_device *mhdp)
{
u16 hdcp_port_status;
int ret = 0;
mutex_lock(&mhdp->hdcp.mutex);
if (mhdp->hdcp.value == DRM_MODE_CONTENT_PROTECTION_UNDESIRED)
goto out;
ret = cdns_mhdp_hdcp_get_status(mhdp, &hdcp_port_status);
if (!ret && hdcp_port_status & HDCP_PORT_STS_AUTH)
goto out;
dev_err(mhdp->dev,
"[%s:%d] HDCP link failed, retrying authentication\n",
mhdp->connector.name, mhdp->connector.base.id);
ret = _cdns_mhdp_hdcp_disable(mhdp);
if (ret) {
mhdp->hdcp.value = DRM_MODE_CONTENT_PROTECTION_DESIRED;
schedule_work(&mhdp->hdcp.prop_work);
goto out;
}
ret = _cdns_mhdp_hdcp_enable(mhdp, mhdp->hdcp.hdcp_content_type);
if (ret) {
mhdp->hdcp.value = DRM_MODE_CONTENT_PROTECTION_DESIRED;
schedule_work(&mhdp->hdcp.prop_work);
}
out:
mutex_unlock(&mhdp->hdcp.mutex);
return ret;
}
static void cdns_mhdp_hdcp_check_work(struct work_struct *work)
{
struct delayed_work *d_work = to_delayed_work(work);
struct cdns_mhdp_hdcp *hdcp = container_of(d_work,
struct cdns_mhdp_hdcp,
check_work);
struct cdns_mhdp_device *mhdp = container_of(hdcp,
struct cdns_mhdp_device,
hdcp);
if (!cdns_mhdp_hdcp_check_link(mhdp))
schedule_delayed_work(&hdcp->check_work,
DRM_HDCP_CHECK_PERIOD_MS);
}
static void cdns_mhdp_hdcp_prop_work(struct work_struct *work)
{
struct cdns_mhdp_hdcp *hdcp = container_of(work,
struct cdns_mhdp_hdcp,
prop_work);
struct cdns_mhdp_device *mhdp = container_of(hdcp,
struct cdns_mhdp_device,
hdcp);
struct drm_device *dev = mhdp->connector.dev;
struct drm_connector_state *state;
drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
mutex_lock(&mhdp->hdcp.mutex);
if (mhdp->hdcp.value != DRM_MODE_CONTENT_PROTECTION_UNDESIRED) {
state = mhdp->connector.state;
state->content_protection = mhdp->hdcp.value;
}
mutex_unlock(&mhdp->hdcp.mutex);
drm_modeset_unlock(&dev->mode_config.connection_mutex);
}
int cdns_mhdp_hdcp_set_lc(struct cdns_mhdp_device *mhdp, u8 *val)
{
int ret;
mutex_lock(&mhdp->mbox_mutex);
ret = cdns_mhdp_secure_mailbox_send(mhdp, MB_MODULE_ID_HDCP_GENERAL,
HDCP_GENERAL_SET_LC_128,
16, val);
mutex_unlock(&mhdp->mbox_mutex);
return ret;
}
int
cdns_mhdp_hdcp_set_public_key_param(struct cdns_mhdp_device *mhdp,
struct cdns_hdcp_tx_public_key_param *val)
{
int ret;
mutex_lock(&mhdp->mbox_mutex);
ret = cdns_mhdp_secure_mailbox_send(mhdp, MB_MODULE_ID_HDCP_TX,
HDCP2X_TX_SET_PUBLIC_KEY_PARAMS,
sizeof(*val), (u8 *)val);
mutex_unlock(&mhdp->mbox_mutex);
return ret;
}
int cdns_mhdp_hdcp_enable(struct cdns_mhdp_device *mhdp, u8 content_type)
{
int ret;
mutex_lock(&mhdp->hdcp.mutex);
ret = _cdns_mhdp_hdcp_enable(mhdp, content_type);
if (ret)
goto out;
mhdp->hdcp.hdcp_content_type = content_type;
mhdp->hdcp.value = DRM_MODE_CONTENT_PROTECTION_ENABLED;
schedule_work(&mhdp->hdcp.prop_work);
schedule_delayed_work(&mhdp->hdcp.check_work,
DRM_HDCP_CHECK_PERIOD_MS);
out:
mutex_unlock(&mhdp->hdcp.mutex);
return ret;
}
int cdns_mhdp_hdcp_disable(struct cdns_mhdp_device *mhdp)
{
int ret = 0;
mutex_lock(&mhdp->hdcp.mutex);
if (mhdp->hdcp.value != DRM_MODE_CONTENT_PROTECTION_UNDESIRED) {
mhdp->hdcp.value = DRM_MODE_CONTENT_PROTECTION_UNDESIRED;
schedule_work(&mhdp->hdcp.prop_work);
ret = _cdns_mhdp_hdcp_disable(mhdp);
}
mutex_unlock(&mhdp->hdcp.mutex);
cancel_delayed_work_sync(&mhdp->hdcp.check_work);
return ret;
}
void cdns_mhdp_hdcp_init(struct cdns_mhdp_device *mhdp)
{
INIT_DELAYED_WORK(&mhdp->hdcp.check_work, cdns_mhdp_hdcp_check_work);
INIT_WORK(&mhdp->hdcp.prop_work, cdns_mhdp_hdcp_prop_work);
mutex_init(&mhdp->hdcp.mutex);
}

View File

@@ -0,0 +1,92 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Cadence MHDP8546 DP bridge driver.
*
* Copyright (C) 2020 Cadence Design Systems, Inc.
*
*/
#ifndef CDNS_MHDP8546_HDCP_H
#define CDNS_MHDP8546_HDCP_H
#include "cdns-mhdp8546-core.h"
#define HDCP_MAX_RECEIVERS 32
#define HDCP_RECEIVER_ID_SIZE_BYTES 5
#define HDCP_STATUS_SIZE 0x5
#define HDCP_PORT_STS_AUTH 0x1
#define HDCP_PORT_STS_LAST_ERR_SHIFT 0x5
#define HDCP_PORT_STS_LAST_ERR_MASK (0x0F << 5)
#define GET_HDCP_PORT_STS_LAST_ERR(__sts__) \
(((__sts__) & HDCP_PORT_STS_LAST_ERR_MASK) >> \
HDCP_PORT_STS_LAST_ERR_SHIFT)
#define HDCP_CONFIG_1_4 BIT(0) /* use HDCP 1.4 only */
#define HDCP_CONFIG_2_2 BIT(1) /* use HDCP 2.2 only */
/* use All HDCP versions */
#define HDCP_CONFIG_ALL (BIT(0) | BIT(1))
#define HDCP_CONFIG_NONE 0
enum {
HDCP_GENERAL_SET_LC_128,
HDCP_SET_SEED,
};
enum {
HDCP_TRAN_CONFIGURATION,
HDCP2X_TX_SET_PUBLIC_KEY_PARAMS,
HDCP2X_TX_SET_DEBUG_RANDOM_NUMBERS,
HDCP2X_TX_RESPOND_KM,
HDCP1_TX_SEND_KEYS,
HDCP1_TX_SEND_RANDOM_AN,
HDCP_TRAN_STATUS_CHANGE,
HDCP2X_TX_IS_KM_STORED,
HDCP2X_TX_STORE_KM,
HDCP_TRAN_IS_REC_ID_VALID,
HDCP_TRAN_RESPOND_RECEIVER_ID_VALID,
HDCP_TRAN_TEST_KEYS,
HDCP2X_TX_SET_KM_KEY_PARAMS,
HDCP_NUM_OF_SUPPORTED_MESSAGES
};
enum {
HDCP_CONTENT_TYPE_0,
HDCP_CONTENT_TYPE_1,
};
#define DRM_HDCP_CHECK_PERIOD_MS (128 * 16)
#define HDCP_PAIRING_R_ID 5
#define HDCP_PAIRING_M_LEN 16
#define HDCP_KM_LEN 16
#define HDCP_PAIRING_M_EKH 16
struct cdns_hdcp_pairing_data {
u8 receiver_id[HDCP_PAIRING_R_ID];
u8 m[HDCP_PAIRING_M_LEN];
u8 km[HDCP_KM_LEN];
u8 ekh[HDCP_PAIRING_M_EKH];
};
enum {
HDCP_TX_2,
HDCP_TX_1,
HDCP_TX_BOTH,
};
#define DLP_MODULUS_N 384
#define DLP_E 3
struct cdns_hdcp_tx_public_key_param {
u8 N[DLP_MODULUS_N];
u8 E[DLP_E];
};
int cdns_mhdp_hdcp_set_public_key_param(struct cdns_mhdp_device *mhdp,
struct cdns_hdcp_tx_public_key_param *val);
int cdns_mhdp_hdcp_set_lc(struct cdns_mhdp_device *mhdp, u8 *val);
int cdns_mhdp_hdcp_enable(struct cdns_mhdp_device *mhdp, u8 content_type);
int cdns_mhdp_hdcp_disable(struct cdns_mhdp_device *mhdp);
void cdns_mhdp_hdcp_init(struct cdns_mhdp_device *mhdp);
#endif

File diff suppressed because it is too large


@@ -21,6 +21,7 @@
#include <linux/sys_soc.h>
#include <linux/time64.h>
#include <drm/drm_atomic_state_helper.h>
#include <drm/drm_bridge.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_of.h>
@@ -661,7 +662,7 @@ static irqreturn_t nwl_dsi_irq_handler(int irq, void *data)
return IRQ_HANDLED;
}
-static int nwl_dsi_enable(struct nwl_dsi *dsi)
+static int nwl_dsi_mode_set(struct nwl_dsi *dsi)
{
struct device *dev = dsi->dev;
union phy_configure_opts *phy_cfg = &dsi->phy_cfg;
@@ -742,7 +743,9 @@ static int nwl_dsi_disable(struct nwl_dsi *dsi)
return 0;
}
-static void nwl_dsi_bridge_disable(struct drm_bridge *bridge)
+static void
+nwl_dsi_bridge_atomic_disable(struct drm_bridge *bridge,
+struct drm_bridge_state *old_bridge_state)
{
struct nwl_dsi *dsi = bridge_to_dsi(bridge);
int ret;
@@ -803,17 +806,6 @@ static int nwl_dsi_get_dphy_params(struct nwl_dsi *dsi,
return 0;
}
-static bool nwl_dsi_bridge_mode_fixup(struct drm_bridge *bridge,
-const struct drm_display_mode *mode,
-struct drm_display_mode *adjusted_mode)
-{
-/* At least LCDIF + NWL needs active high sync */
-adjusted_mode->flags |= (DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC);
-adjusted_mode->flags &= ~(DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC);
-return true;
-}
static enum drm_mode_status
nwl_dsi_bridge_mode_valid(struct drm_bridge *bridge,
const struct drm_display_info *info,
@@ -831,6 +823,29 @@ nwl_dsi_bridge_mode_valid(struct drm_bridge *bridge,
return MODE_OK;
}
static int nwl_dsi_bridge_atomic_check(struct drm_bridge *bridge,
struct drm_bridge_state *bridge_state,
struct drm_crtc_state *crtc_state,
struct drm_connector_state *conn_state)
{
struct drm_display_mode *adjusted_mode = &crtc_state->adjusted_mode;
/* At least LCDIF + NWL needs active high sync */
adjusted_mode->flags |= (DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC);
adjusted_mode->flags &= ~(DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC);
/*
* Do a full modeset if crtc_state->active is changed to be true.
* This ensures our ->mode_set() is called to get the DSI controller
* and the PHY ready to send DCS commands, when only the connector's
* DPMS is brought out of "Off" status.
*/
if (crtc_state->active_changed && crtc_state->active)
crtc_state->mode_changed = true;
return 0;
}
static void
nwl_dsi_bridge_mode_set(struct drm_bridge *bridge,
const struct drm_display_mode *mode,
@@ -846,13 +861,6 @@ nwl_dsi_bridge_mode_set(struct drm_bridge *bridge,
if (ret < 0)
return;
-/*
-* If hs clock is unchanged, we're all good - all parameters are
-* derived from it atm.
-*/
-if (new_cfg.mipi_dphy.hs_clk_rate == dsi->phy_cfg.mipi_dphy.hs_clk_rate)
-return;
phy_ref_rate = clk_get_rate(dsi->phy_ref_clk);
DRM_DEV_DEBUG_DRIVER(dev, "PHY at ref rate: %lu\n", phy_ref_rate);
/* Save the new desired phy config */
@@ -860,14 +868,8 @@ nwl_dsi_bridge_mode_set(struct drm_bridge *bridge,
memcpy(&dsi->mode, adjusted_mode, sizeof(dsi->mode));
drm_mode_debug_printmodeline(adjusted_mode);
-}
-static void nwl_dsi_bridge_pre_enable(struct drm_bridge *bridge)
-{
-struct nwl_dsi *dsi = bridge_to_dsi(bridge);
-int ret;
-pm_runtime_get_sync(dsi->dev);
+pm_runtime_get_sync(dev);
if (clk_prepare_enable(dsi->lcdif_clk) < 0)
return;
@@ -877,27 +879,29 @@ static void nwl_dsi_bridge_pre_enable(struct drm_bridge *bridge)
/* Step 1 from DSI reset-out instructions */
ret = reset_control_deassert(dsi->rst_pclk);
if (ret < 0) {
-DRM_DEV_ERROR(dsi->dev, "Failed to deassert PCLK: %d\n", ret);
+DRM_DEV_ERROR(dev, "Failed to deassert PCLK: %d\n", ret);
return;
}
/* Step 2 from DSI reset-out instructions */
-nwl_dsi_enable(dsi);
+nwl_dsi_mode_set(dsi);
/* Step 3 from DSI reset-out instructions */
ret = reset_control_deassert(dsi->rst_esc);
if (ret < 0) {
-DRM_DEV_ERROR(dsi->dev, "Failed to deassert ESC: %d\n", ret);
+DRM_DEV_ERROR(dev, "Failed to deassert ESC: %d\n", ret);
return;
}
ret = reset_control_deassert(dsi->rst_byte);
if (ret < 0) {
DRM_DEV_ERROR(dsi->dev, "Failed to deassert BYTE: %d\n", ret); DRM_DEV_ERROR(dev, "Failed to deassert BYTE: %d\n", ret);
return; return;
} }
} }
static void nwl_dsi_bridge_enable(struct drm_bridge *bridge) static void
nwl_dsi_bridge_atomic_enable(struct drm_bridge *bridge,
struct drm_bridge_state *old_bridge_state)
{ {
struct nwl_dsi *dsi = bridge_to_dsi(bridge); struct nwl_dsi *dsi = bridge_to_dsi(bridge);
int ret; int ret;
@@ -942,10 +946,12 @@ static void nwl_dsi_bridge_detach(struct drm_bridge *bridge)
} }
static const struct drm_bridge_funcs nwl_dsi_bridge_funcs = { static const struct drm_bridge_funcs nwl_dsi_bridge_funcs = {
.pre_enable = nwl_dsi_bridge_pre_enable, .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
.enable = nwl_dsi_bridge_enable, .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
.disable = nwl_dsi_bridge_disable, .atomic_reset = drm_atomic_helper_bridge_reset,
.mode_fixup = nwl_dsi_bridge_mode_fixup, .atomic_check = nwl_dsi_bridge_atomic_check,
.atomic_enable = nwl_dsi_bridge_atomic_enable,
.atomic_disable = nwl_dsi_bridge_atomic_disable,
.mode_set = nwl_dsi_bridge_mode_set, .mode_set = nwl_dsi_bridge_mode_set,
.mode_valid = nwl_dsi_bridge_mode_valid, .mode_valid = nwl_dsi_bridge_mode_valid,
.attach = nwl_dsi_bridge_attach, .attach = nwl_dsi_bridge_attach,


@@ -2395,21 +2395,6 @@ static int dw_hdmi_connector_get_modes(struct drm_connector *connector)
	return ret;
}
-static bool hdr_metadata_equal(const struct drm_connector_state *old_state,
-			       const struct drm_connector_state *new_state)
-{
-	struct drm_property_blob *old_blob = old_state->hdr_output_metadata;
-	struct drm_property_blob *new_blob = new_state->hdr_output_metadata;
-
-	if (!old_blob || !new_blob)
-		return old_blob == new_blob;
-
-	if (old_blob->length != new_blob->length)
-		return false;
-
-	return !memcmp(old_blob->data, new_blob->data, old_blob->length);
-}
static int dw_hdmi_connector_atomic_check(struct drm_connector *connector,
					  struct drm_atomic_state *state)
{
@@ -2423,7 +2408,7 @@ static int dw_hdmi_connector_atomic_check(struct drm_connector *connector,
	if (!crtc)
		return 0;
-	if (!hdr_metadata_equal(old_state, new_state)) {
+	if (!drm_connector_atomic_hdr_metadata_equal(old_state, new_state)) {
		crtc_state = drm_atomic_get_crtc_state(state, crtc);
		if (IS_ERR(crtc_state))
			return PTR_ERR(crtc_state);
@@ -2492,8 +2477,7 @@ static int dw_hdmi_connector_create(struct dw_hdmi *hdmi)
	drm_connector_attach_max_bpc_property(connector, 8, 16);
	if (hdmi->version >= 0x200a && hdmi->plat_data->use_drm_infoframe)
-		drm_object_attach_property(&connector->base,
-			connector->dev->mode_config.hdr_output_metadata_property, 0);
+		drm_connector_attach_hdr_output_metadata_property(connector);
	drm_connector_attach_encoder(connector, hdmi->bridge.encoder);
@@ -3421,7 +3405,7 @@ struct dw_hdmi *dw_hdmi_probe(struct platform_device *pdev,
		hdmi->audio = platform_device_register_full(&pdevinfo);
	}
-	if (config0 & HDMI_CONFIG0_CEC) {
+	if (!plat_data->disable_cec && (config0 & HDMI_CONFIG0_CEC)) {
		cec.hdmi = hdmi;
		cec.ops = &dw_hdmi_cec_ops;
		cec.irq = irq;


@@ -1414,6 +1414,7 @@ static int tc_bridge_attach(struct drm_bridge *bridge,
	if (flags & DRM_BRIDGE_ATTACH_NO_CONNECTOR)
		return 0;
+	tc->aux.drm_dev = drm;
	ret = drm_dp_aux_register(&tc->aux);
	if (ret < 0)
		return ret;

File diff suppressed because it is too large


@@ -35,9 +35,10 @@
#include <linux/pci.h>
#include <linux/slab.h>
+#if IS_ENABLED(CONFIG_AGP)
#include <asm/agp.h>
+#endif
-#include <drm/drm_agpsupport.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
@@ -45,6 +46,8 @@
#include "drm_legacy.h"
+#if IS_ENABLED(CONFIG_AGP)
/*
 * Get AGP information.
 *
@@ -53,7 +56,7 @@
 * Verifies the AGP device has been initialized and acquired and fills in the
 * drm_agp_info structure with the information in drm_agp_head::agp_info.
 */
-int drm_agp_info(struct drm_device *dev, struct drm_agp_info *info)
+int drm_legacy_agp_info(struct drm_device *dev, struct drm_agp_info *info)
{
	struct agp_kern_info *kern;
@@ -73,15 +76,15 @@ int drm_agp_info(struct drm_device *dev, struct drm_agp_info *info)
	return 0;
}
-EXPORT_SYMBOL(drm_agp_info);
+EXPORT_SYMBOL(drm_legacy_agp_info);
-int drm_agp_info_ioctl(struct drm_device *dev, void *data,
+int drm_legacy_agp_info_ioctl(struct drm_device *dev, void *data,
		       struct drm_file *file_priv)
{
	struct drm_agp_info *info = data;
	int err;
-	err = drm_agp_info(dev, info);
+	err = drm_legacy_agp_info(dev, info);
	if (err)
		return err;
@@ -97,7 +100,7 @@ int drm_agp_info_ioctl(struct drm_device *dev, void *data,
 * Verifies the AGP device hasn't been acquired before and calls
 * \c agp_backend_acquire.
 */
-int drm_agp_acquire(struct drm_device *dev)
+int drm_legacy_agp_acquire(struct drm_device *dev)
{
	struct pci_dev *pdev = to_pci_dev(dev->dev);
@@ -111,7 +114,7 @@ int drm_agp_acquire(struct drm_device *dev)
	dev->agp->acquired = 1;
	return 0;
}
-EXPORT_SYMBOL(drm_agp_acquire);
+EXPORT_SYMBOL(drm_legacy_agp_acquire);
/*
 * Acquire the AGP device (ioctl).
@@ -121,10 +124,10 @@ EXPORT_SYMBOL(drm_agp_acquire);
 * Verifies the AGP device hasn't been acquired before and calls
 * \c agp_backend_acquire.
 */
-int drm_agp_acquire_ioctl(struct drm_device *dev, void *data,
+int drm_legacy_agp_acquire_ioctl(struct drm_device *dev, void *data,
			  struct drm_file *file_priv)
{
-	return drm_agp_acquire((struct drm_device *) file_priv->minor->dev);
+	return drm_legacy_agp_acquire((struct drm_device *)file_priv->minor->dev);
}
/*
@@ -135,7 +138,7 @@ int drm_agp_acquire_ioctl(struct drm_device *dev, void *data,
 *
 * Verifies the AGP device has been acquired and calls \c agp_backend_release.
 */
-int drm_agp_release(struct drm_device *dev)
+int drm_legacy_agp_release(struct drm_device *dev)
{
	if (!dev->agp || !dev->agp->acquired)
		return -EINVAL;
@@ -143,12 +146,12 @@ int drm_agp_release(struct drm_device *dev)
	dev->agp->acquired = 0;
	return 0;
}
-EXPORT_SYMBOL(drm_agp_release);
+EXPORT_SYMBOL(drm_legacy_agp_release);
-int drm_agp_release_ioctl(struct drm_device *dev, void *data,
+int drm_legacy_agp_release_ioctl(struct drm_device *dev, void *data,
			  struct drm_file *file_priv)
{
-	return drm_agp_release(dev);
+	return drm_legacy_agp_release(dev);
}
/*
@@ -161,7 +164,7 @@ int drm_agp_release_ioctl(struct drm_device *dev, void *data,
 * Verifies the AGP device has been acquired but not enabled, and calls
 * \c agp_enable.
 */
-int drm_agp_enable(struct drm_device *dev, struct drm_agp_mode mode)
+int drm_legacy_agp_enable(struct drm_device *dev, struct drm_agp_mode mode)
{
	if (!dev->agp || !dev->agp->acquired)
		return -EINVAL;
@@ -171,14 +174,14 @@ int drm_agp_enable(struct drm_device *dev, struct drm_agp_mode mode)
	dev->agp->enabled = 1;
	return 0;
}
-EXPORT_SYMBOL(drm_agp_enable);
+EXPORT_SYMBOL(drm_legacy_agp_enable);
-int drm_agp_enable_ioctl(struct drm_device *dev, void *data,
+int drm_legacy_agp_enable_ioctl(struct drm_device *dev, void *data,
			 struct drm_file *file_priv)
{
	struct drm_agp_mode *mode = data;
-	return drm_agp_enable(dev, *mode);
+	return drm_legacy_agp_enable(dev, *mode);
}
/*
@@ -189,7 +192,7 @@ int drm_agp_enable_ioctl(struct drm_device *dev, void *data,
 * Verifies the AGP device is present and has been acquired, allocates the
 * memory via agp_allocate_memory() and creates a drm_agp_mem entry for it.
 */
-int drm_agp_alloc(struct drm_device *dev, struct drm_agp_buffer *request)
+int drm_legacy_agp_alloc(struct drm_device *dev, struct drm_agp_buffer *request)
{
	struct drm_agp_mem *entry;
	struct agp_memory *memory;
@@ -221,15 +224,15 @@ int drm_agp_alloc(struct drm_device *dev, struct drm_agp_buffer *request)
	return 0;
}
-EXPORT_SYMBOL(drm_agp_alloc);
+EXPORT_SYMBOL(drm_legacy_agp_alloc);
-int drm_agp_alloc_ioctl(struct drm_device *dev, void *data,
+int drm_legacy_agp_alloc_ioctl(struct drm_device *dev, void *data,
			struct drm_file *file_priv)
{
	struct drm_agp_buffer *request = data;
-	return drm_agp_alloc(dev, request);
+	return drm_legacy_agp_alloc(dev, request);
}
/*
@@ -241,7 +244,7 @@ int drm_agp_alloc_ioctl(struct drm_device *dev, void *data,
 *
 * Walks through drm_agp_head::memory until finding a matching handle.
 */
-static struct drm_agp_mem *drm_agp_lookup_entry(struct drm_device *dev,
+static struct drm_agp_mem *drm_legacy_agp_lookup_entry(struct drm_device *dev,
						unsigned long handle)
{
	struct drm_agp_mem *entry;
@@ -261,14 +264,14 @@ static struct drm_agp_mem *drm_agp_lookup_entry(struct drm_device *dev,
 * Verifies the AGP device is present and acquired, looks-up the AGP memory
 * entry and passes it to the unbind_agp() function.
 */
-int drm_agp_unbind(struct drm_device *dev, struct drm_agp_binding *request)
+int drm_legacy_agp_unbind(struct drm_device *dev, struct drm_agp_binding *request)
{
	struct drm_agp_mem *entry;
	int ret;
	if (!dev->agp || !dev->agp->acquired)
		return -EINVAL;
-	entry = drm_agp_lookup_entry(dev, request->handle);
+	entry = drm_legacy_agp_lookup_entry(dev, request->handle);
	if (!entry || !entry->bound)
		return -EINVAL;
	ret = agp_unbind_memory(entry->memory);
@@ -276,15 +279,15 @@ int drm_agp_unbind(struct drm_device *dev, struct drm_agp_binding *request)
	entry->bound = 0;
	return ret;
}
-EXPORT_SYMBOL(drm_agp_unbind);
+EXPORT_SYMBOL(drm_legacy_agp_unbind);
-int drm_agp_unbind_ioctl(struct drm_device *dev, void *data,
+int drm_legacy_agp_unbind_ioctl(struct drm_device *dev, void *data,
			 struct drm_file *file_priv)
{
	struct drm_agp_binding *request = data;
-	return drm_agp_unbind(dev, request);
+	return drm_legacy_agp_unbind(dev, request);
}
/*
@@ -296,7 +299,7 @@ int drm_agp_unbind_ioctl(struct drm_device *dev, void *data,
 * is currently bound into the GATT. Looks-up the AGP memory entry and passes
 * it to bind_agp() function.
 */
-int drm_agp_bind(struct drm_device *dev, struct drm_agp_binding *request)
+int drm_legacy_agp_bind(struct drm_device *dev, struct drm_agp_binding *request)
{
	struct drm_agp_mem *entry;
	int retcode;
@@ -304,7 +307,7 @@ int drm_agp_bind(struct drm_device *dev, struct drm_agp_binding *request)
	if (!dev->agp || !dev->agp->acquired)
		return -EINVAL;
-	entry = drm_agp_lookup_entry(dev, request->handle);
+	entry = drm_legacy_agp_lookup_entry(dev, request->handle);
	if (!entry || entry->bound)
		return -EINVAL;
	page = DIV_ROUND_UP(request->offset, PAGE_SIZE);
@@ -316,15 +319,15 @@ int drm_agp_bind(struct drm_device *dev, struct drm_agp_binding *request)
			dev->agp->base, entry->bound);
	return 0;
}
-EXPORT_SYMBOL(drm_agp_bind);
+EXPORT_SYMBOL(drm_legacy_agp_bind);
-int drm_agp_bind_ioctl(struct drm_device *dev, void *data,
+int drm_legacy_agp_bind_ioctl(struct drm_device *dev, void *data,
		       struct drm_file *file_priv)
{
	struct drm_agp_binding *request = data;
-	return drm_agp_bind(dev, request);
+	return drm_legacy_agp_bind(dev, request);
}
/*
@@ -337,13 +340,13 @@ int drm_agp_bind_ioctl(struct drm_device *dev, void *data,
 * unbind_agp(). Frees it via free_agp() as well as the entry itself
 * and unlinks from the doubly linked list it's inserted in.
 */
-int drm_agp_free(struct drm_device *dev, struct drm_agp_buffer *request)
+int drm_legacy_agp_free(struct drm_device *dev, struct drm_agp_buffer *request)
{
	struct drm_agp_mem *entry;
	if (!dev->agp || !dev->agp->acquired)
		return -EINVAL;
-	entry = drm_agp_lookup_entry(dev, request->handle);
+	entry = drm_legacy_agp_lookup_entry(dev, request->handle);
	if (!entry)
		return -EINVAL;
	if (entry->bound)
@@ -355,15 +358,15 @@ int drm_agp_free(struct drm_device *dev, struct drm_agp_buffer *request)
	kfree(entry);
	return 0;
}
-EXPORT_SYMBOL(drm_agp_free);
+EXPORT_SYMBOL(drm_legacy_agp_free);
-int drm_agp_free_ioctl(struct drm_device *dev, void *data,
+int drm_legacy_agp_free_ioctl(struct drm_device *dev, void *data,
		       struct drm_file *file_priv)
{
	struct drm_agp_buffer *request = data;
-	return drm_agp_free(dev, request);
+	return drm_legacy_agp_free(dev, request);
}
/*
@@ -378,7 +381,7 @@ int drm_agp_free_ioctl(struct drm_device *dev, void *data,
 * Note that final cleanup of the kmalloced structure is directly done in
 * drm_pci_agp_destroy.
 */
-struct drm_agp_head *drm_agp_init(struct drm_device *dev)
+struct drm_agp_head *drm_legacy_agp_init(struct drm_device *dev)
{
	struct pci_dev *pdev = to_pci_dev(dev->dev);
	struct drm_agp_head *head = NULL;
@@ -409,7 +412,7 @@ struct drm_agp_head *drm_agp_init(struct drm_device *dev)
	return head;
}
/* Only exported for i810.ko */
-EXPORT_SYMBOL(drm_agp_init);
+EXPORT_SYMBOL(drm_legacy_agp_init);
/**
 * drm_legacy_agp_clear - Clear AGP resource list
@@ -439,8 +442,10 @@ void drm_legacy_agp_clear(struct drm_device *dev)
	INIT_LIST_HEAD(&dev->agp->memory);
	if (dev->agp->acquired)
-		drm_agp_release(dev);
+		drm_legacy_agp_release(dev);
	dev->agp->acquired = 0;
	dev->agp->enabled = 0;
}
+#endif


@@ -0,0 +1,344 @@
// SPDX-License-Identifier: MIT
#include <linux/device.h>
#include <linux/fb.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/pci.h>
#include <linux/platform_device.h> /* for firmware helpers */
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/vgaarb.h>
#include <drm/drm_aperture.h>
#include <drm/drm_drv.h>
#include <drm/drm_print.h>
/**
* DOC: overview
*
* A graphics device might be supported by different drivers, but only one
* driver can be active at any given time. Many systems load a generic
* graphics driver, such as EFI-GOP or VESA, early during the boot process.
* During later boot stages, they replace the generic driver with a dedicated,
* hardware-specific driver. To take over the device, the dedicated driver
* first has to remove the generic driver. DRM aperture functions manage
* ownership of DRM framebuffer memory and hand-over between drivers.
*
* DRM drivers should call drm_aperture_remove_conflicting_framebuffers()
* at the top of their probe function. The function removes any generic
* driver that is currently associated with the given framebuffer memory.
* If the framebuffer is located at PCI BAR 0, the corresponding code looks
* like the example given below.
*
* .. code-block:: c
*
* static int remove_conflicting_framebuffers(struct pci_dev *pdev)
* {
* bool primary = false;
* resource_size_t base, size;
* int ret;
*
* base = pci_resource_start(pdev, 0);
* size = pci_resource_len(pdev, 0);
* #ifdef CONFIG_X86
* primary = pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW;
* #endif
*
* return drm_aperture_remove_conflicting_framebuffers(base, size, primary,
* "example driver");
* }
*
* static int probe(struct pci_dev *pdev)
* {
* int ret;
*
* // Remove any generic drivers...
* ret = remove_conflicting_framebuffers(pdev);
* if (ret)
* return ret;
*
* // ... and initialize the hardware.
* ...
*
* drm_dev_register();
*
* return 0;
* }
*
* PCI device drivers should call
* drm_aperture_remove_conflicting_pci_framebuffers() and let it detect the
* framebuffer apertures automatically. Device drivers without knowledge of
* the framebuffer's location shall call drm_aperture_remove_framebuffers(),
* which removes all drivers for known framebuffers.
*
* Drivers that are susceptible to being removed by other drivers, such as
* generic EFI or VESA drivers, have to register themselves as owners of their
* given framebuffer memory. Ownership of the framebuffer memory is achieved
* by calling devm_aperture_acquire_from_firmware(). On success, the driver
* is the owner of the framebuffer range. The function fails if the
* framebuffer is already owned by another driver. See below for an example.
*
* .. code-block:: c
*
* static int acquire_framebuffers(struct drm_device *dev, struct platform_device *pdev)
* {
* struct resource *mem;
* resource_size_t base, size;
*
* mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
* if (!mem)
* return -EINVAL;
* base = mem->start;
* size = resource_size(mem);
*
* return devm_aperture_acquire_from_firmware(dev, base, size);
* }
*
* static int probe(struct platform_device *pdev)
* {
* struct drm_device *dev;
* int ret;
*
* // ... Initialize the device...
* dev = devm_drm_dev_alloc();
* ...
*
* // ... and acquire ownership of the framebuffer.
* ret = acquire_framebuffers(dev, pdev);
* if (ret)
* return ret;
*
* drm_dev_register(dev, 0);
*
* return 0;
* }
*
* The generic driver is now subject to forced removal by other drivers. This
* only works for platform drivers that support hot unplug.
* When a driver calls drm_aperture_remove_conflicting_framebuffers() et al
* for the registered framebuffer range, the aperture helpers call
* platform_device_unregister() and the generic driver unloads itself. It
* may not access the device's registers, framebuffer memory, ROM, etc
* afterwards.
*/
struct drm_aperture {
struct drm_device *dev;
resource_size_t base;
resource_size_t size;
struct list_head lh;
void (*detach)(struct drm_device *dev);
};
static LIST_HEAD(drm_apertures);
static DEFINE_MUTEX(drm_apertures_lock);
static bool overlap(resource_size_t base1, resource_size_t end1,
resource_size_t base2, resource_size_t end2)
{
return (base1 < end2) && (end1 > base2);
}
static void devm_aperture_acquire_release(void *data)
{
struct drm_aperture *ap = data;
bool detached = !ap->dev;
if (detached)
return;
mutex_lock(&drm_apertures_lock);
list_del(&ap->lh);
mutex_unlock(&drm_apertures_lock);
}
static int devm_aperture_acquire(struct drm_device *dev,
resource_size_t base, resource_size_t size,
void (*detach)(struct drm_device *))
{
resource_size_t end = base + size;
struct list_head *pos;
struct drm_aperture *ap;
mutex_lock(&drm_apertures_lock);
list_for_each(pos, &drm_apertures) {
	ap = container_of(pos, struct drm_aperture, lh);
	if (overlap(base, end, ap->base, ap->base + ap->size)) {
		mutex_unlock(&drm_apertures_lock);
		return -EBUSY;
	}
}
ap = devm_kzalloc(dev->dev, sizeof(*ap), GFP_KERNEL);
if (!ap) {
	mutex_unlock(&drm_apertures_lock);
	return -ENOMEM;
}
ap->dev = dev;
ap->base = base;
ap->size = size;
ap->detach = detach;
INIT_LIST_HEAD(&ap->lh);
list_add(&ap->lh, &drm_apertures);
mutex_unlock(&drm_apertures_lock);
return devm_add_action_or_reset(dev->dev, devm_aperture_acquire_release, ap);
}
static void drm_aperture_detach_firmware(struct drm_device *dev)
{
struct platform_device *pdev = to_platform_device(dev->dev);
/*
* Remove the device from the device hierarchy. This is the right thing
* to do for firmware-based DRM drivers, such as EFI, VESA or VGA. After
* the new driver takes over the hardware, the firmware device's state
* will be lost.
*
* For non-platform devices, a new callback would be required.
*
* If the aperture helpers ever need to handle native drivers, this call
* would only have to unplug the DRM device, so that the hardware device
* stays around after detachment.
*/
platform_device_unregister(pdev);
}
/**
* devm_aperture_acquire_from_firmware - Acquires ownership of a firmware framebuffer
* on behalf of a DRM driver.
* @dev: the DRM device to own the framebuffer memory
* @base: the framebuffer's byte offset in physical memory
* @size: the framebuffer size in bytes
*
* Installs the given device as the new owner of the framebuffer. The function
* expects the framebuffer to be provided by a platform device that has been
* set up by firmware. Firmware can be any generic interface, such as EFI,
* VESA, VGA, etc. If the native hardware driver takes over ownership of the
* framebuffer range, the firmware state gets lost. Aperture helpers will then
* unregister the platform device automatically. Acquired apertures are
* released automatically if the underlying device goes away.
*
* The function fails if the framebuffer range, or parts of it, is currently
* owned by another driver. To evict current owners, callers should use
* drm_aperture_remove_conflicting_framebuffers() et al. before calling this
* function. The function also fails if the given device is not a platform
* device.
*
* Returns:
* 0 on success, or a negative errno value otherwise.
*/
int devm_aperture_acquire_from_firmware(struct drm_device *dev, resource_size_t base,
resource_size_t size)
{
if (drm_WARN_ON(dev, !dev_is_platform(dev->dev)))
return -EINVAL;
return devm_aperture_acquire(dev, base, size, drm_aperture_detach_firmware);
}
EXPORT_SYMBOL(devm_aperture_acquire_from_firmware);
static void drm_aperture_detach_drivers(resource_size_t base, resource_size_t size)
{
resource_size_t end = base + size;
struct list_head *pos, *n;
mutex_lock(&drm_apertures_lock);
list_for_each_safe(pos, n, &drm_apertures) {
struct drm_aperture *ap =
container_of(pos, struct drm_aperture, lh);
struct drm_device *dev = ap->dev;
if (WARN_ON_ONCE(!dev))
continue;
if (!overlap(base, end, ap->base, ap->base + ap->size))
continue;
ap->dev = NULL; /* detach from device */
list_del(&ap->lh);
ap->detach(dev);
}
mutex_unlock(&drm_apertures_lock);
}
/**
* drm_aperture_remove_conflicting_framebuffers - remove existing framebuffers in the given range
* @base: the aperture's base address in physical memory
* @size: aperture size in bytes
* @primary: also kick vga16fb if present
* @name: requesting driver name
*
* This function removes graphics device drivers which use memory range described by
* @base and @size.
*
* Returns:
* 0 on success, or a negative errno code otherwise
*/
int drm_aperture_remove_conflicting_framebuffers(resource_size_t base, resource_size_t size,
bool primary, const char *name)
{
#if IS_REACHABLE(CONFIG_FB)
struct apertures_struct *a;
int ret;
a = alloc_apertures(1);
if (!a)
return -ENOMEM;
a->ranges[0].base = base;
a->ranges[0].size = size;
ret = remove_conflicting_framebuffers(a, name, primary);
kfree(a);
if (ret)
return ret;
#endif
drm_aperture_detach_drivers(base, size);
return 0;
}
EXPORT_SYMBOL(drm_aperture_remove_conflicting_framebuffers);
/**
* drm_aperture_remove_conflicting_pci_framebuffers - remove existing framebuffers for PCI devices
* @pdev: PCI device
* @name: requesting driver name
*
* This function removes graphics device drivers using memory range configured
* for any of @pdev's memory bars. The function assumes that PCI device with
* shadowed ROM drives a primary display and so kicks out vga16fb.
*
* Returns:
* 0 on success, or a negative errno code otherwise
*/
int drm_aperture_remove_conflicting_pci_framebuffers(struct pci_dev *pdev, const char *name)
{
resource_size_t base, size;
int bar, ret = 0;
for (bar = 0; bar < PCI_STD_NUM_BARS; ++bar) {
if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM))
continue;
base = pci_resource_start(pdev, bar);
size = pci_resource_len(pdev, bar);
drm_aperture_detach_drivers(base, size);
}
/*
* WARNING: Apparently we must kick fbdev drivers before vgacon,
* otherwise the vga fbdev driver falls over.
*/
#if IS_REACHABLE(CONFIG_FB)
ret = remove_conflicting_pci_framebuffers(pdev, name);
#endif
if (ret == 0)
ret = vga_remove_vgacon(pdev);
return ret;
}
EXPORT_SYMBOL(drm_aperture_remove_conflicting_pci_framebuffers);


@@ -385,7 +385,8 @@ static int drm_atomic_crtc_check(const struct drm_crtc_state *old_crtc_state,
	/* The state->enable vs. state->mode_blob checks can be WARN_ON,
	 * as this is a kernel-internal detail that userspace should never
-	 * be able to trigger. */
+	 * be able to trigger.
+	 */
	if (drm_core_check_feature(crtc->dev, DRIVER_ATOMIC) &&
	    WARN_ON(new_crtc_state->enable && !new_crtc_state->mode_blob)) {
		DRM_DEBUG_ATOMIC("[CRTC:%d:%s] enabled without mode blob\n",
@@ -1302,8 +1303,8 @@ int drm_atomic_check_only(struct drm_atomic_state *state)
	struct drm_crtc_state *new_crtc_state;
	struct drm_connector *conn;
	struct drm_connector_state *conn_state;
-	unsigned requested_crtc = 0;
-	unsigned affected_crtc = 0;
+	unsigned int requested_crtc = 0;
+	unsigned int affected_crtc = 0;
	int i, ret = 0;
	DRM_DEBUG_ATOMIC("checking %p\n", state);


@@ -106,7 +106,7 @@ static int handle_conflicting_encoders(struct drm_atomic_state *state,
struct drm_connector *connector; struct drm_connector *connector;
struct drm_connector_list_iter conn_iter; struct drm_connector_list_iter conn_iter;
struct drm_encoder *encoder; struct drm_encoder *encoder;
unsigned encoder_mask = 0; unsigned int encoder_mask = 0;
int i, ret = 0; int i, ret = 0;
/* /*
@@ -609,7 +609,7 @@ drm_atomic_helper_check_modeset(struct drm_device *dev,
struct drm_connector *connector; struct drm_connector *connector;
struct drm_connector_state *old_connector_state, *new_connector_state; struct drm_connector_state *old_connector_state, *new_connector_state;
int i, ret; int i, ret;
unsigned connectors_mask = 0; unsigned int connectors_mask = 0;
for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) { for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
bool has_connectors = bool has_connectors =
@@ -1018,8 +1018,10 @@ disable_outputs(struct drm_device *dev, struct drm_atomic_state *old_state)
struct drm_encoder *encoder; struct drm_encoder *encoder;
struct drm_bridge *bridge; struct drm_bridge *bridge;
/* Shut down everything that's in the changeset and currently /*
* still on. So need to check the old, saved state. */ * Shut down everything that's in the changeset and currently
* still on. So need to check the old, saved state.
*/
if (!old_conn_state->crtc) if (!old_conn_state->crtc)
continue; continue;
@@ -1478,7 +1480,7 @@ drm_atomic_helper_wait_for_vblanks(struct drm_device *dev,
struct drm_crtc *crtc; struct drm_crtc *crtc;
struct drm_crtc_state *old_crtc_state, *new_crtc_state; struct drm_crtc_state *old_crtc_state, *new_crtc_state;
int i, ret; int i, ret;
unsigned crtc_mask = 0; unsigned int crtc_mask = 0;
/* /*
* Legacy cursor ioctls are completely unsynced, and userspace * Legacy cursor ioctls are completely unsynced, and userspace
@@ -1953,8 +1955,10 @@ static int stall_checks(struct drm_crtc *crtc, bool nonblock)
list_for_each_entry(commit, &crtc->commit_list, commit_entry) { list_for_each_entry(commit, &crtc->commit_list, commit_entry) {
if (i == 0) { if (i == 0) {
 	completed = try_wait_for_completion(&commit->flip_done);
-	/* Userspace is not allowed to get ahead of the previous
-	 * commit with nonblocking ones. */
+	/*
+	 * Userspace is not allowed to get ahead of the previous
+	 * commit with nonblocking ones.
+	 */
 	if (!completed && nonblock) {
 		spin_unlock(&crtc->commit_lock);
 		DRM_DEBUG_ATOMIC("[CRTC:%d:%s] busy with a previous commit\n",
@@ -2103,9 +2107,11 @@ int drm_atomic_helper_setup_commit(struct drm_atomic_state *state,
 		if (ret)
 			return ret;
-		/* Drivers only send out events when at least either current or
+		/*
+		 * Drivers only send out events when at least either current or
 		 * new CRTC state is active. Complete right away if everything
-		 * stays off. */
+		 * stays off.
+		 */
 		if (!old_crtc_state->active && !new_crtc_state->active) {
 			complete_all(&commit->flip_done);
 			continue;
@@ -2137,8 +2143,10 @@ int drm_atomic_helper_setup_commit(struct drm_atomic_state *state,
 	}
 	for_each_oldnew_connector_in_state(state, conn, old_conn_state, new_conn_state, i) {
-		/* Userspace is not allowed to get ahead of the previous
-		 * commit with nonblocking ones. */
+		/*
+		 * Userspace is not allowed to get ahead of the previous
+		 * commit with nonblocking ones.
+		 */
 		if (nonblock && old_conn_state->commit &&
 		    !try_wait_for_completion(&old_conn_state->commit->flip_done)) {
 			DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] busy with a previous commit\n",
@@ -2156,8 +2164,10 @@ int drm_atomic_helper_setup_commit(struct drm_atomic_state *state,
 	}
 	for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, i) {
-		/* Userspace is not allowed to get ahead of the previous
-		 * commit with nonblocking ones. */
+		/*
+		 * Userspace is not allowed to get ahead of the previous
+		 * commit with nonblocking ones.
+		 */
 		if (nonblock && old_plane_state->commit &&
 		    !try_wait_for_completion(&old_plane_state->commit->flip_done)) {
 			DRM_DEBUG_ATOMIC("[PLANE:%d:%s] busy with a previous commit\n",
@@ -2575,7 +2585,7 @@ drm_atomic_helper_commit_planes_on_crtc(struct drm_crtc_state *old_crtc_state)
 	struct drm_crtc_state *new_crtc_state =
 		drm_atomic_get_new_crtc_state(old_state, crtc);
 	struct drm_plane *plane;
-	unsigned plane_mask;
+	unsigned int plane_mask;
 	plane_mask = old_crtc_state->plane_mask;
 	plane_mask |= new_crtc_state->plane_mask;
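The gate above rejects a nonblocking commit while the previous commit's flip has not signalled. A minimal userspace sketch of that decision, with a hypothetical `struct commit` and a plain flag standing in for `struct completion` and `try_wait_for_completion()`:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the previous CRTC commit; flip_done
 * plays the role of the flip_done completion. */
struct commit {
	bool flip_done;
};

/* Mirror of the check: a nonblocking commit is refused with -EBUSY
 * while the previous commit has not completed its flip.  (The real
 * blocking path would wait instead of returning 0 immediately.) */
static int setup_commit(const struct commit *prev, bool nonblock)
{
	bool completed = prev ? prev->flip_done : true;

	if (!completed && nonblock)
		return -EBUSY;
	return 0;
}
```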


@@ -300,7 +300,8 @@ int drm_master_open(struct drm_file *file_priv)
 	int ret = 0;
 	/* if there is no current master make this fd it, but do not create
-	 * any master object for render clients */
+	 * any master object for render clients
+	 */
 	mutex_lock(&dev->master_mutex);
 	if (!dev->master)
 		ret = drm_new_set_master(dev, file_priv);


@@ -522,6 +522,9 @@ void drm_bridge_chain_pre_enable(struct drm_bridge *bridge)
 	list_for_each_entry_reverse(iter, &encoder->bridge_chain, chain_node) {
 		if (iter->funcs->pre_enable)
 			iter->funcs->pre_enable(iter);
+
+		if (iter == bridge)
+			break;
 	}
 }
 EXPORT_SYMBOL(drm_bridge_chain_pre_enable);
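The added early exit means the chain is pre-enabled from the last bridge backwards only up to (and including) the bridge passed in. A standalone sketch of the same traversal, with a hypothetical array-based `struct bridge` standing in for the kernel's `drm_bridge` list:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for struct drm_bridge; the real code walks
 * encoder->bridge_chain with list_for_each_entry_reverse(). */
struct bridge {
	bool pre_enabled;
};

/* Walk the chain in reverse and stop once the requested bridge itself
 * has been pre-enabled, mirroring the `if (iter == bridge) break;`
 * added by the patch. */
static void chain_pre_enable(struct bridge *chain, size_t n, const struct bridge *from)
{
	for (size_t i = n; i-- > 0; ) {
		chain[i].pre_enabled = true;
		if (&chain[i] == from)
			break;
	}
}
```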


@@ -40,7 +40,6 @@
 #include <asm/shmparam.h>
-#include <drm/drm_agpsupport.h>
 #include <drm/drm_device.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_file.h>
@@ -79,7 +78,7 @@ static struct drm_map_list *drm_find_matching_map(struct drm_device *dev,
 			return entry;
 		break;
 	default: /* Make gcc happy */
-		;
+		break;
 	}
 	if (entry->map->offset == map->offset)
 		return entry;
@@ -325,7 +324,8 @@ static int drm_addmap_core(struct drm_device *dev, resource_size_t offset,
 		/* dma_addr_t is 64bit on i386 with CONFIG_HIGHMEM64G,
 		 * As we're limiting the address to 2^32-1 (or less),
 		 * casting it down to 32 bits is no problem, but we
-		 * need to point to a 64bit variable first. */
+		 * need to point to a 64bit variable first.
+		 */
 		map->handle = dma_alloc_coherent(dev->dev,
 						 map->size,
 						 &map->offset,
@@ -674,12 +674,17 @@ int drm_legacy_rmmap_ioctl(struct drm_device *dev, void *data,
 static void drm_cleanup_buf_error(struct drm_device *dev,
 				  struct drm_buf_entry *entry)
 {
+	drm_dma_handle_t *dmah;
 	int i;
 	if (entry->seg_count) {
 		for (i = 0; i < entry->seg_count; i++) {
 			if (entry->seglist[i]) {
-				drm_pci_free(dev, entry->seglist[i]);
+				dmah = entry->seglist[i];
+				dma_free_coherent(dev->dev,
+						  dmah->size,
+						  dmah->vaddr,
+						  dmah->busaddr);
 			}
 		}
 		kfree(entry->seglist);
@@ -978,10 +983,18 @@ int drm_legacy_addbufs_pci(struct drm_device *dev,
 	page_count = 0;
 	while (entry->buf_count < count) {
-		dmah = drm_pci_alloc(dev, PAGE_SIZE << page_order, 0x1000);
-		if (!dmah) {
+		dmah = kmalloc(sizeof(drm_dma_handle_t), GFP_KERNEL);
+		if (!dmah)
+			return -ENOMEM;
+		dmah->size = total;
+		dmah->vaddr = dma_alloc_coherent(dev->dev,
+						 dmah->size,
+						 &dmah->busaddr,
+						 GFP_KERNEL);
+		if (!dmah->vaddr) {
+			kfree(dmah);
 			/* Set count correctly so we free the proper amount. */
 			entry->buf_count = count;
 			entry->seg_count = count;
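The open-coded replacement for `drm_pci_alloc()` follows a two-step pattern: allocate the handle, then the buffer, and free the handle again if the buffer allocation fails. A hedged userspace sketch of that lifecycle, with `malloc()`/`free()` standing in for `kmalloc()` and `dma_alloc_coherent()`/`dma_free_coherent()` and a hypothetical `struct dma_handle` standing in for `drm_dma_handle_t`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-in for drm_dma_handle_t. */
struct dma_handle {
	size_t size;
	void *vaddr;
};

/* Allocate the handle first, then the buffer; on buffer-allocation
 * failure the handle must be freed before reporting the error, as in
 * the patched drm_legacy_addbufs_pci(). */
static struct dma_handle *dma_handle_alloc(size_t size)
{
	struct dma_handle *dmah = malloc(sizeof(*dmah));

	if (!dmah)
		return NULL;
	dmah->size = size;
	dmah->vaddr = malloc(size);	/* stands in for dma_alloc_coherent() */
	if (!dmah->vaddr) {
		free(dmah);
		return NULL;
	}
	return dmah;
}

/* Teardown mirrors drm_cleanup_buf_error(): free the buffer via the
 * handle, then the handle itself. */
static void dma_handle_free(struct dma_handle *dmah)
{
	free(dmah->vaddr);
	free(dmah);
}
```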


@@ -20,6 +20,7 @@
  * OF THIS SOFTWARE.
  */
+#include <drm/drm_auth.h>
 #include <drm/drm_connector.h>
 #include <drm/drm_edid.h>
 #include <drm/drm_encoder.h>
@@ -279,7 +280,8 @@ int drm_connector_init(struct drm_device *dev,
 	drm_connector_get_cmdline_mode(connector);
 	/* We should add connectors at the end to avoid upsetting the connector
-	 * index too much. */
+	 * index too much.
+	 */
 	spin_lock_irq(&config->connector_list_lock);
 	list_add_tail(&connector->head, &config->connector_list);
 	config->num_connector++;
@@ -2150,6 +2152,75 @@ int drm_connector_attach_max_bpc_property(struct drm_connector *connector,
 }
 EXPORT_SYMBOL(drm_connector_attach_max_bpc_property);
+
+/**
+ * drm_connector_attach_hdr_output_metadata_property - attach "HDR_OUTPUT_METADATA" property
+ * @connector: connector to attach the property on.
+ *
+ * This is used to allow the userspace to send HDR Metadata to the
+ * driver.
+ *
+ * Returns:
+ * Zero on success, negative errno on failure.
+ */
+int drm_connector_attach_hdr_output_metadata_property(struct drm_connector *connector)
+{
+	struct drm_device *dev = connector->dev;
+	struct drm_property *prop = dev->mode_config.hdr_output_metadata_property;
+
+	drm_object_attach_property(&connector->base, prop, 0);
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_connector_attach_hdr_output_metadata_property);
+
+/**
+ * drm_connector_attach_colorspace_property - attach "Colorspace" property
+ * @connector: connector to attach the property on.
+ *
+ * This is used to allow the userspace to signal the output colorspace
+ * to the driver.
+ *
+ * Returns:
+ * Zero on success, negative errno on failure.
+ */
+int drm_connector_attach_colorspace_property(struct drm_connector *connector)
+{
+	struct drm_property *prop = connector->colorspace_property;
+
+	drm_object_attach_property(&connector->base, prop, DRM_MODE_COLORIMETRY_DEFAULT);
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_connector_attach_colorspace_property);
+
+/**
+ * drm_connector_atomic_hdr_metadata_equal - checks if the HDR metadata changed
+ * @old_state: old connector state to compare
+ * @new_state: new connector state to compare
+ *
+ * This is used by HDR-enabled drivers to test whether the HDR metadata
+ * has changed between two different connector states (and thus probably
+ * requires a full-blown mode change).
+ *
+ * Returns:
+ * True if the metadata are equal, False otherwise
+ */
+bool drm_connector_atomic_hdr_metadata_equal(struct drm_connector_state *old_state,
+					     struct drm_connector_state *new_state)
+{
+	struct drm_property_blob *old_blob = old_state->hdr_output_metadata;
+	struct drm_property_blob *new_blob = new_state->hdr_output_metadata;
+
+	if (!old_blob || !new_blob)
+		return old_blob == new_blob;
+
+	if (old_blob->length != new_blob->length)
+		return false;
+
+	return !memcmp(old_blob->data, new_blob->data, old_blob->length);
+}
+EXPORT_SYMBOL(drm_connector_atomic_hdr_metadata_equal);
+
 /**
  * drm_connector_set_vrr_capable_property - sets the variable refresh rate
  * capable property for a connector
@@ -2288,7 +2359,8 @@ int drm_connector_property_set_ioctl(struct drm_device *dev,
 static struct drm_encoder *drm_connector_get_encoder(struct drm_connector *connector)
 {
 	/* For atomic drivers only state objects are synchronously updated and
-	 * protected by modeset locks, so check those first. */
+	 * protected by modeset locks, so check those first.
+	 */
 	if (connector->state)
 		return connector->state->best_encoder;
 	return connector->encoder;
@@ -2374,9 +2446,13 @@ int drm_mode_getconnector(struct drm_device *dev, void *data,
 	mutex_lock(&dev->mode_config.mutex);
 	if (out_resp->count_modes == 0) {
-		connector->funcs->fill_modes(connector,
-					     dev->mode_config.max_width,
-					     dev->mode_config.max_height);
+		if (drm_is_current_master(file_priv))
+			connector->funcs->fill_modes(connector,
+						     dev->mode_config.max_width,
+						     dev->mode_config.max_height);
+		else
+			drm_dbg_kms(dev, "User-space requested a forced probe on [CONNECTOR:%d:%s] but is not the DRM master, demoting to read-only probe",
+				    connector->base.id, connector->name);
 	}
 	out_resp->mm_width = connector->display_info.width_mm;
@@ -2450,7 +2526,8 @@ int drm_mode_getconnector(struct drm_device *dev, void *data,
 	out_resp->encoder_id = 0;
 	/* Only grab properties after probing, to make sure EDID and other
-	 * properties reflect the latest status. */
+	 * properties reflect the latest status.
+	 */
 	ret = drm_mode_object_get_properties(&connector->base, file_priv->atomic,
 			(uint32_t __user *)(unsigned long)(out_resp->props_ptr),
 			(uint64_t __user *)(unsigned long)(out_resp->prop_values_ptr),
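The equality helper reduces to three checks: NULL blobs compare equal only to each other, then lengths, then contents. A self-contained sketch of the same logic, with a hypothetical `struct blob` standing in for `struct drm_property_blob`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for struct drm_property_blob. */
struct blob {
	size_t length;
	const void *data;
};

/* Same comparison order as drm_connector_atomic_hdr_metadata_equal():
 * NULL handling first, then length, then a byte-wise compare. */
static bool hdr_metadata_equal(const struct blob *old_blob, const struct blob *new_blob)
{
	if (!old_blob || !new_blob)
		return old_blob == new_blob;

	if (old_blob->length != new_blob->length)
		return false;

	return !memcmp(old_blob->data, new_blob->data, old_blob->length);
}
```

A driver would call this from its atomic check to decide whether a metadata change forces a full modeset.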


@@ -312,7 +312,8 @@ static int drm_context_switch_complete(struct drm_device *dev,
 	/* If a context switch is ever initiated
 	   when the kernel holds the lock, release
-	   that lock here. */
+	   that lock here.
+	 */
 	clear_bit(0, &dev->context_flag);
 	return 0;


@@ -81,6 +81,7 @@ int drm_legacy_dma_setup(struct drm_device *dev)
 void drm_legacy_dma_takedown(struct drm_device *dev)
 {
 	struct drm_device_dma *dma = dev->dma;
+	drm_dma_handle_t *dmah;
 	int i, j;
 	if (!drm_core_check_feature(dev, DRIVER_HAVE_DMA) ||
@@ -100,7 +101,12 @@ void drm_legacy_dma_takedown(struct drm_device *dev)
 				  dma->bufs[i].seg_count);
 			for (j = 0; j < dma->bufs[i].seg_count; j++) {
 				if (dma->bufs[i].seglist[j]) {
-					drm_pci_free(dev, dma->bufs[i].seglist[j]);
+					dmah = dma->bufs[i].seglist[j];
+					dma_free_coherent(dev->dev,
+							  dmah->size,
+							  dmah->vaddr,
+							  dmah->busaddr);
+					kfree(dmah);
 				}
 			}
 			kfree(dma->bufs[i].seglist);


@@ -278,6 +278,12 @@ void drm_dp_aux_unregister_devnode(struct drm_dp_aux *aux)
 	if (!aux_dev) /* attach must have failed */
 		return;
+	/*
+	 * As some AUX adapters may exist as platform devices which outlive their respective DRM
+	 * devices, we clear drm_dev to ensure that we never accidentally reference a stale pointer
+	 */
+	aux->drm_dev = NULL;
+
 	mutex_lock(&aux_idr_mutex);
 	idr_remove(&aux_idr, aux_dev->index);
 	mutex_unlock(&aux_idr_mutex);


@@ -27,6 +27,7 @@
 #include <linux/slab.h>
 #include <linux/string.h>
+#include <drm/drm_device.h>
 #include <drm/drm_dp_dual_mode_helper.h>
 #include <drm/drm_print.h>
@@ -165,6 +166,7 @@ static bool is_lspcon_adaptor(const char hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN],
 /**
  * drm_dp_dual_mode_detect - Identify the DP dual mode adaptor
+ * @dev: &drm_device to use
  * @adapter: I2C adapter for the DDC bus
  *
  * Attempt to identify the type of the DP dual mode adaptor used.
@@ -178,7 +180,8 @@ static bool is_lspcon_adaptor(const char hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN],
  * Returns:
  * The type of the DP dual mode adaptor used
  */
-enum drm_dp_dual_mode_type drm_dp_dual_mode_detect(struct i2c_adapter *adapter)
+enum drm_dp_dual_mode_type drm_dp_dual_mode_detect(const struct drm_device *dev,
+						   struct i2c_adapter *adapter)
 {
 	char hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN] = {};
 	uint8_t adaptor_id = 0x00;
@@ -200,7 +203,7 @@ enum drm_dp_dual_mode_type drm_dp_dual_mode_detect(struct i2c_adapter *adapter)
 	 */
 	ret = drm_dp_dual_mode_read(adapter, DP_DUAL_MODE_HDMI_ID,
 				    hdmi_id, sizeof(hdmi_id));
-	DRM_DEBUG_KMS("DP dual mode HDMI ID: %*pE (err %zd)\n",
-		      ret ? 0 : (int)sizeof(hdmi_id), hdmi_id, ret);
+	drm_dbg_kms(dev, "DP dual mode HDMI ID: %*pE (err %zd)\n",
+		    ret ? 0 : (int)sizeof(hdmi_id), hdmi_id, ret);
 	if (ret)
 		return DRM_DP_DUAL_MODE_UNKNOWN;
@@ -219,8 +222,7 @@ enum drm_dp_dual_mode_type drm_dp_dual_mode_detect(struct i2c_adapter *adapter)
 	 */
 	ret = drm_dp_dual_mode_read(adapter, DP_DUAL_MODE_ADAPTOR_ID,
 				    &adaptor_id, sizeof(adaptor_id));
-	DRM_DEBUG_KMS("DP dual mode adaptor ID: %02x (err %zd)\n",
-		      adaptor_id, ret);
+	drm_dbg_kms(dev, "DP dual mode adaptor ID: %02x (err %zd)\n", adaptor_id, ret);
 	if (ret == 0) {
 		if (is_lspcon_adaptor(hdmi_id, adaptor_id))
 			return DRM_DP_DUAL_MODE_LSPCON;
@@ -236,8 +238,7 @@ enum drm_dp_dual_mode_type drm_dp_dual_mode_detect(struct i2c_adapter *adapter)
 		 * that we may have misdetected the type.
 		 */
 		if (!is_type1_adaptor(adaptor_id) && adaptor_id != hdmi_id[0])
-			DRM_ERROR("Unexpected DP dual mode adaptor ID %02x\n",
-				  adaptor_id);
+			drm_err(dev, "Unexpected DP dual mode adaptor ID %02x\n", adaptor_id);
 	}
@@ -250,6 +251,7 @@ EXPORT_SYMBOL(drm_dp_dual_mode_detect);
 /**
  * drm_dp_dual_mode_max_tmds_clock - Max TMDS clock for DP dual mode adaptor
+ * @dev: &drm_device to use
  * @type: DP dual mode adaptor type
  * @adapter: I2C adapter for the DDC bus
  *
@@ -263,7 +265,7 @@ EXPORT_SYMBOL(drm_dp_dual_mode_detect);
 * Returns:
 * Maximum supported TMDS clock rate for the DP dual mode adaptor in kHz.
 */
-int drm_dp_dual_mode_max_tmds_clock(enum drm_dp_dual_mode_type type,
+int drm_dp_dual_mode_max_tmds_clock(const struct drm_device *dev, enum drm_dp_dual_mode_type type,
 				    struct i2c_adapter *adapter)
 {
 	uint8_t max_tmds_clock;
@@ -283,7 +285,7 @@ int drm_dp_dual_mode_max_tmds_clock(enum drm_dp_dual_mode_type type,
 	ret = drm_dp_dual_mode_read(adapter, DP_DUAL_MODE_MAX_TMDS_CLOCK,
 				    &max_tmds_clock, sizeof(max_tmds_clock));
 	if (ret || max_tmds_clock == 0x00 || max_tmds_clock == 0xff) {
-		DRM_DEBUG_KMS("Failed to query max TMDS clock\n");
+		drm_dbg_kms(dev, "Failed to query max TMDS clock\n");
 		return 165000;
 	}
@@ -293,6 +295,7 @@ EXPORT_SYMBOL(drm_dp_dual_mode_max_tmds_clock);
 /**
  * drm_dp_dual_mode_get_tmds_output - Get the state of the TMDS output buffers in the DP dual mode adaptor
+ * @dev: &drm_device to use
  * @type: DP dual mode adaptor type
  * @adapter: I2C adapter for the DDC bus
  * @enabled: current state of the TMDS output buffers
@@ -307,8 +310,8 @@ EXPORT_SYMBOL(drm_dp_dual_mode_max_tmds_clock);
 * Returns:
 * 0 on success, negative error code on failure
 */
-int drm_dp_dual_mode_get_tmds_output(enum drm_dp_dual_mode_type type,
-				     struct i2c_adapter *adapter,
+int drm_dp_dual_mode_get_tmds_output(const struct drm_device *dev,
+				     enum drm_dp_dual_mode_type type, struct i2c_adapter *adapter,
 				     bool *enabled)
 {
 	uint8_t tmds_oen;
@@ -322,7 +325,7 @@ int drm_dp_dual_mode_get_tmds_output(enum drm_dp_dual_mode_type type,
 	ret = drm_dp_dual_mode_read(adapter, DP_DUAL_MODE_TMDS_OEN,
 				    &tmds_oen, sizeof(tmds_oen));
 	if (ret) {
-		DRM_DEBUG_KMS("Failed to query state of TMDS output buffers\n");
+		drm_dbg_kms(dev, "Failed to query state of TMDS output buffers\n");
 		return ret;
 	}
@@ -334,6 +337,7 @@ EXPORT_SYMBOL(drm_dp_dual_mode_get_tmds_output);
 /**
  * drm_dp_dual_mode_set_tmds_output - Enable/disable TMDS output buffers in the DP dual mode adaptor
+ * @dev: &drm_device to use
  * @type: DP dual mode adaptor type
  * @adapter: I2C adapter for the DDC bus
  * @enable: enable (as opposed to disable) the TMDS output buffers
@@ -347,7 +351,7 @@ EXPORT_SYMBOL(drm_dp_dual_mode_get_tmds_output);
 * Returns:
 * 0 on success, negative error code on failure
 */
-int drm_dp_dual_mode_set_tmds_output(enum drm_dp_dual_mode_type type,
+int drm_dp_dual_mode_set_tmds_output(const struct drm_device *dev, enum drm_dp_dual_mode_type type,
 				     struct i2c_adapter *adapter, bool enable)
 {
 	uint8_t tmds_oen = enable ? 0 : DP_DUAL_MODE_TMDS_DISABLE;
@@ -367,18 +371,17 @@ int drm_dp_dual_mode_set_tmds_output(enum drm_dp_dual_mode_type type,
 		ret = drm_dp_dual_mode_write(adapter, DP_DUAL_MODE_TMDS_OEN,
 					     &tmds_oen, sizeof(tmds_oen));
 		if (ret) {
-			DRM_DEBUG_KMS("Failed to %s TMDS output buffers (%d attempts)\n",
-				      enable ? "enable" : "disable",
-				      retry + 1);
+			drm_dbg_kms(dev, "Failed to %s TMDS output buffers (%d attempts)\n",
+				    enable ? "enable" : "disable", retry + 1);
 			return ret;
 		}
 		ret = drm_dp_dual_mode_read(adapter, DP_DUAL_MODE_TMDS_OEN,
 					    &tmp, sizeof(tmp));
 		if (ret) {
-			DRM_DEBUG_KMS("I2C read failed during TMDS output buffer %s (%d attempts)\n",
-				      enable ? "enabling" : "disabling",
-				      retry + 1);
+			drm_dbg_kms(dev,
+				    "I2C read failed during TMDS output buffer %s (%d attempts)\n",
+				    enable ? "enabling" : "disabling", retry + 1);
 			return ret;
 		}
@@ -386,7 +389,7 @@ int drm_dp_dual_mode_set_tmds_output(enum drm_dp_dual_mode_type type,
 		return 0;
 	}
-	DRM_DEBUG_KMS("I2C write value mismatch during TMDS output buffer %s\n",
-		      enable ? "enabling" : "disabling");
+	drm_dbg_kms(dev, "I2C write value mismatch during TMDS output buffer %s\n",
+		    enable ? "enabling" : "disabling");
 	return -EIO;
@@ -425,6 +428,7 @@ EXPORT_SYMBOL(drm_dp_get_dual_mode_type_name);
 /**
  * drm_lspcon_get_mode: Get LSPCON's current mode of operation by
  * reading offset (0x80, 0x41)
+ * @dev: &drm_device to use
  * @adapter: I2C-over-aux adapter
  * @mode: current lspcon mode of operation output variable
 *
@@ -432,7 +436,7 @@ EXPORT_SYMBOL(drm_dp_get_dual_mode_type_name);
 * 0 on success, sets the current_mode value to appropriate mode
 * -error on failure
 */
-int drm_lspcon_get_mode(struct i2c_adapter *adapter,
+int drm_lspcon_get_mode(const struct drm_device *dev, struct i2c_adapter *adapter,
 			enum drm_lspcon_mode *mode)
 {
 	u8 data;
@@ -440,7 +444,7 @@ int drm_lspcon_get_mode(struct i2c_adapter *adapter,
 	int retry;
 	if (!mode) {
-		DRM_ERROR("NULL input\n");
+		drm_err(dev, "NULL input\n");
 		return -EINVAL;
 	}
@@ -457,7 +461,7 @@ int drm_lspcon_get_mode(struct i2c_adapter *adapter,
 	}
 	if (ret < 0) {
-		DRM_DEBUG_KMS("LSPCON read(0x80, 0x41) failed\n");
+		drm_dbg_kms(dev, "LSPCON read(0x80, 0x41) failed\n");
 		return -EFAULT;
 	}
@@ -472,13 +476,14 @@ EXPORT_SYMBOL(drm_lspcon_get_mode);
 /**
  * drm_lspcon_set_mode: Change LSPCON's mode of operation by
  * writing offset (0x80, 0x40)
+ * @dev: &drm_device to use
  * @adapter: I2C-over-aux adapter
  * @mode: required mode of operation
 *
 * Returns:
 * 0 on success, -error on failure/timeout
 */
-int drm_lspcon_set_mode(struct i2c_adapter *adapter,
+int drm_lspcon_set_mode(const struct drm_device *dev, struct i2c_adapter *adapter,
 			enum drm_lspcon_mode mode)
 {
 	u8 data = 0;
@@ -493,7 +498,7 @@ int drm_lspcon_set_mode(struct i2c_adapter *adapter,
 	ret = drm_dp_dual_mode_write(adapter, DP_DUAL_MODE_LSPCON_MODE_CHANGE,
 				     &data, sizeof(data));
 	if (ret < 0) {
-		DRM_ERROR("LSPCON mode change failed\n");
+		drm_err(dev, "LSPCON mode change failed\n");
 		return ret;
 	}
@@ -503,24 +508,23 @@ int drm_lspcon_set_mode(struct i2c_adapter *adapter,
 	 * so wait and retry until time out or done.
 	 */
 	do {
-		ret = drm_lspcon_get_mode(adapter, &current_mode);
+		ret = drm_lspcon_get_mode(dev, adapter, &current_mode);
 		if (ret) {
-			DRM_ERROR("can't confirm LSPCON mode change\n");
+			drm_err(dev, "can't confirm LSPCON mode change\n");
 			return ret;
 		} else {
 			if (current_mode != mode) {
 				msleep(10);
 				time_out -= 10;
 			} else {
-				DRM_DEBUG_KMS("LSPCON mode changed to %s\n",
-					      mode == DRM_LSPCON_MODE_LS ?
-					      "LS" : "PCON");
+				drm_dbg_kms(dev, "LSPCON mode changed to %s\n",
+					    mode == DRM_LSPCON_MODE_LS ? "LS" : "PCON");
 				return 0;
 			}
 		}
 	} while (time_out);
-	DRM_ERROR("LSPCON mode change timed out\n");
+	drm_err(dev, "LSPCON mode change timed out\n");
 	return -ETIMEDOUT;
 }
 EXPORT_SYMBOL(drm_lspcon_set_mode);


@@ -132,14 +132,15 @@ u8 drm_dp_get_adjust_request_post_cursor(const u8 link_status[DP_LINK_STATUS_SIZ
} }
EXPORT_SYMBOL(drm_dp_get_adjust_request_post_cursor); EXPORT_SYMBOL(drm_dp_get_adjust_request_post_cursor);
void drm_dp_link_train_clock_recovery_delay(const u8 dpcd[DP_RECEIVER_CAP_SIZE]) void drm_dp_link_train_clock_recovery_delay(const struct drm_dp_aux *aux,
const u8 dpcd[DP_RECEIVER_CAP_SIZE])
{ {
unsigned long rd_interval = dpcd[DP_TRAINING_AUX_RD_INTERVAL] & unsigned long rd_interval = dpcd[DP_TRAINING_AUX_RD_INTERVAL] &
DP_TRAINING_AUX_RD_MASK; DP_TRAINING_AUX_RD_MASK;
if (rd_interval > 4) if (rd_interval > 4)
DRM_DEBUG_KMS("AUX interval %lu, out of range (max 4)\n", drm_dbg_kms(aux->drm_dev, "%s: AUX interval %lu, out of range (max 4)\n",
rd_interval); aux->name, rd_interval);
if (rd_interval == 0 || dpcd[DP_DPCD_REV] >= DP_DPCD_REV_14) if (rd_interval == 0 || dpcd[DP_DPCD_REV] >= DP_DPCD_REV_14)
rd_interval = 100; rd_interval = 100;
@@ -150,11 +151,12 @@ void drm_dp_link_train_clock_recovery_delay(const u8 dpcd[DP_RECEIVER_CAP_SIZE])
} }
EXPORT_SYMBOL(drm_dp_link_train_clock_recovery_delay); EXPORT_SYMBOL(drm_dp_link_train_clock_recovery_delay);
static void __drm_dp_link_train_channel_eq_delay(unsigned long rd_interval) static void __drm_dp_link_train_channel_eq_delay(const struct drm_dp_aux *aux,
unsigned long rd_interval)
{ {
if (rd_interval > 4) if (rd_interval > 4)
DRM_DEBUG_KMS("AUX interval %lu, out of range (max 4)\n", drm_dbg_kms(aux->drm_dev, "%s: AUX interval %lu, out of range (max 4)\n",
rd_interval); aux->name, rd_interval);
if (rd_interval == 0) if (rd_interval == 0)
rd_interval = 400; rd_interval = 400;
@@ -164,9 +166,11 @@ static void __drm_dp_link_train_channel_eq_delay(unsigned long rd_interval)
usleep_range(rd_interval, rd_interval * 2); usleep_range(rd_interval, rd_interval * 2);
} }
void drm_dp_link_train_channel_eq_delay(const u8 dpcd[DP_RECEIVER_CAP_SIZE]) void drm_dp_link_train_channel_eq_delay(const struct drm_dp_aux *aux,
const u8 dpcd[DP_RECEIVER_CAP_SIZE])
{ {
__drm_dp_link_train_channel_eq_delay(dpcd[DP_TRAINING_AUX_RD_INTERVAL] & __drm_dp_link_train_channel_eq_delay(aux,
dpcd[DP_TRAINING_AUX_RD_INTERVAL] &
DP_TRAINING_AUX_RD_MASK); DP_TRAINING_AUX_RD_MASK);
} }
EXPORT_SYMBOL(drm_dp_link_train_channel_eq_delay); EXPORT_SYMBOL(drm_dp_link_train_channel_eq_delay);
@@ -182,13 +186,14 @@ static u8 dp_lttpr_phy_cap(const u8 phy_cap[DP_LTTPR_PHY_CAP_SIZE], int r)
return phy_cap[r - DP_TRAINING_AUX_RD_INTERVAL_PHY_REPEATER1]; return phy_cap[r - DP_TRAINING_AUX_RD_INTERVAL_PHY_REPEATER1];
} }
void drm_dp_lttpr_link_train_channel_eq_delay(const u8 phy_cap[DP_LTTPR_PHY_CAP_SIZE]) void drm_dp_lttpr_link_train_channel_eq_delay(const struct drm_dp_aux *aux,
const u8 phy_cap[DP_LTTPR_PHY_CAP_SIZE])
{ {
u8 interval = dp_lttpr_phy_cap(phy_cap, u8 interval = dp_lttpr_phy_cap(phy_cap,
DP_TRAINING_AUX_RD_INTERVAL_PHY_REPEATER1) & DP_TRAINING_AUX_RD_INTERVAL_PHY_REPEATER1) &
DP_TRAINING_AUX_RD_MASK; DP_TRAINING_AUX_RD_MASK;
__drm_dp_link_train_channel_eq_delay(interval); __drm_dp_link_train_channel_eq_delay(aux, interval);
} }
EXPORT_SYMBOL(drm_dp_lttpr_link_train_channel_eq_delay); EXPORT_SYMBOL(drm_dp_lttpr_link_train_channel_eq_delay);
@@ -215,10 +220,10 @@ drm_dp_dump_access(const struct drm_dp_aux *aux,
const char *arrow = request == DP_AUX_NATIVE_READ ? "->" : "<-"; const char *arrow = request == DP_AUX_NATIVE_READ ? "->" : "<-";
if (ret > 0) if (ret > 0)
DRM_DEBUG_DP("%s: 0x%05x AUX %s (ret=%3d) %*ph\n", drm_dbg_dp(aux->drm_dev, "%s: 0x%05x AUX %s (ret=%3d) %*ph\n",
aux->name, offset, arrow, ret, min(ret, 20), buffer); aux->name, offset, arrow, ret, min(ret, 20), buffer);
else else
DRM_DEBUG_DP("%s: 0x%05x AUX %s (ret=%3d)\n", drm_dbg_dp(aux->drm_dev, "%s: 0x%05x AUX %s (ret=%3d)\n",
aux->name, offset, arrow, ret); aux->name, offset, arrow, ret);
} }
@@ -282,7 +287,7 @@ static int drm_dp_dpcd_access(struct drm_dp_aux *aux, u8 request,
err = ret; err = ret;
} }
DRM_DEBUG_KMS("%s: Too many retries, giving up. First error: %d\n", drm_dbg_kms(aux->drm_dev, "%s: Too many retries, giving up. First error: %d\n",
aux->name, err); aux->name, err);
ret = err; ret = err;
@@ -519,28 +524,28 @@ bool drm_dp_send_real_edid_checksum(struct drm_dp_aux *aux,
if (drm_dp_dpcd_read(aux, DP_DEVICE_SERVICE_IRQ_VECTOR, if (drm_dp_dpcd_read(aux, DP_DEVICE_SERVICE_IRQ_VECTOR,
&auto_test_req, 1) < 1) { &auto_test_req, 1) < 1) {
DRM_ERROR("%s: DPCD failed read at register 0x%x\n", drm_err(aux->drm_dev, "%s: DPCD failed read at register 0x%x\n",
aux->name, DP_DEVICE_SERVICE_IRQ_VECTOR); aux->name, DP_DEVICE_SERVICE_IRQ_VECTOR);
return false; return false;
} }
auto_test_req &= DP_AUTOMATED_TEST_REQUEST; auto_test_req &= DP_AUTOMATED_TEST_REQUEST;
if (drm_dp_dpcd_read(aux, DP_TEST_REQUEST, &link_edid_read, 1) < 1) { if (drm_dp_dpcd_read(aux, DP_TEST_REQUEST, &link_edid_read, 1) < 1) {
DRM_ERROR("%s: DPCD failed read at register 0x%x\n", drm_err(aux->drm_dev, "%s: DPCD failed read at register 0x%x\n",
aux->name, DP_TEST_REQUEST); aux->name, DP_TEST_REQUEST);
return false; return false;
} }
link_edid_read &= DP_TEST_LINK_EDID_READ; link_edid_read &= DP_TEST_LINK_EDID_READ;
if (!auto_test_req || !link_edid_read) { if (!auto_test_req || !link_edid_read) {
DRM_DEBUG_KMS("%s: Source DUT does not support TEST_EDID_READ\n", drm_dbg_kms(aux->drm_dev, "%s: Source DUT does not support TEST_EDID_READ\n",
aux->name); aux->name);
return false; return false;
} }
if (drm_dp_dpcd_write(aux, DP_DEVICE_SERVICE_IRQ_VECTOR, if (drm_dp_dpcd_write(aux, DP_DEVICE_SERVICE_IRQ_VECTOR,
&auto_test_req, 1) < 1) { &auto_test_req, 1) < 1) {
DRM_ERROR("%s: DPCD failed write at register 0x%x\n", drm_err(aux->drm_dev, "%s: DPCD failed write at register 0x%x\n",
aux->name, DP_DEVICE_SERVICE_IRQ_VECTOR); aux->name, DP_DEVICE_SERVICE_IRQ_VECTOR);
return false; return false;
} }
@@ -548,14 +553,14 @@ bool drm_dp_send_real_edid_checksum(struct drm_dp_aux *aux,
/* send back checksum for the last edid extension block data */ /* send back checksum for the last edid extension block data */
if (drm_dp_dpcd_write(aux, DP_TEST_EDID_CHECKSUM, if (drm_dp_dpcd_write(aux, DP_TEST_EDID_CHECKSUM,
&real_edid_checksum, 1) < 1) { &real_edid_checksum, 1) < 1) {
DRM_ERROR("%s: DPCD failed write at register 0x%x\n", drm_err(aux->drm_dev, "%s: DPCD failed write at register 0x%x\n",
aux->name, DP_TEST_EDID_CHECKSUM); aux->name, DP_TEST_EDID_CHECKSUM);
return false; return false;
} }
test_resp |= DP_TEST_EDID_CHECKSUM_WRITE; test_resp |= DP_TEST_EDID_CHECKSUM_WRITE;
if (drm_dp_dpcd_write(aux, DP_TEST_RESPONSE, &test_resp, 1) < 1) { if (drm_dp_dpcd_write(aux, DP_TEST_RESPONSE, &test_resp, 1) < 1) {
DRM_ERROR("%s: DPCD failed write at register 0x%x\n", drm_err(aux->drm_dev, "%s: DPCD failed write at register 0x%x\n",
aux->name, DP_TEST_RESPONSE); aux->name, DP_TEST_RESPONSE);
return false; return false;
} }
@@ -599,17 +604,16 @@ static int drm_dp_read_extended_dpcd_caps(struct drm_dp_aux *aux,
 		return -EIO;

 	if (dpcd[DP_DPCD_REV] > dpcd_ext[DP_DPCD_REV]) {
-		DRM_DEBUG_KMS("%s: Extended DPCD rev less than base DPCD rev (%d > %d)\n",
-			      aux->name, dpcd[DP_DPCD_REV],
-			      dpcd_ext[DP_DPCD_REV]);
+		drm_dbg_kms(aux->drm_dev,
+			    "%s: Extended DPCD rev less than base DPCD rev (%d > %d)\n",
+			    aux->name, dpcd[DP_DPCD_REV], dpcd_ext[DP_DPCD_REV]);
 		return 0;
 	}

 	if (!memcmp(dpcd, dpcd_ext, sizeof(dpcd_ext)))
 		return 0;

-	DRM_DEBUG_KMS("%s: Base DPCD: %*ph\n",
-		      aux->name, DP_RECEIVER_CAP_SIZE, dpcd);
+	drm_dbg_kms(aux->drm_dev, "%s: Base DPCD: %*ph\n", aux->name, DP_RECEIVER_CAP_SIZE, dpcd);

 	memcpy(dpcd, dpcd_ext, sizeof(dpcd_ext));
@@ -644,8 +648,7 @@ int drm_dp_read_dpcd_caps(struct drm_dp_aux *aux,
 	if (ret < 0)
 		return ret;

-	DRM_DEBUG_KMS("%s: DPCD: %*ph\n",
-		      aux->name, DP_RECEIVER_CAP_SIZE, dpcd);
+	drm_dbg_kms(aux->drm_dev, "%s: DPCD: %*ph\n", aux->name, DP_RECEIVER_CAP_SIZE, dpcd);

 	return ret;
 }
@@ -674,12 +677,17 @@ int drm_dp_read_downstream_info(struct drm_dp_aux *aux,
 	memset(downstream_ports, 0, DP_MAX_DOWNSTREAM_PORTS);

 	/* No downstream info to read */
-	if (!drm_dp_is_branch(dpcd) ||
-	    dpcd[DP_DPCD_REV] < DP_DPCD_REV_10 ||
-	    !(dpcd[DP_DOWNSTREAMPORT_PRESENT] & DP_DWN_STRM_PORT_PRESENT))
+	if (!drm_dp_is_branch(dpcd) || dpcd[DP_DPCD_REV] == DP_DPCD_REV_10)
 		return 0;

+	/* Some branches advertise having 0 downstream ports, despite also advertising they have a
+	 * downstream port present. The DP spec isn't clear on if this is allowed or not, but since
+	 * some branches do it we need to handle it regardless.
+	 */
 	len = drm_dp_downstream_port_count(dpcd);
+	if (!len)
+		return 0;

 	if (dpcd[DP_DOWNSTREAMPORT_PRESENT] & DP_DETAILED_CAP_INFO_AVAILABLE)
 		len *= 4;
@@ -689,8 +697,7 @@ int drm_dp_read_downstream_info(struct drm_dp_aux *aux,
 	if (ret != len)
 		return -EIO;

-	DRM_DEBUG_KMS("%s: DPCD DFP: %*ph\n",
-		      aux->name, len, downstream_ports);
+	drm_dbg_kms(aux->drm_dev, "%s: DPCD DFP: %*ph\n", aux->name, len, downstream_ports);

 	return 0;
 }
@@ -1407,10 +1414,10 @@ static int drm_dp_i2c_do_msg(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
 		 * Avoid spamming the kernel log with timeout errors.
 		 */
 		if (ret == -ETIMEDOUT)
-			DRM_DEBUG_KMS_RATELIMITED("%s: transaction timed out\n",
-						  aux->name);
+			drm_dbg_kms_ratelimited(aux->drm_dev, "%s: transaction timed out\n",
+						aux->name);
 		else
-			DRM_DEBUG_KMS("%s: transaction failed: %d\n",
-				      aux->name, ret);
+			drm_dbg_kms(aux->drm_dev, "%s: transaction failed: %d\n",
+				    aux->name, ret);
 		return ret;
 	}
@@ -1425,12 +1432,12 @@ static int drm_dp_i2c_do_msg(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
 			break;

 		case DP_AUX_NATIVE_REPLY_NACK:
-			DRM_DEBUG_KMS("%s: native nack (result=%d, size=%zu)\n",
-				      aux->name, ret, msg->size);
+			drm_dbg_kms(aux->drm_dev, "%s: native nack (result=%d, size=%zu)\n",
+				    aux->name, ret, msg->size);
 			return -EREMOTEIO;

 		case DP_AUX_NATIVE_REPLY_DEFER:
-			DRM_DEBUG_KMS("%s: native defer\n", aux->name);
+			drm_dbg_kms(aux->drm_dev, "%s: native defer\n", aux->name);
 			/*
 			 * We could check for I2C bit rate capabilities and if
 			 * available adjust this interval. We could also be
@@ -1444,7 +1451,7 @@ static int drm_dp_i2c_do_msg(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
 			continue;

 		default:
-			DRM_ERROR("%s: invalid native reply %#04x\n",
-				  aux->name, msg->reply);
+			drm_err(aux->drm_dev, "%s: invalid native reply %#04x\n",
+				aux->name, msg->reply);
 			return -EREMOTEIO;
 		}
@@ -1460,13 +1467,13 @@ static int drm_dp_i2c_do_msg(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
 			return ret;

 		case DP_AUX_I2C_REPLY_NACK:
-			DRM_DEBUG_KMS("%s: I2C nack (result=%d, size=%zu)\n",
-				      aux->name, ret, msg->size);
+			drm_dbg_kms(aux->drm_dev, "%s: I2C nack (result=%d, size=%zu)\n",
+				    aux->name, ret, msg->size);
 			aux->i2c_nack_count++;
 			return -EREMOTEIO;

 		case DP_AUX_I2C_REPLY_DEFER:
-			DRM_DEBUG_KMS("%s: I2C defer\n", aux->name);
+			drm_dbg_kms(aux->drm_dev, "%s: I2C defer\n", aux->name);
 			/* DP Compliance Test 4.2.2.5 Requirement:
 			 * Must have at least 7 retries for I2C defers on the
 			 * transaction to pass this test
@@ -1480,13 +1487,13 @@ static int drm_dp_i2c_do_msg(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
 			continue;

 		default:
-			DRM_ERROR("%s: invalid I2C reply %#04x\n",
-				  aux->name, msg->reply);
+			drm_err(aux->drm_dev, "%s: invalid I2C reply %#04x\n",
+				aux->name, msg->reply);
 			return -EREMOTEIO;
 		}
 	}

-	DRM_DEBUG_KMS("%s: Too many retries, giving up\n", aux->name);
+	drm_dbg_kms(aux->drm_dev, "%s: Too many retries, giving up\n", aux->name);
 	return -EREMOTEIO;
 }
@@ -1515,7 +1522,8 @@ static int drm_dp_i2c_drain_msg(struct drm_dp_aux *aux, struct drm_dp_aux_msg *o
 		return err == 0 ? -EPROTO : err;

 	if (err < msg.size && err < ret) {
-		DRM_DEBUG_KMS("%s: Partial I2C reply: requested %zu bytes got %d bytes\n",
-			      aux->name, msg.size, err);
+		drm_dbg_kms(aux->drm_dev,
+			    "%s: Partial I2C reply: requested %zu bytes got %d bytes\n",
+			    aux->name, msg.size, err);
 		ret = err;
 	}
@@ -1695,12 +1703,11 @@ static void drm_dp_aux_crc_work(struct work_struct *work)
 		}

 		if (ret == -EAGAIN) {
-			DRM_DEBUG_KMS("%s: Get CRC failed after retrying: %d\n",
-				      aux->name, ret);
+			drm_dbg_kms(aux->drm_dev, "%s: Get CRC failed after retrying: %d\n",
+				    aux->name, ret);
 			continue;
 		} else if (ret) {
-			DRM_DEBUG_KMS("%s: Failed to get a CRC: %d\n",
-				      aux->name, ret);
+			drm_dbg_kms(aux->drm_dev, "%s: Failed to get a CRC: %d\n", aux->name, ret);
 			continue;
 		}
@@ -1728,10 +1735,18 @@ EXPORT_SYMBOL(drm_dp_remote_aux_init);
  * drm_dp_aux_init() - minimally initialise an aux channel
  * @aux: DisplayPort AUX channel
  *
- * If you need to use the drm_dp_aux's i2c adapter prior to registering it
- * with the outside world, call drm_dp_aux_init() first. You must still
- * call drm_dp_aux_register() once the connector has been registered to
- * allow userspace access to the auxiliary DP channel.
+ * If you need to use the drm_dp_aux's i2c adapter prior to registering it with
+ * the outside world, call drm_dp_aux_init() first. For drivers which are
+ * grandparents to their AUX adapters (e.g. the AUX adapter is parented by a
+ * &drm_connector), you must still call drm_dp_aux_register() once the connector
+ * has been registered to allow userspace access to the auxiliary DP channel.
+ * Likewise, for such drivers you should also assign &drm_dp_aux.drm_dev as
+ * early as possible so that the &drm_device that corresponds to the AUX adapter
+ * may be mentioned in debugging output from the DRM DP helpers.
+ *
+ * For devices which use a separate platform device for their AUX adapters, this
+ * may be called as early as required by the driver.
+ *
  */
 void drm_dp_aux_init(struct drm_dp_aux *aux)
 {
@@ -1751,15 +1766,26 @@ EXPORT_SYMBOL(drm_dp_aux_init);
  * drm_dp_aux_register() - initialise and register aux channel
  * @aux: DisplayPort AUX channel
  *
- * Automatically calls drm_dp_aux_init() if this hasn't been done yet.
- * This should only be called when the underlying &struct drm_connector is
- * initialiazed already. Therefore the best place to call this is from
- * &drm_connector_funcs.late_register. Not that drivers which don't follow this
- * will Oops when CONFIG_DRM_DP_AUX_CHARDEV is enabled.
+ * Automatically calls drm_dp_aux_init() if this hasn't been done yet. This
+ * should only be called once the parent of @aux, &drm_dp_aux.dev, is
+ * initialized. For devices which are grandparents of their AUX channels,
+ * &drm_dp_aux.dev will typically be the &drm_connector &device which
+ * corresponds to @aux. For these devices, it's advised to call
+ * drm_dp_aux_register() in &drm_connector_funcs.late_register, and likewise to
+ * call drm_dp_aux_unregister() in &drm_connector_funcs.early_unregister.
+ * Functions which don't follow this will likely Oops when
+ * %CONFIG_DRM_DP_AUX_CHARDEV is enabled.
  *
- * Drivers which need to use the aux channel before that point (e.g. at driver
- * load time, before drm_dev_register() has been called) need to call
- * drm_dp_aux_init().
+ * For devices where the AUX channel is a device that exists independently of
+ * the &drm_device that uses it, such as SoCs and bridge devices, it is
+ * recommended to call drm_dp_aux_register() after a &drm_device has been
+ * assigned to &drm_dp_aux.drm_dev, and likewise to call
+ * drm_dp_aux_unregister() once the &drm_device should no longer be associated
+ * with the AUX channel (e.g. on bridge detach).
+ *
+ * Drivers which need to use the aux channel before either of the two points
+ * mentioned above need to call drm_dp_aux_init() in order to use the AUX
+ * channel before registration.
  *
  * Returns 0 on success or a negative error code on failure.
  */
@@ -1767,6 +1793,8 @@ int drm_dp_aux_register(struct drm_dp_aux *aux)
 {
 	int ret;

+	WARN_ON_ONCE(!aux->drm_dev);
+
 	if (!aux->ddc.algo)
 		drm_dp_aux_init(aux);
@@ -1983,13 +2011,12 @@ int drm_dp_read_desc(struct drm_dp_aux *aux, struct drm_dp_desc *desc,
 	dev_id_len = strnlen(ident->device_id, sizeof(ident->device_id));

-	DRM_DEBUG_KMS("%s: DP %s: OUI %*phD dev-ID %*pE HW-rev %d.%d SW-rev %d.%d quirks 0x%04x\n",
-		      aux->name, is_branch ? "branch" : "sink",
-		      (int)sizeof(ident->oui), ident->oui,
-		      dev_id_len, ident->device_id,
-		      ident->hw_rev >> 4, ident->hw_rev & 0xf,
-		      ident->sw_major_rev, ident->sw_minor_rev,
-		      desc->quirks);
+	drm_dbg_kms(aux->drm_dev,
+		    "%s: DP %s: OUI %*phD dev-ID %*pE HW-rev %d.%d SW-rev %d.%d quirks 0x%04x\n",
+		    aux->name, is_branch ? "branch" : "sink",
+		    (int)sizeof(ident->oui), ident->oui, dev_id_len,
+		    ident->device_id, ident->hw_rev >> 4, ident->hw_rev & 0xf,
+		    ident->sw_major_rev, ident->sw_minor_rev, desc->quirks);

 	return 0;
 }
@@ -2755,7 +2782,8 @@ int drm_dp_pcon_frl_enable(struct drm_dp_aux *aux)
 	if (ret < 0)
 		return ret;

 	if (!(buf & DP_PCON_ENABLE_SOURCE_CTL_MODE)) {
-		DRM_DEBUG_KMS("PCON in Autonomous mode, can't enable FRL\n");
+		drm_dbg_kms(aux->drm_dev, "%s: PCON in Autonomous mode, can't enable FRL\n",
+			    aux->name);
 		return -EINVAL;
 	}
 	buf |= DP_PCON_ENABLE_HDMI_LINK;
@@ -2850,7 +2878,8 @@ void drm_dp_pcon_hdmi_frl_link_error_count(struct drm_dp_aux *aux,
 			num_error = 0;
 		}

-		DRM_ERROR("More than %d errors since the last read for lane %d", num_error, i);
+		drm_err(aux->drm_dev, "%s: More than %d errors since the last read for lane %d",
+			aux->name, num_error, i);
 	}
 }
 EXPORT_SYMBOL(drm_dp_pcon_hdmi_frl_link_error_count);
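The registration lifecycle described in the drm_dp_aux_init()/drm_dp_aux_register() kernel-doc above can be sketched roughly as follows. This is an illustrative, non-compilable sketch, not part of the patch: `my_connector`, its hooks, and the init helper are hypothetical names; only the drm_dp_aux_* calls and the `drm_dev` assignment come from the documented API.

```c
/* Hypothetical connector-parented driver; not part of this patch. */
struct my_connector {
	struct drm_connector base;
	struct drm_dp_aux aux;
};

static int my_connector_late_register(struct drm_connector *connector)
{
	struct my_connector *conn =
		container_of(connector, struct my_connector, base);

	/* aux.dev and aux.drm_dev must already be assigned at this point */
	return drm_dp_aux_register(&conn->aux);
}

static void my_connector_early_unregister(struct drm_connector *connector)
{
	struct my_connector *conn =
		container_of(connector, struct my_connector, base);

	drm_dp_aux_unregister(&conn->aux);
}

static void my_connector_init_aux(struct drm_device *drm,
				  struct my_connector *conn)
{
	conn->aux.drm_dev = drm;	/* assign as early as possible */
	drm_dp_aux_init(&conn->aux);	/* i2c adapter usable pre-registration */
}
```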


@@ -286,7 +286,8 @@ static void drm_dp_encode_sideband_msg_hdr(struct drm_dp_sideband_msg_hdr *hdr,
 	*len = idx;
 }

-static bool drm_dp_decode_sideband_msg_hdr(struct drm_dp_sideband_msg_hdr *hdr,
+static bool drm_dp_decode_sideband_msg_hdr(const struct drm_dp_mst_topology_mgr *mgr,
+					   struct drm_dp_sideband_msg_hdr *hdr,
 					   u8 *buf, int buflen, u8 *hdrlen)
 {
 	u8 crc4;
@@ -303,7 +304,7 @@ static bool drm_dp_decode_sideband_msg_hdr(struct drm_dp_sideband_msg_hdr *hdr,
 	crc4 = drm_dp_msg_header_crc4(buf, (len * 2) - 1);
 	if ((crc4 & 0xf) != (buf[len - 1] & 0xf)) {
-		DRM_DEBUG_KMS("crc4 mismatch 0x%x 0x%x\n", crc4, buf[len - 1]);
+		drm_dbg_kms(mgr->dev, "crc4 mismatch 0x%x 0x%x\n", crc4, buf[len - 1]);
 		return false;
 	}
@@ -789,7 +790,8 @@ static bool drm_dp_sideband_append_payload(struct drm_dp_sideband_msg_rx *msg,
 	return true;
 }

-static bool drm_dp_sideband_parse_link_address(struct drm_dp_sideband_msg_rx *raw,
+static bool drm_dp_sideband_parse_link_address(const struct drm_dp_mst_topology_mgr *mgr,
+					       struct drm_dp_sideband_msg_rx *raw,
 					       struct drm_dp_sideband_msg_reply_body *repmsg)
 {
 	int idx = 1;
@@ -1014,7 +1016,8 @@ drm_dp_sideband_parse_query_stream_enc_status(
 	return true;
 }

-static bool drm_dp_sideband_parse_reply(struct drm_dp_sideband_msg_rx *raw,
+static bool drm_dp_sideband_parse_reply(const struct drm_dp_mst_topology_mgr *mgr,
+					struct drm_dp_sideband_msg_rx *raw,
 					struct drm_dp_sideband_msg_reply_body *msg)
 {
 	memset(msg, 0, sizeof(*msg));
@@ -1030,7 +1033,7 @@ static bool drm_dp_sideband_parse_reply(struct drm_dp_sideband_msg_rx *raw,
 	switch (msg->req_type) {
 	case DP_LINK_ADDRESS:
-		return drm_dp_sideband_parse_link_address(raw, msg);
+		return drm_dp_sideband_parse_link_address(mgr, raw, msg);
 	case DP_QUERY_PAYLOAD:
 		return drm_dp_sideband_parse_query_payload_ack(raw, msg);
 	case DP_REMOTE_DPCD_READ:
@@ -1053,13 +1056,15 @@ static bool drm_dp_sideband_parse_reply(struct drm_dp_sideband_msg_rx *raw,
 	case DP_QUERY_STREAM_ENC_STATUS:
 		return drm_dp_sideband_parse_query_stream_enc_status(raw, msg);
 	default:
-		DRM_ERROR("Got unknown reply 0x%02x (%s)\n", msg->req_type,
-			  drm_dp_mst_req_type_str(msg->req_type));
+		drm_err(mgr->dev, "Got unknown reply 0x%02x (%s)\n",
+			msg->req_type, drm_dp_mst_req_type_str(msg->req_type));
 		return false;
 	}
 }

-static bool drm_dp_sideband_parse_connection_status_notify(struct drm_dp_sideband_msg_rx *raw,
+static bool
+drm_dp_sideband_parse_connection_status_notify(const struct drm_dp_mst_topology_mgr *mgr,
+					       struct drm_dp_sideband_msg_rx *raw,
 					       struct drm_dp_sideband_msg_req_body *msg)
 {
 	int idx = 1;
@@ -1082,11 +1087,13 @@ static bool drm_dp_sideband_parse_connection_status_notify(struct drm_dp_sideban
 	idx++;
 	return true;
 fail_len:
-	DRM_DEBUG_KMS("connection status reply parse length fail %d %d\n", idx, raw->curlen);
+	drm_dbg_kms(mgr->dev, "connection status reply parse length fail %d %d\n",
+		    idx, raw->curlen);
 	return false;
 }

-static bool drm_dp_sideband_parse_resource_status_notify(struct drm_dp_sideband_msg_rx *raw,
+static bool drm_dp_sideband_parse_resource_status_notify(const struct drm_dp_mst_topology_mgr *mgr,
+							 struct drm_dp_sideband_msg_rx *raw,
 							 struct drm_dp_sideband_msg_req_body *msg)
 {
 	int idx = 1;
@@ -1105,11 +1112,12 @@ static bool drm_dp_sideband_parse_resource_status_notify(struct drm_dp_sideband_
 	idx++;
 	return true;
 fail_len:
-	DRM_DEBUG_KMS("resource status reply parse length fail %d %d\n", idx, raw->curlen);
+	drm_dbg_kms(mgr->dev, "resource status reply parse length fail %d %d\n", idx, raw->curlen);
 	return false;
 }

-static bool drm_dp_sideband_parse_req(struct drm_dp_sideband_msg_rx *raw,
+static bool drm_dp_sideband_parse_req(const struct drm_dp_mst_topology_mgr *mgr,
+				      struct drm_dp_sideband_msg_rx *raw,
 				      struct drm_dp_sideband_msg_req_body *msg)
 {
 	memset(msg, 0, sizeof(*msg));
@@ -1117,12 +1125,12 @@ static bool drm_dp_sideband_parse_req(struct drm_dp_sideband_msg_rx *raw,
 	switch (msg->req_type) {
 	case DP_CONNECTION_STATUS_NOTIFY:
-		return drm_dp_sideband_parse_connection_status_notify(raw, msg);
+		return drm_dp_sideband_parse_connection_status_notify(mgr, raw, msg);
 	case DP_RESOURCE_STATUS_NOTIFY:
-		return drm_dp_sideband_parse_resource_status_notify(raw, msg);
+		return drm_dp_sideband_parse_resource_status_notify(mgr, raw, msg);
 	default:
-		DRM_ERROR("Got unknown request 0x%02x (%s)\n", msg->req_type,
-			  drm_dp_mst_req_type_str(msg->req_type));
+		drm_err(mgr->dev, "Got unknown request 0x%02x (%s)\n",
+			msg->req_type, drm_dp_mst_req_type_str(msg->req_type));
 		return false;
 	}
 }
@@ -1232,14 +1240,14 @@ static int drm_dp_mst_assign_payload_id(struct drm_dp_mst_topology_mgr *mgr,
 	ret = find_first_zero_bit(&mgr->payload_mask, mgr->max_payloads + 1);
 	if (ret > mgr->max_payloads) {
 		ret = -EINVAL;
-		DRM_DEBUG_KMS("out of payload ids %d\n", ret);
+		drm_dbg_kms(mgr->dev, "out of payload ids %d\n", ret);
 		goto out_unlock;
 	}

 	vcpi_ret = find_first_zero_bit(&mgr->vcpi_mask, mgr->max_payloads + 1);
 	if (vcpi_ret > mgr->max_payloads) {
 		ret = -EINVAL;
-		DRM_DEBUG_KMS("out of vcpi ids %d\n", ret);
+		drm_dbg_kms(mgr->dev, "out of vcpi ids %d\n", ret);
 		goto out_unlock;
 	}
@@ -1261,7 +1269,7 @@ static void drm_dp_mst_put_payload_id(struct drm_dp_mst_topology_mgr *mgr,
 		return;

 	mutex_lock(&mgr->payload_lock);
-	DRM_DEBUG_KMS("putting payload %d\n", vcpi);
+	drm_dbg_kms(mgr->dev, "putting payload %d\n", vcpi);
 	clear_bit(vcpi - 1, &mgr->vcpi_mask);

 	for (i = 0; i < mgr->max_payloads; i++) {
@@ -1331,7 +1339,8 @@ static int drm_dp_mst_wait_tx_reply(struct drm_dp_mst_branch *mstb,
 				goto out;
 			}
 		} else {
-			DRM_DEBUG_KMS("timedout msg send %p %d %d\n", txmsg, txmsg->state, txmsg->seqno);
+			drm_dbg_kms(mgr->dev, "timedout msg send %p %d %d\n",
+				    txmsg, txmsg->state, txmsg->seqno);

 			/* dump some state */
 			ret = -EIO;
@@ -1485,7 +1494,7 @@ static void
 drm_dp_mst_get_mstb_malloc(struct drm_dp_mst_branch *mstb)
 {
 	kref_get(&mstb->malloc_kref);
-	DRM_DEBUG("mstb %p (%d)\n", mstb, kref_read(&mstb->malloc_kref));
+	drm_dbg(mstb->mgr->dev, "mstb %p (%d)\n", mstb, kref_read(&mstb->malloc_kref));
 }

 /**
@@ -1502,7 +1511,7 @@ drm_dp_mst_get_mstb_malloc(struct drm_dp_mst_branch *mstb)
 static void
 drm_dp_mst_put_mstb_malloc(struct drm_dp_mst_branch *mstb)
 {
-	DRM_DEBUG("mstb %p (%d)\n", mstb, kref_read(&mstb->malloc_kref) - 1);
+	drm_dbg(mstb->mgr->dev, "mstb %p (%d)\n", mstb, kref_read(&mstb->malloc_kref) - 1);
 	kref_put(&mstb->malloc_kref, drm_dp_free_mst_branch_device);
 }
@@ -1536,7 +1545,7 @@ void
 drm_dp_mst_get_port_malloc(struct drm_dp_mst_port *port)
 {
 	kref_get(&port->malloc_kref);
-	DRM_DEBUG("port %p (%d)\n", port, kref_read(&port->malloc_kref));
+	drm_dbg(port->mgr->dev, "port %p (%d)\n", port, kref_read(&port->malloc_kref));
 }
 EXPORT_SYMBOL(drm_dp_mst_get_port_malloc);
@@ -1553,7 +1562,7 @@ EXPORT_SYMBOL(drm_dp_mst_get_port_malloc);
 void
 drm_dp_mst_put_port_malloc(struct drm_dp_mst_port *port)
 {
-	DRM_DEBUG("port %p (%d)\n", port, kref_read(&port->malloc_kref) - 1);
+	drm_dbg(port->mgr->dev, "port %p (%d)\n", port, kref_read(&port->malloc_kref) - 1);
 	kref_put(&port->malloc_kref, drm_dp_free_mst_port);
 }
 EXPORT_SYMBOL(drm_dp_mst_put_port_malloc);
@@ -1778,8 +1787,7 @@ drm_dp_mst_topology_try_get_mstb(struct drm_dp_mst_branch *mstb)
 	topology_ref_history_lock(mstb->mgr);
 	ret = kref_get_unless_zero(&mstb->topology_kref);
 	if (ret) {
-		DRM_DEBUG("mstb %p (%d)\n",
-			  mstb, kref_read(&mstb->topology_kref));
+		drm_dbg(mstb->mgr->dev, "mstb %p (%d)\n", mstb, kref_read(&mstb->topology_kref));
 		save_mstb_topology_ref(mstb, DRM_DP_MST_TOPOLOGY_REF_GET);
 	}
@@ -1809,7 +1817,7 @@ static void drm_dp_mst_topology_get_mstb(struct drm_dp_mst_branch *mstb)
 	save_mstb_topology_ref(mstb, DRM_DP_MST_TOPOLOGY_REF_GET);
 	WARN_ON(kref_read(&mstb->topology_kref) == 0);
 	kref_get(&mstb->topology_kref);
-	DRM_DEBUG("mstb %p (%d)\n", mstb, kref_read(&mstb->topology_kref));
+	drm_dbg(mstb->mgr->dev, "mstb %p (%d)\n", mstb, kref_read(&mstb->topology_kref));

 	topology_ref_history_unlock(mstb->mgr);
 }
@@ -1831,8 +1839,7 @@ drm_dp_mst_topology_put_mstb(struct drm_dp_mst_branch *mstb)
 {
 	topology_ref_history_lock(mstb->mgr);

-	DRM_DEBUG("mstb %p (%d)\n",
-		  mstb, kref_read(&mstb->topology_kref) - 1);
+	drm_dbg(mstb->mgr->dev, "mstb %p (%d)\n", mstb, kref_read(&mstb->topology_kref) - 1);
 	save_mstb_topology_ref(mstb, DRM_DP_MST_TOPOLOGY_REF_PUT);

 	topology_ref_history_unlock(mstb->mgr);
@@ -1895,8 +1902,7 @@ drm_dp_mst_topology_try_get_port(struct drm_dp_mst_port *port)
 	topology_ref_history_lock(port->mgr);
 	ret = kref_get_unless_zero(&port->topology_kref);
 	if (ret) {
-		DRM_DEBUG("port %p (%d)\n",
-			  port, kref_read(&port->topology_kref));
+		drm_dbg(port->mgr->dev, "port %p (%d)\n", port, kref_read(&port->topology_kref));
 		save_port_topology_ref(port, DRM_DP_MST_TOPOLOGY_REF_GET);
 	}
@@ -1923,7 +1929,7 @@ static void drm_dp_mst_topology_get_port(struct drm_dp_mst_port *port)
 	WARN_ON(kref_read(&port->topology_kref) == 0);
 	kref_get(&port->topology_kref);
-	DRM_DEBUG("port %p (%d)\n", port, kref_read(&port->topology_kref));
+	drm_dbg(port->mgr->dev, "port %p (%d)\n", port, kref_read(&port->topology_kref));
 	save_port_topology_ref(port, DRM_DP_MST_TOPOLOGY_REF_GET);

 	topology_ref_history_unlock(port->mgr);
@@ -1944,8 +1950,7 @@ static void drm_dp_mst_topology_put_port(struct drm_dp_mst_port *port)
 {
 	topology_ref_history_lock(port->mgr);

-	DRM_DEBUG("port %p (%d)\n",
-		  port, kref_read(&port->topology_kref) - 1);
+	drm_dbg(port->mgr->dev, "port %p (%d)\n", port, kref_read(&port->topology_kref) - 1);
 	save_port_topology_ref(port, DRM_DP_MST_TOPOLOGY_REF_PUT);

 	topology_ref_history_unlock(port->mgr);
@@ -2130,8 +2135,7 @@ drm_dp_port_set_pdt(struct drm_dp_mst_port *port, u8 new_pdt,
 		mstb = drm_dp_add_mst_branch_device(lct, rad);
 		if (!mstb) {
 			ret = -ENOMEM;
-			DRM_ERROR("Failed to create MSTB for port %p",
-				  port);
+			drm_err(mgr->dev, "Failed to create MSTB for port %p", port);
 			goto out;
 		}
@@ -2261,7 +2265,7 @@ static void build_mst_prop_path(const struct drm_dp_mst_branch *mstb,
 int drm_dp_mst_connector_late_register(struct drm_connector *connector,
 				       struct drm_dp_mst_port *port)
 {
-	DRM_DEBUG_KMS("registering %s remote bus for %s\n",
-		      port->aux.name, connector->kdev->kobj.name);
+	drm_dbg_kms(port->mgr->dev, "registering %s remote bus for %s\n",
+		    port->aux.name, connector->kdev->kobj.name);

 	port->aux.dev = connector->kdev;
@@ -2281,7 +2285,7 @@ EXPORT_SYMBOL(drm_dp_mst_connector_late_register);
 void drm_dp_mst_connector_early_unregister(struct drm_connector *connector,
 					   struct drm_dp_mst_port *port)
 {
-	DRM_DEBUG_KMS("unregistering %s remote bus for %s\n",
-		      port->aux.name, connector->kdev->kobj.name);
+	drm_dbg_kms(port->mgr->dev, "unregistering %s remote bus for %s\n",
+		    port->aux.name, connector->kdev->kobj.name);
 	drm_dp_aux_unregister_devnode(&port->aux);
 }
@@ -2312,7 +2316,7 @@ drm_dp_mst_port_add_connector(struct drm_dp_mst_branch *mstb,
 	return;

 error:
-	DRM_ERROR("Failed to create connector for port %p: %d\n", port, ret);
+	drm_err(mgr->dev, "Failed to create connector for port %p: %d\n", port, ret);
 }

 /*
@@ -2350,6 +2354,7 @@ drm_dp_mst_add_port(struct drm_device *dev,
 	port->aux.is_remote = true;

 	/* initialize the MST downstream port's AUX crc work queue */
+	port->aux.drm_dev = dev;
 	drm_dp_remote_aux_init(&port->aux);

 	/*
@@ -2451,8 +2456,7 @@ drm_dp_mst_handle_link_address_port(struct drm_dp_mst_branch *mstb,
 	if (ret == 1) {
 		send_link_addr = true;
 	} else if (ret < 0) {
-		DRM_ERROR("Failed to change PDT on port %p: %d\n",
-			  port, ret);
+		drm_err(dev, "Failed to change PDT on port %p: %d\n", port, ret);
 		goto fail;
 	}
@@ -2547,8 +2551,7 @@ drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb,
 	if (ret == 1) {
 		dowork = true;
 	} else if (ret < 0) {
-		DRM_ERROR("Failed to change PDT for port %p: %d\n",
-			  port, ret);
+		drm_err(mgr->dev, "Failed to change PDT for port %p: %d\n", port, ret);
 		dowork = false;
 	}
@@ -2607,7 +2610,9 @@ static struct drm_dp_mst_branch *drm_dp_get_mst_branch_device(struct drm_dp_mst_
 			if (port->port_num == port_num) {
 				mstb = port->mstb;
 				if (!mstb) {
-					DRM_ERROR("failed to lookup MSTB with lct %d, rad %02x\n", lct, rad[0]);
+					drm_err(mgr->dev,
+						"failed to lookup MSTB with lct %d, rad %02x\n",
+						lct, rad[0]);
 					goto out;
 				}
@@ -2743,7 +2748,7 @@ static void drm_dp_mst_link_probe_work(struct work_struct *work)
 	 * things work again.
 	 */
 	if (clear_payload_id_table) {
-		DRM_DEBUG_KMS("Clearing payload ID table\n");
+		drm_dbg_kms(dev, "Clearing payload ID table\n");
 		drm_dp_send_clear_payload_id_table(mgr, mstb);
 	}
@@ -2805,7 +2810,7 @@ retry:
 			retries++;
 			goto retry;
 		}
-		DRM_DEBUG_KMS("failed to dpcd write %d %d\n", tosend, ret);
+		drm_dbg_kms(mgr->dev, "failed to dpcd write %d %d\n", tosend, ret);

 		return -EIO;
 	}
@@ -2918,7 +2923,7 @@ static void process_single_down_tx_qlock(struct drm_dp_mst_topology_mgr *mgr)
 				 struct drm_dp_sideband_msg_tx, next);
 	ret = process_single_tx_qlock(mgr, txmsg, false);
 	if (ret < 0) {
-		DRM_DEBUG_KMS("failed to send msg in q %d\n", ret);
+		drm_dbg_kms(mgr->dev, "failed to send msg in q %d\n", ret);
 		list_del(&txmsg->next);
 		txmsg->state = DRM_DP_SIDEBAND_TX_TIMEOUT;
 		wake_up_all(&mgr->tx_waitq);
@@ -2943,14 +2948,16 @@ static void drm_dp_queue_down_tx(struct drm_dp_mst_topology_mgr *mgr,
 }

 static void
-drm_dp_dump_link_address(struct drm_dp_link_address_ack_reply *reply)
+drm_dp_dump_link_address(const struct drm_dp_mst_topology_mgr *mgr,
+			 struct drm_dp_link_address_ack_reply *reply)
 {
 	struct drm_dp_link_addr_reply_port *port_reply;
 	int i;

 	for (i = 0; i < reply->nports; i++) {
 		port_reply = &reply->ports[i];
-		DRM_DEBUG_KMS("port %d: input %d, pdt: %d, pn: %d, dpcd_rev: %02x, mcs: %d, ddps: %d, ldps %d, sdp %d/%d\n",
+		drm_dbg_kms(mgr->dev,
+			    "port %d: input %d, pdt: %d, pn: %d, dpcd_rev: %02x, mcs: %d, ddps: %d, ldps %d, sdp %d/%d\n",
 			    i,
 			    port_reply->input_port,
 			    port_reply->peer_device_type,
@@ -2986,26 +2993,25 @@ static int drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,
 	/* FIXME: Actually do some real error handling here */
 	ret = drm_dp_mst_wait_tx_reply(mstb, txmsg);
 	if (ret <= 0) {
-		DRM_ERROR("Sending link address failed with %d\n", ret);
+		drm_err(mgr->dev, "Sending link address failed with %d\n", ret);
 		goto out;
 	}
 	if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) {
-		DRM_ERROR("link address NAK received\n");
+		drm_err(mgr->dev, "link address NAK received\n");
 		ret = -EIO;
 		goto out;
 	}

 	reply = &txmsg->reply.u.link_addr;
-	DRM_DEBUG_KMS("link address reply: %d\n", reply->nports);
-	drm_dp_dump_link_address(reply);
+	drm_dbg_kms(mgr->dev, "link address reply: %d\n", reply->nports);
+	drm_dp_dump_link_address(mgr, reply);

 	ret = drm_dp_check_mstb_guid(mstb, reply->guid);
 	if (ret) {
 		char buf[64];

 		drm_dp_mst_rad_to_str(mstb->rad, mstb->lct, buf, sizeof(buf));
-		DRM_ERROR("GUID check on %s failed: %d\n",
-			  buf, ret);
+		drm_err(mgr->dev, "GUID check on %s failed: %d\n", buf, ret);
 		goto out;
 	}
@@ -3029,7 +3035,7 @@ static int drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,
 		if (port_mask & BIT(port->port_num))
 			continue;

-		DRM_DEBUG_KMS("port %d was not in link address, removing\n",
-			      port->port_num);
+		drm_dbg_kms(mgr->dev, "port %d was not in link address, removing\n",
+			    port->port_num);
 		list_del(&port->next);
 		drm_dp_mst_topology_put_port(port);
@@ -3062,7 +3068,7 @@ drm_dp_send_clear_payload_id_table(struct drm_dp_mst_topology_mgr *mgr,
 	ret = drm_dp_mst_wait_tx_reply(mstb, txmsg);
 	if (ret > 0 && txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK)
-		DRM_DEBUG_KMS("clear payload table id nak received\n");
+		drm_dbg_kms(mgr->dev, "clear payload table id nak received\n");

 	kfree(txmsg);
 }
@@ -3091,12 +3097,12 @@ drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr,
 	path_res = &txmsg->reply.u.path_resources;

 	if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) {
-		DRM_DEBUG_KMS("enum path resources nak received\n");
+		drm_dbg_kms(mgr->dev, "enum path resources nak received\n");
 	} else {
 		if (port->port_num != path_res->port_number)
 			DRM_ERROR("got incorrect port in response\n");

-		DRM_DEBUG_KMS("enum path resources %d: %d %d\n",
+		drm_dbg_kms(mgr->dev, "enum path resources %d: %d %d\n",
 			    path_res->port_number,
 			    path_res->full_payload_bw_number,
 			    path_res->avail_payload_bw_number);
@@ -3345,7 +3351,7 @@ static int drm_dp_destroy_payload_step1(struct drm_dp_mst_topology_mgr *mgr,
int id, int id,
struct drm_dp_payload *payload) struct drm_dp_payload *payload)
{ {
DRM_DEBUG_KMS("\n"); drm_dbg_kms(mgr->dev, "\n");
/* it's okay for these to fail */ /* it's okay for these to fail */
if (port) { if (port) {
drm_dp_payload_send_msg(mgr, port, id, 0); drm_dp_payload_send_msg(mgr, port, id, 0);
@@ -3451,7 +3457,7 @@ int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr)
continue; continue;
} }
DRM_DEBUG_KMS("removing payload %d\n", i); drm_dbg_kms(mgr->dev, "removing payload %d\n", i);
for (j = i; j < mgr->max_payloads - 1; j++) { for (j = i; j < mgr->max_payloads - 1; j++) {
mgr->payloads[j] = mgr->payloads[j + 1]; mgr->payloads[j] = mgr->payloads[j + 1];
mgr->proposed_vcpis[j] = mgr->proposed_vcpis[j + 1]; mgr->proposed_vcpis[j] = mgr->proposed_vcpis[j + 1];
@@ -3498,7 +3504,7 @@ int drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr)
 		port = container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi);

-		DRM_DEBUG_KMS("payload %d %d\n", i, mgr->payloads[i].payload_state);
+		drm_dbg_kms(mgr->dev, "payload %d %d\n", i, mgr->payloads[i].payload_state);
 		if (mgr->payloads[i].payload_state == DP_PAYLOAD_LOCAL) {
 			ret = drm_dp_create_payload_step2(mgr, port, mgr->proposed_vcpis[i]->vcpi, &mgr->payloads[i]);
 		} else if (mgr->payloads[i].payload_state == DP_PAYLOAD_DELETE_LOCAL) {
@@ -3543,7 +3549,7 @@ static int drm_dp_send_dpcd_read(struct drm_dp_mst_topology_mgr *mgr,
 	/* DPCD read should never be NACKed */
 	if (txmsg->reply.reply_type == 1) {
-		DRM_ERROR("mstb %p port %d: DPCD read on addr 0x%x for %d bytes NAKed\n",
-			  mstb, port->port_num, offset, size);
+		drm_err(mgr->dev, "mstb %p port %d: DPCD read on addr 0x%x for %d bytes NAKed\n",
+			mstb, port->port_num, offset, size);
 		ret = -EIO;
 		goto fail_free;
@@ -3637,6 +3643,7 @@ static int drm_dp_send_up_ack_reply(struct drm_dp_mst_topology_mgr *mgr,
 /**
  * drm_dp_get_vc_payload_bw - get the VC payload BW for an MST link
+ * @mgr: The &drm_dp_mst_topology_mgr to use
  * @link_rate: link rate in 10kbits/s units
  * @link_lane_count: lane count
  *
@@ -3645,10 +3652,11 @@ static int drm_dp_send_up_ack_reply(struct drm_dp_mst_topology_mgr *mgr,
  * convert the number of PBNs required for a given stream to the number of
  * timeslots this stream requires in each MTP.
  */
-int drm_dp_get_vc_payload_bw(int link_rate, int link_lane_count)
+int drm_dp_get_vc_payload_bw(const struct drm_dp_mst_topology_mgr *mgr,
+			     int link_rate, int link_lane_count)
 {
 	if (link_rate == 0 || link_lane_count == 0)
-		DRM_DEBUG_KMS("invalid link rate/lane count: (%d / %d)\n",
-			      link_rate, link_lane_count);
+		drm_dbg_kms(mgr->dev, "invalid link rate/lane count: (%d / %d)\n",
+			    link_rate, link_lane_count);

 	/* See DP v2.0 2.6.4.2, VCPayload_Bandwidth_for_OneTimeSlotPer_MTP_Allocation */
@@ -3700,18 +3708,24 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
 	/* set the device into MST mode */
 	if (mst_state) {
 		struct drm_dp_payload reset_pay;
+		int lane_count;
+		int link_rate;

 		WARN_ON(mgr->mst_primary);

 		/* get dpcd info */
-		ret = drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, mgr->dpcd, DP_RECEIVER_CAP_SIZE);
-		if (ret != DP_RECEIVER_CAP_SIZE) {
-			DRM_DEBUG_KMS("failed to read DPCD\n");
+		ret = drm_dp_read_dpcd_caps(mgr->aux, mgr->dpcd);
+		if (ret < 0) {
+			drm_dbg_kms(mgr->dev, "%s: failed to read DPCD, ret %d\n",
+				    mgr->aux->name, ret);
 			goto out_unlock;
 		}

-		mgr->pbn_div = drm_dp_get_vc_payload_bw(drm_dp_bw_code_to_link_rate(mgr->dpcd[1]),
-							mgr->dpcd[2] & DP_MAX_LANE_COUNT_MASK);
+		lane_count = min_t(int, mgr->dpcd[2] & DP_MAX_LANE_COUNT_MASK, mgr->max_lane_count);
+		link_rate = min_t(int, mgr->dpcd[1], mgr->max_link_rate);
+		mgr->pbn_div = drm_dp_get_vc_payload_bw(mgr,
+							drm_dp_bw_code_to_link_rate(link_rate),
+							lane_count);
 		if (mgr->pbn_div == 0) {
 			ret = -EINVAL;
 			goto out_unlock;
@@ -3840,7 +3854,7 @@ int drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr,
 	ret = drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, mgr->dpcd,
 			       DP_RECEIVER_CAP_SIZE);
 	if (ret != DP_RECEIVER_CAP_SIZE) {
-		DRM_DEBUG_KMS("dpcd read failed - undocked during suspend?\n");
+		drm_dbg_kms(mgr->dev, "dpcd read failed - undocked during suspend?\n");
 		goto out_fail;
 	}
@@ -3849,20 +3863,20 @@ int drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr,
 				 DP_UP_REQ_EN |
 				 DP_UPSTREAM_IS_SRC);
 	if (ret < 0) {
-		DRM_DEBUG_KMS("mst write failed - undocked during suspend?\n");
+		drm_dbg_kms(mgr->dev, "mst write failed - undocked during suspend?\n");
 		goto out_fail;
 	}

 	/* Some hubs forget their guids after they resume */
 	ret = drm_dp_dpcd_read(mgr->aux, DP_GUID, guid, 16);
 	if (ret != 16) {
-		DRM_DEBUG_KMS("dpcd read failed - undocked during suspend?\n");
+		drm_dbg_kms(mgr->dev, "dpcd read failed - undocked during suspend?\n");
 		goto out_fail;
 	}

 	ret = drm_dp_check_mstb_guid(mgr->mst_primary, guid);
 	if (ret) {
-		DRM_DEBUG_KMS("check mstb failed - undocked during suspend?\n");
+		drm_dbg_kms(mgr->dev, "check mstb failed - undocked during suspend?\n");
 		goto out_fail;
 	}
@@ -3875,7 +3889,8 @@ int drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr,
 	mutex_unlock(&mgr->lock);

 	if (sync) {
-		DRM_DEBUG_KMS("Waiting for link probe work to finish re-syncing topology...\n");
+		drm_dbg_kms(mgr->dev,
+			    "Waiting for link probe work to finish re-syncing topology...\n");
 		flush_work(&mgr->work);
 	}
@@ -3908,15 +3923,15 @@ drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up,
 	len = min(mgr->max_dpcd_transaction_bytes, 16);
 	ret = drm_dp_dpcd_read(mgr->aux, basereg, replyblock, len);
 	if (ret != len) {
-		DRM_DEBUG_KMS("failed to read DPCD down rep %d %d\n", len, ret);
+		drm_dbg_kms(mgr->dev, "failed to read DPCD down rep %d %d\n", len, ret);
 		return false;
 	}

-	ret = drm_dp_decode_sideband_msg_hdr(&hdr, replyblock, len, &hdrlen);
+	ret = drm_dp_decode_sideband_msg_hdr(mgr, &hdr, replyblock, len, &hdrlen);
 	if (ret == false) {
 		print_hex_dump(KERN_DEBUG, "failed hdr", DUMP_PREFIX_NONE, 16,
 			       1, replyblock, len, false);
-		DRM_DEBUG_KMS("ERROR: failed header\n");
+		drm_dbg_kms(mgr->dev, "ERROR: failed header\n");
 		return false;
 	}
@@ -3924,22 +3939,20 @@ drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up,
 		/* Caller is responsible for giving back this reference */
 		*mstb = drm_dp_get_mst_branch_device(mgr, hdr.lct, hdr.rad);
 		if (!*mstb) {
-			DRM_DEBUG_KMS("Got MST reply from unknown device %d\n",
-				      hdr.lct);
+			drm_dbg_kms(mgr->dev, "Got MST reply from unknown device %d\n", hdr.lct);
 			return false;
 		}
 	}

 	if (!drm_dp_sideband_msg_set_header(msg, &hdr, hdrlen)) {
-		DRM_DEBUG_KMS("sideband msg set header failed %d\n",
-			      replyblock[0]);
+		drm_dbg_kms(mgr->dev, "sideband msg set header failed %d\n", replyblock[0]);
 		return false;
 	}

 	replylen = min(msg->curchunk_len, (u8)(len - hdrlen));
 	ret = drm_dp_sideband_append_payload(msg, replyblock + hdrlen, replylen);
 	if (!ret) {
-		DRM_DEBUG_KMS("sideband msg build failed %d\n", replyblock[0]);
+		drm_dbg_kms(mgr->dev, "sideband msg build failed %d\n", replyblock[0]);
 		return false;
 	}
@@ -3950,14 +3963,14 @@ drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up,
 		ret = drm_dp_dpcd_read(mgr->aux, basereg + curreply,
 				       replyblock, len);
 		if (ret != len) {
-			DRM_DEBUG_KMS("failed to read a chunk (len %d, ret %d)\n",
-				      len, ret);
+			drm_dbg_kms(mgr->dev, "failed to read a chunk (len %d, ret %d)\n",
+				    len, ret);
 			return false;
 		}

 		ret = drm_dp_sideband_append_payload(msg, replyblock, len);
 		if (!ret) {
-			DRM_DEBUG_KMS("failed to build sideband msg\n");
+			drm_dbg_kms(mgr->dev, "failed to build sideband msg\n");
 			return false;
 		}
@@ -3991,16 +4004,16 @@ static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)
 		struct drm_dp_sideband_msg_hdr *hdr;

 		hdr = &msg->initial_hdr;
-		DRM_DEBUG_KMS("Got MST reply with no msg %p %d %d %02x %02x\n",
-			      mstb, hdr->seqno, hdr->lct, hdr->rad[0],
-			      msg->msg[0]);
+		drm_dbg_kms(mgr->dev, "Got MST reply with no msg %p %d %d %02x %02x\n",
+			    mstb, hdr->seqno, hdr->lct, hdr->rad[0], msg->msg[0]);
 		goto out_clear_reply;
 	}

-	drm_dp_sideband_parse_reply(msg, &txmsg->reply);
+	drm_dp_sideband_parse_reply(mgr, msg, &txmsg->reply);

 	if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) {
-		DRM_DEBUG_KMS("Got NAK reply: req 0x%02x (%s), reason 0x%02x (%s), nak data 0x%02x\n",
+		drm_dbg_kms(mgr->dev,
+			    "Got NAK reply: req 0x%02x (%s), reason 0x%02x (%s), nak data 0x%02x\n",
 			    txmsg->reply.req_type,
 			    drm_dp_mst_req_type_str(txmsg->reply.req_type),
 			    txmsg->reply.u.nak.reason,
@@ -4053,8 +4066,7 @@ drm_dp_mst_process_up_req(struct drm_dp_mst_topology_mgr *mgr,
 	}

 	if (!mstb) {
-		DRM_DEBUG_KMS("Got MST reply from unknown device %d\n",
-			      hdr->lct);
+		drm_dbg_kms(mgr->dev, "Got MST reply from unknown device %d\n", hdr->lct);
 		return false;
 	}
@@ -4114,11 +4126,11 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
 	INIT_LIST_HEAD(&up_req->next);

-	drm_dp_sideband_parse_req(&mgr->up_req_recv, &up_req->msg);
+	drm_dp_sideband_parse_req(mgr, &mgr->up_req_recv, &up_req->msg);

 	if (up_req->msg.req_type != DP_CONNECTION_STATUS_NOTIFY &&
 	    up_req->msg.req_type != DP_RESOURCE_STATUS_NOTIFY) {
-		DRM_DEBUG_KMS("Received unknown up req type, ignoring: %x\n",
-			      up_req->msg.req_type);
+		drm_dbg_kms(mgr->dev, "Received unknown up req type, ignoring: %x\n",
+			    up_req->msg.req_type);
 		kfree(up_req);
 		goto out;
@@ -4131,7 +4143,7 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
 		const struct drm_dp_connection_status_notify *conn_stat =
 			&up_req->msg.u.conn_stat;

-		DRM_DEBUG_KMS("Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt: %d\n",
-			      conn_stat->port_number,
-			      conn_stat->legacy_device_plug_status,
-			      conn_stat->displayport_device_plug_status,
+		drm_dbg_kms(mgr->dev, "Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt: %d\n",
+			    conn_stat->port_number,
+			    conn_stat->legacy_device_plug_status,
+			    conn_stat->displayport_device_plug_status,
@@ -4142,7 +4154,7 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
 		const struct drm_dp_resource_status_notify *res_stat =
 			&up_req->msg.u.resource_stat;

-		DRM_DEBUG_KMS("Got RSN: pn: %d avail_pbn %d\n",
-			      res_stat->port_number,
-			      res_stat->available_pbn);
+		drm_dbg_kms(mgr->dev, "Got RSN: pn: %d avail_pbn %d\n",
+			    res_stat->port_number,
+			    res_stat->available_pbn);
 	}
@@ -4384,7 +4396,8 @@ int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state,
 	 * which is an error
 	 */
 	if (WARN_ON(!prev_slots)) {
-		DRM_ERROR("cannot allocate and release VCPI on [MST PORT:%p] in the same state\n",
-			  port);
+		drm_err(mgr->dev,
+			"cannot allocate and release VCPI on [MST PORT:%p] in the same state\n",
+			port);
 		return -EINVAL;
 	}
@@ -4402,10 +4415,10 @@ int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state,
 	req_slots = DIV_ROUND_UP(pbn, pbn_div);

-	DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] [MST PORT:%p] VCPI %d -> %d\n",
-			 port->connector->base.id, port->connector->name,
-			 port, prev_slots, req_slots);
+	drm_dbg_atomic(mgr->dev, "[CONNECTOR:%d:%s] [MST PORT:%p] VCPI %d -> %d\n",
+		       port->connector->base.id, port->connector->name,
+		       port, prev_slots, req_slots);
-	DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] [MST PORT:%p] PBN %d -> %d\n",
-			 port->connector->base.id, port->connector->name,
-			 port, prev_bw, pbn);
+	drm_dbg_atomic(mgr->dev, "[CONNECTOR:%d:%s] [MST PORT:%p] PBN %d -> %d\n",
+		       port->connector->base.id, port->connector->name,
+		       port, prev_bw, pbn);
@@ -4471,12 +4484,12 @@ int drm_dp_atomic_release_vcpi_slots(struct drm_atomic_state *state,
 		}
 	}
 	if (WARN_ON(!found)) {
-		DRM_ERROR("no VCPI for [MST PORT:%p] found in mst state %p\n",
-			  port, &topology_state->base);
+		drm_err(mgr->dev, "no VCPI for [MST PORT:%p] found in mst state %p\n",
+			port, &topology_state->base);
 		return -EINVAL;
 	}

-	DRM_DEBUG_ATOMIC("[MST PORT:%p] VCPI %d -> 0\n", port, pos->vcpi);
+	drm_dbg_atomic(mgr->dev, "[MST PORT:%p] VCPI %d -> 0\n", port, pos->vcpi);
 	if (pos->vcpi) {
 		drm_dp_mst_put_port_malloc(port);
 		pos->vcpi = 0;
@@ -4507,7 +4520,8 @@ bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
 		return false;

 	if (port->vcpi.vcpi > 0) {
-		DRM_DEBUG_KMS("payload: vcpi %d already allocated for pbn %d - requested pbn %d\n",
-			      port->vcpi.vcpi, port->vcpi.pbn, pbn);
+		drm_dbg_kms(mgr->dev,
+			    "payload: vcpi %d already allocated for pbn %d - requested pbn %d\n",
+			    port->vcpi.vcpi, port->vcpi.pbn, pbn);
 		if (pbn == port->vcpi.pbn) {
 			drm_dp_mst_topology_put_port(port);
@@ -4517,13 +4531,12 @@ bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
 	ret = drm_dp_init_vcpi(mgr, &port->vcpi, pbn, slots);
 	if (ret) {
-		DRM_DEBUG_KMS("failed to init vcpi slots=%d max=63 ret=%d\n",
-			      DIV_ROUND_UP(pbn, mgr->pbn_div), ret);
+		drm_dbg_kms(mgr->dev, "failed to init vcpi slots=%d max=63 ret=%d\n",
+			    DIV_ROUND_UP(pbn, mgr->pbn_div), ret);
 		drm_dp_mst_topology_put_port(port);
 		goto out;
 	}
-	DRM_DEBUG_KMS("initing vcpi for pbn=%d slots=%d\n",
-		      pbn, port->vcpi.num_slots);
+	drm_dbg_kms(mgr->dev, "initing vcpi for pbn=%d slots=%d\n", pbn, port->vcpi.num_slots);

 	/* Keep port allocated until its payload has been removed */
 	drm_dp_mst_get_port_malloc(port);
@@ -4605,14 +4618,14 @@ static int drm_dp_dpcd_write_payload(struct drm_dp_mst_topology_mgr *mgr,
 	ret = drm_dp_dpcd_write(mgr->aux, DP_PAYLOAD_ALLOCATE_SET, payload_alloc, 3);
 	if (ret != 3) {
-		DRM_DEBUG_KMS("failed to write payload allocation %d\n", ret);
+		drm_dbg_kms(mgr->dev, "failed to write payload allocation %d\n", ret);
 		goto fail;
 	}

 retry:
 	ret = drm_dp_dpcd_readb(mgr->aux, DP_PAYLOAD_TABLE_UPDATE_STATUS, &status);
 	if (ret < 0) {
-		DRM_DEBUG_KMS("failed to read payload table status %d\n", ret);
+		drm_dbg_kms(mgr->dev, "failed to read payload table status %d\n", ret);
 		goto fail;
 	}
@@ -4622,7 +4635,8 @@ retry:
 			usleep_range(10000, 20000);
 			goto retry;
 		}
-		DRM_DEBUG_KMS("status not set after read payload table status %d\n", status);
+		drm_dbg_kms(mgr->dev, "status not set after read payload table status %d\n",
+			    status);
 		ret = -EINVAL;
 		goto fail;
 	}
@@ -4669,7 +4683,7 @@ int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr)
 				 status & DP_PAYLOAD_ACT_HANDLED || status < 0,
 				 200, timeout_ms * USEC_PER_MSEC);
 	if (ret < 0 && status >= 0) {
-		DRM_ERROR("Failed to get ACT after %dms, last status: %02x\n",
-			  timeout_ms, status);
+		drm_err(mgr->dev, "Failed to get ACT after %dms, last status: %02x\n",
+			timeout_ms, status);
 		return -EINVAL;
 	} else if (status < 0) {
@@ -4677,8 +4691,7 @@ int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr)
 		 * Failure here isn't unexpected - the hub may have
 		 * just been unplugged
 		 */
-		DRM_DEBUG_KMS("Failed to read payload table status: %d\n",
-			      status);
+		drm_dbg_kms(mgr->dev, "Failed to read payload table status: %d\n", status);
 		return status;
 	}
@@ -5118,12 +5131,11 @@ drm_dp_mst_atomic_check_mstb_bw_limit(struct drm_dp_mst_branch *mstb,
 		return 0;

 	if (mstb->port_parent)
-		DRM_DEBUG_ATOMIC("[MSTB:%p] [MST PORT:%p] Checking bandwidth limits on [MSTB:%p]\n",
-				 mstb->port_parent->parent, mstb->port_parent,
-				 mstb);
+		drm_dbg_atomic(mstb->mgr->dev,
+			       "[MSTB:%p] [MST PORT:%p] Checking bandwidth limits on [MSTB:%p]\n",
+			       mstb->port_parent->parent, mstb->port_parent, mstb);
 	else
-		DRM_DEBUG_ATOMIC("[MSTB:%p] Checking bandwidth limits\n",
-				 mstb);
+		drm_dbg_atomic(mstb->mgr->dev, "[MSTB:%p] Checking bandwidth limits\n", mstb);

 	list_for_each_entry(port, &mstb->ports, next) {
 		ret = drm_dp_mst_atomic_check_port_bw_limit(port, state);
@@ -5181,13 +5193,13 @@ drm_dp_mst_atomic_check_port_bw_limit(struct drm_dp_mst_port *port,
 	}

 	if (pbn_used > port->full_pbn) {
-		DRM_DEBUG_ATOMIC("[MSTB:%p] [MST PORT:%p] required PBN of %d exceeds port limit of %d\n",
-				 port->parent, port, pbn_used,
-				 port->full_pbn);
+		drm_dbg_atomic(port->mgr->dev,
+			       "[MSTB:%p] [MST PORT:%p] required PBN of %d exceeds port limit of %d\n",
+			       port->parent, port, pbn_used, port->full_pbn);
 		return -ENOSPC;
 	}

-	DRM_DEBUG_ATOMIC("[MSTB:%p] [MST PORT:%p] uses %d out of %d PBN\n",
-			 port->parent, port, pbn_used, port->full_pbn);
+	drm_dbg_atomic(port->mgr->dev, "[MSTB:%p] [MST PORT:%p] uses %d out of %d PBN\n",
+		       port->parent, port, pbn_used, port->full_pbn);

 	return pbn_used;
@@ -5203,31 +5215,31 @@ drm_dp_mst_atomic_check_vcpi_alloc_limit(struct drm_dp_mst_topology_mgr *mgr,
 	list_for_each_entry(vcpi, &mst_state->vcpis, next) {
 		/* Releasing VCPI is always OK-even if the port is gone */
 		if (!vcpi->vcpi) {
-			DRM_DEBUG_ATOMIC("[MST PORT:%p] releases all VCPI slots\n",
-					 vcpi->port);
+			drm_dbg_atomic(mgr->dev, "[MST PORT:%p] releases all VCPI slots\n",
+				       vcpi->port);
 			continue;
 		}

-		DRM_DEBUG_ATOMIC("[MST PORT:%p] requires %d vcpi slots\n",
-				 vcpi->port, vcpi->vcpi);
+		drm_dbg_atomic(mgr->dev, "[MST PORT:%p] requires %d vcpi slots\n",
+			       vcpi->port, vcpi->vcpi);
 		avail_slots -= vcpi->vcpi;
 		if (avail_slots < 0) {
-			DRM_DEBUG_ATOMIC("[MST PORT:%p] not enough VCPI slots in mst state %p (avail=%d)\n",
-					 vcpi->port, mst_state,
-					 avail_slots + vcpi->vcpi);
+			drm_dbg_atomic(mgr->dev,
+				       "[MST PORT:%p] not enough VCPI slots in mst state %p (avail=%d)\n",
+				       vcpi->port, mst_state, avail_slots + vcpi->vcpi);
 			return -ENOSPC;
 		}

 		if (++payload_count > mgr->max_payloads) {
-			DRM_DEBUG_ATOMIC("[MST MGR:%p] state %p has too many payloads (max=%d)\n",
-					 mgr, mst_state, mgr->max_payloads);
+			drm_dbg_atomic(mgr->dev,
+				       "[MST MGR:%p] state %p has too many payloads (max=%d)\n",
+				       mgr, mst_state, mgr->max_payloads);
 			return -EINVAL;
 		}
 	}
-	DRM_DEBUG_ATOMIC("[MST MGR:%p] mst state %p VCPI avail=%d used=%d\n",
-			 mgr, mst_state, avail_slots,
-			 63 - avail_slots);
+	drm_dbg_atomic(mgr->dev, "[MST MGR:%p] mst state %p VCPI avail=%d used=%d\n",
+		       mgr, mst_state, avail_slots, 63 - avail_slots);

 	return 0;
 }
@@ -5284,7 +5296,7 @@ int drm_dp_mst_add_affected_dsc_crtcs(struct drm_atomic_state *state, struct drm
 		if (IS_ERR(crtc_state))
 			return PTR_ERR(crtc_state);

-		DRM_DEBUG_ATOMIC("[MST MGR:%p] Setting mode_changed flag on CRTC %p\n",
-				 mgr, crtc);
+		drm_dbg_atomic(mgr->dev, "[MST MGR:%p] Setting mode_changed flag on CRTC %p\n",
+			       mgr, crtc);

 		crtc_state->mode_changed = true;
@@ -5330,20 +5342,23 @@ int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state,
 	}

 	if (!found) {
-		DRM_DEBUG_ATOMIC("[MST PORT:%p] Couldn't find VCPI allocation in mst state %p\n",
-				 port, mst_state);
+		drm_dbg_atomic(state->dev,
+			       "[MST PORT:%p] Couldn't find VCPI allocation in mst state %p\n",
+			       port, mst_state);
 		return -EINVAL;
 	}

 	if (pos->dsc_enabled == enable) {
-		DRM_DEBUG_ATOMIC("[MST PORT:%p] DSC flag is already set to %d, returning %d VCPI slots\n",
-				 port, enable, pos->vcpi);
+		drm_dbg_atomic(state->dev,
+			       "[MST PORT:%p] DSC flag is already set to %d, returning %d VCPI slots\n",
+			       port, enable, pos->vcpi);
 		vcpi = pos->vcpi;
 	}

 	if (enable) {
 		vcpi = drm_dp_atomic_find_vcpi_slots(state, port->mgr, port, pbn, pbn_div);
-		DRM_DEBUG_ATOMIC("[MST PORT:%p] Enabling DSC flag, reallocating %d VCPI slots on the port\n",
-				 port, vcpi);
+		drm_dbg_atomic(state->dev,
+			       "[MST PORT:%p] Enabling DSC flag, reallocating %d VCPI slots on the port\n",
+			       port, vcpi);
 		if (vcpi < 0)
 			return -EINVAL;
@@ -5438,14 +5453,17 @@ EXPORT_SYMBOL(drm_atomic_get_mst_topology_state);
  * @aux: DP helper aux channel to talk to this device
  * @max_dpcd_transaction_bytes: hw specific DPCD transaction limit
  * @max_payloads: maximum number of payloads this GPU can source
+ * @max_lane_count: maximum number of lanes this GPU supports
+ * @max_link_rate: maximum link rate this GPU supports, units as in DPCD
  * @conn_base_id: the connector object ID the MST device is connected to.
  *
  * Return 0 for success, or negative error code on failure
  */
 int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
 				 struct drm_device *dev, struct drm_dp_aux *aux,
-				 int max_dpcd_transaction_bytes,
-				 int max_payloads, int conn_base_id)
+				 int max_dpcd_transaction_bytes, int max_payloads,
+				 u8 max_lane_count, u8 max_link_rate,
+				 int conn_base_id)
 {
 	struct drm_dp_mst_topology_state *mst_state;
@@ -5480,6 +5498,8 @@ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
 	mgr->aux = aux;
 	mgr->max_dpcd_transaction_bytes = max_dpcd_transaction_bytes;
 	mgr->max_payloads = max_payloads;
+	mgr->max_lane_count = max_lane_count;
+	mgr->max_link_rate = max_link_rate;
 	mgr->conn_base_id = conn_base_id;

 	if (max_payloads + 1 > sizeof(mgr->payload_mask) * 8 ||
 	    max_payloads + 1 > sizeof(mgr->vcpi_mask) * 8)
@@ -5691,7 +5711,7 @@ static int drm_dp_mst_i2c_xfer(struct i2c_adapter *adapter,
 	} else if (remote_i2c_write_ok(msgs, num)) {
 		ret = drm_dp_mst_i2c_write(mstb, port, msgs, num);
 	} else {
-		DRM_DEBUG_KMS("Unsupported I2C transaction for MST device\n");
+		drm_dbg_kms(mgr->dev, "Unsupported I2C transaction for MST device\n");
 		ret = -EIO;
 	}
@@ -5886,14 +5906,13 @@ struct drm_dp_aux *drm_dp_mst_dsc_aux_for_port(struct drm_dp_mst_port *port)
 	if (drm_dp_has_quirk(&desc, DP_DPCD_QUIRK_DSC_WITHOUT_VIRTUAL_DPCD) &&
 	    port->mgr->dpcd[DP_DPCD_REV] >= DP_DPCD_REV_14 &&
 	    port->parent == port->mgr->mst_primary) {
-		u8 downstreamport;
+		u8 dpcd_ext[DP_RECEIVER_CAP_SIZE];

-		if (drm_dp_dpcd_read(&port->aux, DP_DOWNSTREAMPORT_PRESENT,
-				     &downstreamport, 1) < 0)
+		if (drm_dp_read_dpcd_caps(port->mgr->aux, dpcd_ext) < 0)
 			return NULL;

-		if ((downstreamport & DP_DWN_STRM_PORT_PRESENT) &&
-		    ((downstreamport & DP_DWN_STRM_PORT_TYPE_MASK)
-		     != DP_DWN_STRM_PORT_TYPE_ANALOG))
+		if ((dpcd_ext[DP_DOWNSTREAMPORT_PRESENT] & DP_DWN_STRM_PORT_PRESENT) &&
+		    ((dpcd_ext[DP_DOWNSTREAMPORT_PRESENT] & DP_DWN_STRM_PORT_TYPE_MASK)
+		     != DP_DWN_STRM_PORT_TYPE_ANALOG))
 			return port->mgr->aux;
 	}


@@ -941,9 +941,7 @@ void drm_dev_unregister(struct drm_device *dev)
 	if (dev->driver->unload)
 		dev->driver->unload(dev);

-	if (dev->agp)
-		drm_pci_agp_destroy(dev);
+	drm_legacy_pci_agp_destroy(dev);

 	drm_legacy_rmmaps(dev);
 	remove_compat_control_link(dev);


@@ -774,19 +774,7 @@ void drm_event_cancel_free(struct drm_device *dev,
 }
 EXPORT_SYMBOL(drm_event_cancel_free);

-/**
- * drm_send_event_helper - send DRM event to file descriptor
- * @dev: DRM device
- * @e: DRM event to deliver
- * @timestamp: timestamp to set for the fence event in kernel's CLOCK_MONOTONIC
- * time domain
- *
- * This helper function sends the event @e, initialized with
- * drm_event_reserve_init(), to its associated userspace DRM file.
- * The timestamp variant of dma_fence_signal is used when the caller
- * sends a valid timestamp.
- */
-void drm_send_event_helper(struct drm_device *dev,
-			   struct drm_pending_event *e, ktime_t timestamp)
+static void drm_send_event_helper(struct drm_device *dev,
+				  struct drm_pending_event *e, ktime_t timestamp)
 {
 	assert_spin_locked(&dev->event_lock);


@@ -52,6 +52,7 @@ EXPORT_SYMBOL(drm_fb_memcpy);
 /**
  * drm_fb_memcpy_dstclip - Copy clip buffer
  * @dst: Destination buffer (iomem)
+ * @dst_pitch: Number of bytes between two consecutive scanlines within dst
  * @vaddr: Source buffer
  * @fb: DRM framebuffer
  * @clip: Clip rectangle area to copy
@@ -59,12 +60,12 @@ EXPORT_SYMBOL(drm_fb_memcpy);
  * This function applies clipping on dst, i.e. the destination is a
  * full (iomem) framebuffer but only the clip rect content is copied over.
  */
-void drm_fb_memcpy_dstclip(void __iomem *dst, void *vaddr,
-			   struct drm_framebuffer *fb,
+void drm_fb_memcpy_dstclip(void __iomem *dst, unsigned int dst_pitch,
+			   void *vaddr, struct drm_framebuffer *fb,
 			   struct drm_rect *clip)
 {
 	unsigned int cpp = fb->format->cpp[0];
-	unsigned int offset = clip_offset(clip, fb->pitches[0], cpp);
+	unsigned int offset = clip_offset(clip, dst_pitch, cpp);
 	size_t len = (clip->x2 - clip->x1) * cpp;
 	unsigned int y, lines = clip->y2 - clip->y1;
@@ -73,7 +74,7 @@ void drm_fb_memcpy_dstclip(void __iomem *dst, void *vaddr,
 	for (y = 0; y < lines; y++) {
 		memcpy_toio(dst, vaddr, len);
 		vaddr += fb->pitches[0];
-		dst += fb->pitches[0];
+		dst += dst_pitch;
 	}
 }
 EXPORT_SYMBOL(drm_fb_memcpy_dstclip);
@@ -343,3 +344,90 @@ void drm_fb_xrgb8888_to_gray8(u8 *dst, void *vaddr, struct drm_framebuffer *fb,
} }
EXPORT_SYMBOL(drm_fb_xrgb8888_to_gray8); EXPORT_SYMBOL(drm_fb_xrgb8888_to_gray8);
/**
* drm_fb_blit_rect_dstclip - Copy parts of a framebuffer to display memory
* @dst: The display memory to copy to
* @dst_pitch: Number of bytes between two consecutive scanlines within dst
* @dst_format: FOURCC code of the display's color format
* @vmap: The framebuffer memory to copy from
* @fb: The framebuffer to copy from
* @clip: Clip rectangle area to copy
*
* This function copies parts of a framebuffer to display memory. If the
* formats of the display and the framebuffer mismatch, the blit function
* will attempt to convert between them.
*
* Use drm_fb_blit_dstclip() to copy the full framebuffer.
*
* Returns:
* 0 on success, or
* -EINVAL if the color-format conversion failed, or
* a negative error code otherwise.
*/
int drm_fb_blit_rect_dstclip(void __iomem *dst, unsigned int dst_pitch,
uint32_t dst_format, void *vmap,
struct drm_framebuffer *fb,
struct drm_rect *clip)
{
uint32_t fb_format = fb->format->format;
/* treat alpha channel like filler bits */
if (fb_format == DRM_FORMAT_ARGB8888)
fb_format = DRM_FORMAT_XRGB8888;
if (dst_format == DRM_FORMAT_ARGB8888)
dst_format = DRM_FORMAT_XRGB8888;
if (dst_format == fb_format) {
drm_fb_memcpy_dstclip(dst, dst_pitch, vmap, fb, clip);
return 0;
} else if (dst_format == DRM_FORMAT_RGB565) {
if (fb_format == DRM_FORMAT_XRGB8888) {
drm_fb_xrgb8888_to_rgb565_dstclip(dst, dst_pitch,
vmap, fb, clip,
false);
return 0;
}
} else if (dst_format == DRM_FORMAT_RGB888) {
if (fb_format == DRM_FORMAT_XRGB8888) {
drm_fb_xrgb8888_to_rgb888_dstclip(dst, dst_pitch,
vmap, fb, clip);
return 0;
}
}
return -EINVAL;
}
EXPORT_SYMBOL(drm_fb_blit_rect_dstclip);
/**
* drm_fb_blit_dstclip - Copy framebuffer to display memory
* @dst: The display memory to copy to
* @dst_pitch: Number of bytes between two consecutive scanlines within dst
* @dst_format: FOURCC code of the display's color format
* @vmap: The framebuffer memory to copy from
* @fb: The framebuffer to copy from
*
* This function copies a full framebuffer to display memory. If the formats
* of the display and the framebuffer mismatch, the copy function will
* attempt to convert between them.
*
* See drm_fb_blit_rect_dstclip() for more information.
*
* Returns:
* 0 on success, or a negative error code otherwise.
*/
int drm_fb_blit_dstclip(void __iomem *dst, unsigned int dst_pitch,
uint32_t dst_format, void *vmap,
struct drm_framebuffer *fb)
{
struct drm_rect fullscreen = {
.x1 = 0,
.x2 = fb->width,
.y1 = 0,
.y2 = fb->height,
};
return drm_fb_blit_rect_dstclip(dst, dst_pitch, dst_format, vmap, fb,
&fullscreen);
}
EXPORT_SYMBOL(drm_fb_blit_dstclip);


@@ -114,5 +114,38 @@ int drm_gem_ttm_mmap(struct drm_gem_object *gem,
}
EXPORT_SYMBOL(drm_gem_ttm_mmap);
/**
* drm_gem_ttm_dumb_map_offset() - Implements &struct drm_driver.dumb_map_offset
* @file: DRM file pointer.
* @dev: DRM device.
* @handle: GEM handle
* @offset: Returns the mapping's memory offset on success
*
* Provides an implementation of &struct drm_driver.dumb_map_offset for
* TTM-based GEM drivers. TTM allocates the offset internally and
* drm_gem_ttm_dumb_map_offset() returns it for dumb-buffer implementations.
*
* See &struct drm_driver.dumb_map_offset.
*
* Returns:
* 0 on success, or a negative errno code otherwise.
*/
int drm_gem_ttm_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
uint32_t handle, uint64_t *offset)
{
struct drm_gem_object *gem;
gem = drm_gem_object_lookup(file, handle);
if (!gem)
return -ENOENT;
*offset = drm_vma_node_offset_addr(&gem->vma_node);
drm_gem_object_put(gem);
return 0;
}
EXPORT_SYMBOL(drm_gem_ttm_dumb_map_offset);
MODULE_DESCRIPTION("DRM gem ttm helpers");
MODULE_LICENSE("GPL");


@@ -245,22 +245,6 @@ void drm_gem_vram_put(struct drm_gem_vram_object *gbo)
}
EXPORT_SYMBOL(drm_gem_vram_put);
/**
* drm_gem_vram_mmap_offset() - Returns a GEM VRAM object's mmap offset
* @gbo: the GEM VRAM object
*
* See drm_vma_node_offset_addr() for more information.
*
* Returns:
* The buffer object's offset for userspace mappings on success, or
* 0 if no offset is allocated.
*/
u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo)
{
return drm_vma_node_offset_addr(&gbo->bo.base.vma_node);
}
EXPORT_SYMBOL(drm_gem_vram_mmap_offset);
static u64 drm_gem_vram_pg_offset(struct drm_gem_vram_object *gbo)
{
/* Keep TTM behavior for now, remove when drivers are audited */
@@ -638,38 +622,6 @@ int drm_gem_vram_driver_dumb_create(struct drm_file *file,
}
EXPORT_SYMBOL(drm_gem_vram_driver_dumb_create);
/**
* drm_gem_vram_driver_dumb_mmap_offset() - \
Implements &struct drm_driver.dumb_mmap_offset
* @file: DRM file pointer.
* @dev: DRM device.
* @handle: GEM handle
* @offset: Returns the mapping's memory offset on success
*
* Returns:
* 0 on success, or
* a negative errno code otherwise.
*/
int drm_gem_vram_driver_dumb_mmap_offset(struct drm_file *file,
struct drm_device *dev,
uint32_t handle, uint64_t *offset)
{
struct drm_gem_object *gem;
struct drm_gem_vram_object *gbo;
gem = drm_gem_object_lookup(file, handle);
if (!gem)
return -ENOENT;
gbo = drm_gem_vram_of_gem(gem);
*offset = drm_gem_vram_mmap_offset(gbo);
drm_gem_object_put(gem);
return 0;
}
EXPORT_SYMBOL(drm_gem_vram_driver_dumb_mmap_offset);
/*
* Helpers for struct drm_plane_helper_funcs
*/


@@ -56,7 +56,6 @@ void drm_lastclose(struct drm_device *dev);
/* drm_pci.c */
int drm_legacy_irq_by_busid(struct drm_device *dev, void *data,
struct drm_file *file_priv);
void drm_pci_agp_destroy(struct drm_device *dev);
int drm_pci_set_busid(struct drm_device *dev, struct drm_master *master);
#else
@@ -67,10 +66,6 @@ static inline int drm_legacy_irq_by_busid(struct drm_device *dev, void *data,
return -EINVAL;
}
static inline void drm_pci_agp_destroy(struct drm_device *dev)
{
}
static inline int drm_pci_set_busid(struct drm_device *dev,
struct drm_master *master)
{


@@ -31,7 +31,6 @@
#include <linux/ratelimit.h>
#include <linux/export.h>
#include <drm/drm_agpsupport.h>
#include <drm/drm_file.h>
#include <drm/drm_print.h>
@@ -619,6 +618,7 @@ static int compat_drm_dma(struct file *file, unsigned int cmd,
}
#endif
#if IS_ENABLED(CONFIG_DRM_LEGACY)
#if IS_ENABLED(CONFIG_AGP)
typedef struct drm_agp_mode32 {
u32 mode; /**< AGP mode */
@@ -633,7 +633,7 @@ static int compat_drm_agp_enable(struct file *file, unsigned int cmd,
if (get_user(mode.mode, &argp->mode))
return -EFAULT;
- return drm_ioctl_kernel(file, drm_agp_enable_ioctl, &mode,
+ return drm_ioctl_kernel(file, drm_legacy_agp_enable_ioctl, &mode,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
}
@@ -659,7 +659,7 @@ static int compat_drm_agp_info(struct file *file, unsigned int cmd,
struct drm_agp_info info;
int err;
- err = drm_ioctl_kernel(file, drm_agp_info_ioctl, &info, DRM_AUTH);
+ err = drm_ioctl_kernel(file, drm_legacy_agp_info_ioctl, &info, DRM_AUTH);
if (err)
return err;
@@ -698,7 +698,7 @@ static int compat_drm_agp_alloc(struct file *file, unsigned int cmd,
request.size = req32.size;
request.type = req32.type;
- err = drm_ioctl_kernel(file, drm_agp_alloc_ioctl, &request,
+ err = drm_ioctl_kernel(file, drm_legacy_agp_alloc_ioctl, &request,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
if (err)
return err;
@@ -706,7 +706,7 @@ static int compat_drm_agp_alloc(struct file *file, unsigned int cmd,
req32.handle = request.handle;
req32.physical = request.physical;
if (copy_to_user(argp, &req32, sizeof(req32))) {
- drm_ioctl_kernel(file, drm_agp_free_ioctl, &request,
+ drm_ioctl_kernel(file, drm_legacy_agp_free_ioctl, &request,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
return -EFAULT;
}
@@ -723,7 +723,7 @@ static int compat_drm_agp_free(struct file *file, unsigned int cmd,
if (get_user(request.handle, &argp->handle))
return -EFAULT;
- return drm_ioctl_kernel(file, drm_agp_free_ioctl, &request,
+ return drm_ioctl_kernel(file, drm_legacy_agp_free_ioctl, &request,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
}
@@ -744,7 +744,7 @@ static int compat_drm_agp_bind(struct file *file, unsigned int cmd,
request.handle = req32.handle;
request.offset = req32.offset;
- return drm_ioctl_kernel(file, drm_agp_bind_ioctl, &request,
+ return drm_ioctl_kernel(file, drm_legacy_agp_bind_ioctl, &request,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
}
@@ -757,12 +757,11 @@ static int compat_drm_agp_unbind(struct file *file, unsigned int cmd,
if (get_user(request.handle, &argp->handle))
return -EFAULT;
- return drm_ioctl_kernel(file, drm_agp_unbind_ioctl, &request,
+ return drm_ioctl_kernel(file, drm_legacy_agp_unbind_ioctl, &request,
DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY);
}
#endif /* CONFIG_AGP */
#if IS_ENABLED(CONFIG_DRM_LEGACY)
typedef struct drm_scatter_gather32 {
u32 size; /**< In bytes -- will round to page boundary */
u32 handle; /**< Used for mapping / unmapping */
@@ -935,7 +934,6 @@ static struct {
DRM_IOCTL32_DEF(DRM_IOCTL_GET_SAREA_CTX, compat_drm_getsareactx),
DRM_IOCTL32_DEF(DRM_IOCTL_RES_CTX, compat_drm_resctx),
DRM_IOCTL32_DEF(DRM_IOCTL_DMA, compat_drm_dma),
#endif
#if IS_ENABLED(CONFIG_AGP)
DRM_IOCTL32_DEF(DRM_IOCTL_AGP_ENABLE, compat_drm_agp_enable),
DRM_IOCTL32_DEF(DRM_IOCTL_AGP_INFO, compat_drm_agp_info),
@@ -944,6 +942,7 @@ static struct {
DRM_IOCTL32_DEF(DRM_IOCTL_AGP_BIND, compat_drm_agp_bind),
DRM_IOCTL32_DEF(DRM_IOCTL_AGP_UNBIND, compat_drm_agp_unbind),
#endif
#endif
#if IS_ENABLED(CONFIG_DRM_LEGACY)
DRM_IOCTL32_DEF(DRM_IOCTL_SG_ALLOC, compat_drm_sg_alloc),
DRM_IOCTL32_DEF(DRM_IOCTL_SG_FREE, compat_drm_sg_free),


@@ -33,7 +33,6 @@
#include <linux/pci.h>
#include <linux/uaccess.h>
#include <drm/drm_agpsupport.h>
#include <drm/drm_auth.h>
#include <drm/drm_crtc.h>
#include <drm/drm_drv.h>
@@ -627,14 +626,21 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_CONTROL, drm_legacy_irq_control, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
#if IS_ENABLED(CONFIG_AGP)
- DRM_IOCTL_DEF(DRM_IOCTL_AGP_ACQUIRE, drm_agp_acquire_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_AGP_RELEASE, drm_agp_release_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_AGP_ENABLE, drm_agp_enable_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_AGP_INFO, drm_agp_info_ioctl, DRM_AUTH),
- DRM_IOCTL_DEF(DRM_IOCTL_AGP_ALLOC, drm_agp_alloc_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_AGP_FREE, drm_agp_free_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_AGP_BIND, drm_agp_bind_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
- DRM_IOCTL_DEF(DRM_IOCTL_AGP_UNBIND, drm_agp_unbind_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_AGP_ACQUIRE, drm_legacy_agp_acquire_ioctl,
+ DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_AGP_RELEASE, drm_legacy_agp_release_ioctl,
+ DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_AGP_ENABLE, drm_legacy_agp_enable_ioctl,
+ DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_AGP_INFO, drm_legacy_agp_info_ioctl, DRM_AUTH),
+ DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_AGP_ALLOC, drm_legacy_agp_alloc_ioctl,
+ DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_AGP_FREE, drm_legacy_agp_free_ioctl,
+ DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_AGP_BIND, drm_legacy_agp_bind_ioctl,
+ DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
+ DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_AGP_UNBIND, drm_legacy_agp_unbind_ioctl,
+ DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
#endif
DRM_LEGACY_IOCTL_DEF(DRM_IOCTL_SG_ALLOC, drm_legacy_sg_alloc, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),


@@ -148,6 +148,30 @@ struct drm_agp_mem {
struct list_head head;
};
/* drm_agpsupport.c */
#if IS_ENABLED(CONFIG_DRM_LEGACY) && IS_ENABLED(CONFIG_AGP)
void drm_legacy_agp_clear(struct drm_device *dev);
int drm_legacy_agp_acquire_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_agp_release_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_agp_enable_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_agp_info_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_agp_alloc_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_agp_free_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_agp_unbind_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_agp_bind_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
#else
static inline void drm_legacy_agp_clear(struct drm_device *dev) {}
#endif
/* drm_lock.c */
#if IS_ENABLED(CONFIG_DRM_LEGACY)
int drm_legacy_lock(struct drm_device *d, void *v, struct drm_file *f);
@@ -211,4 +235,10 @@ void drm_master_legacy_init(struct drm_master *master);
static inline void drm_master_legacy_init(struct drm_master *master) {}
#endif
#if IS_ENABLED(CONFIG_DRM_LEGACY) && IS_ENABLED(CONFIG_PCI)
void drm_legacy_pci_agp_destroy(struct drm_device *dev);
#else
static inline void drm_legacy_pci_agp_destroy(struct drm_device *dev) {}
#endif
#endif /* __DRM_LEGACY_H__ */


@@ -33,7 +33,6 @@
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <drm/drm_agpsupport.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_irq.h>


@@ -38,7 +38,6 @@
#include <linux/pci.h>
#include <linux/vmalloc.h>
#include <drm/drm_agpsupport.h>
#include <drm/drm_cache.h>
#include <drm/drm_device.h>


@@ -1176,16 +1176,11 @@ enum drm_mode_status
drm_mode_validate_ycbcr420(const struct drm_display_mode *mode,
struct drm_connector *connector)
{
- u8 vic = drm_match_cea_mode(mode);
- enum drm_mode_status status = MODE_OK;
- struct drm_hdmi_info *hdmi = &connector->display_info.hdmi;
- if (test_bit(vic, hdmi->y420_vdb_modes)) {
- if (!connector->ycbcr_420_allowed)
- status = MODE_NO_420;
- }
- return status;
+ if (!connector->ycbcr_420_allowed &&
+ drm_mode_is_420_only(&connector->display_info, mode))
+ return MODE_NO_420;
+ return MODE_OK;
}
EXPORT_SYMBOL(drm_mode_validate_ycbcr420);


@@ -30,7 +30,6 @@
#include <linux/slab.h>
#include <drm/drm.h>
#include <drm/drm_agpsupport.h>
#include <drm/drm_drv.h>
#include <drm/drm_print.h>
@@ -41,64 +40,6 @@
/* List of devices hanging off drivers with stealth attach. */
static LIST_HEAD(legacy_dev_list);
static DEFINE_MUTEX(legacy_dev_list_lock);
/**
* drm_pci_alloc - Allocate a PCI consistent memory block, for DMA.
* @dev: DRM device
* @size: size of block to allocate
* @align: alignment of block
*
* FIXME: This is a needless abstraction of the Linux dma-api and should be
* removed.
*
* Return: A handle to the allocated memory block on success or NULL on
* failure.
*/
drm_dma_handle_t *drm_pci_alloc(struct drm_device * dev, size_t size, size_t align)
{
drm_dma_handle_t *dmah;
/* pci_alloc_consistent only guarantees alignment to the smallest
* PAGE_SIZE order which is greater than or equal to the requested size.
* Return NULL here for now to make sure nobody tries for larger alignment
*/
if (align > size)
return NULL;
dmah = kmalloc(sizeof(drm_dma_handle_t), GFP_KERNEL);
if (!dmah)
return NULL;
dmah->size = size;
dmah->vaddr = dma_alloc_coherent(dev->dev, size,
&dmah->busaddr,
GFP_KERNEL);
if (dmah->vaddr == NULL) {
kfree(dmah);
return NULL;
}
return dmah;
}
EXPORT_SYMBOL(drm_pci_alloc);
/**
* drm_pci_free - Free a PCI consistent memory block
* @dev: DRM device
* @dmah: handle to memory block
*
* FIXME: This is a needless abstraction of the Linux dma-api and should be
* removed.
*/
void drm_pci_free(struct drm_device * dev, drm_dma_handle_t * dmah)
{
dma_free_coherent(dev->dev, dmah->size, dmah->vaddr,
dmah->busaddr);
kfree(dmah);
}
EXPORT_SYMBOL(drm_pci_free);
#endif
static int drm_get_pci_domain(struct drm_device *dev)
@@ -177,7 +118,9 @@ int drm_legacy_irq_by_busid(struct drm_device *dev, void *data,
return drm_pci_irq_by_busid(dev, p);
}
-void drm_pci_agp_destroy(struct drm_device *dev)
+#ifdef CONFIG_DRM_LEGACY
+void drm_legacy_pci_agp_destroy(struct drm_device *dev)
{
if (dev->agp) {
arch_phys_wc_del(dev->agp->agp_mtrr);
@@ -187,13 +130,11 @@ void drm_pci_agp_destroy(struct drm_device *dev)
}
}
-#ifdef CONFIG_DRM_LEGACY
-static void drm_pci_agp_init(struct drm_device *dev)
+static void drm_legacy_pci_agp_init(struct drm_device *dev)
{
if (drm_core_check_feature(dev, DRIVER_USE_AGP)) {
if (pci_find_capability(to_pci_dev(dev->dev), PCI_CAP_ID_AGP))
- dev->agp = drm_agp_init(dev);
+ dev->agp = drm_legacy_agp_init(dev);
if (dev->agp) {
dev->agp->agp_mtrr = arch_phys_wc_add(
dev->agp->agp_info.aper_base,
@@ -203,7 +144,7 @@ static void drm_pci_agp_init(struct drm_device *dev)
}
}
-static int drm_get_pci_dev(struct pci_dev *pdev,
+static int drm_legacy_get_pci_dev(struct pci_dev *pdev,
const struct pci_device_id *ent,
const struct drm_driver *driver)
{
@@ -220,7 +161,6 @@ static int drm_get_pci_dev(struct pci_dev *pdev,
if (ret)
goto err_free;
dev->pdev = pdev;
#ifdef __alpha__
dev->hose = pdev->sysdata;
#endif
@@ -228,7 +168,7 @@ static int drm_get_pci_dev(struct pci_dev *pdev,
if (drm_core_check_feature(dev, DRIVER_MODESET))
pci_set_drvdata(pdev, dev);
- drm_pci_agp_init(dev);
+ drm_legacy_pci_agp_init(dev);
ret = drm_dev_register(dev, ent->driver_data);
if (ret)
@@ -243,7 +183,7 @@ static int drm_get_pci_dev(struct pci_dev *pdev,
return 0;
err_agp:
- drm_pci_agp_destroy(dev);
+ drm_legacy_pci_agp_destroy(dev);
pci_disable_device(pdev);
err_free:
drm_dev_put(dev);
@@ -290,7 +230,7 @@ int drm_legacy_pci_init(const struct drm_driver *driver,
/* stealth mode requires a manual probe */
pci_dev_get(pdev);
- drm_get_pci_dev(pdev, pid, driver);
+ drm_legacy_get_pci_dev(pdev, pid, driver);
}
}
return 0;


@@ -128,6 +128,13 @@
* pairs supported by this plane. The blob is a struct
* drm_format_modifier_blob. Without this property the plane doesn't
* support buffers with modifiers. Userspace cannot change this property.
*
* Note that userspace can check the &DRM_CAP_ADDFB2_MODIFIERS driver
* capability for general modifier support. If this flag is set then every
* plane will have the IN_FORMATS property, even when it only supports
* DRM_FORMAT_MOD_LINEAR. Before linux kernel release v5.1 there have been
* various bugs in this area with inconsistencies between the capability
* flag and per-plane properties.
*/
static unsigned int drm_num_planes(struct drm_device *dev)
@@ -277,8 +284,14 @@ static int __drm_universal_plane_init(struct drm_device *dev,
format_modifier_count++;
}
- if (format_modifier_count)
+ /* autoset the cap and check for consistency across all planes */
+ if (format_modifier_count) {
+ drm_WARN_ON(dev, !config->allow_fb_modifiers &&
+ !list_empty(&config->plane_list));
config->allow_fb_modifiers = true;
+ } else {
+ drm_WARN_ON(dev, config->allow_fb_modifiers);
+ }
plane->modifier_count = format_modifier_count;
plane->modifiers = kmalloc_array(format_modifier_count,
@@ -360,6 +373,9 @@ static int __drm_universal_plane_init(struct drm_device *dev,
* drm_universal_plane_init() to let the DRM managed resource infrastructure
* take care of cleanup and deallocation.
*
* Drivers supporting modifiers must set @format_modifiers on all their planes,
* even those that only support DRM_FORMAT_MOD_LINEAR.
*
* Returns:
* Zero on success, error code on failure.
*/


@@ -45,8 +45,6 @@
#endif
#include <linux/mem_encrypt.h>
#include <drm/drm_agpsupport.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>


@@ -177,7 +177,5 @@ void exynos_drm_mode_config_init(struct drm_device *dev)
dev->mode_config.funcs = &exynos_drm_mode_config_funcs;
dev->mode_config.helper_private = &exynos_drm_mode_config_helpers;
dev->mode_config.allow_fb_modifiers = true;
dev->mode_config.normalize_zpos = true;
}


@@ -1358,7 +1358,6 @@ cdv_intel_dp_set_link_train(struct gma_encoder *encoder,
uint32_t dp_reg_value,
uint8_t dp_train_pat)
{
struct drm_device *dev = encoder->base.dev;
int ret;
struct cdv_intel_dp *intel_dp = encoder->dev_priv;
@@ -1384,7 +1383,6 @@ static bool
cdv_intel_dplink_set_level(struct gma_encoder *encoder,
uint8_t dp_train_pat)
{
int ret;
struct cdv_intel_dp *intel_dp = encoder->dev_priv;


@@ -21,7 +21,7 @@
#include "psb_intel_drv.h"
#include "psb_intel_reg.h"
-/**
+/*
* LVDS I2C backlight control macros
*/
#define BRIGHTNESS_MAX_LEVEL 100


@@ -379,7 +379,7 @@ static const struct i2c_algorithm gmbus_algorithm = {
};
/**
- * intel_gmbus_setup - instantiate all Intel i2c GMBuses
+ * gma_intel_setup_gmbus() - instantiate all Intel i2c GMBuses
* @dev: DRM device
*/
int gma_intel_setup_gmbus(struct drm_device *dev)


@@ -646,7 +646,7 @@ extern u32 psb_get_vblank_counter(struct drm_crtc *crtc);
extern int psbfb_probed(struct drm_device *dev);
extern int psbfb_remove(struct drm_device *dev,
struct drm_framebuffer *fb);
-/* accel_2d.c */
+/* psb_drv.c */
extern void psb_spank(struct drm_psb_private *dev_priv);
/* psb_reset.c */


@@ -86,7 +86,7 @@ static inline u8 gud_from_fourcc(u32 fourcc)
return GUD_PIXEL_FORMAT_XRGB8888;
case DRM_FORMAT_ARGB8888:
return GUD_PIXEL_FORMAT_ARGB8888;
- };
+ }
return 0;
}
@@ -104,7 +104,7 @@ static inline u32 gud_to_fourcc(u8 format)
return DRM_FORMAT_XRGB8888;
case GUD_PIXEL_FORMAT_ARGB8888:
return DRM_FORMAT_ARGB8888;
- };
+ }
return 0;
}


@@ -14,6 +14,7 @@
#include <linux/module.h>
#include <linux/pci.h>
#include <drm/drm_aperture.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem_framebuffer_helper.h>
@@ -60,7 +61,7 @@ static const struct drm_driver hibmc_driver = {
.minor = 0,
.debugfs_init = drm_vram_mm_debugfs_init,
.dumb_create = hibmc_dumb_create,
- .dumb_map_offset = drm_gem_vram_driver_dumb_mmap_offset,
+ .dumb_map_offset = drm_gem_ttm_dumb_map_offset,
.gem_prime_mmap = drm_gem_prime_mmap,
.irq_handler = hibmc_drm_interrupt,
};
@@ -313,8 +314,7 @@ static int hibmc_pci_probe(struct pci_dev *pdev,
struct drm_device *dev;
int ret;
- ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev,
- "hibmcdrmfb");
+ ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, "hibmcdrmfb");
if (ret)
return ret;


@@ -34,7 +34,6 @@
#include <linux/mman.h>
#include <linux/pci.h>
#include <drm/drm_agpsupport.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
@@ -220,7 +219,7 @@ static int i810_dma_cleanup(struct drm_device *dev)
if (dev_priv->ring.virtual_start)
drm_legacy_ioremapfree(&dev_priv->ring.map, dev);
if (dev_priv->hw_status_page) {
- dma_free_coherent(&dev->pdev->dev, PAGE_SIZE,
+ dma_free_coherent(dev->dev, PAGE_SIZE,
dev_priv->hw_status_page,
dev_priv->dma_status_page);
}
@@ -398,7 +397,7 @@ static int i810_dma_initialize(struct drm_device *dev,
/* Program Hardware Status Page */
dev_priv->hw_status_page =
- dma_alloc_coherent(&dev->pdev->dev, PAGE_SIZE,
+ dma_alloc_coherent(dev->dev, PAGE_SIZE,
&dev_priv->dma_status_page, GFP_KERNEL);
if (!dev_priv->hw_status_page) {
dev->dev_private = (void *)dev_priv;
@@ -1197,7 +1196,9 @@ static int i810_flip_bufs(struct drm_device *dev, void *data,
int i810_driver_load(struct drm_device *dev, unsigned long flags)
{
- dev->agp = drm_agp_init(dev);
+ struct pci_dev *pdev = to_pci_dev(dev->dev);
+ dev->agp = drm_legacy_agp_init(dev);
if (dev->agp) {
dev->agp->agp_mtrr = arch_phys_wc_add(
dev->agp->agp_info.aper_base,
@@ -1209,7 +1210,7 @@ int i810_driver_load(struct drm_device *dev, unsigned long flags)
if (!dev->agp)
return -EINVAL;
- pci_set_master(dev->pdev);
+ pci_set_master(pdev);
return 0;
}


@@ -109,16 +109,6 @@ int intel_digital_connector_atomic_set_property(struct drm_connector *connector,
return -EINVAL;
}
static bool blob_equal(const struct drm_property_blob *a,
const struct drm_property_blob *b)
{
if (a && b)
return a->length == b->length &&
!memcmp(a->data, b->data, a->length);
return !a == !b;
}
int intel_digital_connector_atomic_check(struct drm_connector *conn,
struct drm_atomic_state *state)
{
@@ -149,8 +139,7 @@ int intel_digital_connector_atomic_check(struct drm_connector *conn,
new_conn_state->base.picture_aspect_ratio != old_conn_state->base.picture_aspect_ratio ||
new_conn_state->base.content_type != old_conn_state->base.content_type ||
new_conn_state->base.scaling_mode != old_conn_state->base.scaling_mode ||
- !blob_equal(new_conn_state->base.hdr_output_metadata,
- old_conn_state->base.hdr_output_metadata))
+ !drm_connector_atomic_hdr_metadata_equal(old_state, new_state))
crtc_state->mode_changed = true;
return 0;


@@ -282,14 +282,12 @@ void
intel_attach_hdmi_colorspace_property(struct drm_connector *connector)
{
if (!drm_mode_create_hdmi_colorspace_property(connector))
- drm_object_attach_property(&connector->base,
- connector->colorspace_property, 0);
+ drm_connector_attach_colorspace_property(connector);
}
void
intel_attach_dp_colorspace_property(struct drm_connector *connector)
{
if (!drm_mode_create_dp_colorspace_property(connector))
- drm_object_attach_property(&connector->base,
- connector->colorspace_property, 0);
+ drm_connector_attach_colorspace_property(connector);
}


@@ -11705,8 +11705,6 @@ static void intel_mode_config_init(struct drm_i915_private *i915)
mode_config->preferred_depth = 24;
mode_config->prefer_shadow = 1;
mode_config->allow_fb_modifiers = true;
mode_config->funcs = &intel_mode_funcs;
mode_config->async_page_flip = has_async_flips(i915);

Some files were not shown because too many files have changed in this diff.