drm/display/dp_mst: Move all payload info into the atomic state
Now that we've gotten rid of the non-atomic MST users left over in the kernel, we can finally get rid of all of the legacy payload code and move as much as possible into the MST atomic state structs. The main purpose of this is to make the MST code a lot less confusing to work on, as there's a lot of duplicated logic that doesn't really need to be there. As well, this should make introducing features like fallback link retraining and DSC support far easier.

Since the old payload code was pretty gnarly and there are a lot of changes here, I expect this might be a bit difficult to review. So, to make things as easy as possible for reviewers, I'll sum up how both the old and the new code work here (it took me a while to figure this out too!).

The old MST code worked by maintaining two different payload tables: proposed_vcpis and payloads. proposed_vcpis held the modified payloads we wanted to push to the topology, while payloads held the payload table that was currently programmed in hardware. Modifications to proposed_vcpis were made through drm_dp_mst_allocate_vcpi(), drm_dp_mst_deallocate_vcpi(), and drm_dp_mst_reset_vcpi_slots(), and were then pushed out via drm_dp_update_payload_part1() and drm_dp_update_payload_part2().

It's important to note how adding and removing VC payloads actually worked in drm_dp_update_payload_part1(). When a VC payload is removed from the VC table, every VC payload that comes after the removed payload's slots must have its time slots shifted towards the start of the table. The old code handled this by looping through the entire payload table and recomputing the start slot of every payload in the topology from scratch. While very much overkill, this ended up doing the right thing, because the VCPIs for payloads were always ordered from first to last starting time slot.

It's also important to note that drm_dp_update_payload_part2() wasn't limited to updating a single payload: a driver could use it to queue up multiple payload changes so that as many of them as possible were sent before waiting for the ACT. This is technically not against the spec, but as Wayne Lin has pointed out, it's not consistently implemented correctly in hubs, so it might as well be. The part 2 step is otherwise basically the same between the old and the new code, save for the fact that we no longer have a second step for deleting payloads, and thus rename it to drm_dp_add_payload_part2().

The new payload code stores all of the current payload info within the MST atomic state and computes as much of the state as possible ahead of time. The one exception is the starting time slots of payloads, which can't be determined at atomic check time, since they vary depending on the order in which CRTCs are enabled in the atomic state, and that order varies from driver to driver. The start slots are still stored in the atomic MST state, but are only copied over from the old MST state at atomic commit time; likewise, that is when new start slots are determined.

Adding and removing payloads now works much more closely to how things are described in the spec. When we delete a payload, we loop through the current list of payloads and update the start slots of every payload whose time slots came after the payload we just deleted, as sketched below.
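A minimal sketch of that removal bookkeeping, with the field and list names taken from the new structs further down in this patch. The real version lives in the MST helpers, whose diff is suppressed below as too large, so treat this as an illustration rather than the helper itself:

	#include <drm/display/drm_dp_mst_helper.h>

	/* Illustrative only: when `removed` leaves the VC table, every payload
	 * that started after it slides towards the start of the table, and the
	 * next free start slot moves back by the number of freed time slots. */
	static void example_shift_start_slots(struct drm_dp_mst_topology_mgr *mgr,
					      struct drm_dp_mst_topology_state *mst_state,
					      const struct drm_dp_mst_atomic_payload *removed)
	{
		struct drm_dp_mst_atomic_payload *pos;

		list_for_each_entry(pos, &mst_state->payloads, next) {
			if (pos != removed && pos->vc_start_slot > removed->vc_start_slot)
				pos->vc_start_slot -= removed->time_slots;
		}
		mgr->next_start_slot -= removed->time_slots;
	}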
Determining the starting time slots for new payloads is done by simply keeping track of where the end of the VC table is in drm_dp_mst_topology_mgr->next_start_slot. Additionally, it's worth noting that we no longer have a single update_payload() function. Instead, we now have drm_dp_add_payload_part1(), drm_dp_add_payload_part2(), and drm_dp_remove_payload(). As such, it's now left up to the driver to figure out when to add or remove payloads. The driver already knows when it's disabling or enabling CRTCs, so it also already knows when payloads should be added or removed.

Changes since v1:
* Refactor around all of the completely dead code changes that are happening in amdgpu for some reason when they really shouldn't even be there in the first place… :\
* Remove mention of sending one ACT per series of payload updates. As Wayne Lin pointed out, there are apparently hubs on the market that don't work correctly with this scheme and require a separate ACT per payload update.
* Fix accidental drop of mst_mgr.lock - Wayne Lin
* Remove mentions of allowing multiple ACT updates per payload change, mention that this is a result of vendors not consistently supporting this part of the spec and requiring a unique ACT for each payload change.
* Get rid of reference to drm_dp_mst_port in DC - turns out I just got myself confused by DC and we don't actually need this.

Changes since v2:
* Get rid of the fix for not sending payload deallocations if ddps=0 and just go back to Wayne's fix

Signed-off-by: Lyude Paul <lyude@redhat.com>
Cc: Wayne Lin <Wayne.Lin@amd.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Fangzhi Zuo <Jerry.Zuo@amd.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sean Paul <sean@poorly.run>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220817193847.557945-18-lyude@redhat.com
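To make the new division of labor concrete, here is a condensed, hedged sketch of the driver-facing flow, pieced together from the i915 and nouveau changes below. The example_* wrappers and their parameters are hypothetical glue; only the drm_dp_*/drm_atomic_* calls are the real helpers, and error handling is trimmed:

	#include <drm/drm_atomic.h>
	#include <drm/display/drm_dp_mst_helper.h>

	/* Atomic check: reserve time slots in the MST atomic state. pbn_div now
	 * lives in the topology state and is filled out by the driver. */
	static int example_atomic_check(struct drm_atomic_state *state,
					struct drm_dp_mst_topology_mgr *mgr,
					struct drm_dp_mst_port *port,
					int pbn, int link_rate, int lane_count)
	{
		struct drm_dp_mst_topology_state *mst_state =
			drm_atomic_get_mst_topology_state(state, mgr);
		int slots;

		if (IS_ERR(mst_state))
			return PTR_ERR(mst_state);

		if (!mst_state->pbn_div)
			mst_state->pbn_div = drm_dp_get_vc_payload_bw(mgr, link_rate, lane_count);

		slots = drm_dp_atomic_find_time_slots(state, mgr, port, pbn);
		return slots < 0 ? slots : 0;
	}

	/* Atomic commit, enable path: the driver decides when payloads appear,
	 * since it already knows when it is enabling a CRTC. */
	static void example_enable(struct drm_atomic_state *state,
				   struct drm_dp_mst_topology_mgr *mgr,
				   struct drm_dp_mst_port *port)
	{
		struct drm_dp_mst_topology_state *mst_state =
			drm_atomic_get_new_mst_topology_state(state, mgr);
		struct drm_dp_mst_atomic_payload *payload =
			drm_atomic_get_mst_payload_state(mst_state, port);

		drm_dp_add_payload_part1(mgr, mst_state, payload);
		/* ... enable the head/transcoder, then wait for the ACT ... */
		drm_dp_add_payload_part2(mgr, state, payload);
	}

	/* Atomic commit, disable path: a single step, since deleting payloads
	 * no longer has a part 2. */
	static void example_disable(struct drm_atomic_state *state,
				    struct drm_dp_mst_topology_mgr *mgr,
				    struct drm_dp_mst_port *port)
	{
		struct drm_dp_mst_topology_state *mst_state =
			drm_atomic_get_new_mst_topology_state(state, mgr);

		drm_dp_remove_payload(mgr, mst_state,
				      drm_atomic_get_mst_payload_state(mst_state, port));
	}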
commit 4d07b0bc40 (parent 01ad1d9c28)
@@ -6385,6 +6385,7 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
 	const struct drm_display_mode *adjusted_mode = &crtc_state->adjusted_mode;
 	struct drm_dp_mst_topology_mgr *mst_mgr;
 	struct drm_dp_mst_port *mst_port;
+	struct drm_dp_mst_topology_state *mst_state;
 	enum dc_color_depth color_depth;
 	int clock, bpp = 0;
 	bool is_y420 = false;
@@ -6398,6 +6399,13 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
 	if (!crtc_state->connectors_changed && !crtc_state->mode_changed)
 		return 0;
 
+	mst_state = drm_atomic_get_mst_topology_state(state, mst_mgr);
+	if (IS_ERR(mst_state))
+		return PTR_ERR(mst_state);
+
+	if (!mst_state->pbn_div)
+		mst_state->pbn_div = dm_mst_get_pbn_divider(aconnector->mst_port->dc_link);
+
 	if (!state->duplicated) {
 		int max_bpc = conn_state->max_requested_bpc;
 		is_y420 = drm_mode_is_420_also(&connector->display_info, adjusted_mode) &&
@@ -6409,11 +6417,10 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
 		clock = adjusted_mode->clock;
 		dm_new_connector_state->pbn = drm_dp_calc_pbn_mode(clock, bpp, false);
 	}
-	dm_new_connector_state->vcpi_slots = drm_dp_atomic_find_time_slots(state,
-									   mst_mgr,
-									   mst_port,
-									   dm_new_connector_state->pbn,
-									   dm_mst_get_pbn_divider(aconnector->dc_link));
+
+	dm_new_connector_state->vcpi_slots =
+		drm_dp_atomic_find_time_slots(state, mst_mgr, mst_port,
+					      dm_new_connector_state->pbn);
 	if (dm_new_connector_state->vcpi_slots < 0) {
 		DRM_DEBUG_ATOMIC("failed finding vcpi slots: %d\n", (int)dm_new_connector_state->vcpi_slots);
 		return dm_new_connector_state->vcpi_slots;
@@ -6483,18 +6490,12 @@ static int dm_update_mst_vcpi_slots_for_dsc(struct drm_atomic_state *state,
 			dm_conn_state->pbn = pbn;
 			dm_conn_state->vcpi_slots = slot_num;
 
-			drm_dp_mst_atomic_enable_dsc(state,
-						     aconnector->port,
-						     dm_conn_state->pbn,
-						     0,
+			drm_dp_mst_atomic_enable_dsc(state, aconnector->port, dm_conn_state->pbn,
 						     false);
 			continue;
 		}
 
-		vcpi = drm_dp_mst_atomic_enable_dsc(state,
-						    aconnector->port,
-						    pbn, pbn_div,
-						    true);
+		vcpi = drm_dp_mst_atomic_enable_dsc(state, aconnector->port, pbn, true);
 		if (vcpi < 0)
 			return vcpi;
 
@@ -9336,8 +9337,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
 	struct dm_crtc_state *dm_old_crtc_state, *dm_new_crtc_state;
 #if defined(CONFIG_DRM_AMD_DC_DCN)
 	struct dsc_mst_fairness_vars vars[MAX_PIPES];
-	struct drm_dp_mst_topology_state *mst_state;
-	struct drm_dp_mst_topology_mgr *mgr;
 #endif
 
 	trace_amdgpu_dm_atomic_check_begin(state);
@@ -9576,33 +9575,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
 		lock_and_validation_needed = true;
 	}
 
-#if defined(CONFIG_DRM_AMD_DC_DCN)
-	/* set the slot info for each mst_state based on the link encoding format */
-	for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) {
-		struct amdgpu_dm_connector *aconnector;
-		struct drm_connector *connector;
-		struct drm_connector_list_iter iter;
-		u8 link_coding_cap;
-
-		if (!mgr->mst_state )
-			continue;
-
-		drm_connector_list_iter_begin(dev, &iter);
-		drm_for_each_connector_iter(connector, &iter) {
-			int id = connector->index;
-
-			if (id == mst_state->mgr->conn_base_id) {
-				aconnector = to_amdgpu_dm_connector(connector);
-				link_coding_cap = dc_link_dp_mst_decide_link_encoding_format(aconnector->dc_link);
-				drm_dp_mst_update_slots(mst_state, link_coding_cap);
-
-				break;
-			}
-		}
-		drm_connector_list_iter_end(&iter);
-
-	}
-#endif
 	/**
 	 * Streams and planes are reset when there are changes that affect
 	 * bandwidth. Anything that affects bandwidth needs to go through
@@ -27,6 +27,7 @@
 #include <linux/acpi.h>
 #include <linux/i2c.h>
 
+#include <drm/drm_atomic.h>
 #include <drm/drm_probe_helper.h>
 #include <drm/amdgpu_drm.h>
 #include <drm/drm_edid.h>
@@ -154,40 +155,27 @@ enum dc_edid_status dm_helpers_parse_edid_caps(
 }
 
 static void
-fill_dc_mst_payload_table_from_drm(struct amdgpu_dm_connector *aconnector,
-				   struct dc_dp_mst_stream_allocation_table *proposed_table)
+fill_dc_mst_payload_table_from_drm(struct drm_dp_mst_topology_state *mst_state,
+				   struct amdgpu_dm_connector *aconnector,
+				   struct dc_dp_mst_stream_allocation_table *table)
 {
-	int i;
-	struct drm_dp_mst_topology_mgr *mst_mgr =
-			&aconnector->mst_port->mst_mgr;
-
-	mutex_lock(&mst_mgr->payload_lock);
-
-	proposed_table->stream_count = 0;
-
-	/* number of active streams */
-	for (i = 0; i < mst_mgr->max_payloads; i++) {
-		if (mst_mgr->payloads[i].num_slots == 0)
-			break; /* end of vcp_id table */
+	struct dc_dp_mst_stream_allocation_table new_table = { 0 };
+	struct dc_dp_mst_stream_allocation *sa;
+	struct drm_dp_mst_atomic_payload *payload;
 
-		ASSERT(mst_mgr->payloads[i].payload_state !=
-				DP_PAYLOAD_DELETE_LOCAL);
+	/* Fill payload info*/
+	list_for_each_entry(payload, &mst_state->payloads, next) {
+		if (payload->delete)
+			continue;
 
-		if (mst_mgr->payloads[i].payload_state == DP_PAYLOAD_LOCAL ||
-			mst_mgr->payloads[i].payload_state ==
-					DP_PAYLOAD_REMOTE) {
-
-			struct dc_dp_mst_stream_allocation *sa =
-					&proposed_table->stream_allocations[
-						proposed_table->stream_count];
-
-			sa->slot_count = mst_mgr->payloads[i].num_slots;
-			sa->vcp_id = mst_mgr->proposed_vcpis[i]->vcpi;
-			proposed_table->stream_count++;
-		}
+		sa = &new_table.stream_allocations[new_table.stream_count];
+		sa->slot_count = payload->time_slots;
+		sa->vcp_id = payload->vcpi;
+		new_table.stream_count++;
 	}
 
-	mutex_unlock(&mst_mgr->payload_lock);
+	/* Overwrite the old table */
+	*table = new_table;
 }
 
 void dm_helpers_dp_update_branch_info(
@@ -205,11 +193,9 @@ bool dm_helpers_dp_mst_write_payload_allocation_table(
 		bool enable)
 {
 	struct amdgpu_dm_connector *aconnector;
-	struct dm_connector_state *dm_conn_state;
+	struct drm_dp_mst_topology_state *mst_state;
+	struct drm_dp_mst_atomic_payload *payload;
 	struct drm_dp_mst_topology_mgr *mst_mgr;
-	struct drm_dp_mst_port *mst_port;
-	bool ret;
-	u8 link_coding_cap = DP_8b_10b_ENCODING;
 
 	aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context;
 	/* Accessing the connector state is required for vcpi_slots allocation
@@ -220,40 +206,21 @@ bool dm_helpers_dp_mst_write_payload_allocation_table(
 	if (!aconnector || !aconnector->mst_port)
 		return false;
 
-	dm_conn_state = to_dm_connector_state(aconnector->base.state);
-
 	mst_mgr = &aconnector->mst_port->mst_mgr;
-
-	if (!mst_mgr->mst_state)
-		return false;
-
-	mst_port = aconnector->port;
-
-#if defined(CONFIG_DRM_AMD_DC_DCN)
-	link_coding_cap = dc_link_dp_mst_decide_link_encoding_format(aconnector->dc_link);
-#endif
-
-	if (enable) {
-
-		ret = drm_dp_mst_allocate_vcpi(mst_mgr, mst_port,
-					       dm_conn_state->pbn,
-					       dm_conn_state->vcpi_slots);
-		if (!ret)
-			return false;
-
-	} else {
-		drm_dp_mst_reset_vcpi_slots(mst_mgr, mst_port);
-	}
+	mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state);
 
 	/* It's OK for this to fail */
-	drm_dp_update_payload_part1(mst_mgr, (link_coding_cap == DP_CAP_ANSI_128B132B) ? 0:1);
+	payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->port);
+	if (enable)
+		drm_dp_add_payload_part1(mst_mgr, mst_state, payload);
+	else
+		drm_dp_remove_payload(mst_mgr, mst_state, payload);
 
 	/* mst_mgr->->payloads are VC payload notify MST branch using DPCD or
 	 * AUX message. The sequence is slot 1-63 allocated sequence for each
 	 * stream. AMD ASIC stream slot allocation should follow the same
 	 * sequence. copy DRM MST allocation to dc */
-
-	fill_dc_mst_payload_table_from_drm(aconnector, proposed_table);
+	fill_dc_mst_payload_table_from_drm(mst_state, aconnector, proposed_table);
 
 	return true;
 }
@@ -310,8 +277,9 @@ bool dm_helpers_dp_mst_send_payload_allocation(
 		bool enable)
 {
 	struct amdgpu_dm_connector *aconnector;
+	struct drm_dp_mst_topology_state *mst_state;
 	struct drm_dp_mst_topology_mgr *mst_mgr;
-	struct drm_dp_mst_port *mst_port;
+	struct drm_dp_mst_atomic_payload *payload;
 	enum mst_progress_status set_flag = MST_ALLOCATE_NEW_PAYLOAD;
 	enum mst_progress_status clr_flag = MST_CLEAR_ALLOCATED_PAYLOAD;
 
@@ -320,19 +288,16 @@ bool dm_helpers_dp_mst_send_payload_allocation(
 	if (!aconnector || !aconnector->mst_port)
 		return false;
 
-	mst_port = aconnector->port;
-
 	mst_mgr = &aconnector->mst_port->mst_mgr;
+	mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state);
 
-	if (!mst_mgr->mst_state)
-		return false;
-
+	payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->port);
 	if (!enable) {
 		set_flag = MST_CLEAR_ALLOCATED_PAYLOAD;
 		clr_flag = MST_ALLOCATE_NEW_PAYLOAD;
 	}
 
-	if (drm_dp_update_payload_part2(mst_mgr)) {
+	if (enable && drm_dp_add_payload_part2(mst_mgr, mst_state->base.state, payload)) {
 		amdgpu_dm_set_mst_status(&aconnector->mst_status,
 			set_flag, false);
 	} else {
@@ -342,9 +307,6 @@ bool dm_helpers_dp_mst_send_payload_allocation(
 			clr_flag, false);
 	}
 
-	if (!enable)
-		drm_dp_mst_deallocate_vcpi(mst_mgr, mst_port);
-
 	return true;
 }
@@ -597,15 +597,8 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
 
 	dc_link_dp_get_max_link_enc_cap(aconnector->dc_link, &max_link_enc_cap);
 	aconnector->mst_mgr.cbs = &dm_mst_cbs;
-	drm_dp_mst_topology_mgr_init(
-		&aconnector->mst_mgr,
-		adev_to_drm(dm->adev),
-		&aconnector->dm_dp_aux.aux,
-		16,
-		4,
-		max_link_enc_cap.lane_count,
-		drm_dp_bw_code_to_link_rate(max_link_enc_cap.link_rate),
-		aconnector->connector_id);
+	drm_dp_mst_topology_mgr_init(&aconnector->mst_mgr, adev_to_drm(dm->adev),
+				     &aconnector->dm_dp_aux.aux, 16, 4, aconnector->connector_id);
 
 	drm_connector_attach_dp_subconnector_property(&aconnector->base);
 }
@@ -710,6 +703,7 @@ static int bpp_x16_from_pbn(struct dsc_mst_fairness_params param, int pbn)
 }
 
 static bool increase_dsc_bpp(struct drm_atomic_state *state,
+			     struct drm_dp_mst_topology_state *mst_state,
 			     struct dc_link *dc_link,
 			     struct dsc_mst_fairness_params *params,
 			     struct dsc_mst_fairness_vars *vars,
@@ -722,12 +716,9 @@ static bool increase_dsc_bpp(struct drm_atomic_state *state,
 	int min_initial_slack;
 	int next_index;
 	int remaining_to_increase = 0;
-	int pbn_per_timeslot;
 	int link_timeslots_used;
 	int fair_pbn_alloc;
 
-	pbn_per_timeslot = dm_mst_get_pbn_divider(dc_link);
-
 	for (i = 0; i < count; i++) {
 		if (vars[i + k].dsc_enabled) {
 			initial_slack[i] =
@@ -758,17 +749,17 @@ static bool increase_dsc_bpp(struct drm_atomic_state *state,
 		link_timeslots_used = 0;
 
 		for (i = 0; i < count; i++)
-			link_timeslots_used += DIV_ROUND_UP(vars[i + k].pbn, pbn_per_timeslot);
+			link_timeslots_used += DIV_ROUND_UP(vars[i + k].pbn, mst_state->pbn_div);
 
-		fair_pbn_alloc = (63 - link_timeslots_used) / remaining_to_increase * pbn_per_timeslot;
+		fair_pbn_alloc =
+			(63 - link_timeslots_used) / remaining_to_increase * mst_state->pbn_div;
 
 		if (initial_slack[next_index] > fair_pbn_alloc) {
 			vars[next_index].pbn += fair_pbn_alloc;
 			if (drm_dp_atomic_find_time_slots(state,
 							  params[next_index].port->mgr,
 							  params[next_index].port,
-							  vars[next_index].pbn,
-							  pbn_per_timeslot) < 0)
+							  vars[next_index].pbn) < 0)
 				return false;
 			if (!drm_dp_mst_atomic_check(state)) {
 				vars[next_index].bpp_x16 = bpp_x16_from_pbn(params[next_index], vars[next_index].pbn);
@@ -777,8 +768,7 @@ static bool increase_dsc_bpp(struct drm_atomic_state *state,
 				if (drm_dp_atomic_find_time_slots(state,
 								  params[next_index].port->mgr,
 								  params[next_index].port,
-								  vars[next_index].pbn,
-								  pbn_per_timeslot) < 0)
+								  vars[next_index].pbn) < 0)
 					return false;
 			}
 		} else {
@@ -786,8 +776,7 @@ static bool increase_dsc_bpp(struct drm_atomic_state *state,
 			if (drm_dp_atomic_find_time_slots(state,
 							  params[next_index].port->mgr,
 							  params[next_index].port,
-							  vars[next_index].pbn,
-							  pbn_per_timeslot) < 0)
+							  vars[next_index].pbn) < 0)
 				return false;
 			if (!drm_dp_mst_atomic_check(state)) {
 				vars[next_index].bpp_x16 = params[next_index].bw_range.max_target_bpp_x16;
@@ -796,8 +785,7 @@ static bool increase_dsc_bpp(struct drm_atomic_state *state,
 				if (drm_dp_atomic_find_time_slots(state,
 								  params[next_index].port->mgr,
 								  params[next_index].port,
-								  vars[next_index].pbn,
-								  pbn_per_timeslot) < 0)
+								  vars[next_index].pbn) < 0)
 					return false;
 			}
 		}
@@ -854,8 +842,7 @@ static bool try_disable_dsc(struct drm_atomic_state *state,
 		if (drm_dp_atomic_find_time_slots(state,
 						  params[next_index].port->mgr,
 						  params[next_index].port,
-						  vars[next_index].pbn,
-						  dm_mst_get_pbn_divider(dc_link)) < 0)
+						  vars[next_index].pbn) < 0)
 			return false;
 
 		if (!drm_dp_mst_atomic_check(state)) {
@@ -866,8 +853,7 @@ static bool try_disable_dsc(struct drm_atomic_state *state,
 			if (drm_dp_atomic_find_time_slots(state,
 							  params[next_index].port->mgr,
 							  params[next_index].port,
-							  vars[next_index].pbn,
-							  dm_mst_get_pbn_divider(dc_link)) < 0)
+							  vars[next_index].pbn) < 0)
 				return false;
 		}
 
@@ -881,17 +867,27 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
 					     struct dc_state *dc_state,
 					     struct dc_link *dc_link,
 					     struct dsc_mst_fairness_vars *vars,
+					     struct drm_dp_mst_topology_mgr *mgr,
 					     int *link_vars_start_index)
 {
-	int i, k;
 	struct dc_stream_state *stream;
 	struct dsc_mst_fairness_params params[MAX_PIPES];
 	struct amdgpu_dm_connector *aconnector;
+	struct drm_dp_mst_topology_state *mst_state = drm_atomic_get_mst_topology_state(state, mgr);
 	int count = 0;
+	int i, k;
 	bool debugfs_overwrite = false;
 
 	memset(params, 0, sizeof(params));
 
+	if (IS_ERR(mst_state))
+		return false;
+
+	mst_state->pbn_div = dm_mst_get_pbn_divider(dc_link);
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+	drm_dp_mst_update_slots(mst_state, dc_link_dp_mst_decide_link_encoding_format(dc_link));
+#endif
+
 	/* Set up params */
 	for (i = 0; i < dc_state->stream_count; i++) {
 		struct dc_dsc_policy dsc_policy = {0};
@@ -950,11 +946,8 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
 		vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
 		vars[i + k].dsc_enabled = false;
 		vars[i + k].bpp_x16 = 0;
-		if (drm_dp_atomic_find_time_slots(state,
-						  params[i].port->mgr,
-						  params[i].port,
-						  vars[i + k].pbn,
-						  dm_mst_get_pbn_divider(dc_link)) < 0)
+		if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr, params[i].port,
+						  vars[i + k].pbn) < 0)
 			return false;
 	}
 	if (!drm_dp_mst_atomic_check(state) && !debugfs_overwrite) {
@@ -968,21 +961,15 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
 			vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.min_kbps);
 			vars[i + k].dsc_enabled = true;
 			vars[i + k].bpp_x16 = params[i].bw_range.min_target_bpp_x16;
-			if (drm_dp_atomic_find_time_slots(state,
-							  params[i].port->mgr,
-							  params[i].port,
-							  vars[i + k].pbn,
-							  dm_mst_get_pbn_divider(dc_link)) < 0)
+			if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr,
+							  params[i].port, vars[i + k].pbn) < 0)
 				return false;
 		} else {
 			vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
 			vars[i + k].dsc_enabled = false;
 			vars[i + k].bpp_x16 = 0;
-			if (drm_dp_atomic_find_time_slots(state,
-							  params[i].port->mgr,
-							  params[i].port,
-							  vars[i + k].pbn,
-							  dm_mst_get_pbn_divider(dc_link)) < 0)
+			if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr,
+							  params[i].port, vars[i + k].pbn) < 0)
 				return false;
 		}
 	}
@@ -990,7 +977,7 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
 		return false;
 
 	/* Optimize degree of compression */
-	if (!increase_dsc_bpp(state, dc_link, params, vars, count, k))
+	if (!increase_dsc_bpp(state, mst_state, dc_link, params, vars, count, k))
 		return false;
 
 	if (!try_disable_dsc(state, dc_link, params, vars, count, k))
@@ -1136,8 +1123,9 @@ bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
 			continue;
 
 		mutex_lock(&aconnector->mst_mgr.lock);
-		if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link,
-						      vars, &link_vars_start_index)) {
+		if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars,
+						      &aconnector->mst_mgr,
+						      &link_vars_start_index)) {
 			mutex_unlock(&aconnector->mst_mgr.lock);
 			return false;
 		}
@@ -1195,10 +1183,8 @@ static bool
 			continue;
 
 		mutex_lock(&aconnector->mst_mgr.lock);
-		if (!compute_mst_dsc_configs_for_link(state,
-						      dc_state,
-						      stream->link,
-						      vars,
+		if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars,
+						      &aconnector->mst_mgr,
 						      &link_vars_start_index)) {
 			mutex_unlock(&aconnector->mst_mgr.lock);
 			return false;
@@ -251,6 +251,9 @@ union dpcd_training_lane_set {
  * _ONLY_ be filled out from DM and then passed to DC, do NOT use these for _any_ kind of atomic
  * state calculations in DM, or you will break something.
  */
+
+struct drm_dp_mst_port;
+
 /* DP MST stream allocation (payload bandwidth number) */
 struct dc_dp_mst_stream_allocation {
 	uint8_t vcp_id;
(File diff suppressed because it is too large)
@@ -52,6 +52,7 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder,
 	struct drm_atomic_state *state = crtc_state->uapi.state;
 	struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder);
 	struct intel_dp *intel_dp = &intel_mst->primary->dp;
+	struct drm_dp_mst_topology_state *mst_state;
 	struct intel_connector *connector =
 		to_intel_connector(conn_state->connector);
 	struct drm_i915_private *i915 = to_i915(connector->base.dev);
@@ -60,22 +61,28 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder,
 	bool constant_n = drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_CONSTANT_N);
 	int bpp, slots = -EINVAL;
 
+	mst_state = drm_atomic_get_mst_topology_state(state, &intel_dp->mst_mgr);
+	if (IS_ERR(mst_state))
+		return PTR_ERR(mst_state);
+
 	crtc_state->lane_count = limits->max_lane_count;
 	crtc_state->port_clock = limits->max_rate;
 
+	// TODO: Handle pbn_div changes by adding a new MST helper
+	if (!mst_state->pbn_div) {
+		mst_state->pbn_div = drm_dp_get_vc_payload_bw(&intel_dp->mst_mgr,
+							      limits->max_rate,
+							      limits->max_lane_count);
+	}
+
 	for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
 		crtc_state->pipe_bpp = bpp;
 
 		crtc_state->pbn = drm_dp_calc_pbn_mode(adjusted_mode->crtc_clock,
 						       crtc_state->pipe_bpp,
 						       false);
 
 		slots = drm_dp_atomic_find_time_slots(state, &intel_dp->mst_mgr,
-						      connector->port,
-						      crtc_state->pbn,
-						      drm_dp_get_vc_payload_bw(&intel_dp->mst_mgr,
-									       crtc_state->port_clock,
-									       crtc_state->lane_count));
+						      connector->port, crtc_state->pbn);
 		if (slots == -EDEADLK)
 			return slots;
 		if (slots >= 0)
@@ -360,21 +367,17 @@ static void intel_mst_disable_dp(struct intel_atomic_state *state,
 	struct intel_dp *intel_dp = &dig_port->dp;
 	struct intel_connector *connector =
 		to_intel_connector(old_conn_state->connector);
+	struct drm_dp_mst_topology_state *mst_state =
+		drm_atomic_get_mst_topology_state(&state->base, &intel_dp->mst_mgr);
 	struct drm_i915_private *i915 = to_i915(connector->base.dev);
-	int start_slot = intel_dp_is_uhbr(old_crtc_state) ? 0 : 1;
-	int ret;
 
 	drm_dbg_kms(&i915->drm, "active links %d\n",
 		    intel_dp->active_mst_links);
 
 	intel_hdcp_disable(intel_mst->connector);
 
-	drm_dp_mst_reset_vcpi_slots(&intel_dp->mst_mgr, connector->port);
-
-	ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr, start_slot);
-	if (ret) {
-		drm_dbg_kms(&i915->drm, "failed to update payload %d\n", ret);
-	}
+	drm_dp_remove_payload(&intel_dp->mst_mgr, mst_state,
+			      drm_atomic_get_mst_payload_state(mst_state, connector->port));
 
 	intel_audio_codec_disable(encoder, old_crtc_state, old_conn_state);
 }
@@ -402,8 +405,6 @@ static void intel_mst_post_disable_dp(struct intel_atomic_state *state,
 
 	intel_disable_transcoder(old_crtc_state);
 
-	drm_dp_update_payload_part2(&intel_dp->mst_mgr);
-
 	clear_act_sent(encoder, old_crtc_state);
 
 	intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL(old_crtc_state->cpu_transcoder),
@@ -411,8 +412,6 @@ static void intel_mst_post_disable_dp(struct intel_atomic_state *state,
 
 	wait_for_act_sent(encoder, old_crtc_state);
 
-	drm_dp_mst_deallocate_vcpi(&intel_dp->mst_mgr, connector->port);
-
 	intel_ddi_disable_transcoder_func(old_crtc_state);
 
 	if (DISPLAY_VER(dev_priv) >= 9)
@@ -479,7 +478,8 @@ static void intel_mst_pre_enable_dp(struct intel_atomic_state *state,
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	struct intel_connector *connector =
 		to_intel_connector(conn_state->connector);
-	int start_slot = intel_dp_is_uhbr(pipe_config) ? 0 : 1;
+	struct drm_dp_mst_topology_state *mst_state =
+		drm_atomic_get_new_mst_topology_state(&state->base, &intel_dp->mst_mgr);
 	int ret;
 	bool first_mst_stream;
 
@@ -505,16 +505,13 @@ static void intel_mst_pre_enable_dp(struct intel_atomic_state *state,
 	dig_port->base.pre_enable(state, &dig_port->base,
 					pipe_config, NULL);
 
-	ret = drm_dp_mst_allocate_vcpi(&intel_dp->mst_mgr,
-				       connector->port,
-				       pipe_config->pbn,
-				       pipe_config->dp_m_n.tu);
-	if (!ret)
-		drm_err(&dev_priv->drm, "failed to allocate vcpi\n");
-
 	intel_dp->active_mst_links++;
 
-	ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr, start_slot);
+	ret = drm_dp_add_payload_part1(&intel_dp->mst_mgr, mst_state,
+				       drm_atomic_get_mst_payload_state(mst_state, connector->port));
+	if (ret < 0)
+		drm_err(&dev_priv->drm, "Failed to create MST payload for %s: %d\n",
+			connector->base.name, ret);
 
 	/*
 	 * Before Gen 12 this is not done as part of
@@ -537,7 +534,10 @@ static void intel_mst_enable_dp(struct intel_atomic_state *state,
 	struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder);
 	struct intel_digital_port *dig_port = intel_mst->primary;
 	struct intel_dp *intel_dp = &dig_port->dp;
+	struct intel_connector *connector = to_intel_connector(conn_state->connector);
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+	struct drm_dp_mst_topology_state *mst_state =
+		drm_atomic_get_new_mst_topology_state(&state->base, &intel_dp->mst_mgr);
 	enum transcoder trans = pipe_config->cpu_transcoder;
 
 	drm_WARN_ON(&dev_priv->drm, pipe_config->has_pch_encoder);
@@ -565,7 +565,8 @@ static void intel_mst_enable_dp(struct intel_atomic_state *state,
 
 	wait_for_act_sent(encoder, pipe_config);
 
-	drm_dp_update_payload_part2(&intel_dp->mst_mgr);
+	drm_dp_add_payload_part2(&intel_dp->mst_mgr, &state->base,
+				 drm_atomic_get_mst_payload_state(mst_state, connector->port));
 
 	if (DISPLAY_VER(dev_priv) >= 12 && pipe_config->fec_enable)
 		intel_de_rmw(dev_priv, CHICKEN_TRANS(trans), 0,
@@ -949,8 +950,6 @@ intel_dp_mst_encoder_init(struct intel_digital_port *dig_port, int conn_base_id)
 	struct intel_dp *intel_dp = &dig_port->dp;
 	enum port port = dig_port->base.port;
 	int ret;
-	int max_source_rate =
-		intel_dp->source_rates[intel_dp->num_source_rates - 1];
 
 	if (!HAS_DP_MST(i915) || intel_dp_is_edp(intel_dp))
 		return 0;
@@ -966,10 +965,7 @@ intel_dp_mst_encoder_init(struct intel_digital_port *dig_port, int conn_base_id)
 	/* create encoders */
 	intel_dp_create_fake_mst_encoders(dig_port);
 	ret = drm_dp_mst_topology_mgr_init(&intel_dp->mst_mgr, &i915->drm,
-					   &intel_dp->aux, 16, 3,
-					   dig_port->max_lanes,
-					   max_source_rate,
-					   conn_base_id);
+					   &intel_dp->aux, 16, 3, conn_base_id);
 	if (ret) {
 		intel_dp->mst_mgr.cbs = NULL;
 		return ret;
@@ -30,8 +30,30 @@
 
 static int intel_conn_to_vcpi(struct intel_connector *connector)
 {
+	struct drm_dp_mst_topology_mgr *mgr;
+	struct drm_dp_mst_atomic_payload *payload;
+	struct drm_dp_mst_topology_state *mst_state;
+	int vcpi = 0;
+
 	/* For HDMI this is forced to be 0x0. For DP SST also this is 0x0. */
-	return connector->port ? connector->port->vcpi.vcpi : 0;
+	if (!connector->port)
+		return 0;
+	mgr = connector->port->mgr;
+
+	drm_modeset_lock(&mgr->base.lock, NULL);
+	mst_state = to_drm_dp_mst_topology_state(mgr->base.state);
+	payload = drm_atomic_get_mst_payload_state(mst_state, connector->port);
+	if (drm_WARN_ON(mgr->dev, !payload))
+		goto out;
+
+	vcpi = payload->vcpi;
+	if (drm_WARN_ON(mgr->dev, vcpi < 0)) {
+		vcpi = 0;
+		goto out;
+	}
+out:
+	drm_modeset_unlock(&mgr->base.lock);
+	return vcpi;
 }
 
 /*
@@ -932,6 +932,7 @@ struct nv50_msto {
 	struct nv50_head *head;
 	struct nv50_mstc *mstc;
 	bool disabled;
+	bool enabled;
 };
 
 struct nouveau_encoder *nv50_real_outp(struct drm_encoder *encoder)
@@ -947,57 +948,37 @@ struct nouveau_encoder *nv50_real_outp(struct drm_encoder *encoder)
 	return msto->mstc->mstm->outp;
 }
 
-static struct drm_dp_payload *
-nv50_msto_payload(struct nv50_msto *msto)
-{
-	struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev);
-	struct nv50_mstc *mstc = msto->mstc;
-	struct nv50_mstm *mstm = mstc->mstm;
-	int vcpi = mstc->port->vcpi.vcpi, i;
-
-	WARN_ON(!mutex_is_locked(&mstm->mgr.payload_lock));
-
-	NV_ATOMIC(drm, "%s: vcpi %d\n", msto->encoder.name, vcpi);
-	for (i = 0; i < mstm->mgr.max_payloads; i++) {
-		struct drm_dp_payload *payload = &mstm->mgr.payloads[i];
-		NV_ATOMIC(drm, "%s: %d: vcpi %d start 0x%02x slots 0x%02x\n",
-			  mstm->outp->base.base.name, i, payload->vcpi,
-			  payload->start_slot, payload->num_slots);
-	}
-
-	for (i = 0; i < mstm->mgr.max_payloads; i++) {
-		struct drm_dp_payload *payload = &mstm->mgr.payloads[i];
-		if (payload->vcpi == vcpi)
-			return payload;
-	}
-
-	return NULL;
-}
-
 static void
-nv50_msto_cleanup(struct nv50_msto *msto)
+nv50_msto_cleanup(struct drm_atomic_state *state,
+		  struct drm_dp_mst_topology_state *mst_state,
+		  struct drm_dp_mst_topology_mgr *mgr,
+		  struct nv50_msto *msto)
 {
 	struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev);
-	struct nv50_mstc *mstc = msto->mstc;
-	struct nv50_mstm *mstm = mstc->mstm;
-
-	if (!msto->disabled)
-		return;
+	struct drm_dp_mst_atomic_payload *payload =
+		drm_atomic_get_mst_payload_state(mst_state, msto->mstc->port);
 
 	NV_ATOMIC(drm, "%s: msto cleanup\n", msto->encoder.name);
 
-	drm_dp_mst_deallocate_vcpi(&mstm->mgr, mstc->port);
-
-	msto->mstc = NULL;
-	msto->disabled = false;
+	if (msto->disabled) {
+		msto->mstc = NULL;
+		msto->disabled = false;
+	} else if (msto->enabled) {
+		drm_dp_add_payload_part2(mgr, state, payload);
+		msto->enabled = false;
+	}
 }
 
 static void
-nv50_msto_prepare(struct nv50_msto *msto)
+nv50_msto_prepare(struct drm_atomic_state *state,
+		  struct drm_dp_mst_topology_state *mst_state,
		  struct drm_dp_mst_topology_mgr *mgr,
+		  struct nv50_msto *msto)
 {
 	struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev);
 	struct nv50_mstc *mstc = msto->mstc;
 	struct nv50_mstm *mstm = mstc->mstm;
+	struct drm_dp_mst_atomic_payload *payload;
 	struct {
 		struct nv50_disp_mthd_v1 base;
 		struct nv50_disp_sor_dp_mst_vcpi_v0 vcpi;
@@ -1009,17 +990,21 @@ nv50_msto_prepare(struct nv50_msto *msto)
 			     (0x0100 << msto->head->base.index),
 	};
 
-	mutex_lock(&mstm->mgr.payload_lock);
-
 	NV_ATOMIC(drm, "%s: msto prepare\n", msto->encoder.name);
-	if (mstc->port->vcpi.vcpi > 0) {
-		struct drm_dp_payload *payload = nv50_msto_payload(msto);
-		if (payload) {
-			args.vcpi.start_slot = payload->start_slot;
-			args.vcpi.num_slots = payload->num_slots;
-			args.vcpi.pbn = mstc->port->vcpi.pbn;
-			args.vcpi.aligned_pbn = mstc->port->vcpi.aligned_pbn;
-		}
+
+	payload = drm_atomic_get_mst_payload_state(mst_state, mstc->port);
+
+	// TODO: Figure out if we want to do a better job of handling VCPI allocation failures here?
+	if (msto->disabled) {
+		drm_dp_remove_payload(mgr, mst_state, payload);
+	} else {
+		if (msto->enabled)
+			drm_dp_add_payload_part1(mgr, mst_state, payload);
+
+		args.vcpi.start_slot = payload->vc_start_slot;
+		args.vcpi.num_slots = payload->time_slots;
+		args.vcpi.pbn = payload->pbn;
+		args.vcpi.aligned_pbn = payload->time_slots * mst_state->pbn_div;
 	}
 
 	NV_ATOMIC(drm, "%s: %s: %02x %02x %04x %04x\n",
@@ -1028,7 +1013,6 @@ nv50_msto_prepare(struct nv50_msto *msto)
 		  args.vcpi.pbn, args.vcpi.aligned_pbn);
 
 	nvif_mthd(&drm->display->disp.object, 0, &args, sizeof(args));
-	mutex_unlock(&mstm->mgr.payload_lock);
 }
 
 static int
@@ -1038,6 +1022,7 @@ nv50_msto_atomic_check(struct drm_encoder *encoder,
 {
 	struct drm_atomic_state *state = crtc_state->state;
 	struct drm_connector *connector = conn_state->connector;
+	struct drm_dp_mst_topology_state *mst_state;
 	struct nv50_mstc *mstc = nv50_mstc(connector);
 	struct nv50_mstm *mstm = mstc->mstm;
 	struct nv50_head_atom *asyh = nv50_head_atom(crtc_state);
@@ -1065,8 +1050,18 @@ nv50_msto_atomic_check(struct drm_encoder *encoder,
 						    false);
 	}
 
-	slots = drm_dp_atomic_find_time_slots(state, &mstm->mgr, mstc->port,
-					      asyh->dp.pbn, 0);
+	mst_state = drm_atomic_get_mst_topology_state(state, &mstm->mgr);
+	if (IS_ERR(mst_state))
+		return PTR_ERR(mst_state);
+
+	if (!mst_state->pbn_div) {
+		struct nouveau_encoder *outp = mstc->mstm->outp;
+
+		mst_state->pbn_div = drm_dp_get_vc_payload_bw(&mstm->mgr,
+							      outp->dp.link_bw, outp->dp.link_nr);
+	}
+
+	slots = drm_dp_atomic_find_time_slots(state, &mstm->mgr, mstc->port, asyh->dp.pbn);
 	if (slots < 0)
 		return slots;
 
@@ -1098,7 +1093,6 @@ nv50_msto_atomic_enable(struct drm_encoder *encoder, struct drm_atomic_state *st
 	struct drm_connector *connector;
 	struct drm_connector_list_iter conn_iter;
 	u8 proto;
-	bool r;
 
 	drm_connector_list_iter_begin(encoder->dev, &conn_iter);
 	drm_for_each_connector_iter(connector, &conn_iter) {
@@ -1113,10 +1107,6 @@ nv50_msto_atomic_enable(struct drm_encoder *encoder, struct drm_atomic_state *st
 	if (WARN_ON(!mstc))
 		return;
 
-	r = drm_dp_mst_allocate_vcpi(&mstm->mgr, mstc->port, asyh->dp.pbn, asyh->dp.tu);
-	if (!r)
-		DRM_DEBUG_KMS("Failed to allocate VCPI\n");
-
 	if (!mstm->links++)
 		nv50_outp_acquire(mstm->outp, false /*XXX: MST audio.*/);
 
@@ -1129,6 +1119,7 @@ nv50_msto_atomic_enable(struct drm_encoder *encoder, struct drm_atomic_state *st
 			   nv50_dp_bpc_to_depth(asyh->or.bpc));
 
 	msto->mstc = mstc;
+	msto->enabled = true;
 	mstm->modified = true;
 }
 
@@ -1139,8 +1130,6 @@ nv50_msto_atomic_disable(struct drm_encoder *encoder, struct drm_atomic_state *s
 	struct nv50_mstc *mstc = msto->mstc;
 	struct nv50_mstm *mstm = mstc->mstm;
 
-	drm_dp_mst_reset_vcpi_slots(&mstm->mgr, mstc->port);
-
 	mstm->outp->update(mstm->outp, msto->head->base.index, NULL, 0, 0);
 	mstm->modified = true;
 	if (!--mstm->links)
@@ -1360,7 +1349,9 @@ nv50_mstc_new(struct nv50_mstm *mstm, struct drm_dp_mst_port *port,
 }
 
 static void
-nv50_mstm_cleanup(struct nv50_mstm *mstm)
+nv50_mstm_cleanup(struct drm_atomic_state *state,
+		  struct drm_dp_mst_topology_state *mst_state,
+		  struct nv50_mstm *mstm)
 {
 	struct nouveau_drm *drm = nouveau_drm(mstm->outp->base.base.dev);
 	struct drm_encoder *encoder;
@@ -1368,14 +1359,12 @@ nv50_mstm_cleanup(struct nv50_mstm *mstm)
 	NV_ATOMIC(drm, "%s: mstm cleanup\n", mstm->outp->base.base.name);
 	drm_dp_check_act_status(&mstm->mgr);
 
-	drm_dp_update_payload_part2(&mstm->mgr);
-
 	drm_for_each_encoder(encoder, mstm->outp->base.base.dev) {
 		if (encoder->encoder_type == DRM_MODE_ENCODER_DPMST) {
 			struct nv50_msto *msto = nv50_msto(encoder);
 			struct nv50_mstc *mstc = msto->mstc;
 			if (mstc && mstc->mstm == mstm)
-				nv50_msto_cleanup(msto);
+				nv50_msto_cleanup(state, mst_state, &mstm->mgr, msto);
 		}
 	}
 
@@ -1383,20 +1372,34 @@ nv50_mstm_cleanup(struct nv50_mstm *mstm)
 }
 
 static void
-nv50_mstm_prepare(struct nv50_mstm *mstm)
+nv50_mstm_prepare(struct drm_atomic_state *state,
+		  struct drm_dp_mst_topology_state *mst_state,
+		  struct nv50_mstm *mstm)
 {
 	struct nouveau_drm *drm = nouveau_drm(mstm->outp->base.base.dev);
 	struct drm_encoder *encoder;
 
 	NV_ATOMIC(drm, "%s: mstm prepare\n", mstm->outp->base.base.name);
-	drm_dp_update_payload_part1(&mstm->mgr, 1);
 
+	/* Disable payloads first */
 	drm_for_each_encoder(encoder, mstm->outp->base.base.dev) {
 		if (encoder->encoder_type == DRM_MODE_ENCODER_DPMST) {
 			struct nv50_msto *msto = nv50_msto(encoder);
 			struct nv50_mstc *mstc = msto->mstc;
-			if (mstc && mstc->mstm == mstm)
-				nv50_msto_prepare(msto);
+			if (mstc && mstc->mstm == mstm && msto->disabled)
+				nv50_msto_prepare(state, mst_state, &mstm->mgr, msto);
+		}
+	}
+
+	/* Add payloads for new heads, while also updating the start slots of any unmodified (but
+	 * active) heads that may have had their VC slots shifted left after the previous step
+	 */
+	drm_for_each_encoder(encoder, mstm->outp->base.base.dev) {
+		if (encoder->encoder_type == DRM_MODE_ENCODER_DPMST) {
+			struct nv50_msto *msto = nv50_msto(encoder);
+			struct nv50_mstc *mstc = msto->mstc;
+			if (mstc && mstc->mstm == mstm && !msto->disabled)
+				nv50_msto_prepare(state, mst_state, &mstm->mgr, msto);
 		}
 	}
 
@@ -1593,9 +1596,7 @@ nv50_mstm_new(struct nouveau_encoder *outp, struct drm_dp_aux *aux, int aux_max,
 	mstm->mgr.cbs = &nv50_mstm;
 
 	ret = drm_dp_mst_topology_mgr_init(&mstm->mgr, dev, aux, aux_max,
-					   max_payloads, outp->dcb->dpconf.link_nr,
-					   drm_dp_bw_code_to_link_rate(outp->dcb->dpconf.link_bw),
-					   conn_base_id);
+					   max_payloads, conn_base_id);
 	if (ret)
 		return ret;
 
@@ -2047,20 +2048,20 @@ nv50_pior_create(struct drm_connector *connector, struct dcb_output *dcbe)
 static void
 nv50_disp_atomic_commit_core(struct drm_atomic_state *state, u32 *interlock)
 {
+	struct drm_dp_mst_topology_mgr *mgr;
+	struct drm_dp_mst_topology_state *mst_state;
 	struct nouveau_drm *drm = nouveau_drm(state->dev);
 	struct nv50_disp *disp = nv50_disp(drm->dev);
 	struct nv50_core *core = disp->core;
 	struct nv50_mstm *mstm;
-	struct drm_encoder *encoder;
+	int i;
 
 	NV_ATOMIC(drm, "commit core %08x\n", interlock[NV50_DISP_INTERLOCK_BASE]);
 
-	drm_for_each_encoder(encoder, drm->dev) {
-		if (encoder->encoder_type != DRM_MODE_ENCODER_DPMST) {
-			mstm = nouveau_encoder(encoder)->dp.mstm;
-			if (mstm && mstm->modified)
-				nv50_mstm_prepare(mstm);
-		}
+	for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) {
+		mstm = nv50_mstm(mgr);
+		if (mstm->modified)
+			nv50_mstm_prepare(state, mst_state, mstm);
 	}
 
 	core->func->ntfy_init(disp->sync, NV50_DISP_CORE_NTFY);
@@ -2069,12 +2070,10 @@ nv50_disp_atomic_commit_core(struct drm_atomic_state *state, u32 *interlock)
 			   disp->core->chan.base.device))
 		NV_ERROR(drm, "core notifier timeout\n");
 
-	drm_for_each_encoder(encoder, drm->dev) {
-		if (encoder->encoder_type != DRM_MODE_ENCODER_DPMST) {
-			mstm = nouveau_encoder(encoder)->dp.mstm;
-			if (mstm && mstm->modified)
-				nv50_mstm_cleanup(mstm);
-		}
+	for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) {
+		mstm = nv50_mstm(mgr);
+		if (mstm->modified)
+			nv50_mstm_cleanup(state, mst_state, mstm);
 	}
 }
@@ -48,20 +48,6 @@ struct drm_dp_mst_topology_ref_history {
 
 struct drm_dp_mst_branch;
 
-/**
- * struct drm_dp_vcpi - Virtual Channel Payload Identifier
- * @vcpi: Virtual channel ID.
- * @pbn: Payload Bandwidth Number for this channel
- * @aligned_pbn: PBN aligned with slot size
- * @num_slots: number of slots for this PBN
- */
-struct drm_dp_vcpi {
-	int vcpi;
-	int pbn;
-	int aligned_pbn;
-	int num_slots;
-};
-
 /**
  * struct drm_dp_mst_port - MST port
  * @port_num: port number
@@ -142,7 +128,6 @@ struct drm_dp_mst_port {
 	struct drm_dp_aux aux; /* i2c bus for this port? */
 	struct drm_dp_mst_branch *parent;
 
-	struct drm_dp_vcpi vcpi;
 	struct drm_connector *connector;
 	struct drm_dp_mst_topology_mgr *mgr;
 
@@ -527,19 +512,6 @@ struct drm_dp_mst_topology_cbs {
 	void (*poll_hpd_irq)(struct drm_dp_mst_topology_mgr *mgr);
 };
 
-#define DP_MAX_PAYLOAD (sizeof(unsigned long) * 8)
-
-#define DP_PAYLOAD_LOCAL 1
-#define DP_PAYLOAD_REMOTE 2
-#define DP_PAYLOAD_DELETE_LOCAL 3
-
-struct drm_dp_payload {
-	int payload_state;
-	int start_slot;
-	int num_slots;
-	int vcpi;
-};
-
 #define to_dp_mst_topology_state(x) container_of(x, struct drm_dp_mst_topology_state, base)
 
 /**
@@ -552,6 +524,34 @@ struct drm_dp_mst_atomic_payload {
 	/** @port: The MST port assigned to this payload */
 	struct drm_dp_mst_port *port;
 
+	/**
+	 * @vc_start_slot: The time slot that this payload starts on. Because payload start slots
+	 * can't be determined ahead of time, the contents of this value are UNDEFINED at atomic
+	 * check time. This shouldn't usually matter, as the start slot should never be relevant for
+	 * atomic state computations.
+	 *
+	 * Since this value is determined at commit time instead of check time, this value is
+	 * protected by the MST helpers ensuring that async commits operating on the given topology
+	 * never run in parallel. In the event that a driver does need to read this value (e.g. to
+	 * inform hardware of the starting timeslot for a payload), the driver may either:
+	 *
+	 * * Read this field during the atomic commit after
+	 *   drm_dp_mst_atomic_wait_for_dependencies() has been called, which will ensure the
+	 *   previous MST states payload start slots have been copied over to the new state. Note
+	 *   that a new start slot won't be assigned/removed from this payload until
+	 *   drm_dp_add_payload_part1()/drm_dp_remove_payload() have been called.
+	 * * Acquire the MST modesetting lock, and then wait for any pending MST-related commits to
+	 *   get committed to hardware by calling drm_crtc_commit_wait() on each of the
+	 *   &drm_crtc_commit structs in &drm_dp_mst_topology_state.commit_deps.
+	 *
+	 * If neither of the two above solutions suffice (e.g. the driver needs to read the start
+	 * slot in the middle of an atomic commit without waiting for some reason), then drivers
+	 * should cache this value themselves after changing payloads.
+	 */
+	s8 vc_start_slot;
+
 	/** @vcpi: The Virtual Channel Payload Identifier */
 	u8 vcpi;
 	/**
 	 * @time_slots:
 	 * The number of timeslots allocated to this payload from the source DP Tx to
@@ -579,8 +579,6 @@ struct drm_dp_mst_topology_state {
 	/** @base: Base private state for atomic */
 	struct drm_private_state base;
 
-	/** @payloads: The list of payloads being created/destroyed in this state */
-	struct list_head payloads;
 	/** @mgr: The topology manager */
 	struct drm_dp_mst_topology_mgr *mgr;
 
@@ -597,10 +595,21 @@ struct drm_dp_mst_topology_state {
 	/** @num_commit_deps: The number of CRTC commits in @commit_deps */
 	size_t num_commit_deps;
 
+	/** @payload_mask: A bitmask of allocated VCPIs, used for VCPI assignments */
+	u32 payload_mask;
+	/** @payloads: The list of payloads being created/destroyed in this state */
+	struct list_head payloads;
+
 	/** @total_avail_slots: The total number of slots this topology can handle (63 or 64) */
 	u8 total_avail_slots;
 	/** @start_slot: The first usable time slot in this topology (1 or 0) */
 	u8 start_slot;
 
+	/**
+	 * @pbn_div: The current PBN divisor for this topology. The driver is expected to fill this
+	 * out itself.
+	 */
+	int pbn_div;
 };
 
 #define to_dp_mst_topology_mgr(x) container_of(x, struct drm_dp_mst_topology_mgr, base)
@@ -640,14 +649,6 @@ struct drm_dp_mst_topology_mgr {
 	 * @max_payloads: maximum number of payloads the GPU can generate.
 	 */
 	int max_payloads;
-	/**
-	 * @max_lane_count: maximum number of lanes the GPU can drive.
-	 */
-	int max_lane_count;
-	/**
-	 * @max_link_rate: maximum link rate per lane GPU can output, in kHz.
-	 */
-	int max_link_rate;
 	/**
 	 * @conn_base_id: DRM connector ID this mgr is connected to. Only used
 	 * to build the MST connector path value.
@@ -690,6 +691,20 @@ struct drm_dp_mst_topology_mgr {
 	 */
 	bool payload_id_table_cleared : 1;
 
+	/**
+	 * @payload_count: The number of currently active payloads in hardware. This value is only
+	 * intended to be used internally by MST helpers for payload tracking, and is only safe to
+	 * read/write from the atomic commit (not check) context.
+	 */
+	u8 payload_count;
+
+	/**
+	 * @next_start_slot: The starting timeslot to use for new VC payloads. This value is used
+	 * internally by MST helpers for payload tracking, and is only safe to read/write from the
	 * atomic commit (not check) context.
+	 */
+	u8 next_start_slot;
+
 	/**
 	 * @mst_primary: Pointer to the primary/first branch device.
 	 */
@@ -703,10 +718,6 @@ struct drm_dp_mst_topology_mgr {
 	 * @sink_count: Sink count from DEVICE_SERVICE_IRQ_VECTOR_ESI0.
 	 */
 	u8 sink_count;
-	/**
-	 * @pbn_div: PBN to slots divisor.
-	 */
-	int pbn_div;
 
 	/**
 	 * @funcs: Atomic helper callbacks
@@ -723,32 +734,6 @@ struct drm_dp_mst_topology_mgr {
 	 */
 	struct list_head tx_msg_downq;
 
-	/**
-	 * @payload_lock: Protect payload information.
-	 */
-	struct mutex payload_lock;
-	/**
-	 * @proposed_vcpis: Array of pointers for the new VCPI allocation. The
-	 * VCPI structure itself is &drm_dp_mst_port.vcpi, and the size of
-	 * this array is determined by @max_payloads.
-	 */
-	struct drm_dp_vcpi **proposed_vcpis;
-	/**
-	 * @payloads: Array of payloads. The size of this array is determined
-	 * by @max_payloads.
-	 */
-	struct drm_dp_payload *payloads;
-	/**
-	 * @payload_mask: Elements of @payloads actually in use. Since
-	 * reallocation of active outputs isn't possible gaps can be created by
-	 * disabling outputs out of order compared to how they've been enabled.
-	 */
-	unsigned long payload_mask;
-	/**
-	 * @vcpi_mask: Similar to @payload_mask, but for @proposed_vcpis.
-	 */
-	unsigned long vcpi_mask;
-
 	/**
 	 * @tx_waitq: Wait to queue stall for the tx worker.
 	 */
@@ -820,9 +805,7 @@ struct drm_dp_mst_topology_mgr {
 int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
 				 struct drm_device *dev, struct drm_dp_aux *aux,
 				 int max_dpcd_transaction_bytes,
-				 int max_payloads,
-				 int max_lane_count, int max_link_rate,
-				 int conn_base_id);
+				 int max_payloads, int conn_base_id);
 
 void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr);
 
@@ -845,28 +828,17 @@ int drm_dp_get_vc_payload_bw(const struct drm_dp_mst_topology_mgr *mgr,
 
 int drm_dp_calc_pbn_mode(int clock, int bpp, bool dsc);
 
-bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
-			      struct drm_dp_mst_port *port, int pbn, int slots);
-
-int drm_dp_mst_get_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);
-
-
-void drm_dp_mst_reset_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);
-
 void drm_dp_mst_update_slots(struct drm_dp_mst_topology_state *mst_state, uint8_t link_encoding_cap);
 
-void drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
-				struct drm_dp_mst_port *port);
-
-
-int drm_dp_find_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr,
-			   int pbn);
-
-
-int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr, int start_slot);
-
-
-int drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr);
+int drm_dp_add_payload_part1(struct drm_dp_mst_topology_mgr *mgr,
+			     struct drm_dp_mst_topology_state *mst_state,
+			     struct drm_dp_mst_atomic_payload *payload);
+int drm_dp_add_payload_part2(struct drm_dp_mst_topology_mgr *mgr,
+			     struct drm_atomic_state *state,
+			     struct drm_dp_mst_atomic_payload *payload);
+void drm_dp_remove_payload(struct drm_dp_mst_topology_mgr *mgr,
+			   struct drm_dp_mst_topology_state *mst_state,
+			   struct drm_dp_mst_atomic_payload *payload);
 
 int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr);
 
@@ -888,17 +860,22 @@ int drm_dp_mst_connector_late_register(struct drm_connector *connector,
 void drm_dp_mst_connector_early_unregister(struct drm_connector *connector,
 					   struct drm_dp_mst_port *port);
 
-struct drm_dp_mst_topology_state *drm_atomic_get_mst_topology_state(struct drm_atomic_state *state,
-								    struct drm_dp_mst_topology_mgr *mgr);
+struct drm_dp_mst_topology_state *
+drm_atomic_get_mst_topology_state(struct drm_atomic_state *state,
+				  struct drm_dp_mst_topology_mgr *mgr);
+struct drm_dp_mst_topology_state *
+drm_atomic_get_new_mst_topology_state(struct drm_atomic_state *state,
+				      struct drm_dp_mst_topology_mgr *mgr);
+struct drm_dp_mst_atomic_payload *
+drm_atomic_get_mst_payload_state(struct drm_dp_mst_topology_state *state,
+				 struct drm_dp_mst_port *port);
 int __must_check
 drm_dp_atomic_find_time_slots(struct drm_atomic_state *state,
			      struct drm_dp_mst_topology_mgr *mgr,
-			      struct drm_dp_mst_port *port, int pbn,
-			      int pbn_div);
+			      struct drm_dp_mst_port *port, int pbn);
 int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state,
 				 struct drm_dp_mst_port *port,
-				 int pbn, int pbn_div,
-				 bool enable);
+				 int pbn, bool enable);
 int __must_check
 drm_dp_mst_add_affected_dsc_crtcs(struct drm_atomic_state *state,
 				  struct drm_dp_mst_topology_mgr *mgr);
@@ -922,6 +899,12 @@ void drm_dp_mst_put_port_malloc(struct drm_dp_mst_port *port);
 
 struct drm_dp_aux *drm_dp_mst_dsc_aux_for_port(struct drm_dp_mst_port *port);
 
+static inline struct drm_dp_mst_topology_state *
+to_drm_dp_mst_topology_state(struct drm_private_state *state)
+{
+	return container_of(state, struct drm_dp_mst_topology_state, base);
+}
+
 extern const struct drm_private_state_funcs drm_dp_mst_topology_state_funcs;
 
 /**
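As a closing illustration of the @vc_start_slot rules documented above, here is a hedged sketch of the first option from that comment. The example_* function and its parameters are hypothetical; the helper calls are the ones introduced by this patch:

	#include <drm/drm_atomic.h>
	#include <drm/display/drm_dp_mst_helper.h>

	/* Commit-tail step for a driver that must hand the start slot to hardware. */
	static void example_program_payload(struct drm_atomic_state *state,
					    struct drm_dp_mst_topology_mgr *mgr,
					    struct drm_dp_mst_port *port)
	{
		struct drm_dp_mst_topology_state *mst_state =
			drm_atomic_get_new_mst_topology_state(state, mgr);
		struct drm_dp_mst_atomic_payload *payload;

		/* After this, the previous state's start slots have been copied
		 * over, so vc_start_slot is safe to read once part 1 assigns it. */
		drm_dp_mst_atomic_wait_for_dependencies(state);

		payload = drm_atomic_get_mst_payload_state(mst_state, port);
		drm_dp_add_payload_part1(mgr, mst_state, payload);

		/* payload->vc_start_slot and payload->time_slots can now be
		 * programmed into hardware, cf. nv50_msto_prepare() above. */
	}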