Merge tag 'mmc-v4.16' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc

Pull MMC updates from Ulf Hansson:
 "There are two major achievements for MMC in this release, which
  deserve to be specially highlighted.

  First, we have converted the MMC block device from using the legacy
  blk interface to using the modern blkmq interface. Not only do we get
  all the benefits of blkmq, it also means that fresh new code replaces
  old, rusty code. Great news for everybody who cares about MMC/SD!

  It should also be noted that converting to blkmq has not been
  trivial, mostly because we have been carrying too many MMC-specific
  optimizations in the I/O request path, rather than striving to move
  them into the generic blk layer. Hopefully we won't make that mistake
  ever again.

  Special thanks to Adrian Hunter (Intel) and Linus Walleij (Linaro),
  who have both been working on this for quite some time!
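
  As a condensed sketch of what the conversion amounts to, here is the
  core of the blk-mq setup added to drivers/mmc/core/queue.c (taken
  from the mmc_mq_ops/mmc_mq_init_queue() code in the diff further
  down, abbreviated):

  static const struct blk_mq_ops mmc_mq_ops = {
          .queue_rq     = mmc_mq_queue_rq,
          .init_request = mmc_mq_init_request,
          .exit_request = mmc_mq_exit_request,
          .complete     = mmc_blk_mq_complete,
          .timeout      = mmc_mq_timed_out,
  };

  static int mmc_mq_init_queue(struct mmc_queue *mq, int q_depth,
                               const struct blk_mq_ops *mq_ops,
                               spinlock_t *lock)
  {
          int ret;

          memset(&mq->tag_set, 0, sizeof(mq->tag_set));
          mq->tag_set.ops = mq_ops;
          mq->tag_set.queue_depth = q_depth;
          mq->tag_set.numa_node = NUMA_NO_NODE;
          /* BLK_MQ_F_BLOCKING: ->queue_rq() may sleep, e.g. to claim the host */
          mq->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE |
                              BLK_MQ_F_BLOCKING;
          mq->tag_set.nr_hw_queues = 1;
          /* the block core allocates a struct mmc_queue_req per request */
          mq->tag_set.cmd_size = sizeof(struct mmc_queue_req);
          mq->tag_set.driver_data = mq;

          ret = blk_mq_alloc_tag_set(&mq->tag_set);
          if (ret)
                  return ret;

          mq->queue = blk_mq_init_queue(&mq->tag_set);
          if (IS_ERR(mq->queue)) {
                  blk_mq_free_tag_set(&mq->tag_set);
                  return PTR_ERR(mq->queue);
          }

          mq->queue->queue_lock = lock;
          mq->queue->queuedata = mq;
          return 0;
  }

  Note there is no mmcqd kthread and no thread_sem any more: the block
  core drives dispatch, and per-request driver data comes via cmd_size.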

  Second, on top of the blkmq deployment, we have enabled full support
  for the eMMC command queueing feature introduced in the eMMC v5.1
  spec. This also includes an implementation of a host driver library
  supporting the corresponding CQHCI hardware. Ideally, controllers
  that support CQHCI should need only minor adaptations to make this
  work.
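
  To give a feel for what such an adaptation involves, here is a
  minimal sketch of the CQHCI glue in a host driver, built on the ops
  and helpers declared in the new cqhci.h (shown at the end of this
  diff). The my_* names are illustrative placeholders, not code from
  this series:

  #include <linux/platform_device.h>
  #include <linux/mmc/host.h>
  #include "cqhci.h"

  /* controller-specific hooks invoked by the CQHCI library */
  static void my_cqhci_enable(struct mmc_host *mmc)
  {
          /* e.g. switch the controller's DMA/IRQ setup to CQE mode */
  }

  static void my_cqhci_disable(struct mmc_host *mmc, bool recovery)
  {
          /* undo the above, possibly as part of error recovery */
  }

  static const struct cqhci_host_ops my_cqhci_ops = {
          .enable  = my_cqhci_enable,
          .disable = my_cqhci_disable,
  };

  /* called from the driver's probe, after the mmc_host is allocated */
  static int my_add_cqhci(struct platform_device *pdev,
                          struct mmc_host *mmc, bool dma64)
  {
          struct cqhci_host *cq_host;

          mmc->caps2 |= MMC_CAP2_CQE | MMC_CAP2_CQE_DCMD;

          cq_host = cqhci_pltfm_init(pdev); /* maps the "cqhci" reg resource */
          if (IS_ERR(cq_host))
                  return PTR_ERR(cq_host);

          cq_host->ops = &my_cqhci_ops;
          /* dma64: whether descriptors use 64-bit addressing */
          return cqhci_init(cq_host, mmc, dma64);
  }

  The controller's interrupt handler then forwards CQE interrupts to
  the library with cqhci_irq(mmc, intmask, cmd_error, data_error).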

  So far, the sdhci-pci driver for the Intel GLKs and the
  sdhci-of-arasan driver used on the Rockchip RK3399 have enabled
  support for eMMC command queueing.

  It is also worth highlighting that implementing the eMMC command
  queueing support has been a collaborative effort: several people from
  Codeaurora, Rockchip, Intel and Linaro have been involved, with the
  work driven by Adrian Hunter (Intel).

  Somewhat in the shadow of the above, here are the rest of the
  highlights:

  MMC core:
   - Don't remove non-removable cards during system suspend
   - Add a slot-gpio helper to check capability of GPIO WP detection
     (a usage sketch follows the quoted message below)

  MMC host:
   - sdhci: Cleanups and improvements of some wakeup related code
   - sdhci-pci-arasan: New variant to support Arasan PCI HW with integrated phy
   - sdhci-acpi: Avoid broken UHS transfer modes on Intel CHT
   - sdhci-acpi: Add support for ACPI HID of AMD Controller with HS400
   - sdhci_f_sdh30: Add ACPI support
   - sdhci-esdhc-imx: Enable/disable clock at runtime suspend/resume
   - sdhci-of-esdhc: A few minor fixes
   - mmci: Add support for new STM32 variant
   - renesas_sdhi: enable R-Car D3 (r8a77995) support
   - tmio/renesas_sdhi: Re-structuring, cleanups and modernizations"
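
A usage sketch for the slot-gpio helper mentioned in the core
highlights above: the new mmc_can_gpio_ro() (its short body is visible
in the slot-gpio.c hunk below) lets a host driver prefer a wired
write-protect GPIO and fall back to its own status register otherwise,
roughly mirroring the tmio .get_ro refactor in this series. The my_*
names are illustrative, not from this series:

static int my_get_ro(struct mmc_host *mmc)
{
        struct my_host *host = mmc_priv(mmc);

        /* use the WP GPIO if the platform described one */
        if (mmc_can_gpio_ro(mmc))
                return mmc_gpio_get_ro(mmc);

        /* otherwise read the controller's write-protect status bit */
        return !!(readl(host->base + MY_STATUS) & MY_STATUS_WP);
}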

* tag 'mmc-v4.16' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc: (96 commits)
  mmc: mmci: fix error return code in mmci_probe()
  mmc: davinci: suppress error message on EPROBE_DEFER
  mmc: davinci: dont' use module_platform_driver_probe()
  mmc: tmio: hide unused tmio_mmc_clk_disable/tmio_mmc_clk_enable functions
  mmc: mmci: Add STM32 variant
  mmc: mmci: Add support for setting pad type via pinctrl
  mmc: mmci: Don't pretend all variants to have OPENDRAIN bit
  mmc: mmci: Don't pretend all variants to have MCI_STARBITERR flag
  mmc: mmci: Don't pretend all variants to have MMCIMASK1 register
  mmc: tmio: refactor .get_ro hook
  mmc: slot-gpio: add a helper to check capability of GPIO WP detection
  mmc: tmio: remove dma_ops from tmio_mmc_host_probe() argument
  mmc: tmio: move {tmio_}mmc_of_parse() to tmio_mmc_host_alloc()
  mmc: tmio: move clk_enable/disable out of tmio_mmc_host_probe()
  mmc: tmio: ioremap memory resource in tmio_mmc_host_alloc()
  mmc: sh_mmcif: remove redundant initialization of 'opc'
  mmc: sdhci: Rework sdhci_enable_irq_wakeups()
  mmc: sdhci: Handle failure of enable_irq_wake()
  mmc: sdhci: Stop exporting sdhci_enable_irq_wakeups()
  mmc: sdhci-pci: Use device wakeup capability to determine MMC_PM_WAKE_SDIO_IRQ capability
  ...
commit 0bae60fcee by Linus Torvalds, 2018-01-29 11:26:11 -08:00
45 changed files with 4011 additions and 1477 deletions


@ -12,6 +12,8 @@ Required properties:
"mediatek,mt8173-mmc": for mmc host ip compatible with mt8173
"mediatek,mt2701-mmc": for mmc host ip compatible with mt2701
"mediatek,mt2712-mmc": for mmc host ip compatible with mt2712
"mediatek,mt7623-mmc", "mediatek,mt2701-mmc": for MT7623 SoC
- reg: physical base address of the controller and length
- interrupts: Should contain MSDC interrupt number
- clocks: Should contain phandle for the clock feeding the MMC controller


@ -26,6 +26,7 @@ Required properties:
"renesas,sdhi-r8a7794" - SDHI IP on R8A7794 SoC
"renesas,sdhi-r8a7795" - SDHI IP on R8A7795 SoC
"renesas,sdhi-r8a7796" - SDHI IP on R8A7796 SoC
"renesas,sdhi-r8a77995" - SDHI IP on R8A77995 SoC
"renesas,sdhi-shmobile" - a generic sh-mobile SDHI controller
"renesas,rcar-gen1-sdhi" - a generic R-Car Gen1 SDHI controller
"renesas,rcar-gen2-sdhi" - a generic R-Car Gen2 or RZ/G1

File diff suppressed because it is too large.


@ -5,6 +5,16 @@
struct mmc_queue;
struct request;
void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req);
void mmc_blk_cqe_recovery(struct mmc_queue *mq);
enum mmc_issued;
enum mmc_issued mmc_blk_mq_issue_rq(struct mmc_queue *mq, struct request *req);
void mmc_blk_mq_complete(struct request *req);
void mmc_blk_mq_recovery(struct mmc_queue *mq);
struct work_struct;
void mmc_blk_mq_complete_work(struct work_struct *work);
#endif


@ -351,8 +351,6 @@ int mmc_add_card(struct mmc_card *card)
#ifdef CONFIG_DEBUG_FS
mmc_add_card_debugfs(card);
#endif
mmc_init_context_info(card->host);
card->dev.of_node = mmc_of_find_child_device(card->host, 0);
device_enable_async_suspend(&card->dev);


@ -341,6 +341,8 @@ int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
{
int err;
init_completion(&mrq->cmd_completion);
mmc_retune_hold(host);
if (mmc_card_removed(host->card))
@ -361,20 +363,6 @@ int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
}
EXPORT_SYMBOL(mmc_start_request);
/*
* mmc_wait_data_done() - done callback for data request
* @mrq: done data request
*
* Wakes up mmc context, passed as a callback to host controller driver
*/
static void mmc_wait_data_done(struct mmc_request *mrq)
{
struct mmc_context_info *context_info = &mrq->host->context_info;
context_info->is_done_rcv = true;
wake_up_interruptible(&context_info->wait);
}
static void mmc_wait_done(struct mmc_request *mrq)
{
complete(&mrq->completion);
@ -392,37 +380,6 @@ static inline void mmc_wait_ongoing_tfr_cmd(struct mmc_host *host)
wait_for_completion(&ongoing_mrq->cmd_completion);
}
/*
*__mmc_start_data_req() - starts data request
* @host: MMC host to start the request
* @mrq: data request to start
*
* Sets the done callback to be called when request is completed by the card.
* Starts data mmc request execution
* If an ongoing transfer is already in progress, wait for the command line
* to become available before sending another command.
*/
static int __mmc_start_data_req(struct mmc_host *host, struct mmc_request *mrq)
{
int err;
mmc_wait_ongoing_tfr_cmd(host);
mrq->done = mmc_wait_data_done;
mrq->host = host;
init_completion(&mrq->cmd_completion);
err = mmc_start_request(host, mrq);
if (err) {
mrq->cmd->error = err;
mmc_complete_cmd(mrq);
mmc_wait_data_done(mrq);
}
return err;
}
static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
{
int err;
@ -432,8 +389,6 @@ static int __mmc_start_req(struct mmc_host *host, struct mmc_request *mrq)
init_completion(&mrq->completion);
mrq->done = mmc_wait_done;
init_completion(&mrq->cmd_completion);
err = mmc_start_request(host, mrq);
if (err) {
mrq->cmd->error = err;
@ -650,163 +605,10 @@ EXPORT_SYMBOL(mmc_cqe_recovery);
*/
bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq)
{
if (host->areq)
return host->context_info.is_done_rcv;
else
return completion_done(&mrq->completion);
return completion_done(&mrq->completion);
}
EXPORT_SYMBOL(mmc_is_req_done);
/**
* mmc_pre_req - Prepare for a new request
* @host: MMC host to prepare command
* @mrq: MMC request to prepare for
*
* mmc_pre_req() is called in prior to mmc_start_req() to let
* host prepare for the new request. Preparation of a request may be
* performed while another request is running on the host.
*/
static void mmc_pre_req(struct mmc_host *host, struct mmc_request *mrq)
{
if (host->ops->pre_req)
host->ops->pre_req(host, mrq);
}
/**
* mmc_post_req - Post process a completed request
* @host: MMC host to post process command
* @mrq: MMC request to post process for
* @err: Error, if non zero, clean up any resources made in pre_req
*
* Let the host post process a completed request. Post processing of
* a request may be performed while another reuqest is running.
*/
static void mmc_post_req(struct mmc_host *host, struct mmc_request *mrq,
int err)
{
if (host->ops->post_req)
host->ops->post_req(host, mrq, err);
}
/**
* mmc_finalize_areq() - finalize an asynchronous request
* @host: MMC host to finalize any ongoing request on
*
* Returns the status of the ongoing asynchronous request, but
* MMC_BLK_SUCCESS if no request was going on.
*/
static enum mmc_blk_status mmc_finalize_areq(struct mmc_host *host)
{
struct mmc_context_info *context_info = &host->context_info;
enum mmc_blk_status status;
if (!host->areq)
return MMC_BLK_SUCCESS;
while (1) {
wait_event_interruptible(context_info->wait,
(context_info->is_done_rcv ||
context_info->is_new_req));
if (context_info->is_done_rcv) {
struct mmc_command *cmd;
context_info->is_done_rcv = false;
cmd = host->areq->mrq->cmd;
if (!cmd->error || !cmd->retries ||
mmc_card_removed(host->card)) {
status = host->areq->err_check(host->card,
host->areq);
break; /* return status */
} else {
mmc_retune_recheck(host);
pr_info("%s: req failed (CMD%u): %d, retrying...\n",
mmc_hostname(host),
cmd->opcode, cmd->error);
cmd->retries--;
cmd->error = 0;
__mmc_start_request(host, host->areq->mrq);
continue; /* wait for done/new event again */
}
}
return MMC_BLK_NEW_REQUEST;
}
mmc_retune_release(host);
/*
* Check BKOPS urgency for each R1 response
*/
if (host->card && mmc_card_mmc(host->card) &&
((mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1) ||
(mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1B)) &&
(host->areq->mrq->cmd->resp[0] & R1_EXCEPTION_EVENT)) {
mmc_start_bkops(host->card, true);
}
return status;
}
/**
* mmc_start_areq - start an asynchronous request
* @host: MMC host to start command
* @areq: asynchronous request to start
* @ret_stat: out parameter for status
*
* Start a new MMC custom command request for a host.
* If there is on ongoing async request wait for completion
* of that request and start the new one and return.
* Does not wait for the new request to complete.
*
* Returns the completed request, NULL in case of none completed.
* Wait for the an ongoing request (previoulsy started) to complete and
* return the completed request. If there is no ongoing request, NULL
* is returned without waiting. NULL is not an error condition.
*/
struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
struct mmc_async_req *areq,
enum mmc_blk_status *ret_stat)
{
enum mmc_blk_status status;
int start_err = 0;
struct mmc_async_req *previous = host->areq;
/* Prepare a new request */
if (areq)
mmc_pre_req(host, areq->mrq);
/* Finalize previous request */
status = mmc_finalize_areq(host);
if (ret_stat)
*ret_stat = status;
/* The previous request is still going on... */
if (status == MMC_BLK_NEW_REQUEST)
return NULL;
/* Fine so far, start the new request! */
if (status == MMC_BLK_SUCCESS && areq)
start_err = __mmc_start_data_req(host, areq->mrq);
/* Postprocess the old request at this point */
if (host->areq)
mmc_post_req(host, host->areq->mrq, 0);
/* Cancel a prepared request if it was not started. */
if ((status != MMC_BLK_SUCCESS || start_err) && areq)
mmc_post_req(host, areq->mrq, -EINVAL);
if (status != MMC_BLK_SUCCESS)
host->areq = NULL;
else
host->areq = areq;
return previous;
}
EXPORT_SYMBOL(mmc_start_areq);
/**
* mmc_wait_for_req - start a request and wait for completion
* @host: MMC host to start command
@ -2959,6 +2761,14 @@ static int mmc_pm_notify(struct notifier_block *notify_block,
if (!err)
break;
if (!mmc_card_is_removable(host)) {
dev_warn(mmc_dev(host),
"pre_suspend failed for non-removable host: "
"%d\n", err);
/* Avoid removing non-removable hosts */
break;
}
/* Calling bus_ops->remove() with a claimed host can deadlock */
host->bus_ops->remove(host);
mmc_claim_host(host);
@ -2994,22 +2804,6 @@ void mmc_unregister_pm_notifier(struct mmc_host *host)
}
#endif
/**
* mmc_init_context_info() - init synchronization context
* @host: mmc host
*
* Init struct context_info needed to implement asynchronous
* request mechanism, used by mmc core, host driver and mmc requests
* supplier.
*/
void mmc_init_context_info(struct mmc_host *host)
{
host->context_info.is_new_req = false;
host->context_info.is_done_rcv = false;
host->context_info.is_waiting_last_req = false;
init_waitqueue_head(&host->context_info.wait);
}
static int __init mmc_init(void)
{
int ret;


@ -62,12 +62,10 @@ void mmc_set_initial_state(struct mmc_host *host);
static inline void mmc_delay(unsigned int ms)
{
if (ms < 1000 / HZ) {
cond_resched();
mdelay(ms);
} else {
if (ms <= 20)
usleep_range(ms * 1000, ms * 1250);
else
msleep(ms);
}
}
void mmc_rescan(struct work_struct *work);
@ -91,8 +89,6 @@ void mmc_remove_host_debugfs(struct mmc_host *host);
void mmc_add_card_debugfs(struct mmc_card *card);
void mmc_remove_card_debugfs(struct mmc_card *card);
void mmc_init_context_info(struct mmc_host *host);
int mmc_execute_tuning(struct mmc_card *card);
int mmc_hs200_to_hs400(struct mmc_card *card);
int mmc_hs400_to_hs200(struct mmc_card *card);
@ -110,12 +106,6 @@ bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq);
int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq);
struct mmc_async_req;
struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
struct mmc_async_req *areq,
enum mmc_blk_status *ret_stat);
int mmc_erase(struct mmc_card *card, unsigned int from, unsigned int nr,
unsigned int arg);
int mmc_can_erase(struct mmc_card *card);
@ -152,4 +142,35 @@ int mmc_cqe_start_req(struct mmc_host *host, struct mmc_request *mrq);
void mmc_cqe_post_req(struct mmc_host *host, struct mmc_request *mrq);
int mmc_cqe_recovery(struct mmc_host *host);
/**
* mmc_pre_req - Prepare for a new request
* @host: MMC host to prepare command
* @mrq: MMC request to prepare for
*
* mmc_pre_req() is called in prior to mmc_start_req() to let
* host prepare for the new request. Preparation of a request may be
* performed while another request is running on the host.
*/
static inline void mmc_pre_req(struct mmc_host *host, struct mmc_request *mrq)
{
if (host->ops->pre_req)
host->ops->pre_req(host, mrq);
}
/**
* mmc_post_req - Post process a completed request
* @host: MMC host to post process command
* @mrq: MMC request to post process for
* @err: Error, if non zero, clean up any resources made in pre_req
*
* Let the host post process a completed request. Post processing of
* a request may be performed while another request is running.
*/
static inline void mmc_post_req(struct mmc_host *host, struct mmc_request *mrq,
int err)
{
if (host->ops->post_req)
host->ops->post_req(host, mrq, err);
}
#endif


@ -41,6 +41,11 @@ static inline int mmc_host_cmd23(struct mmc_host *host)
return host->caps & MMC_CAP_CMD23;
}
static inline bool mmc_host_done_complete(struct mmc_host *host)
{
return host->caps & MMC_CAP_DONE_COMPLETE;
}
static inline int mmc_boot_partition_access(struct mmc_host *host)
{
return !(host->caps2 & MMC_CAP2_BOOTPART_NOACC);
@ -74,6 +79,5 @@ static inline bool mmc_card_hs400es(struct mmc_card *card)
return card->host->ios.enhanced_strobe;
}
#endif


@ -101,7 +101,7 @@ struct mmc_test_transfer_result {
struct list_head link;
unsigned int count;
unsigned int sectors;
struct timespec ts;
struct timespec64 ts;
unsigned int rate;
unsigned int iops;
};
@ -171,11 +171,6 @@ struct mmc_test_multiple_rw {
enum mmc_test_prep_media prepare;
};
struct mmc_test_async_req {
struct mmc_async_req areq;
struct mmc_test_card *test;
};
/*******************************************************************/
/* General helper functions */
/*******************************************************************/
@ -515,14 +510,11 @@ static int mmc_test_map_sg_max_scatter(struct mmc_test_mem *mem,
/*
* Calculate transfer rate in bytes per second.
*/
static unsigned int mmc_test_rate(uint64_t bytes, struct timespec *ts)
static unsigned int mmc_test_rate(uint64_t bytes, struct timespec64 *ts)
{
uint64_t ns;
ns = ts->tv_sec;
ns *= 1000000000;
ns += ts->tv_nsec;
ns = timespec64_to_ns(ts);
bytes *= 1000000000;
while (ns > UINT_MAX) {
@ -542,7 +534,7 @@ static unsigned int mmc_test_rate(uint64_t bytes, struct timespec *ts)
* Save transfer results for future usage
*/
static void mmc_test_save_transfer_result(struct mmc_test_card *test,
unsigned int count, unsigned int sectors, struct timespec ts,
unsigned int count, unsigned int sectors, struct timespec64 ts,
unsigned int rate, unsigned int iops)
{
struct mmc_test_transfer_result *tr;
@ -567,21 +559,21 @@ static void mmc_test_save_transfer_result(struct mmc_test_card *test,
* Print the transfer rate.
*/
static void mmc_test_print_rate(struct mmc_test_card *test, uint64_t bytes,
struct timespec *ts1, struct timespec *ts2)
struct timespec64 *ts1, struct timespec64 *ts2)
{
unsigned int rate, iops, sectors = bytes >> 9;
struct timespec ts;
struct timespec64 ts;
ts = timespec_sub(*ts2, *ts1);
ts = timespec64_sub(*ts2, *ts1);
rate = mmc_test_rate(bytes, &ts);
iops = mmc_test_rate(100, &ts); /* I/O ops per sec x 100 */
pr_info("%s: Transfer of %u sectors (%u%s KiB) took %lu.%09lu "
pr_info("%s: Transfer of %u sectors (%u%s KiB) took %llu.%09u "
"seconds (%u kB/s, %u KiB/s, %u.%02u IOPS)\n",
mmc_hostname(test->card->host), sectors, sectors >> 1,
(sectors & 1 ? ".5" : ""), (unsigned long)ts.tv_sec,
(unsigned long)ts.tv_nsec, rate / 1000, rate / 1024,
(sectors & 1 ? ".5" : ""), (u64)ts.tv_sec,
(u32)ts.tv_nsec, rate / 1000, rate / 1024,
iops / 100, iops % 100);
mmc_test_save_transfer_result(test, 1, sectors, ts, rate, iops);
@ -591,24 +583,24 @@ static void mmc_test_print_rate(struct mmc_test_card *test, uint64_t bytes,
* Print the average transfer rate.
*/
static void mmc_test_print_avg_rate(struct mmc_test_card *test, uint64_t bytes,
unsigned int count, struct timespec *ts1,
struct timespec *ts2)
unsigned int count, struct timespec64 *ts1,
struct timespec64 *ts2)
{
unsigned int rate, iops, sectors = bytes >> 9;
uint64_t tot = bytes * count;
struct timespec ts;
struct timespec64 ts;
ts = timespec_sub(*ts2, *ts1);
ts = timespec64_sub(*ts2, *ts1);
rate = mmc_test_rate(tot, &ts);
iops = mmc_test_rate(count * 100, &ts); /* I/O ops per sec x 100 */
pr_info("%s: Transfer of %u x %u sectors (%u x %u%s KiB) took "
"%lu.%09lu seconds (%u kB/s, %u KiB/s, "
"%llu.%09u seconds (%u kB/s, %u KiB/s, "
"%u.%02u IOPS, sg_len %d)\n",
mmc_hostname(test->card->host), count, sectors, count,
sectors >> 1, (sectors & 1 ? ".5" : ""),
(unsigned long)ts.tv_sec, (unsigned long)ts.tv_nsec,
(u64)ts.tv_sec, (u32)ts.tv_nsec,
rate / 1000, rate / 1024, iops / 100, iops % 100,
test->area.sg_len);
@ -741,30 +733,6 @@ static int mmc_test_check_result(struct mmc_test_card *test,
return ret;
}
static enum mmc_blk_status mmc_test_check_result_async(struct mmc_card *card,
struct mmc_async_req *areq)
{
struct mmc_test_async_req *test_async =
container_of(areq, struct mmc_test_async_req, areq);
int ret;
mmc_test_wait_busy(test_async->test);
/*
* FIXME: this would earlier just casts a regular error code,
* either of the kernel type -ERRORCODE or the local test framework
* RESULT_* errorcode, into an enum mmc_blk_status and return as
* result check. Instead, convert it to some reasonable type by just
* returning either MMC_BLK_SUCCESS or MMC_BLK_CMD_ERR.
* If possible, a reasonable error code should be returned.
*/
ret = mmc_test_check_result(test_async->test, areq->mrq);
if (ret)
return MMC_BLK_CMD_ERR;
return MMC_BLK_SUCCESS;
}
/*
* Checks that a "short transfer" behaved as expected
*/
@ -831,6 +799,45 @@ static struct mmc_test_req *mmc_test_req_alloc(void)
return rq;
}
static void mmc_test_wait_done(struct mmc_request *mrq)
{
complete(&mrq->completion);
}
static int mmc_test_start_areq(struct mmc_test_card *test,
struct mmc_request *mrq,
struct mmc_request *prev_mrq)
{
struct mmc_host *host = test->card->host;
int err = 0;
if (mrq) {
init_completion(&mrq->completion);
mrq->done = mmc_test_wait_done;
mmc_pre_req(host, mrq);
}
if (prev_mrq) {
wait_for_completion(&prev_mrq->completion);
err = mmc_test_wait_busy(test);
if (!err)
err = mmc_test_check_result(test, prev_mrq);
}
if (!err && mrq) {
err = mmc_start_request(host, mrq);
if (err)
mmc_retune_release(host);
}
if (prev_mrq)
mmc_post_req(host, prev_mrq, 0);
if (err && mrq)
mmc_post_req(host, mrq, err);
return err;
}
static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
struct scatterlist *sg, unsigned sg_len,
@ -838,17 +845,10 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
unsigned blksz, int write, int count)
{
struct mmc_test_req *rq1, *rq2;
struct mmc_test_async_req test_areq[2];
struct mmc_async_req *done_areq;
struct mmc_async_req *cur_areq = &test_areq[0].areq;
struct mmc_async_req *other_areq = &test_areq[1].areq;
enum mmc_blk_status status;
struct mmc_request *mrq, *prev_mrq;
int i;
int ret = RESULT_OK;
test_areq[0].test = test;
test_areq[1].test = test;
rq1 = mmc_test_req_alloc();
rq2 = mmc_test_req_alloc();
if (!rq1 || !rq2) {
@ -856,33 +856,25 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
goto err;
}
cur_areq->mrq = &rq1->mrq;
cur_areq->err_check = mmc_test_check_result_async;
other_areq->mrq = &rq2->mrq;
other_areq->err_check = mmc_test_check_result_async;
mrq = &rq1->mrq;
prev_mrq = NULL;
for (i = 0; i < count; i++) {
mmc_test_prepare_mrq(test, cur_areq->mrq, sg, sg_len, dev_addr,
blocks, blksz, write);
done_areq = mmc_start_areq(test->card->host, cur_areq, &status);
if (status != MMC_BLK_SUCCESS || (!done_areq && i > 0)) {
ret = RESULT_FAIL;
mmc_test_req_reset(container_of(mrq, struct mmc_test_req, mrq));
mmc_test_prepare_mrq(test, mrq, sg, sg_len, dev_addr, blocks,
blksz, write);
ret = mmc_test_start_areq(test, mrq, prev_mrq);
if (ret)
goto err;
}
if (done_areq)
mmc_test_req_reset(container_of(done_areq->mrq,
struct mmc_test_req, mrq));
if (!prev_mrq)
prev_mrq = &rq2->mrq;
swap(cur_areq, other_areq);
swap(mrq, prev_mrq);
dev_addr += blocks;
}
done_areq = mmc_start_areq(test->card->host, NULL, &status);
if (status != MMC_BLK_SUCCESS)
ret = RESULT_FAIL;
ret = mmc_test_start_areq(test, NULL, prev_mrq);
err:
kfree(rq1);
kfree(rq2);
@ -1449,7 +1441,7 @@ static int mmc_test_area_io_seq(struct mmc_test_card *test, unsigned long sz,
int max_scatter, int timed, int count,
bool nonblock, int min_sg_len)
{
struct timespec ts1, ts2;
struct timespec64 ts1, ts2;
int ret = 0;
int i;
struct mmc_test_area *t = &test->area;
@ -1475,7 +1467,7 @@ static int mmc_test_area_io_seq(struct mmc_test_card *test, unsigned long sz,
return ret;
if (timed)
getnstimeofday(&ts1);
ktime_get_ts64(&ts1);
if (nonblock)
ret = mmc_test_nonblock_transfer(test, t->sg, t->sg_len,
dev_addr, t->blocks, 512, write, count);
@ -1489,7 +1481,7 @@ static int mmc_test_area_io_seq(struct mmc_test_card *test, unsigned long sz,
return ret;
if (timed)
getnstimeofday(&ts2);
ktime_get_ts64(&ts2);
if (timed)
mmc_test_print_avg_rate(test, sz, count, &ts1, &ts2);
@ -1747,7 +1739,7 @@ static int mmc_test_profile_trim_perf(struct mmc_test_card *test)
struct mmc_test_area *t = &test->area;
unsigned long sz;
unsigned int dev_addr;
struct timespec ts1, ts2;
struct timespec64 ts1, ts2;
int ret;
if (!mmc_can_trim(test->card))
@ -1758,19 +1750,19 @@ static int mmc_test_profile_trim_perf(struct mmc_test_card *test)
for (sz = 512; sz < t->max_sz; sz <<= 1) {
dev_addr = t->dev_addr + (sz >> 9);
getnstimeofday(&ts1);
ktime_get_ts64(&ts1);
ret = mmc_erase(test->card, dev_addr, sz >> 9, MMC_TRIM_ARG);
if (ret)
return ret;
getnstimeofday(&ts2);
ktime_get_ts64(&ts2);
mmc_test_print_rate(test, sz, &ts1, &ts2);
}
dev_addr = t->dev_addr;
getnstimeofday(&ts1);
ktime_get_ts64(&ts1);
ret = mmc_erase(test->card, dev_addr, sz >> 9, MMC_TRIM_ARG);
if (ret)
return ret;
getnstimeofday(&ts2);
ktime_get_ts64(&ts2);
mmc_test_print_rate(test, sz, &ts1, &ts2);
return 0;
}
@ -1779,19 +1771,19 @@ static int mmc_test_seq_read_perf(struct mmc_test_card *test, unsigned long sz)
{
struct mmc_test_area *t = &test->area;
unsigned int dev_addr, i, cnt;
struct timespec ts1, ts2;
struct timespec64 ts1, ts2;
int ret;
cnt = t->max_sz / sz;
dev_addr = t->dev_addr;
getnstimeofday(&ts1);
ktime_get_ts64(&ts1);
for (i = 0; i < cnt; i++) {
ret = mmc_test_area_io(test, sz, dev_addr, 0, 0, 0);
if (ret)
return ret;
dev_addr += (sz >> 9);
}
getnstimeofday(&ts2);
ktime_get_ts64(&ts2);
mmc_test_print_avg_rate(test, sz, cnt, &ts1, &ts2);
return 0;
}
@ -1818,7 +1810,7 @@ static int mmc_test_seq_write_perf(struct mmc_test_card *test, unsigned long sz)
{
struct mmc_test_area *t = &test->area;
unsigned int dev_addr, i, cnt;
struct timespec ts1, ts2;
struct timespec64 ts1, ts2;
int ret;
ret = mmc_test_area_erase(test);
@ -1826,14 +1818,14 @@ static int mmc_test_seq_write_perf(struct mmc_test_card *test, unsigned long sz)
return ret;
cnt = t->max_sz / sz;
dev_addr = t->dev_addr;
getnstimeofday(&ts1);
ktime_get_ts64(&ts1);
for (i = 0; i < cnt; i++) {
ret = mmc_test_area_io(test, sz, dev_addr, 1, 0, 0);
if (ret)
return ret;
dev_addr += (sz >> 9);
}
getnstimeofday(&ts2);
ktime_get_ts64(&ts2);
mmc_test_print_avg_rate(test, sz, cnt, &ts1, &ts2);
return 0;
}
@ -1864,7 +1856,7 @@ static int mmc_test_profile_seq_trim_perf(struct mmc_test_card *test)
struct mmc_test_area *t = &test->area;
unsigned long sz;
unsigned int dev_addr, i, cnt;
struct timespec ts1, ts2;
struct timespec64 ts1, ts2;
int ret;
if (!mmc_can_trim(test->card))
@ -1882,7 +1874,7 @@ static int mmc_test_profile_seq_trim_perf(struct mmc_test_card *test)
return ret;
cnt = t->max_sz / sz;
dev_addr = t->dev_addr;
getnstimeofday(&ts1);
ktime_get_ts64(&ts1);
for (i = 0; i < cnt; i++) {
ret = mmc_erase(test->card, dev_addr, sz >> 9,
MMC_TRIM_ARG);
@ -1890,7 +1882,7 @@ static int mmc_test_profile_seq_trim_perf(struct mmc_test_card *test)
return ret;
dev_addr += (sz >> 9);
}
getnstimeofday(&ts2);
ktime_get_ts64(&ts2);
mmc_test_print_avg_rate(test, sz, cnt, &ts1, &ts2);
}
return 0;
@ -1912,7 +1904,7 @@ static int mmc_test_rnd_perf(struct mmc_test_card *test, int write, int print,
{
unsigned int dev_addr, cnt, rnd_addr, range1, range2, last_ea = 0, ea;
unsigned int ssz;
struct timespec ts1, ts2, ts;
struct timespec64 ts1, ts2, ts;
int ret;
ssz = sz >> 9;
@ -1921,10 +1913,10 @@ static int mmc_test_rnd_perf(struct mmc_test_card *test, int write, int print,
range1 = rnd_addr / test->card->pref_erase;
range2 = range1 / ssz;
getnstimeofday(&ts1);
ktime_get_ts64(&ts1);
for (cnt = 0; cnt < UINT_MAX; cnt++) {
getnstimeofday(&ts2);
ts = timespec_sub(ts2, ts1);
ktime_get_ts64(&ts2);
ts = timespec64_sub(ts2, ts1);
if (ts.tv_sec >= 10)
break;
ea = mmc_test_rnd_num(range1);
@ -1998,7 +1990,7 @@ static int mmc_test_seq_perf(struct mmc_test_card *test, int write,
{
struct mmc_test_area *t = &test->area;
unsigned int dev_addr, i, cnt, sz, ssz;
struct timespec ts1, ts2;
struct timespec64 ts1, ts2;
int ret;
sz = t->max_tfr;
@ -2025,7 +2017,7 @@ static int mmc_test_seq_perf(struct mmc_test_card *test, int write,
cnt = tot_sz / sz;
dev_addr &= 0xffff0000; /* Round to 64MiB boundary */
getnstimeofday(&ts1);
ktime_get_ts64(&ts1);
for (i = 0; i < cnt; i++) {
ret = mmc_test_area_io(test, sz, dev_addr, write,
max_scatter, 0);
@ -2033,7 +2025,7 @@ static int mmc_test_seq_perf(struct mmc_test_card *test, int write,
return ret;
dev_addr += ssz;
}
getnstimeofday(&ts2);
ktime_get_ts64(&ts2);
mmc_test_print_avg_rate(test, sz, cnt, &ts1, &ts2);
@ -2328,10 +2320,17 @@ static int mmc_test_reset(struct mmc_test_card *test)
int err;
err = mmc_hw_reset(host);
if (!err)
if (!err) {
/*
* Reset will re-enable the card's command queue, but tests
* expect it to be disabled.
*/
if (card->ext_csd.cmdq_en)
mmc_cmdq_disable(card);
return RESULT_OK;
else if (err == -EOPNOTSUPP)
} else if (err == -EOPNOTSUPP) {
return RESULT_UNSUP_HOST;
}
return RESULT_FAIL;
}
@ -2356,11 +2355,9 @@ static int mmc_test_ongoing_transfer(struct mmc_test_card *test,
struct mmc_test_req *rq = mmc_test_req_alloc();
struct mmc_host *host = test->card->host;
struct mmc_test_area *t = &test->area;
struct mmc_test_async_req test_areq = { .test = test };
struct mmc_request *mrq;
unsigned long timeout;
bool expired = false;
enum mmc_blk_status blkstat = MMC_BLK_SUCCESS;
int ret = 0, cmd_ret;
u32 status = 0;
int count = 0;
@ -2373,9 +2370,6 @@ static int mmc_test_ongoing_transfer(struct mmc_test_card *test,
mrq->sbc = &rq->sbc;
mrq->cap_cmd_during_tfr = true;
test_areq.areq.mrq = mrq;
test_areq.areq.err_check = mmc_test_check_result_async;
mmc_test_prepare_mrq(test, mrq, t->sg, t->sg_len, dev_addr, t->blocks,
512, write);
@ -2388,11 +2382,9 @@ static int mmc_test_ongoing_transfer(struct mmc_test_card *test,
/* Start ongoing data request */
if (use_areq) {
mmc_start_areq(host, &test_areq.areq, &blkstat);
if (blkstat != MMC_BLK_SUCCESS) {
ret = RESULT_FAIL;
ret = mmc_test_start_areq(test, mrq, NULL);
if (ret)
goto out_free;
}
} else {
mmc_wait_for_req(host, mrq);
}
@ -2426,9 +2418,7 @@ static int mmc_test_ongoing_transfer(struct mmc_test_card *test,
/* Wait for data request to complete */
if (use_areq) {
mmc_start_areq(host, NULL, &blkstat);
if (blkstat != MMC_BLK_SUCCESS)
ret = RESULT_FAIL;
ret = mmc_test_start_areq(test, NULL, mrq);
} else {
mmc_wait_for_req_done(test->card->host, mrq);
}
@ -3066,10 +3056,9 @@ static int mtf_test_show(struct seq_file *sf, void *data)
seq_printf(sf, "Test %d: %d\n", gr->testcase + 1, gr->result);
list_for_each_entry(tr, &gr->tr_lst, link) {
seq_printf(sf, "%u %d %lu.%09lu %u %u.%02u\n",
seq_printf(sf, "%u %d %llu.%09u %u %u.%02u\n",
tr->count, tr->sectors,
(unsigned long)tr->ts.tv_sec,
(unsigned long)tr->ts.tv_nsec,
(u64)tr->ts.tv_sec, (u32)tr->ts.tv_nsec,
tr->rate, tr->iops / 100, tr->iops % 100);
}
}


@ -22,100 +22,147 @@
#include "block.h"
#include "core.h"
#include "card.h"
#include "host.h"
/*
* Prepare a MMC request. This just filters out odd stuff.
*/
static int mmc_prep_request(struct request_queue *q, struct request *req)
static inline bool mmc_cqe_dcmd_busy(struct mmc_queue *mq)
{
struct mmc_queue *mq = q->queuedata;
if (mq && mmc_card_removed(mq->card))
return BLKPREP_KILL;
req->rq_flags |= RQF_DONTPREP;
return BLKPREP_OK;
/* Allow only 1 DCMD at a time */
return mq->in_flight[MMC_ISSUE_DCMD];
}
static int mmc_queue_thread(void *d)
void mmc_cqe_check_busy(struct mmc_queue *mq)
{
struct mmc_queue *mq = d;
if ((mq->cqe_busy & MMC_CQE_DCMD_BUSY) && !mmc_cqe_dcmd_busy(mq))
mq->cqe_busy &= ~MMC_CQE_DCMD_BUSY;
mq->cqe_busy &= ~MMC_CQE_QUEUE_FULL;
}
static inline bool mmc_cqe_can_dcmd(struct mmc_host *host)
{
return host->caps2 & MMC_CAP2_CQE_DCMD;
}
static enum mmc_issue_type mmc_cqe_issue_type(struct mmc_host *host,
struct request *req)
{
switch (req_op(req)) {
case REQ_OP_DRV_IN:
case REQ_OP_DRV_OUT:
case REQ_OP_DISCARD:
case REQ_OP_SECURE_ERASE:
return MMC_ISSUE_SYNC;
case REQ_OP_FLUSH:
return mmc_cqe_can_dcmd(host) ? MMC_ISSUE_DCMD : MMC_ISSUE_SYNC;
default:
return MMC_ISSUE_ASYNC;
}
}
enum mmc_issue_type mmc_issue_type(struct mmc_queue *mq, struct request *req)
{
struct mmc_host *host = mq->card->host;
if (mq->use_cqe)
return mmc_cqe_issue_type(host, req);
if (req_op(req) == REQ_OP_READ || req_op(req) == REQ_OP_WRITE)
return MMC_ISSUE_ASYNC;
return MMC_ISSUE_SYNC;
}
static void __mmc_cqe_recovery_notifier(struct mmc_queue *mq)
{
if (!mq->recovery_needed) {
mq->recovery_needed = true;
schedule_work(&mq->recovery_work);
}
}
void mmc_cqe_recovery_notifier(struct mmc_request *mrq)
{
struct mmc_queue_req *mqrq = container_of(mrq, struct mmc_queue_req,
brq.mrq);
struct request *req = mmc_queue_req_to_req(mqrq);
struct request_queue *q = req->q;
struct mmc_queue *mq = q->queuedata;
unsigned long flags;
spin_lock_irqsave(q->queue_lock, flags);
__mmc_cqe_recovery_notifier(mq);
spin_unlock_irqrestore(q->queue_lock, flags);
}
static enum blk_eh_timer_return mmc_cqe_timed_out(struct request *req)
{
struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
struct mmc_request *mrq = &mqrq->brq.mrq;
struct mmc_queue *mq = req->q->queuedata;
struct mmc_host *host = mq->card->host;
enum mmc_issue_type issue_type = mmc_issue_type(mq, req);
bool recovery_needed = false;
switch (issue_type) {
case MMC_ISSUE_ASYNC:
case MMC_ISSUE_DCMD:
if (host->cqe_ops->cqe_timeout(host, mrq, &recovery_needed)) {
if (recovery_needed)
__mmc_cqe_recovery_notifier(mq);
return BLK_EH_RESET_TIMER;
}
/* No timeout */
return BLK_EH_HANDLED;
default:
/* Timeout is handled by mmc core */
return BLK_EH_RESET_TIMER;
}
}
static enum blk_eh_timer_return mmc_mq_timed_out(struct request *req,
bool reserved)
{
struct request_queue *q = req->q;
struct mmc_queue *mq = q->queuedata;
unsigned long flags;
int ret;
spin_lock_irqsave(q->queue_lock, flags);
if (mq->recovery_needed || !mq->use_cqe)
ret = BLK_EH_RESET_TIMER;
else
ret = mmc_cqe_timed_out(req);
spin_unlock_irqrestore(q->queue_lock, flags);
return ret;
}
static void mmc_mq_recovery_handler(struct work_struct *work)
{
struct mmc_queue *mq = container_of(work, struct mmc_queue,
recovery_work);
struct request_queue *q = mq->queue;
struct mmc_context_info *cntx = &mq->card->host->context_info;
current->flags |= PF_MEMALLOC;
mmc_get_card(mq->card, &mq->ctx);
down(&mq->thread_sem);
do {
struct request *req;
mq->in_recovery = true;
spin_lock_irq(q->queue_lock);
set_current_state(TASK_INTERRUPTIBLE);
req = blk_fetch_request(q);
mq->asleep = false;
cntx->is_waiting_last_req = false;
cntx->is_new_req = false;
if (!req) {
/*
* Dispatch queue is empty so set flags for
* mmc_request_fn() to wake us up.
*/
if (mq->qcnt)
cntx->is_waiting_last_req = true;
else
mq->asleep = true;
}
spin_unlock_irq(q->queue_lock);
if (mq->use_cqe)
mmc_blk_cqe_recovery(mq);
else
mmc_blk_mq_recovery(mq);
if (req || mq->qcnt) {
set_current_state(TASK_RUNNING);
mmc_blk_issue_rq(mq, req);
cond_resched();
} else {
if (kthread_should_stop()) {
set_current_state(TASK_RUNNING);
break;
}
up(&mq->thread_sem);
schedule();
down(&mq->thread_sem);
}
} while (1);
up(&mq->thread_sem);
mq->in_recovery = false;
return 0;
}
spin_lock_irq(q->queue_lock);
mq->recovery_needed = false;
spin_unlock_irq(q->queue_lock);
/*
* Generic MMC request handler. This is called for any queue on a
* particular host. When the host is not busy, we look for a request
* on any queue on this host, and attempt to issue it. This may
* not be the queue we were asked to process.
*/
static void mmc_request_fn(struct request_queue *q)
{
struct mmc_queue *mq = q->queuedata;
struct request *req;
struct mmc_context_info *cntx;
mmc_put_card(mq->card, &mq->ctx);
if (!mq) {
while ((req = blk_fetch_request(q)) != NULL) {
req->rq_flags |= RQF_QUIET;
__blk_end_request_all(req, BLK_STS_IOERR);
}
return;
}
cntx = &mq->card->host->context_info;
if (cntx->is_waiting_last_req) {
cntx->is_new_req = true;
wake_up_interruptible(&cntx->wait);
}
if (mq->asleep)
wake_up_process(mq->thread);
blk_mq_run_hw_queues(q, true);
}
static struct scatterlist *mmc_alloc_sg(int sg_len, gfp_t gfp)
@ -154,11 +201,10 @@ static void mmc_queue_setup_discard(struct request_queue *q,
* @req: the request
* @gfp: memory allocation policy
*/
static int mmc_init_request(struct request_queue *q, struct request *req,
gfp_t gfp)
static int __mmc_init_request(struct mmc_queue *mq, struct request *req,
gfp_t gfp)
{
struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
struct mmc_queue *mq = q->queuedata;
struct mmc_card *card = mq->card;
struct mmc_host *host = card->host;
@ -177,6 +223,131 @@ static void mmc_exit_request(struct request_queue *q, struct request *req)
mq_rq->sg = NULL;
}
static int mmc_mq_init_request(struct blk_mq_tag_set *set, struct request *req,
unsigned int hctx_idx, unsigned int numa_node)
{
return __mmc_init_request(set->driver_data, req, GFP_KERNEL);
}
static void mmc_mq_exit_request(struct blk_mq_tag_set *set, struct request *req,
unsigned int hctx_idx)
{
struct mmc_queue *mq = set->driver_data;
mmc_exit_request(mq->queue, req);
}
/*
* We use BLK_MQ_F_BLOCKING and have only 1 hardware queue, which means requests
* will not be dispatched in parallel.
*/
static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
const struct blk_mq_queue_data *bd)
{
struct request *req = bd->rq;
struct request_queue *q = req->q;
struct mmc_queue *mq = q->queuedata;
struct mmc_card *card = mq->card;
struct mmc_host *host = card->host;
enum mmc_issue_type issue_type;
enum mmc_issued issued;
bool get_card, cqe_retune_ok;
int ret;
if (mmc_card_removed(mq->card)) {
req->rq_flags |= RQF_QUIET;
return BLK_STS_IOERR;
}
issue_type = mmc_issue_type(mq, req);
spin_lock_irq(q->queue_lock);
if (mq->recovery_needed) {
spin_unlock_irq(q->queue_lock);
return BLK_STS_RESOURCE;
}
switch (issue_type) {
case MMC_ISSUE_DCMD:
if (mmc_cqe_dcmd_busy(mq)) {
mq->cqe_busy |= MMC_CQE_DCMD_BUSY;
spin_unlock_irq(q->queue_lock);
return BLK_STS_RESOURCE;
}
break;
case MMC_ISSUE_ASYNC:
break;
default:
/*
* Timeouts are handled by mmc core, and we don't have a host
* API to abort requests, so we can't handle the timeout anyway.
* However, when the timeout happens, blk_mq_complete_request()
* no longer works (to stop the request disappearing under us).
* To avoid racing with that, set a large timeout.
*/
req->timeout = 600 * HZ;
break;
}
mq->in_flight[issue_type] += 1;
get_card = (mmc_tot_in_flight(mq) == 1);
cqe_retune_ok = (mmc_cqe_qcnt(mq) == 1);
spin_unlock_irq(q->queue_lock);
if (!(req->rq_flags & RQF_DONTPREP)) {
req_to_mmc_queue_req(req)->retries = 0;
req->rq_flags |= RQF_DONTPREP;
}
if (get_card)
mmc_get_card(card, &mq->ctx);
if (mq->use_cqe) {
host->retune_now = host->need_retune && cqe_retune_ok &&
!host->hold_retune;
}
blk_mq_start_request(req);
issued = mmc_blk_mq_issue_rq(mq, req);
switch (issued) {
case MMC_REQ_BUSY:
ret = BLK_STS_RESOURCE;
break;
case MMC_REQ_FAILED_TO_START:
ret = BLK_STS_IOERR;
break;
default:
ret = BLK_STS_OK;
break;
}
if (issued != MMC_REQ_STARTED) {
bool put_card = false;
spin_lock_irq(q->queue_lock);
mq->in_flight[issue_type] -= 1;
if (mmc_tot_in_flight(mq) == 0)
put_card = true;
spin_unlock_irq(q->queue_lock);
if (put_card)
mmc_put_card(card, &mq->ctx);
}
return ret;
}
static const struct blk_mq_ops mmc_mq_ops = {
.queue_rq = mmc_mq_queue_rq,
.init_request = mmc_mq_init_request,
.exit_request = mmc_mq_exit_request,
.complete = mmc_blk_mq_complete,
.timeout = mmc_mq_timed_out,
};
static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
{
struct mmc_host *host = card->host;
@ -196,8 +367,78 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
blk_queue_max_segments(mq->queue, host->max_segs);
blk_queue_max_segment_size(mq->queue, host->max_seg_size);
/* Initialize thread_sem even if it is not used */
sema_init(&mq->thread_sem, 1);
INIT_WORK(&mq->recovery_work, mmc_mq_recovery_handler);
INIT_WORK(&mq->complete_work, mmc_blk_mq_complete_work);
mutex_init(&mq->complete_lock);
init_waitqueue_head(&mq->wait);
}
static int mmc_mq_init_queue(struct mmc_queue *mq, int q_depth,
const struct blk_mq_ops *mq_ops, spinlock_t *lock)
{
int ret;
memset(&mq->tag_set, 0, sizeof(mq->tag_set));
mq->tag_set.ops = mq_ops;
mq->tag_set.queue_depth = q_depth;
mq->tag_set.numa_node = NUMA_NO_NODE;
mq->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE |
BLK_MQ_F_BLOCKING;
mq->tag_set.nr_hw_queues = 1;
mq->tag_set.cmd_size = sizeof(struct mmc_queue_req);
mq->tag_set.driver_data = mq;
ret = blk_mq_alloc_tag_set(&mq->tag_set);
if (ret)
return ret;
mq->queue = blk_mq_init_queue(&mq->tag_set);
if (IS_ERR(mq->queue)) {
ret = PTR_ERR(mq->queue);
goto free_tag_set;
}
mq->queue->queue_lock = lock;
mq->queue->queuedata = mq;
return 0;
free_tag_set:
blk_mq_free_tag_set(&mq->tag_set);
return ret;
}
/* Set queue depth to get a reasonable value for q->nr_requests */
#define MMC_QUEUE_DEPTH 64
static int mmc_mq_init(struct mmc_queue *mq, struct mmc_card *card,
spinlock_t *lock)
{
struct mmc_host *host = card->host;
int q_depth;
int ret;
/*
* The queue depth for CQE must match the hardware because the request
* tag is used to index the hardware queue.
*/
if (mq->use_cqe)
q_depth = min_t(int, card->ext_csd.cmdq_depth, host->cqe_qdepth);
else
q_depth = MMC_QUEUE_DEPTH;
ret = mmc_mq_init_queue(mq, q_depth, &mmc_mq_ops, lock);
if (ret)
return ret;
blk_queue_rq_timeout(mq->queue, 60 * HZ);
mmc_setup_queue(mq, card);
return 0;
}
/**
@ -213,108 +454,53 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
spinlock_t *lock, const char *subname)
{
struct mmc_host *host = card->host;
int ret = -ENOMEM;
mq->card = card;
mq->queue = blk_alloc_queue(GFP_KERNEL);
if (!mq->queue)
return -ENOMEM;
mq->queue->queue_lock = lock;
mq->queue->request_fn = mmc_request_fn;
mq->queue->init_rq_fn = mmc_init_request;
mq->queue->exit_rq_fn = mmc_exit_request;
mq->queue->cmd_size = sizeof(struct mmc_queue_req);
mq->queue->queuedata = mq;
mq->qcnt = 0;
ret = blk_init_allocated_queue(mq->queue);
if (ret) {
blk_cleanup_queue(mq->queue);
return ret;
}
blk_queue_prep_rq(mq->queue, mmc_prep_request);
mq->use_cqe = host->cqe_enabled;
mmc_setup_queue(mq, card);
return mmc_mq_init(mq, card, lock);
}
mq->thread = kthread_run(mmc_queue_thread, mq, "mmcqd/%d%s",
host->index, subname ? subname : "");
void mmc_queue_suspend(struct mmc_queue *mq)
{
blk_mq_quiesce_queue(mq->queue);
if (IS_ERR(mq->thread)) {
ret = PTR_ERR(mq->thread);
goto cleanup_queue;
}
/*
* The host remains claimed while there are outstanding requests, so
* simply claiming and releasing here ensures there are none.
*/
mmc_claim_host(mq->card->host);
mmc_release_host(mq->card->host);
}
return 0;
cleanup_queue:
blk_cleanup_queue(mq->queue);
return ret;
void mmc_queue_resume(struct mmc_queue *mq)
{
blk_mq_unquiesce_queue(mq->queue);
}
void mmc_cleanup_queue(struct mmc_queue *mq)
{
struct request_queue *q = mq->queue;
unsigned long flags;
/* Make sure the queue isn't suspended, as that will deadlock */
mmc_queue_resume(mq);
/*
* The legacy code handled the possibility of being suspended,
* so do that here too.
*/
if (blk_queue_quiesced(q))
blk_mq_unquiesce_queue(q);
/* Then terminate our worker thread */
kthread_stop(mq->thread);
blk_cleanup_queue(q);
/* Empty the queue */
spin_lock_irqsave(q->queue_lock, flags);
q->queuedata = NULL;
blk_start_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
/*
* A request can be completed before the next request, potentially
* leaving a complete_work with nothing to do. Such a work item might
* still be queued at this point. Flush it.
*/
flush_work(&mq->complete_work);
mq->card = NULL;
}
EXPORT_SYMBOL(mmc_cleanup_queue);
/**
* mmc_queue_suspend - suspend a MMC request queue
* @mq: MMC queue to suspend
*
* Stop the block request queue, and wait for our thread to
* complete any outstanding requests. This ensures that we
* won't suspend while a request is being processed.
*/
void mmc_queue_suspend(struct mmc_queue *mq)
{
struct request_queue *q = mq->queue;
unsigned long flags;
if (!mq->suspended) {
mq->suspended |= true;
spin_lock_irqsave(q->queue_lock, flags);
blk_stop_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
down(&mq->thread_sem);
}
}
/**
* mmc_queue_resume - resume a previously suspended MMC request queue
* @mq: MMC queue to resume
*/
void mmc_queue_resume(struct mmc_queue *mq)
{
struct request_queue *q = mq->queue;
unsigned long flags;
if (mq->suspended) {
mq->suspended = false;
up(&mq->thread_sem);
spin_lock_irqsave(q->queue_lock, flags);
blk_start_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
}
}
/*
* Prepare the sg list(s) to be handed of to the host driver


@ -8,6 +8,20 @@
#include <linux/mmc/core.h>
#include <linux/mmc/host.h>
enum mmc_issued {
MMC_REQ_STARTED,
MMC_REQ_BUSY,
MMC_REQ_FAILED_TO_START,
MMC_REQ_FINISHED,
};
enum mmc_issue_type {
MMC_ISSUE_SYNC,
MMC_ISSUE_DCMD,
MMC_ISSUE_ASYNC,
MMC_ISSUE_MAX,
};
static inline struct mmc_queue_req *req_to_mmc_queue_req(struct request *rq)
{
return blk_mq_rq_to_pdu(rq);
@ -20,7 +34,6 @@ static inline struct request *mmc_queue_req_to_req(struct mmc_queue_req *mqr)
return blk_mq_rq_from_pdu(mqr);
}
struct task_struct;
struct mmc_blk_data;
struct mmc_blk_ioc_data;
@ -30,7 +43,6 @@ struct mmc_blk_request {
struct mmc_command cmd;
struct mmc_command stop;
struct mmc_data data;
int retune_retry_done;
};
/**
@ -52,28 +64,34 @@ enum mmc_drv_op {
struct mmc_queue_req {
struct mmc_blk_request brq;
struct scatterlist *sg;
struct mmc_async_req areq;
enum mmc_drv_op drv_op;
int drv_op_result;
void *drv_op_data;
unsigned int ioc_count;
int retries;
};
struct mmc_queue {
struct mmc_card *card;
struct task_struct *thread;
struct semaphore thread_sem;
bool suspended;
bool asleep;
struct mmc_ctx ctx;
struct blk_mq_tag_set tag_set;
struct mmc_blk_data *blkdata;
struct request_queue *queue;
/*
* FIXME: this counter is not a very reliable way of keeping
* track of how many requests that are ongoing. Switch to just
* letting the block core keep track of requests and per-request
* associated mmc_queue_req data.
*/
int qcnt;
int in_flight[MMC_ISSUE_MAX];
unsigned int cqe_busy;
#define MMC_CQE_DCMD_BUSY BIT(0)
#define MMC_CQE_QUEUE_FULL BIT(1)
bool use_cqe;
bool recovery_needed;
bool in_recovery;
bool rw_wait;
bool waiting;
struct work_struct recovery_work;
wait_queue_head_t wait;
struct request *recovery_req;
struct request *complete_req;
struct mutex complete_lock;
struct work_struct complete_work;
};
extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
@ -84,4 +102,22 @@ extern void mmc_queue_resume(struct mmc_queue *);
extern unsigned int mmc_queue_map_sg(struct mmc_queue *,
struct mmc_queue_req *);
void mmc_cqe_check_busy(struct mmc_queue *mq);
void mmc_cqe_recovery_notifier(struct mmc_request *mrq);
enum mmc_issue_type mmc_issue_type(struct mmc_queue *mq, struct request *req);
static inline int mmc_tot_in_flight(struct mmc_queue *mq)
{
return mq->in_flight[MMC_ISSUE_SYNC] +
mq->in_flight[MMC_ISSUE_DCMD] +
mq->in_flight[MMC_ISSUE_ASYNC];
}
static inline int mmc_cqe_qcnt(struct mmc_queue *mq)
{
return mq->in_flight[MMC_ISSUE_DCMD] +
mq->in_flight[MMC_ISSUE_ASYNC];
}
#endif


@ -121,20 +121,18 @@ EXPORT_SYMBOL(mmc_gpio_request_ro);
void mmc_gpiod_request_cd_irq(struct mmc_host *host)
{
struct mmc_gpio *ctx = host->slot.handler_priv;
int ret, irq;
int irq = -EINVAL;
int ret;
if (host->slot.cd_irq >= 0 || !ctx || !ctx->cd_gpio)
return;
irq = gpiod_to_irq(ctx->cd_gpio);
/*
* Even if gpiod_to_irq() returns a valid IRQ number, the platform might
* still prefer to poll, e.g., because that IRQ number is already used
* by another unit and cannot be shared.
* Do not use IRQ if the platform prefers to poll, e.g., because that
* IRQ number is already used by another unit and cannot be shared.
*/
if (irq >= 0 && host->caps & MMC_CAP_NEEDS_POLL)
irq = -EINVAL;
if (!(host->caps & MMC_CAP_NEEDS_POLL))
irq = gpiod_to_irq(ctx->cd_gpio);
if (irq >= 0) {
if (!ctx->cd_gpio_isr)
@ -307,3 +305,11 @@ int mmc_gpiod_request_ro(struct mmc_host *host, const char *con_id,
return 0;
}
EXPORT_SYMBOL(mmc_gpiod_request_ro);
bool mmc_can_gpio_ro(struct mmc_host *host)
{
struct mmc_gpio *ctx = host->slot.handler_priv;
return ctx->ro_gpio ? true : false;
}
EXPORT_SYMBOL(mmc_can_gpio_ro);


@ -81,6 +81,7 @@ config MMC_SDHCI_BIG_ENDIAN_32BIT_BYTE_SWAPPER
config MMC_SDHCI_PCI
tristate "SDHCI support on PCI bus"
depends on MMC_SDHCI && PCI
select MMC_CQHCI
help
This selects the PCI Secure Digital Host Controller Interface.
Most controllers found today are PCI devices.
@ -132,6 +133,7 @@ config MMC_SDHCI_OF_ARASAN
depends on MMC_SDHCI_PLTFM
depends on OF
depends on COMMON_CLK
select MMC_CQHCI
help
This selects the Arasan Secure Digital Host Controller Interface
(SDHCI). This hardware is found e.g. in Xilinx' Zynq SoC.
@ -320,7 +322,7 @@ config MMC_SDHCI_BCM_KONA
config MMC_SDHCI_F_SDH30
tristate "SDHCI support for Fujitsu Semiconductor F_SDH30"
depends on MMC_SDHCI_PLTFM
depends on OF
depends on OF || ACPI
help
This selects the Secure Digital Host Controller Interface (SDHCI)
Needed by some Fujitsu SoC for MMC / SD / SDIO support.
@ -595,11 +597,8 @@ config MMC_TMIO
config MMC_SDHI
tristate "Renesas SDHI SD/SDIO controller support"
depends on SUPERH || ARM || ARM64
depends on SUPERH || ARCH_RENESAS || COMPILE_TEST
select MMC_TMIO_CORE
select MMC_SDHI_SYS_DMAC if (SUPERH || ARM)
select MMC_SDHI_INTERNAL_DMAC if ARM64
help
This provides support for the SDHI SD/SDIO controller found in
Renesas SuperH, ARM and ARM64 based SoCs
@ -607,6 +606,7 @@ config MMC_SDHI
config MMC_SDHI_SYS_DMAC
tristate "DMA for SDHI SD/SDIO controllers using SYS-DMAC"
depends on MMC_SDHI
default MMC_SDHI if (SUPERH || ARM)
help
This provides DMA support for SDHI SD/SDIO controllers
using SYS-DMAC via DMA Engine. This supports the controllers
@ -616,6 +616,7 @@ config MMC_SDHI_INTERNAL_DMAC
tristate "DMA for SDHI SD/SDIO controllers using on-chip bus mastering"
depends on ARM64 || COMPILE_TEST
depends on MMC_SDHI
default MMC_SDHI if ARM64
help
This provides DMA support for SDHI SD/SDIO controllers
using on-chip bus mastering. This supports the controllers
@ -857,6 +858,19 @@ config MMC_SUNXI
This selects support for the SD/MMC Host Controller on
Allwinner sunxi SoCs.
config MMC_CQHCI
tristate "Command Queue Host Controller Interface support"
depends on HAS_DMA
help
This selects the Command Queue Host Controller Interface (CQHCI)
support present in host controllers of Qualcomm Technologies, Inc
amongst others.
This controller supports eMMC devices with command queue support.
If you have a controller with this interface, say Y or M here.
If unsure, say N.
config MMC_TOSHIBA_PCI
tristate "Toshiba Type A SD/MMC Card Interface Driver"
depends on PCI


@ -11,7 +11,7 @@ obj-$(CONFIG_MMC_MXC) += mxcmmc.o
obj-$(CONFIG_MMC_MXS) += mxs-mmc.o
obj-$(CONFIG_MMC_SDHCI) += sdhci.o
obj-$(CONFIG_MMC_SDHCI_PCI) += sdhci-pci.o
sdhci-pci-y += sdhci-pci-core.o sdhci-pci-o2micro.o
sdhci-pci-y += sdhci-pci-core.o sdhci-pci-o2micro.o sdhci-pci-arasan.o
obj-$(subst m,y,$(CONFIG_MMC_SDHCI_PCI)) += sdhci-pci-data.o
obj-$(CONFIG_MMC_SDHCI_ACPI) += sdhci-acpi.o
obj-$(CONFIG_MMC_SDHCI_PXAV3) += sdhci-pxav3.o
@ -39,12 +39,8 @@ obj-$(CONFIG_MMC_SDRICOH_CS) += sdricoh_cs.o
obj-$(CONFIG_MMC_TMIO) += tmio_mmc.o
obj-$(CONFIG_MMC_TMIO_CORE) += tmio_mmc_core.o
obj-$(CONFIG_MMC_SDHI) += renesas_sdhi_core.o
ifeq ($(subst m,y,$(CONFIG_MMC_SDHI_SYS_DMAC)),y)
obj-$(CONFIG_MMC_SDHI) += renesas_sdhi_sys_dmac.o
endif
ifeq ($(subst m,y,$(CONFIG_MMC_SDHI_INTERNAL_DMAC)),y)
obj-$(CONFIG_MMC_SDHI) += renesas_sdhi_internal_dmac.o
endif
obj-$(CONFIG_MMC_SDHI_SYS_DMAC) += renesas_sdhi_sys_dmac.o
obj-$(CONFIG_MMC_SDHI_INTERNAL_DMAC) += renesas_sdhi_internal_dmac.o
obj-$(CONFIG_MMC_CB710) += cb710-mmc.o
obj-$(CONFIG_MMC_VIA_SDMMC) += via-sdmmc.o
obj-$(CONFIG_SDH_BFIN) += bfin_sdh.o
@ -92,6 +88,7 @@ obj-$(CONFIG_MMC_SDHCI_ST) += sdhci-st.o
obj-$(CONFIG_MMC_SDHCI_MICROCHIP_PIC32) += sdhci-pic32.o
obj-$(CONFIG_MMC_SDHCI_BRCMSTB) += sdhci-brcmstb.o
obj-$(CONFIG_MMC_SDHCI_OMAP) += sdhci-omap.o
obj-$(CONFIG_MMC_CQHCI) += cqhci.o
ifeq ($(CONFIG_CB710_DEBUG),y)
CFLAGS-cb710-mmc += -DDEBUG


@ -42,13 +42,11 @@
#include <linux/spinlock.h>
#include <linux/timer.h>
#include <linux/clk.h>
#include <linux/scatterlist.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/types.h>
#include <asm/io.h>
#include <linux/uaccess.h>
#define DRIVER_NAME "goldfish_mmc"

drivers/mmc/host/cqhci.c (new file, 1150 lines; diff suppressed because it is too large)

drivers/mmc/host/cqhci.h (new file, 240 lines)

@ -0,0 +1,240 @@
/* Copyright (c) 2015, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef LINUX_MMC_CQHCI_H
#define LINUX_MMC_CQHCI_H
#include <linux/compiler.h>
#include <linux/bitops.h>
#include <linux/spinlock_types.h>
#include <linux/types.h>
#include <linux/completion.h>
#include <linux/wait.h>
#include <linux/irqreturn.h>
#include <asm/io.h>
/* registers */
/* version */
#define CQHCI_VER 0x00
#define CQHCI_VER_MAJOR(x) (((x) & GENMASK(11, 8)) >> 8)
#define CQHCI_VER_MINOR1(x) (((x) & GENMASK(7, 4)) >> 4)
#define CQHCI_VER_MINOR2(x) ((x) & GENMASK(3, 0))
/* capabilities */
#define CQHCI_CAP 0x04
/* configuration */
#define CQHCI_CFG 0x08
#define CQHCI_DCMD 0x00001000
#define CQHCI_TASK_DESC_SZ 0x00000100
#define CQHCI_ENABLE 0x00000001
/* control */
#define CQHCI_CTL 0x0C
#define CQHCI_CLEAR_ALL_TASKS 0x00000100
#define CQHCI_HALT 0x00000001
/* interrupt status */
#define CQHCI_IS 0x10
#define CQHCI_IS_HAC BIT(0)
#define CQHCI_IS_TCC BIT(1)
#define CQHCI_IS_RED BIT(2)
#define CQHCI_IS_TCL BIT(3)
#define CQHCI_IS_MASK (CQHCI_IS_TCC | CQHCI_IS_RED)
/* interrupt status enable */
#define CQHCI_ISTE 0x14
/* interrupt signal enable */
#define CQHCI_ISGE 0x18
/* interrupt coalescing */
#define CQHCI_IC 0x1C
#define CQHCI_IC_ENABLE BIT(31)
#define CQHCI_IC_RESET BIT(16)
#define CQHCI_IC_ICCTHWEN BIT(15)
#define CQHCI_IC_ICCTH(x) (((x) & 0x1F) << 8)
#define CQHCI_IC_ICTOVALWEN BIT(7)
#define CQHCI_IC_ICTOVAL(x) ((x) & 0x7F)
/* task list base address */
#define CQHCI_TDLBA 0x20
/* task list base address upper */
#define CQHCI_TDLBAU 0x24
/* door-bell */
#define CQHCI_TDBR 0x28
/* task completion notification */
#define CQHCI_TCN 0x2C
/* device queue status */
#define CQHCI_DQS 0x30
/* device pending tasks */
#define CQHCI_DPT 0x34
/* task clear */
#define CQHCI_TCLR 0x38
/* send status config 1 */
#define CQHCI_SSC1 0x40
/* send status config 2 */
#define CQHCI_SSC2 0x44
/* response for dcmd */
#define CQHCI_CRDCT 0x48
/* response mode error mask */
#define CQHCI_RMEM 0x50
/* task error info */
#define CQHCI_TERRI 0x54
#define CQHCI_TERRI_C_INDEX(x) ((x) & GENMASK(5, 0))
#define CQHCI_TERRI_C_TASK(x) (((x) & GENMASK(12, 8)) >> 8)
#define CQHCI_TERRI_C_VALID(x) ((x) & BIT(15))
#define CQHCI_TERRI_D_INDEX(x) (((x) & GENMASK(21, 16)) >> 16)
#define CQHCI_TERRI_D_TASK(x) (((x) & GENMASK(28, 24)) >> 24)
#define CQHCI_TERRI_D_VALID(x) ((x) & BIT(31))
/* command response index */
#define CQHCI_CRI 0x58
/* command response argument */
#define CQHCI_CRA 0x5C
#define CQHCI_INT_ALL 0xF
#define CQHCI_IC_DEFAULT_ICCTH 31
#define CQHCI_IC_DEFAULT_ICTOVAL 1
/* attribute fields */
#define CQHCI_VALID(x) (((x) & 1) << 0)
#define CQHCI_END(x) (((x) & 1) << 1)
#define CQHCI_INT(x) (((x) & 1) << 2)
#define CQHCI_ACT(x) (((x) & 0x7) << 3)
/* data command task descriptor fields */
#define CQHCI_FORCED_PROG(x) (((x) & 1) << 6)
#define CQHCI_CONTEXT(x) (((x) & 0xF) << 7)
#define CQHCI_DATA_TAG(x) (((x) & 1) << 11)
#define CQHCI_DATA_DIR(x) (((x) & 1) << 12)
#define CQHCI_PRIORITY(x) (((x) & 1) << 13)
#define CQHCI_QBAR(x) (((x) & 1) << 14)
#define CQHCI_REL_WRITE(x) (((x) & 1) << 15)
#define CQHCI_BLK_COUNT(x) (((x) & 0xFFFF) << 16)
#define CQHCI_BLK_ADDR(x) (((x) & 0xFFFFFFFF) << 32)
/* direct command task descriptor fields */
#define CQHCI_CMD_INDEX(x) (((x) & 0x3F) << 16)
#define CQHCI_CMD_TIMING(x) (((x) & 1) << 22)
#define CQHCI_RESP_TYPE(x) (((x) & 0x3) << 23)
/* transfer descriptor fields */
#define CQHCI_DAT_LENGTH(x) (((x) & 0xFFFF) << 16)
#define CQHCI_DAT_ADDR_LO(x) (((x) & 0xFFFFFFFF) << 32)
#define CQHCI_DAT_ADDR_HI(x) (((x) & 0xFFFFFFFF) << 0)
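The attribute and field macros above are meant to be OR-ed together into the 64-bit words of a task descriptor. As an illustration only (not part of this patch set), the first word of a data task descriptor could be composed as below; the helper name and inputs are hypothetical, and the ACT value 0x5 is assumed from the eMMC v5.1 command-queueing spec, where it marks a task descriptor.

/* Hypothetical sketch: composing the first 64 bits of a data task
 * descriptor from the field macros above. */
static u64 example_task_desc(u32 nr_blocks, u32 blk_addr, bool is_read)
{
        return CQHCI_VALID(1) | CQHCI_END(1) | CQHCI_INT(1) |
               CQHCI_ACT(0x5) |                 /* assumed: task descriptor */
               CQHCI_DATA_DIR(is_read) |        /* 1 = read, 0 = write */
               CQHCI_BLK_COUNT(nr_blocks) |
               CQHCI_BLK_ADDR((u64)blk_addr);   /* cast before the <<32 shift */
}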
struct cqhci_host_ops;
struct mmc_host;
struct cqhci_slot;
struct cqhci_host {
const struct cqhci_host_ops *ops;
void __iomem *mmio;
struct mmc_host *mmc;
spinlock_t lock;
/* relative card address of device */
unsigned int rca;
/* 64 bit DMA */
bool dma64;
int num_slots;
int qcnt;
u32 dcmd_slot;
u32 caps;
#define CQHCI_TASK_DESC_SZ_128 0x1
u32 quirks;
#define CQHCI_QUIRK_SHORT_TXFR_DESC_SZ 0x1
bool enabled;
bool halted;
bool init_done;
bool activated;
bool waiting_for_idle;
bool recovery_halt;
size_t desc_size;
size_t data_size;
u8 *desc_base;
/* total descriptor size */
u8 slot_sz;
/* 64/128 bit depends on CQHCI_CFG */
u8 task_desc_len;
/* 64 bit on 32-bit arch, 128 bit on 64-bit */
u8 link_desc_len;
u8 *trans_desc_base;
/* same length as transfer descriptor */
u8 trans_desc_len;
dma_addr_t desc_dma_base;
dma_addr_t trans_desc_dma_base;
struct completion halt_comp;
wait_queue_head_t wait_queue;
struct cqhci_slot *slot;
};
struct cqhci_host_ops {
void (*dumpregs)(struct mmc_host *mmc);
void (*write_l)(struct cqhci_host *host, u32 val, int reg);
u32 (*read_l)(struct cqhci_host *host, int reg);
void (*enable)(struct mmc_host *mmc);
void (*disable)(struct mmc_host *mmc, bool recovery);
};
static inline void cqhci_writel(struct cqhci_host *host, u32 val, int reg)
{
if (unlikely(host->ops->write_l))
host->ops->write_l(host, val, reg);
else
writel_relaxed(val, host->mmio + reg);
}
static inline u32 cqhci_readl(struct cqhci_host *host, int reg)
{
if (unlikely(host->ops->read_l))
return host->ops->read_l(host, reg);
else
return readl_relaxed(host->mmio + reg);
}
struct platform_device;
irqreturn_t cqhci_irq(struct mmc_host *mmc, u32 intmask, int cmd_error,
int data_error);
int cqhci_init(struct cqhci_host *cq_host, struct mmc_host *mmc, bool dma64);
struct cqhci_host *cqhci_pltfm_init(struct platform_device *pdev);
int cqhci_suspend(struct mmc_host *mmc);
int cqhci_resume(struct mmc_host *mmc);
#endif
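Both CQHCI users enabled in this release (sdhci-of-arasan and the Intel GLK support in sdhci-pci-core, both further down) wire the library up the same way. Here is a condensed sketch of that glue, with hypothetical "foo" names and register offset; the real versions appear in the hunks below.

/* Condensed sketch of the SDHCI/CQHCI glue. FOO_CQE_BASE and
 * foo_cqhci_ops are hypothetical stand-ins. */
static int foo_add_host(struct sdhci_host *host)
{
        struct cqhci_host *cq_host;
        bool dma64;
        int ret;

        ret = sdhci_setup_host(host);           /* phase 1: read caps */
        if (ret)
                return ret;

        cq_host = devm_kzalloc(mmc_dev(host->mmc), sizeof(*cq_host),
                               GFP_KERNEL);
        if (!cq_host) {
                ret = -ENOMEM;
                goto cleanup;
        }

        cq_host->mmio = host->ioaddr + FOO_CQE_BASE;
        cq_host->ops = &foo_cqhci_ops;          /* enable/disable/dumpregs */

        dma64 = host->flags & SDHCI_USE_64_BIT_DMA;
        if (dma64)
                cq_host->caps |= CQHCI_TASK_DESC_SZ_128;

        ret = cqhci_init(cq_host, host->mmc, dma64);
        if (ret)
                goto cleanup;

        ret = __sdhci_add_host(host);           /* phase 2: publish to MMC core */
        if (ret)
                goto cleanup;

        return 0;

cleanup:
        sdhci_cleanup_host(host);
        return ret;
}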

drivers/mmc/host/davinci_mmc.c:

@ -174,7 +174,7 @@ module_param(poll_loopcount, uint, S_IRUGO);
MODULE_PARM_DESC(poll_loopcount,
"Maximum polling loop count. Default = 32");
static unsigned __initdata use_dma = 1;
static unsigned use_dma = 1;
module_param(use_dma, uint, 0);
MODULE_PARM_DESC(use_dma, "Whether to use DMA or not. Default = 1");
@ -496,8 +496,7 @@ static int mmc_davinci_start_dma_transfer(struct mmc_davinci_host *host,
return ret;
}
static void __init_or_module
davinci_release_dma_channels(struct mmc_davinci_host *host)
static void davinci_release_dma_channels(struct mmc_davinci_host *host)
{
if (!host->use_dma)
return;
@ -506,7 +505,7 @@ davinci_release_dma_channels(struct mmc_davinci_host *host)
dma_release_channel(host->dma_rx);
}
static int __init davinci_acquire_dma_channels(struct mmc_davinci_host *host)
static int davinci_acquire_dma_channels(struct mmc_davinci_host *host)
{
host->dma_tx = dma_request_chan(mmc_dev(host->mmc), "tx");
if (IS_ERR(host->dma_tx)) {
@ -1201,7 +1200,7 @@ static int mmc_davinci_parse_pdata(struct mmc_host *mmc)
return 0;
}
static int __init davinci_mmcsd_probe(struct platform_device *pdev)
static int davinci_mmcsd_probe(struct platform_device *pdev)
{
const struct of_device_id *match;
struct mmc_davinci_host *host = NULL;
@ -1254,8 +1253,9 @@ static int __init davinci_mmcsd_probe(struct platform_device *pdev)
pdev->id_entry = match->data;
ret = mmc_of_parse(mmc);
if (ret) {
dev_err(&pdev->dev,
"could not parse of data: %d\n", ret);
if (ret != -EPROBE_DEFER)
dev_err(&pdev->dev,
"could not parse of data: %d\n", ret);
goto parse_fail;
}
} else {
@ -1414,11 +1414,12 @@ static struct platform_driver davinci_mmcsd_driver = {
.pm = davinci_mmcsd_pm_ops,
.of_match_table = davinci_mmc_dt_ids,
},
.probe = davinci_mmcsd_probe,
.remove = __exit_p(davinci_mmcsd_remove),
.id_table = davinci_mmc_devtype,
};
module_platform_driver_probe(davinci_mmcsd_driver, davinci_mmcsd_probe);
module_platform_driver(davinci_mmcsd_driver);
MODULE_AUTHOR("Texas Instruments India");
MODULE_LICENSE("GPL");
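The driver previously relied on module_platform_driver_probe(), which guarantees probe runs exactly once at init time and therefore allowed the __init/__initdata annotations. With plain module_platform_driver() the probe path may run at any point (deferred probe, late device creation), after init memory has been discarded, so those annotations have to go too. A minimal sketch of the resulting shape, with hypothetical names and the usual kernel headers assumed:

static int foo_probe(struct platform_device *pdev)      /* no __init */
{
        return 0;
}

static int foo_remove(struct platform_device *pdev)     /* no __exit_p() */
{
        return 0;
}

static struct platform_driver foo_driver = {
        .driver = { .name = "foo" },
        .probe  = foo_probe,
        .remove = foo_remove,
};
module_platform_driver(foo_driver);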

drivers/mmc/host/meson-gx-mmc.c:

@ -1208,7 +1208,7 @@ static int meson_mmc_probe(struct platform_device *pdev)
}
irq = platform_get_irq(pdev, 0);
if (!irq) {
if (irq <= 0) {
dev_err(&pdev->dev, "failed to get interrupt resource.\n");
ret = -EINVAL;
goto free_host;
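platform_get_irq() returns a negative errno on failure, and 0 is not a valid IRQ number either, so checking only `!irq` misses real errors. The `irq <= 0` form used here (and in the s3cmci hunk further down) covers both cases; a minimal sketch:

        int irq = platform_get_irq(pdev, 0);

        if (irq <= 0)           /* negative errno, or 0 meaning "no IRQ" */
                return -EINVAL; /* this driver maps both cases to -EINVAL */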

drivers/mmc/host/mmci.c:

@ -82,6 +82,10 @@ static unsigned int fmax = 515633;
* @qcom_fifo: enables qcom specific fifo pio read logic.
* @qcom_dml: enables qcom specific dma glue for dma transfers.
* @reversed_irq_handling: handle data irq before cmd irq.
* @mmcimask1: true if variant have a MMCIMASK1 register.
* @start_err: bitmask identifying the STARTBITERR bit inside MMCISTATUS
* register.
* @opendrain: bitmask identifying the OPENDRAIN bit inside MMCIPOWER register
*/
struct variant_data {
unsigned int clkreg;
@ -111,6 +115,9 @@ struct variant_data {
bool qcom_fifo;
bool qcom_dml;
bool reversed_irq_handling;
bool mmcimask1;
u32 start_err;
u32 opendrain;
};
static struct variant_data variant_arm = {
@ -120,6 +127,9 @@ static struct variant_data variant_arm = {
.pwrreg_powerup = MCI_PWR_UP,
.f_max = 100000000,
.reversed_irq_handling = true,
.mmcimask1 = true,
.start_err = MCI_STARTBITERR,
.opendrain = MCI_ROD,
};
static struct variant_data variant_arm_extended_fifo = {
@ -128,6 +138,9 @@ static struct variant_data variant_arm_extended_fifo = {
.datalength_bits = 16,
.pwrreg_powerup = MCI_PWR_UP,
.f_max = 100000000,
.mmcimask1 = true,
.start_err = MCI_STARTBITERR,
.opendrain = MCI_ROD,
};
static struct variant_data variant_arm_extended_fifo_hwfc = {
@ -137,6 +150,9 @@ static struct variant_data variant_arm_extended_fifo_hwfc = {
.datalength_bits = 16,
.pwrreg_powerup = MCI_PWR_UP,
.f_max = 100000000,
.mmcimask1 = true,
.start_err = MCI_STARTBITERR,
.opendrain = MCI_ROD,
};
static struct variant_data variant_u300 = {
@ -152,6 +168,9 @@ static struct variant_data variant_u300 = {
.signal_direction = true,
.pwrreg_clkgate = true,
.pwrreg_nopower = true,
.mmcimask1 = true,
.start_err = MCI_STARTBITERR,
.opendrain = MCI_OD,
};
static struct variant_data variant_nomadik = {
@ -168,6 +187,9 @@ static struct variant_data variant_nomadik = {
.signal_direction = true,
.pwrreg_clkgate = true,
.pwrreg_nopower = true,
.mmcimask1 = true,
.start_err = MCI_STARTBITERR,
.opendrain = MCI_OD,
};
static struct variant_data variant_ux500 = {
@ -190,6 +212,9 @@ static struct variant_data variant_ux500 = {
.busy_detect_flag = MCI_ST_CARDBUSY,
.busy_detect_mask = MCI_ST_BUSYENDMASK,
.pwrreg_nopower = true,
.mmcimask1 = true,
.start_err = MCI_STARTBITERR,
.opendrain = MCI_OD,
};
static struct variant_data variant_ux500v2 = {
@ -214,6 +239,26 @@ static struct variant_data variant_ux500v2 = {
.busy_detect_flag = MCI_ST_CARDBUSY,
.busy_detect_mask = MCI_ST_BUSYENDMASK,
.pwrreg_nopower = true,
.mmcimask1 = true,
.start_err = MCI_STARTBITERR,
.opendrain = MCI_OD,
};
static struct variant_data variant_stm32 = {
.fifosize = 32 * 4,
.fifohalfsize = 8 * 4,
.clkreg = MCI_CLK_ENABLE,
.clkreg_enable = MCI_ST_UX500_HWFCEN,
.clkreg_8bit_bus_enable = MCI_ST_8BIT_BUS,
.clkreg_neg_edge_enable = MCI_ST_UX500_NEG_EDGE,
.datalength_bits = 24,
.datactrl_mask_sdio = MCI_DPSM_ST_SDIOEN,
.st_sdio = true,
.st_clkdiv = true,
.pwrreg_powerup = MCI_PWR_ON,
.f_max = 48000000,
.pwrreg_clkgate = true,
.pwrreg_nopower = true,
};
static struct variant_data variant_qcom = {
@ -232,6 +277,9 @@ static struct variant_data variant_qcom = {
.explicit_mclk_control = true,
.qcom_fifo = true,
.qcom_dml = true,
.mmcimask1 = true,
.start_err = MCI_STARTBITERR,
.opendrain = MCI_ROD,
};
/* Busy detection for the ST Micro variant */
@ -396,6 +444,7 @@ mmci_request_end(struct mmci_host *host, struct mmc_request *mrq)
static void mmci_set_mask1(struct mmci_host *host, unsigned int mask)
{
void __iomem *base = host->base;
struct variant_data *variant = host->variant;
if (host->singleirq) {
unsigned int mask0 = readl(base + MMCIMASK0);
@ -406,7 +455,10 @@ static void mmci_set_mask1(struct mmci_host *host, unsigned int mask)
writel(mask0, base + MMCIMASK0);
}
writel(mask, base + MMCIMASK1);
if (variant->mmcimask1)
writel(mask, base + MMCIMASK1);
host->mask1_reg = mask;
}
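Caching the last written value in host->mask1_reg means the interrupt path no longer has to read MMCIMASK1 back, which matters on the new STM32 variant where that register does not exist (and saves a register read everywhere else). The mmci_irq() hunk below switches to the cache:

        /* in the IRQ handler, the cache replaces the MMCIMASK1 readback */
        if (status & host->mask1_reg)
                mmci_pio_irq(irq, dev_id);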
static void mmci_stop_data(struct mmci_host *host)
@ -921,8 +973,9 @@ mmci_data_irq(struct mmci_host *host, struct mmc_data *data,
return;
/* First check for errors */
if (status & (MCI_DATACRCFAIL|MCI_DATATIMEOUT|MCI_STARTBITERR|
MCI_TXUNDERRUN|MCI_RXOVERRUN)) {
if (status & (MCI_DATACRCFAIL | MCI_DATATIMEOUT |
host->variant->start_err |
MCI_TXUNDERRUN | MCI_RXOVERRUN)) {
u32 remain, success;
/* Terminate the DMA transfer */
@ -1286,7 +1339,7 @@ static irqreturn_t mmci_irq(int irq, void *dev_id)
status = readl(host->base + MMCISTATUS);
if (host->singleirq) {
if (status & readl(host->base + MMCIMASK1))
if (status & host->mask1_reg)
mmci_pio_irq(irq, dev_id);
status &= ~MCI_IRQ1MASK;
@ -1429,16 +1482,18 @@ static void mmci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
~MCI_ST_DATA2DIREN);
}
if (ios->bus_mode == MMC_BUSMODE_OPENDRAIN) {
if (host->hw_designer != AMBA_VENDOR_ST)
pwr |= MCI_ROD;
else {
/*
* The ST Micro variant use the ROD bit for something
* else and only has OD (Open Drain).
*/
pwr |= MCI_OD;
}
if (variant->opendrain) {
if (ios->bus_mode == MMC_BUSMODE_OPENDRAIN)
pwr |= variant->opendrain;
} else {
/*
* If the variant cannot configure the pads by its own, then we
* expect the pinctrl to be able to do that for us
*/
if (ios->bus_mode == MMC_BUSMODE_OPENDRAIN)
pinctrl_select_state(host->pinctrl, host->pins_opendrain);
else
pinctrl_select_state(host->pinctrl, host->pins_default);
}
/*
@ -1583,6 +1638,35 @@ static int mmci_probe(struct amba_device *dev,
host = mmc_priv(mmc);
host->mmc = mmc;
/*
* Some variant (STM32) doesn't have opendrain bit, nevertheless
* pins can be set accordingly using pinctrl
*/
if (!variant->opendrain) {
host->pinctrl = devm_pinctrl_get(&dev->dev);
if (IS_ERR(host->pinctrl)) {
dev_err(&dev->dev, "failed to get pinctrl");
ret = PTR_ERR(host->pinctrl);
goto host_free;
}
host->pins_default = pinctrl_lookup_state(host->pinctrl,
PINCTRL_STATE_DEFAULT);
if (IS_ERR(host->pins_default)) {
dev_err(mmc_dev(mmc), "Can't select default pins\n");
ret = PTR_ERR(host->pins_default);
goto host_free;
}
host->pins_opendrain = pinctrl_lookup_state(host->pinctrl,
MMCI_PINCTRL_STATE_OPENDRAIN);
if (IS_ERR(host->pins_opendrain)) {
dev_err(mmc_dev(mmc), "Can't select opendrain pins\n");
ret = PTR_ERR(host->pins_opendrain);
goto host_free;
}
}
host->hw_designer = amba_manf(dev);
host->hw_revision = amba_rev(dev);
dev_dbg(mmc_dev(mmc), "designer ID = 0x%02x\n", host->hw_designer);
@ -1729,7 +1813,10 @@ static int mmci_probe(struct amba_device *dev,
spin_lock_init(&host->lock);
writel(0, host->base + MMCIMASK0);
writel(0, host->base + MMCIMASK1);
if (variant->mmcimask1)
writel(0, host->base + MMCIMASK1);
writel(0xfff, host->base + MMCICLEAR);
/*
@ -1809,6 +1896,7 @@ static int mmci_remove(struct amba_device *dev)
if (mmc) {
struct mmci_host *host = mmc_priv(mmc);
struct variant_data *variant = host->variant;
/*
* Undo pm_runtime_put() in probe. We use the _sync
@ -1819,7 +1907,9 @@ static int mmci_remove(struct amba_device *dev)
mmc_remove_host(mmc);
writel(0, host->base + MMCIMASK0);
writel(0, host->base + MMCIMASK1);
if (variant->mmcimask1)
writel(0, host->base + MMCIMASK1);
writel(0, host->base + MMCICOMMAND);
writel(0, host->base + MMCIDATACTRL);
@ -1951,6 +2041,11 @@ static const struct amba_id mmci_ids[] = {
.mask = 0xf0ffffff,
.data = &variant_ux500v2,
},
{
.id = 0x00880180,
.mask = 0x00ffffff,
.data = &variant_stm32,
},
/* Qualcomm variants */
{
.id = 0x00051180,

drivers/mmc/host/mmci.h:

@ -192,6 +192,8 @@
#define NR_SG 128
#define MMCI_PINCTRL_STATE_OPENDRAIN "opendrain"
struct clk;
struct variant_data;
struct dma_chan;
@ -223,9 +225,13 @@ struct mmci_host {
u32 clk_reg;
u32 datactrl_reg;
u32 busy_status;
u32 mask1_reg;
bool vqmmc_enabled;
struct mmci_platform_data *plat;
struct variant_data *variant;
struct pinctrl *pinctrl;
struct pinctrl_state *pins_default;
struct pinctrl_state *pins_opendrain;
u8 hw_designer;
u8 hw_revision:4;

drivers/mmc/host/renesas_sdhi.h:

@ -35,6 +35,28 @@ struct renesas_sdhi_of_data {
unsigned short max_segs;
};
struct tmio_mmc_dma {
enum dma_slave_buswidth dma_buswidth;
bool (*filter)(struct dma_chan *chan, void *arg);
void (*enable)(struct tmio_mmc_host *host, bool enable);
struct completion dma_dataend;
struct tasklet_struct dma_complete;
};
struct renesas_sdhi {
struct clk *clk;
struct clk *clk_cd;
struct tmio_mmc_data mmc_data;
struct tmio_mmc_dma dma_priv;
struct pinctrl *pinctrl;
struct pinctrl_state *pins_default, *pins_uhs;
void __iomem *scc_ctl;
u32 scc_tappos;
};
#define host_to_priv(host) \
container_of((host)->pdata, struct renesas_sdhi, mmc_data)
int renesas_sdhi_probe(struct platform_device *pdev,
const struct tmio_mmc_dma_ops *dma_ops);
int renesas_sdhi_remove(struct platform_device *pdev);
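host_to_priv() works because mmc_data is embedded inside struct renesas_sdhi and a pointer to that member is stored in host->pdata at probe time; container_of() then walks back to the enclosing structure. An illustrative (hypothetical) use:

static struct renesas_sdhi *example_get_priv(struct tmio_mmc_host *host)
{
        /* host->pdata == &priv->mmc_data, set up in renesas_sdhi_probe() */
        return container_of(host->pdata, struct renesas_sdhi, mmc_data);
}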

drivers/mmc/host/renesas_sdhi_core.c:

@ -47,19 +47,6 @@
#define SDHI_VER_GEN3_SD 0xcc10
#define SDHI_VER_GEN3_SDMMC 0xcd10
#define host_to_priv(host) \
container_of((host)->pdata, struct renesas_sdhi, mmc_data)
struct renesas_sdhi {
struct clk *clk;
struct clk *clk_cd;
struct tmio_mmc_data mmc_data;
struct tmio_mmc_dma dma_priv;
struct pinctrl *pinctrl;
struct pinctrl_state *pins_default, *pins_uhs;
void __iomem *scc_ctl;
};
static void renesas_sdhi_sdbuf_width(struct tmio_mmc_host *host, int width)
{
u32 val;
@ -281,7 +268,7 @@ static unsigned int renesas_sdhi_init_tuning(struct tmio_mmc_host *host)
~SH_MOBILE_SDHI_SCC_RVSCNTL_RVSEN &
sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL));
sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_DT2FF, host->scc_tappos);
sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_DT2FF, priv->scc_tappos);
/* Read TAPNUM */
return (sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_DTCNTL) >>
@ -498,7 +485,7 @@ int renesas_sdhi_probe(struct platform_device *pdev,
if (IS_ERR(priv->clk)) {
ret = PTR_ERR(priv->clk);
dev_err(&pdev->dev, "cannot get clock: %d\n", ret);
goto eprobe;
return ret;
}
/*
@ -524,11 +511,9 @@ int renesas_sdhi_probe(struct platform_device *pdev,
"state_uhs");
}
host = tmio_mmc_host_alloc(pdev);
if (!host) {
ret = -ENOMEM;
goto eprobe;
}
host = tmio_mmc_host_alloc(pdev, mmc_data);
if (IS_ERR(host))
return PTR_ERR(host);
if (of_data) {
mmc_data->flags |= of_data->tmio_flags;
@ -542,18 +527,18 @@ int renesas_sdhi_probe(struct platform_device *pdev,
host->bus_shift = of_data->bus_shift;
}
host->dma = dma_priv;
host->write16_hook = renesas_sdhi_write16_hook;
host->clk_enable = renesas_sdhi_clk_enable;
host->clk_update = renesas_sdhi_clk_update;
host->clk_disable = renesas_sdhi_clk_disable;
host->multi_io_quirk = renesas_sdhi_multi_io_quirk;
host->dma_ops = dma_ops;
/* SDR speeds are only available on Gen2+ */
if (mmc_data->flags & TMIO_MMC_MIN_RCAR2) {
/* card_busy caused issues on r8a73a4 (pre-Gen2) CD-less SDHI */
host->card_busy = renesas_sdhi_card_busy;
host->start_signal_voltage_switch =
host->ops.card_busy = renesas_sdhi_card_busy;
host->ops.start_signal_voltage_switch =
renesas_sdhi_start_signal_voltage_switch;
}
@ -587,10 +572,14 @@ int renesas_sdhi_probe(struct platform_device *pdev,
/* All SDHI have SDIO status bits which must be 1 */
mmc_data->flags |= TMIO_MMC_SDIO_STATUS_SETBITS;
ret = tmio_mmc_host_probe(host, mmc_data, dma_ops);
if (ret < 0)
ret = renesas_sdhi_clk_enable(host);
if (ret)
goto efree;
ret = tmio_mmc_host_probe(host);
if (ret < 0)
goto edisclk;
/* One Gen2 SDHI incarnation does NOT have a CBSY bit */
if (sd_ctrl_read16(host, CTL_VERSION) == SDHI_VER_GEN2_SDR50)
mmc_data->flags &= ~TMIO_MMC_HAVE_CBSY;
@ -607,7 +596,7 @@ int renesas_sdhi_probe(struct platform_device *pdev,
for (i = 0; i < of_data->taps_num; i++) {
if (taps[i].clk_rate == 0 ||
taps[i].clk_rate == host->mmc->f_max) {
host->scc_tappos = taps->tap;
priv->scc_tappos = taps->tap;
hit = true;
break;
}
@ -651,19 +640,21 @@ int renesas_sdhi_probe(struct platform_device *pdev,
eirq:
tmio_mmc_host_remove(host);
edisclk:
renesas_sdhi_clk_disable(host);
efree:
tmio_mmc_host_free(host);
eprobe:
return ret;
}
EXPORT_SYMBOL_GPL(renesas_sdhi_probe);
int renesas_sdhi_remove(struct platform_device *pdev)
{
struct mmc_host *mmc = platform_get_drvdata(pdev);
struct tmio_mmc_host *host = mmc_priv(mmc);
struct tmio_mmc_host *host = platform_get_drvdata(pdev);
tmio_mmc_host_remove(host);
renesas_sdhi_clk_disable(host);
return 0;
}
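Condensing the reworked probe above: clock handling is now owned by this driver rather than the tmio core, and each error label undoes exactly what was acquired before it, in reverse order; renesas_sdhi_remove() repeats the same pairing at teardown. A hypothetical restatement of the shape:

static int example_probe_shape(struct platform_device *pdev,
                               struct tmio_mmc_data *mmc_data)
{
        struct tmio_mmc_host *host;
        int ret;

        host = tmio_mmc_host_alloc(pdev, mmc_data);     /* now returns ERR_PTR */
        if (IS_ERR(host))
                return PTR_ERR(host);

        ret = renesas_sdhi_clk_enable(host);            /* clocks before probe */
        if (ret)
                goto efree;

        ret = tmio_mmc_host_probe(host);
        if (ret < 0)
                goto edisclk;

        return 0;

edisclk:
        renesas_sdhi_clk_disable(host);                 /* reverse-order unwind */
efree:
        tmio_mmc_host_free(host);
        return ret;
}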

drivers/mmc/host/renesas_sdhi_internal_dmac.c:

@ -103,6 +103,8 @@ renesas_sdhi_internal_dmac_dm_write(struct tmio_mmc_host *host,
static void
renesas_sdhi_internal_dmac_enable_dma(struct tmio_mmc_host *host, bool enable)
{
struct renesas_sdhi *priv = host_to_priv(host);
if (!host->chan_tx || !host->chan_rx)
return;
@ -110,8 +112,8 @@ renesas_sdhi_internal_dmac_enable_dma(struct tmio_mmc_host *host, bool enable)
renesas_sdhi_internal_dmac_dm_write(host, DM_CM_INFO1,
INFO1_CLEAR);
if (host->dma->enable)
host->dma->enable(host, enable);
if (priv->dma_priv.enable)
priv->dma_priv.enable(host, enable);
}
static void
@ -130,7 +132,9 @@ renesas_sdhi_internal_dmac_abort_dma(struct tmio_mmc_host *host) {
static void
renesas_sdhi_internal_dmac_dataend_dma(struct tmio_mmc_host *host) {
tasklet_schedule(&host->dma_complete);
struct renesas_sdhi *priv = host_to_priv(host);
tasklet_schedule(&priv->dma_priv.dma_complete);
}
static void
@ -220,10 +224,12 @@ static void
renesas_sdhi_internal_dmac_request_dma(struct tmio_mmc_host *host,
struct tmio_mmc_data *pdata)
{
struct renesas_sdhi *priv = host_to_priv(host);
/* Each value is set to non-zero to assume "enabling" each DMA */
host->chan_rx = host->chan_tx = (void *)0xdeadbeaf;
tasklet_init(&host->dma_complete,
tasklet_init(&priv->dma_priv.dma_complete,
renesas_sdhi_internal_dmac_complete_tasklet_fn,
(unsigned long)host);
tasklet_init(&host->dma_issue,
@ -255,6 +261,7 @@ static const struct soc_device_attribute gen3_soc_whitelist[] = {
{ .soc_id = "r8a7795", .revision = "ES1.*" },
{ .soc_id = "r8a7795", .revision = "ES2.0" },
{ .soc_id = "r8a7796", .revision = "ES1.0" },
{ .soc_id = "r8a77995", .revision = "ES1.0" },
{ /* sentinel */ }
};

drivers/mmc/host/renesas_sdhi_sys_dmac.c:

@ -117,11 +117,13 @@ MODULE_DEVICE_TABLE(of, renesas_sdhi_sys_dmac_of_match);
static void renesas_sdhi_sys_dmac_enable_dma(struct tmio_mmc_host *host,
bool enable)
{
struct renesas_sdhi *priv = host_to_priv(host);
if (!host->chan_tx || !host->chan_rx)
return;
if (host->dma->enable)
host->dma->enable(host, enable);
if (priv->dma_priv.enable)
priv->dma_priv.enable(host, enable);
}
static void renesas_sdhi_sys_dmac_abort_dma(struct tmio_mmc_host *host)
@ -138,12 +140,15 @@ static void renesas_sdhi_sys_dmac_abort_dma(struct tmio_mmc_host *host)
static void renesas_sdhi_sys_dmac_dataend_dma(struct tmio_mmc_host *host)
{
complete(&host->dma_dataend);
struct renesas_sdhi *priv = host_to_priv(host);
complete(&priv->dma_priv.dma_dataend);
}
static void renesas_sdhi_sys_dmac_dma_callback(void *arg)
{
struct tmio_mmc_host *host = arg;
struct renesas_sdhi *priv = host_to_priv(host);
spin_lock_irq(&host->lock);
@ -161,7 +166,7 @@ static void renesas_sdhi_sys_dmac_dma_callback(void *arg)
spin_unlock_irq(&host->lock);
wait_for_completion(&host->dma_dataend);
wait_for_completion(&priv->dma_priv.dma_dataend);
spin_lock_irq(&host->lock);
tmio_mmc_do_data_irq(host);
@ -171,6 +176,7 @@ out:
static void renesas_sdhi_sys_dmac_start_dma_rx(struct tmio_mmc_host *host)
{
struct renesas_sdhi *priv = host_to_priv(host);
struct scatterlist *sg = host->sg_ptr, *sg_tmp;
struct dma_async_tx_descriptor *desc = NULL;
struct dma_chan *chan = host->chan_rx;
@ -214,7 +220,7 @@ static void renesas_sdhi_sys_dmac_start_dma_rx(struct tmio_mmc_host *host)
DMA_CTRL_ACK);
if (desc) {
reinit_completion(&host->dma_dataend);
reinit_completion(&priv->dma_priv.dma_dataend);
desc->callback = renesas_sdhi_sys_dmac_dma_callback;
desc->callback_param = host;
@ -245,6 +251,7 @@ pio:
static void renesas_sdhi_sys_dmac_start_dma_tx(struct tmio_mmc_host *host)
{
struct renesas_sdhi *priv = host_to_priv(host);
struct scatterlist *sg = host->sg_ptr, *sg_tmp;
struct dma_async_tx_descriptor *desc = NULL;
struct dma_chan *chan = host->chan_tx;
@ -293,7 +300,7 @@ static void renesas_sdhi_sys_dmac_start_dma_tx(struct tmio_mmc_host *host)
DMA_CTRL_ACK);
if (desc) {
reinit_completion(&host->dma_dataend);
reinit_completion(&priv->dma_priv.dma_dataend);
desc->callback = renesas_sdhi_sys_dmac_dma_callback;
desc->callback_param = host;
@ -341,7 +348,7 @@ static void renesas_sdhi_sys_dmac_issue_tasklet_fn(unsigned long priv)
spin_lock_irq(&host->lock);
if (host && host->data) {
if (host->data) {
if (host->data->flags & MMC_DATA_READ)
chan = host->chan_rx;
else
@ -359,9 +366,11 @@ static void renesas_sdhi_sys_dmac_issue_tasklet_fn(unsigned long priv)
static void renesas_sdhi_sys_dmac_request_dma(struct tmio_mmc_host *host,
struct tmio_mmc_data *pdata)
{
struct renesas_sdhi *priv = host_to_priv(host);
/* We can only either use DMA for both Tx and Rx or not use it at all */
if (!host->dma || (!host->pdev->dev.of_node &&
(!pdata->chan_priv_tx || !pdata->chan_priv_rx)))
if (!host->pdev->dev.of_node &&
(!pdata->chan_priv_tx || !pdata->chan_priv_rx))
return;
if (!host->chan_tx && !host->chan_rx) {
@ -378,7 +387,7 @@ static void renesas_sdhi_sys_dmac_request_dma(struct tmio_mmc_host *host,
dma_cap_set(DMA_SLAVE, mask);
host->chan_tx = dma_request_slave_channel_compat(mask,
host->dma->filter, pdata->chan_priv_tx,
priv->dma_priv.filter, pdata->chan_priv_tx,
&host->pdev->dev, "tx");
dev_dbg(&host->pdev->dev, "%s: TX: got channel %p\n", __func__,
host->chan_tx);
@ -389,7 +398,7 @@ static void renesas_sdhi_sys_dmac_request_dma(struct tmio_mmc_host *host,
cfg.direction = DMA_MEM_TO_DEV;
cfg.dst_addr = res->start +
(CTL_SD_DATA_PORT << host->bus_shift);
cfg.dst_addr_width = host->dma->dma_buswidth;
cfg.dst_addr_width = priv->dma_priv.dma_buswidth;
if (!cfg.dst_addr_width)
cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
cfg.src_addr = 0;
@ -398,7 +407,7 @@ static void renesas_sdhi_sys_dmac_request_dma(struct tmio_mmc_host *host,
goto ecfgtx;
host->chan_rx = dma_request_slave_channel_compat(mask,
host->dma->filter, pdata->chan_priv_rx,
priv->dma_priv.filter, pdata->chan_priv_rx,
&host->pdev->dev, "rx");
dev_dbg(&host->pdev->dev, "%s: RX: got channel %p\n", __func__,
host->chan_rx);
@ -408,7 +417,7 @@ static void renesas_sdhi_sys_dmac_request_dma(struct tmio_mmc_host *host,
cfg.direction = DMA_DEV_TO_MEM;
cfg.src_addr = cfg.dst_addr + host->pdata->dma_rx_offset;
cfg.src_addr_width = host->dma->dma_buswidth;
cfg.src_addr_width = priv->dma_priv.dma_buswidth;
if (!cfg.src_addr_width)
cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
cfg.dst_addr = 0;
@ -420,7 +429,7 @@ static void renesas_sdhi_sys_dmac_request_dma(struct tmio_mmc_host *host,
if (!host->bounce_buf)
goto ebouncebuf;
init_completion(&host->dma_dataend);
init_completion(&priv->dma_priv.dma_dataend);
tasklet_init(&host->dma_issue,
renesas_sdhi_sys_dmac_issue_tasklet_fn,
(unsigned long)host);

drivers/mmc/host/s3cmci.c:

@ -1660,7 +1660,7 @@ static int s3cmci_probe(struct platform_device *pdev)
}
host->irq = platform_get_irq(pdev, 0);
if (host->irq == 0) {
if (host->irq <= 0) {
dev_err(&pdev->dev, "failed to get interrupt resource.\n");
ret = -EINVAL;
goto probe_iounmap;

drivers/mmc/host/sdhci-acpi.c:

@ -76,6 +76,7 @@ struct sdhci_acpi_slot {
size_t priv_size;
int (*probe_slot)(struct platform_device *, const char *, const char *);
int (*remove_slot)(struct platform_device *);
int (*setup_host)(struct platform_device *pdev);
};
struct sdhci_acpi_host {
@ -96,14 +97,21 @@ static inline bool sdhci_acpi_flag(struct sdhci_acpi_host *c, unsigned int flag)
return c->slot && (c->slot->flags & flag);
}
#define INTEL_DSM_HS_CAPS_SDR25 BIT(0)
#define INTEL_DSM_HS_CAPS_DDR50 BIT(1)
#define INTEL_DSM_HS_CAPS_SDR50 BIT(2)
#define INTEL_DSM_HS_CAPS_SDR104 BIT(3)
enum {
INTEL_DSM_FNS = 0,
INTEL_DSM_V18_SWITCH = 3,
INTEL_DSM_V33_SWITCH = 4,
INTEL_DSM_HS_CAPS = 8,
};
struct intel_host {
u32 dsm_fns;
u32 hs_caps;
};
static const guid_t intel_dsm_guid =
@ -152,6 +160,8 @@ static void intel_dsm_init(struct intel_host *intel_host, struct device *dev,
{
int err;
intel_host->hs_caps = ~0;
err = __intel_dsm(intel_host, dev, INTEL_DSM_FNS, &intel_host->dsm_fns);
if (err) {
pr_debug("%s: DSM not supported, error %d\n",
@ -161,6 +171,8 @@ static void intel_dsm_init(struct intel_host *intel_host, struct device *dev,
pr_debug("%s: DSM function mask %#x\n",
mmc_hostname(mmc), intel_host->dsm_fns);
intel_dsm(intel_host, dev, INTEL_DSM_HS_CAPS, &intel_host->hs_caps);
}
static int intel_start_signal_voltage_switch(struct mmc_host *mmc,
@ -398,6 +410,26 @@ static int intel_probe_slot(struct platform_device *pdev, const char *hid,
return 0;
}
static int intel_setup_host(struct platform_device *pdev)
{
struct sdhci_acpi_host *c = platform_get_drvdata(pdev);
struct intel_host *intel_host = sdhci_acpi_priv(c);
if (!(intel_host->hs_caps & INTEL_DSM_HS_CAPS_SDR25))
c->host->mmc->caps &= ~MMC_CAP_UHS_SDR25;
if (!(intel_host->hs_caps & INTEL_DSM_HS_CAPS_SDR50))
c->host->mmc->caps &= ~MMC_CAP_UHS_SDR50;
if (!(intel_host->hs_caps & INTEL_DSM_HS_CAPS_DDR50))
c->host->mmc->caps &= ~MMC_CAP_UHS_DDR50;
if (!(intel_host->hs_caps & INTEL_DSM_HS_CAPS_SDR104))
c->host->mmc->caps &= ~MMC_CAP_UHS_SDR104;
return 0;
}
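The new ->setup_host() hook is only useful because of where the reworked probe path (further down) calls it: after sdhci_setup_host() has read the capability registers, but before __sdhci_add_host() publishes the host to the MMC core. That is the one window in which the DSM-reported broken UHS modes can still be stripped from caps. Condensed from the probe hunk below:

        err = sdhci_setup_host(host);           /* caps now populated */
        if (err)
                goto err_free;

        if (c->slot && c->slot->setup_host) {
                err = c->slot->setup_host(pdev);        /* amend caps here */
                if (err)
                        goto err_cleanup;
        }

        err = __sdhci_add_host(host);           /* publish to the MMC core */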
static const struct sdhci_acpi_slot sdhci_acpi_slot_int_emmc = {
.chip = &sdhci_acpi_chip_int,
.caps = MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE |
@ -409,6 +441,7 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_int_emmc = {
SDHCI_QUIRK2_STOP_WITH_TC |
SDHCI_QUIRK2_CAPS_BIT63_FOR_HS400,
.probe_slot = intel_probe_slot,
.setup_host = intel_setup_host,
.priv_size = sizeof(struct intel_host),
};
@ -421,6 +454,7 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_int_sdio = {
.flags = SDHCI_ACPI_RUNTIME_PM,
.pm_caps = MMC_PM_KEEP_POWER,
.probe_slot = intel_probe_slot,
.setup_host = intel_setup_host,
.priv_size = sizeof(struct intel_host),
};
@ -432,6 +466,7 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_int_sd = {
SDHCI_QUIRK2_STOP_WITH_TC,
.caps = MMC_CAP_WAIT_WHILE_BUSY | MMC_CAP_AGGRESSIVE_PM,
.probe_slot = intel_probe_slot,
.setup_host = intel_setup_host,
.priv_size = sizeof(struct intel_host),
};
@ -446,6 +481,83 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_qcom_sd = {
.caps = MMC_CAP_NONREMOVABLE,
};
/* AMD sdhci reset dll register. */
#define SDHCI_AMD_RESET_DLL_REGISTER 0x908
static int amd_select_drive_strength(struct mmc_card *card,
unsigned int max_dtr, int host_drv,
int card_drv, int *drv_type)
{
return MMC_SET_DRIVER_TYPE_A;
}
static void sdhci_acpi_amd_hs400_dll(struct sdhci_host *host)
{
/* AMD Platform requires dll setting */
sdhci_writel(host, 0x40003210, SDHCI_AMD_RESET_DLL_REGISTER);
usleep_range(10, 20);
sdhci_writel(host, 0x40033210, SDHCI_AMD_RESET_DLL_REGISTER);
}
/*
* For AMD Platform it is required to disable the tuning
* bit first controller to bring to HS Mode from HS200
* mode, later enable to tune to HS400 mode.
*/
static void amd_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
{
struct sdhci_host *host = mmc_priv(mmc);
unsigned int old_timing = host->timing;
sdhci_set_ios(mmc, ios);
if (old_timing == MMC_TIMING_MMC_HS200 &&
ios->timing == MMC_TIMING_MMC_HS)
sdhci_writew(host, 0x9, SDHCI_HOST_CONTROL2);
if (old_timing != MMC_TIMING_MMC_HS400 &&
ios->timing == MMC_TIMING_MMC_HS400) {
sdhci_writew(host, 0x80, SDHCI_HOST_CONTROL2);
sdhci_acpi_amd_hs400_dll(host);
}
}
static const struct sdhci_ops sdhci_acpi_ops_amd = {
.set_clock = sdhci_set_clock,
.set_bus_width = sdhci_set_bus_width,
.reset = sdhci_reset,
.set_uhs_signaling = sdhci_set_uhs_signaling,
};
static const struct sdhci_acpi_chip sdhci_acpi_chip_amd = {
.ops = &sdhci_acpi_ops_amd,
};
static int sdhci_acpi_emmc_amd_probe_slot(struct platform_device *pdev,
const char *hid, const char *uid)
{
struct sdhci_acpi_host *c = platform_get_drvdata(pdev);
struct sdhci_host *host = c->host;
sdhci_read_caps(host);
if (host->caps1 & SDHCI_SUPPORT_DDR50)
host->mmc->caps = MMC_CAP_1_8V_DDR;
if ((host->caps1 & SDHCI_SUPPORT_SDR104) &&
(host->mmc->caps & MMC_CAP_1_8V_DDR))
host->mmc->caps2 = MMC_CAP2_HS400_1_8V;
host->mmc_host_ops.select_drive_strength = amd_select_drive_strength;
host->mmc_host_ops.set_ios = amd_set_ios;
return 0;
}
static const struct sdhci_acpi_slot sdhci_acpi_slot_amd_emmc = {
.chip = &sdhci_acpi_chip_amd,
.caps = MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE,
.quirks = SDHCI_QUIRK_32BIT_DMA_ADDR | SDHCI_QUIRK_32BIT_DMA_SIZE |
SDHCI_QUIRK_32BIT_ADMA_SIZE,
.probe_slot = sdhci_acpi_emmc_amd_probe_slot,
};
struct sdhci_acpi_uid_slot {
const char *hid;
const char *uid;
@ -469,6 +581,7 @@ static const struct sdhci_acpi_uid_slot sdhci_acpi_uids[] = {
{ "PNP0D40" },
{ "QCOM8051", NULL, &sdhci_acpi_slot_qcom_sd_3v },
{ "QCOM8052", NULL, &sdhci_acpi_slot_qcom_sd },
{ "AMDI0040", NULL, &sdhci_acpi_slot_amd_emmc },
{ },
};
@ -485,6 +598,7 @@ static const struct acpi_device_id sdhci_acpi_ids[] = {
{ "PNP0D40" },
{ "QCOM8051" },
{ "QCOM8052" },
{ "AMDI0040" },
{ },
};
MODULE_DEVICE_TABLE(acpi, sdhci_acpi_ids);
@ -566,6 +680,10 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
host->hw_name = "ACPI";
host->ops = &sdhci_acpi_ops_dflt;
host->irq = platform_get_irq(pdev, 0);
if (host->irq <= 0) {
err = -EINVAL;
goto err_free;
}
host->ioaddr = devm_ioremap_nocache(dev, iomem->start,
resource_size(iomem));
@ -609,10 +727,20 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
}
}
err = sdhci_add_host(host);
err = sdhci_setup_host(host);
if (err)
goto err_free;
if (c->slot && c->slot->setup_host) {
err = c->slot->setup_host(pdev);
if (err)
goto err_cleanup;
}
err = __sdhci_add_host(host);
if (err)
goto err_cleanup;
if (c->use_runtime_pm) {
pm_runtime_set_active(dev);
pm_suspend_ignore_children(dev, 1);
@ -625,6 +753,8 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
return 0;
err_cleanup:
sdhci_cleanup_host(c->host);
err_free:
sdhci_free_host(c->host);
return err;

drivers/mmc/host/sdhci-esdhc-imx.c:

@ -193,6 +193,7 @@ struct pltfm_imx_data {
struct clk *clk_ipg;
struct clk *clk_ahb;
struct clk *clk_per;
unsigned int actual_clock;
enum {
NO_CMD_PENDING, /* no multiblock command pending */
MULTIBLK_IN_PROCESS, /* exact multiblock cmd in process */
@ -1403,11 +1404,15 @@ static int sdhci_esdhc_runtime_suspend(struct device *dev)
int ret;
ret = sdhci_runtime_suspend_host(host);
if (ret)
return ret;
if (host->tuning_mode != SDHCI_TUNING_MODE_3)
mmc_retune_needed(host->mmc);
if (!sdhci_sdio_irq_enabled(host)) {
imx_data->actual_clock = host->mmc->actual_clock;
esdhc_pltfm_set_clock(host, 0);
clk_disable_unprepare(imx_data->clk_per);
clk_disable_unprepare(imx_data->clk_ipg);
}
@ -1423,31 +1428,34 @@ static int sdhci_esdhc_runtime_resume(struct device *dev)
struct pltfm_imx_data *imx_data = sdhci_pltfm_priv(pltfm_host);
int err;
err = clk_prepare_enable(imx_data->clk_ahb);
if (err)
return err;
if (!sdhci_sdio_irq_enabled(host)) {
err = clk_prepare_enable(imx_data->clk_per);
if (err)
return err;
goto disable_ahb_clk;
err = clk_prepare_enable(imx_data->clk_ipg);
if (err)
goto disable_per_clk;
esdhc_pltfm_set_clock(host, imx_data->actual_clock);
}
err = clk_prepare_enable(imx_data->clk_ahb);
if (err)
goto disable_ipg_clk;
err = sdhci_runtime_resume_host(host);
if (err)
goto disable_ahb_clk;
goto disable_ipg_clk;
return 0;
disable_ahb_clk:
clk_disable_unprepare(imx_data->clk_ahb);
disable_ipg_clk:
if (!sdhci_sdio_irq_enabled(host))
clk_disable_unprepare(imx_data->clk_ipg);
disable_per_clk:
if (!sdhci_sdio_irq_enabled(host))
clk_disable_unprepare(imx_data->clk_per);
disable_ahb_clk:
clk_disable_unprepare(imx_data->clk_ahb);
return err;
}
#endif
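The net effect of the runtime PM rework above, condensed: the card clocks stay on across runtime suspend only when the SDIO card interrupt must keep working (card IRQ detection is clock-dependent), and the current clock rate is cached so resume can restore it:

        /* runtime suspend, condensed from the hunk above */
        if (!sdhci_sdio_irq_enabled(host)) {
                imx_data->actual_clock = host->mmc->actual_clock; /* save */
                esdhc_pltfm_set_clock(host, 0);                   /* gate  */
                clk_disable_unprepare(imx_data->clk_per);
                clk_disable_unprepare(imx_data->clk_ipg);
        }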

drivers/mmc/host/sdhci-of-arasan.c:

@ -25,11 +25,13 @@
#include <linux/of_device.h>
#include <linux/phy/phy.h>
#include <linux/regmap.h>
#include "sdhci-pltfm.h"
#include <linux/of.h>
#define SDHCI_ARASAN_VENDOR_REGISTER 0x78
#include "cqhci.h"
#include "sdhci-pltfm.h"
#define SDHCI_ARASAN_VENDOR_REGISTER 0x78
#define SDHCI_ARASAN_CQE_BASE_ADDR 0x200
#define VENDOR_ENHANCED_STROBE BIT(0)
#define PHY_CLK_TOO_SLOW_HZ 400000
@ -90,6 +92,7 @@ struct sdhci_arasan_data {
struct phy *phy;
bool is_phy_on;
bool has_cqe;
struct clk_hw sdcardclk_hw;
struct clk *sdcardclk;
@ -262,6 +265,17 @@ static int sdhci_arasan_voltage_switch(struct mmc_host *mmc,
return -EINVAL;
}
static void sdhci_arasan_set_power(struct sdhci_host *host, unsigned char mode,
unsigned short vdd)
{
if (!IS_ERR(host->mmc->supply.vmmc)) {
struct mmc_host *mmc = host->mmc;
mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd);
}
sdhci_set_power_noreg(host, mode, vdd);
}
static const struct sdhci_ops sdhci_arasan_ops = {
.set_clock = sdhci_arasan_set_clock,
.get_max_clock = sdhci_pltfm_clk_get_max_clock,
@ -269,6 +283,7 @@ static const struct sdhci_ops sdhci_arasan_ops = {
.set_bus_width = sdhci_set_bus_width,
.reset = sdhci_arasan_reset,
.set_uhs_signaling = sdhci_set_uhs_signaling,
.set_power = sdhci_arasan_set_power,
};
static const struct sdhci_pltfm_data sdhci_arasan_pdata = {
@ -278,6 +293,62 @@ static const struct sdhci_pltfm_data sdhci_arasan_pdata = {
SDHCI_QUIRK2_CLOCK_DIV_ZERO_BROKEN,
};
static u32 sdhci_arasan_cqhci_irq(struct sdhci_host *host, u32 intmask)
{
int cmd_error = 0;
int data_error = 0;
if (!sdhci_cqe_irq(host, intmask, &cmd_error, &data_error))
return intmask;
cqhci_irq(host->mmc, intmask, cmd_error, data_error);
return 0;
}
static void sdhci_arasan_dumpregs(struct mmc_host *mmc)
{
sdhci_dumpregs(mmc_priv(mmc));
}
static void sdhci_arasan_cqe_enable(struct mmc_host *mmc)
{
struct sdhci_host *host = mmc_priv(mmc);
u32 reg;
reg = sdhci_readl(host, SDHCI_PRESENT_STATE);
while (reg & SDHCI_DATA_AVAILABLE) {
sdhci_readl(host, SDHCI_BUFFER);
reg = sdhci_readl(host, SDHCI_PRESENT_STATE);
}
sdhci_cqe_enable(mmc);
}
static const struct cqhci_host_ops sdhci_arasan_cqhci_ops = {
.enable = sdhci_arasan_cqe_enable,
.disable = sdhci_cqe_disable,
.dumpregs = sdhci_arasan_dumpregs,
};
static const struct sdhci_ops sdhci_arasan_cqe_ops = {
.set_clock = sdhci_arasan_set_clock,
.get_max_clock = sdhci_pltfm_clk_get_max_clock,
.get_timeout_clock = sdhci_pltfm_clk_get_max_clock,
.set_bus_width = sdhci_set_bus_width,
.reset = sdhci_arasan_reset,
.set_uhs_signaling = sdhci_set_uhs_signaling,
.set_power = sdhci_arasan_set_power,
.irq = sdhci_arasan_cqhci_irq,
};
static const struct sdhci_pltfm_data sdhci_arasan_cqe_pdata = {
.ops = &sdhci_arasan_cqe_ops,
.quirks = SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
SDHCI_QUIRK2_CLOCK_DIV_ZERO_BROKEN,
};
#ifdef CONFIG_PM_SLEEP
/**
* sdhci_arasan_suspend - Suspend method for the driver
@ -297,6 +368,12 @@ static int sdhci_arasan_suspend(struct device *dev)
if (host->tuning_mode != SDHCI_TUNING_MODE_3)
mmc_retune_needed(host->mmc);
if (sdhci_arasan->has_cqe) {
ret = cqhci_suspend(host->mmc);
if (ret)
return ret;
}
ret = sdhci_suspend_host(host);
if (ret)
return ret;
@ -353,7 +430,16 @@ static int sdhci_arasan_resume(struct device *dev)
sdhci_arasan->is_phy_on = true;
}
return sdhci_resume_host(host);
ret = sdhci_resume_host(host);
if (ret) {
dev_err(dev, "Cannot resume host.\n");
return ret;
}
if (sdhci_arasan->has_cqe)
return cqhci_resume(host->mmc);
return 0;
}
#endif /* ! CONFIG_PM_SLEEP */
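Note the ordering this driver (and the GLK PCI code below) observes around system sleep: the command queue engine is halted before the SDHCI host suspends, and brought back only after the host has resumed. A hypothetical condensed sketch:

static int foo_suspend(struct sdhci_host *host)
{
        int ret = cqhci_suspend(host->mmc);     /* halt the engine first */

        return ret ? ret : sdhci_suspend_host(host);
}

static int foo_resume(struct sdhci_host *host)
{
        int ret = sdhci_resume_host(host);      /* host back up first */

        return ret ? ret : cqhci_resume(host->mmc);
}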
@ -556,6 +642,49 @@ static void sdhci_arasan_unregister_sdclk(struct device *dev)
of_clk_del_provider(dev->of_node);
}
static int sdhci_arasan_add_host(struct sdhci_arasan_data *sdhci_arasan)
{
struct sdhci_host *host = sdhci_arasan->host;
struct cqhci_host *cq_host;
bool dma64;
int ret;
if (!sdhci_arasan->has_cqe)
return sdhci_add_host(host);
ret = sdhci_setup_host(host);
if (ret)
return ret;
cq_host = devm_kzalloc(host->mmc->parent,
sizeof(*cq_host), GFP_KERNEL);
if (!cq_host) {
ret = -ENOMEM;
goto cleanup;
}
cq_host->mmio = host->ioaddr + SDHCI_ARASAN_CQE_BASE_ADDR;
cq_host->ops = &sdhci_arasan_cqhci_ops;
dma64 = host->flags & SDHCI_USE_64_BIT_DMA;
if (dma64)
cq_host->caps |= CQHCI_TASK_DESC_SZ_128;
ret = cqhci_init(cq_host, host->mmc, dma64);
if (ret)
goto cleanup;
ret = __sdhci_add_host(host);
if (ret)
goto cleanup;
return 0;
cleanup:
sdhci_cleanup_host(host);
return ret;
}
static int sdhci_arasan_probe(struct platform_device *pdev)
{
int ret;
@ -566,9 +695,15 @@ static int sdhci_arasan_probe(struct platform_device *pdev)
struct sdhci_pltfm_host *pltfm_host;
struct sdhci_arasan_data *sdhci_arasan;
struct device_node *np = pdev->dev.of_node;
const struct sdhci_pltfm_data *pdata;
if (of_device_is_compatible(pdev->dev.of_node, "arasan,sdhci-5.1"))
pdata = &sdhci_arasan_cqe_pdata;
else
pdata = &sdhci_arasan_pdata;
host = sdhci_pltfm_init(pdev, pdata, sizeof(*sdhci_arasan));
host = sdhci_pltfm_init(pdev, &sdhci_arasan_pdata,
sizeof(*sdhci_arasan));
if (IS_ERR(host))
return PTR_ERR(host);
@ -663,9 +798,11 @@ static int sdhci_arasan_probe(struct platform_device *pdev)
sdhci_arasan_hs400_enhanced_strobe;
host->mmc_host_ops.start_signal_voltage_switch =
sdhci_arasan_voltage_switch;
sdhci_arasan->has_cqe = true;
host->mmc->caps2 |= MMC_CAP2_CQE | MMC_CAP2_CQE_DCMD;
}
ret = sdhci_add_host(host);
ret = sdhci_arasan_add_host(sdhci_arasan);
if (ret)
goto err_add_host;

drivers/mmc/host/sdhci-of-esdhc.c:

@ -589,10 +589,18 @@ static void esdhc_pltfm_set_bus_width(struct sdhci_host *host, int width)
static void esdhc_reset(struct sdhci_host *host, u8 mask)
{
u32 val;
sdhci_reset(host, mask);
sdhci_writel(host, host->ier, SDHCI_INT_ENABLE);
sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
if (mask & SDHCI_RESET_ALL) {
val = sdhci_readl(host, ESDHC_TBCTL);
val &= ~ESDHC_TB_EN;
sdhci_writel(host, val, ESDHC_TBCTL);
}
}
/* The SCFG, Supplemental Configuration Unit, provides SoC specific

drivers/mmc/host/sdhci-pci-arasan.c (new file, 331 lines):

@ -0,0 +1,331 @@
// SPDX-License-Identifier: GPL-2.0
/*
* sdhci-pci-arasan.c - Driver for Arasan PCI Controller with
* integrated phy.
*
* Copyright (C) 2017 Arasan Chip Systems Inc.
*
* Author: Atul Garg <agarg@arasan.com>
*/
#include <linux/pci.h>
#include <linux/delay.h>
#include "sdhci.h"
#include "sdhci-pci.h"
/* Extra registers for Arasan SD/SDIO/MMC Host Controller with PHY */
#define PHY_ADDR_REG 0x300
#define PHY_DAT_REG 0x304
#define PHY_WRITE BIT(8)
#define PHY_BUSY BIT(9)
#define DATA_MASK 0xFF
/* PHY Specific Registers */
#define DLL_STATUS 0x00
#define IPAD_CTRL1 0x01
#define IPAD_CTRL2 0x02
#define IPAD_STS 0x03
#define IOREN_CTRL1 0x06
#define IOREN_CTRL2 0x07
#define IOPU_CTRL1 0x08
#define IOPU_CTRL2 0x09
#define ITAP_DELAY 0x0C
#define OTAP_DELAY 0x0D
#define STRB_SEL 0x0E
#define CLKBUF_SEL 0x0F
#define MODE_CTRL 0x11
#define DLL_TRIM 0x12
#define CMD_CTRL 0x20
#define DATA_CTRL 0x21
#define STRB_CTRL 0x22
#define CLK_CTRL 0x23
#define PHY_CTRL 0x24
#define DLL_ENBL BIT(3)
#define RTRIM_EN BIT(1)
#define PDB_ENBL BIT(1)
#define RETB_ENBL BIT(6)
#define ODEN_CMD BIT(1)
#define ODEN_DAT 0xFF
#define REN_STRB BIT(0)
#define REN_CMND BIT(1)
#define REN_DATA 0xFF
#define PU_CMD BIT(1)
#define PU_DAT 0xFF
#define ITAPDLY_EN BIT(0)
#define OTAPDLY_EN BIT(0)
#define OD_REL_CMD BIT(1)
#define OD_REL_DAT 0xFF
#define DLLTRM_ICP 0x8
#define PDB_CMND BIT(0)
#define PDB_DATA 0xFF
#define PDB_STRB BIT(0)
#define PDB_CLOCK BIT(0)
#define CALDONE_MASK 0x10
#define DLL_RDY_MASK 0x10
#define MAX_CLK_BUF 0x7
/* Mode Controls */
#define ENHSTRB_MODE BIT(0)
#define HS400_MODE BIT(1)
#define LEGACY_MODE BIT(2)
#define DDR50_MODE BIT(3)
/*
* Controller has no specific bits for HS200/HS.
* Used BIT(4), BIT(5) for software programming.
*/
#define HS200_MODE BIT(4)
#define HISPD_MODE BIT(5)
#define OTAPDLY(x) (((x) << 1) | OTAPDLY_EN)
#define ITAPDLY(x) (((x) << 1) | ITAPDLY_EN)
#define FREQSEL(x) (((x) << 5) | DLL_ENBL)
#define IOPAD(x, y) ((x) | ((y) << 2))
/* Arasan private data */
struct arasan_host {
u32 chg_clk;
};
static int arasan_phy_addr_poll(struct sdhci_host *host, u32 offset, u32 mask)
{
ktime_t timeout = ktime_add_us(ktime_get(), 100);
bool failed;
u8 val = 0;
while (1) {
failed = ktime_after(ktime_get(), timeout);
val = sdhci_readw(host, PHY_ADDR_REG);
if (!(val & mask))
return 0;
if (failed)
return -EBUSY;
}
}
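The poll helpers in this new driver sample the deadline before reading the register, so even if the thread was scheduled out past the timeout it still gets one final read before giving up. The same shape as a generic (hypothetical) helper, assuming the usual kernel headers:

static int example_poll_clear(void __iomem *addr, u32 mask,
                              unsigned int timeout_us)
{
        ktime_t timeout = ktime_add_us(ktime_get(), timeout_us);

        while (1) {
                bool expired = ktime_after(ktime_get(), timeout);

                if (!(readl(addr) & mask))
                        return 0;       /* bit(s) cleared in time */
                if (expired)
                        return -EBUSY;  /* final read already happened */
        }
}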
static int arasan_phy_write(struct sdhci_host *host, u8 data, u8 offset)
{
sdhci_writew(host, data, PHY_DAT_REG);
sdhci_writew(host, (PHY_WRITE | offset), PHY_ADDR_REG);
return arasan_phy_addr_poll(host, PHY_ADDR_REG, PHY_BUSY);
}
static int arasan_phy_read(struct sdhci_host *host, u8 offset, u8 *data)
{
int ret;
sdhci_writew(host, 0, PHY_DAT_REG);
sdhci_writew(host, offset, PHY_ADDR_REG);
ret = arasan_phy_addr_poll(host, PHY_ADDR_REG, PHY_BUSY);
/* Masking valid data bits */
*data = sdhci_readw(host, PHY_DAT_REG) & DATA_MASK;
return ret;
}
static int arasan_phy_sts_poll(struct sdhci_host *host, u32 offset, u32 mask)
{
int ret;
ktime_t timeout = ktime_add_us(ktime_get(), 100);
bool failed;
u8 val = 0;
while (1) {
failed = ktime_after(ktime_get(), timeout);
ret = arasan_phy_read(host, offset, &val);
if (ret)
return -EBUSY;
else if (val & mask)
return 0;
if (failed)
return -EBUSY;
}
}
/* Initialize the Arasan PHY */
static int arasan_phy_init(struct sdhci_host *host)
{
int ret;
u8 val;
/* Program IOPADs and wait for calibration to be done */
if (arasan_phy_read(host, IPAD_CTRL1, &val) ||
arasan_phy_write(host, val | RETB_ENBL | PDB_ENBL, IPAD_CTRL1) ||
arasan_phy_read(host, IPAD_CTRL2, &val) ||
arasan_phy_write(host, val | RTRIM_EN, IPAD_CTRL2))
return -EBUSY;
ret = arasan_phy_sts_poll(host, IPAD_STS, CALDONE_MASK);
if (ret)
return -EBUSY;
/* Program CMD/Data lines */
if (arasan_phy_read(host, IOREN_CTRL1, &val) ||
arasan_phy_write(host, val | REN_CMND | REN_STRB, IOREN_CTRL1) ||
arasan_phy_read(host, IOPU_CTRL1, &val) ||
arasan_phy_write(host, val | PU_CMD, IOPU_CTRL1) ||
arasan_phy_read(host, CMD_CTRL, &val) ||
arasan_phy_write(host, val | PDB_CMND, CMD_CTRL) ||
arasan_phy_read(host, IOREN_CTRL2, &val) ||
arasan_phy_write(host, val | REN_DATA, IOREN_CTRL2) ||
arasan_phy_read(host, IOPU_CTRL2, &val) ||
arasan_phy_write(host, val | PU_DAT, IOPU_CTRL2) ||
arasan_phy_read(host, DATA_CTRL, &val) ||
arasan_phy_write(host, val | PDB_DATA, DATA_CTRL) ||
arasan_phy_read(host, STRB_CTRL, &val) ||
arasan_phy_write(host, val | PDB_STRB, STRB_CTRL) ||
arasan_phy_read(host, CLK_CTRL, &val) ||
arasan_phy_write(host, val | PDB_CLOCK, CLK_CTRL) ||
arasan_phy_read(host, CLKBUF_SEL, &val) ||
arasan_phy_write(host, val | MAX_CLK_BUF, CLKBUF_SEL) ||
arasan_phy_write(host, LEGACY_MODE, MODE_CTRL))
return -EBUSY;
return 0;
}
/* Set Arasan PHY for different modes */
static int arasan_phy_set(struct sdhci_host *host, u8 mode, u8 otap,
u8 drv_type, u8 itap, u8 trim, u8 clk)
{
u8 val;
int ret;
if (mode == HISPD_MODE || mode == HS200_MODE)
ret = arasan_phy_write(host, 0x0, MODE_CTRL);
else
ret = arasan_phy_write(host, mode, MODE_CTRL);
if (ret)
return ret;
if (mode == HS400_MODE || mode == HS200_MODE) {
ret = arasan_phy_read(host, IPAD_CTRL1, &val);
if (ret)
return ret;
ret = arasan_phy_write(host, IOPAD(val, drv_type), IPAD_CTRL1);
if (ret)
return ret;
}
if (mode == LEGACY_MODE) {
ret = arasan_phy_write(host, 0x0, OTAP_DELAY);
if (ret)
return ret;
ret = arasan_phy_write(host, 0x0, ITAP_DELAY);
} else {
ret = arasan_phy_write(host, OTAPDLY(otap), OTAP_DELAY);
if (ret)
return ret;
if (mode != HS200_MODE)
ret = arasan_phy_write(host, ITAPDLY(itap), ITAP_DELAY);
else
ret = arasan_phy_write(host, 0x0, ITAP_DELAY);
}
if (ret)
return ret;
if (mode != LEGACY_MODE) {
ret = arasan_phy_write(host, trim, DLL_TRIM);
if (ret)
return ret;
}
ret = arasan_phy_write(host, 0, DLL_STATUS);
if (ret)
return ret;
if (mode != LEGACY_MODE) {
ret = arasan_phy_write(host, FREQSEL(clk), DLL_STATUS);
if (ret)
return ret;
ret = arasan_phy_sts_poll(host, DLL_STATUS, DLL_RDY_MASK);
if (ret)
return -EBUSY;
}
return 0;
}
static int arasan_select_phy_clock(struct sdhci_host *host)
{
struct sdhci_pci_slot *slot = sdhci_priv(host);
struct arasan_host *arasan_host = sdhci_pci_priv(slot);
u8 clk;
if (arasan_host->chg_clk == host->mmc->ios.clock)
return 0;
arasan_host->chg_clk = host->mmc->ios.clock;
if (host->mmc->ios.clock == 200000000)
clk = 0x0;
else if (host->mmc->ios.clock == 100000000)
clk = 0x2;
else if (host->mmc->ios.clock == 50000000)
clk = 0x1;
else
clk = 0x0;
if (host->mmc_host_ops.hs400_enhanced_strobe) {
arasan_phy_set(host, ENHSTRB_MODE, 1, 0x0, 0x0,
DLLTRM_ICP, clk);
} else {
switch (host->mmc->ios.timing) {
case MMC_TIMING_LEGACY:
arasan_phy_set(host, LEGACY_MODE, 0x0, 0x0, 0x0,
0x0, 0x0);
break;
case MMC_TIMING_MMC_HS:
case MMC_TIMING_SD_HS:
arasan_phy_set(host, HISPD_MODE, 0x3, 0x0, 0x2,
DLLTRM_ICP, clk);
break;
case MMC_TIMING_MMC_HS200:
case MMC_TIMING_UHS_SDR104:
arasan_phy_set(host, HS200_MODE, 0x2,
host->mmc->ios.drv_type, 0x0,
DLLTRM_ICP, clk);
break;
case MMC_TIMING_MMC_DDR52:
case MMC_TIMING_UHS_DDR50:
arasan_phy_set(host, DDR50_MODE, 0x1, 0x0,
0x0, DLLTRM_ICP, clk);
break;
case MMC_TIMING_MMC_HS400:
arasan_phy_set(host, HS400_MODE, 0x1,
host->mmc->ios.drv_type, 0xa,
DLLTRM_ICP, clk);
break;
default:
break;
}
}
return 0;
}
static int arasan_pci_probe_slot(struct sdhci_pci_slot *slot)
{
int err;
slot->host->mmc->caps |= MMC_CAP_NONREMOVABLE | MMC_CAP_8_BIT_DATA;
err = arasan_phy_init(slot->host);
if (err)
return -ENODEV;
return 0;
}
static void arasan_sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
{
sdhci_set_clock(host, clock);
/* Change phy settings for the new clock */
arasan_select_phy_clock(host);
}
static const struct sdhci_ops arasan_sdhci_pci_ops = {
.set_clock = arasan_sdhci_set_clock,
.enable_dma = sdhci_pci_enable_dma,
.set_bus_width = sdhci_set_bus_width,
.reset = sdhci_reset,
.set_uhs_signaling = sdhci_set_uhs_signaling,
};
const struct sdhci_pci_fixes sdhci_arasan = {
.probe_slot = arasan_pci_probe_slot,
.ops = &arasan_sdhci_pci_ops,
.priv_size = sizeof(struct arasan_host),
};
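This fixes table only takes effect through the PCI ID plumbing added in the sdhci-pci-core.c and sdhci-pci.h hunks below; for reference, the pieces that tie it together are:

/* sdhci-pci.h */
#define PCI_VENDOR_ID_ARASAN            0x16e6
#define PCI_DEVICE_ID_ARASAN_PHY_EMMC   0x0670
extern const struct sdhci_pci_fixes sdhci_arasan;

/* sdhci-pci-core.c, in pci_ids[] */
SDHCI_PCI_DEVICE(ARASAN, PHY_EMMC, arasan),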

drivers/mmc/host/sdhci-pci-core.c:

@ -30,17 +30,37 @@
#include <linux/mmc/sdhci-pci-data.h>
#include <linux/acpi.h>
#include "cqhci.h"
#include "sdhci.h"
#include "sdhci-pci.h"
static int sdhci_pci_enable_dma(struct sdhci_host *host);
static void sdhci_pci_hw_reset(struct sdhci_host *host);
#ifdef CONFIG_PM_SLEEP
static int __sdhci_pci_suspend_host(struct sdhci_pci_chip *chip)
static int sdhci_pci_init_wakeup(struct sdhci_pci_chip *chip)
{
mmc_pm_flag_t pm_flags = 0;
int i;
for (i = 0; i < chip->num_slots; i++) {
struct sdhci_pci_slot *slot = chip->slots[i];
if (slot)
pm_flags |= slot->host->mmc->pm_flags;
}
return device_set_wakeup_enable(&chip->pdev->dev,
(pm_flags & MMC_PM_KEEP_POWER) &&
(pm_flags & MMC_PM_WAKE_SDIO_IRQ));
}
static int sdhci_pci_suspend_host(struct sdhci_pci_chip *chip)
{
int i, ret;
sdhci_pci_init_wakeup(chip);
for (i = 0; i < chip->num_slots; i++) {
struct sdhci_pci_slot *slot = chip->slots[i];
struct sdhci_host *host;
@ -56,9 +76,6 @@ static int __sdhci_pci_suspend_host(struct sdhci_pci_chip *chip)
ret = sdhci_suspend_host(host);
if (ret)
goto err_pci_suspend;
if (host->mmc->pm_flags & MMC_PM_WAKE_SDIO_IRQ)
sdhci_enable_irq_wakeups(host);
}
return 0;
@ -69,36 +86,6 @@ err_pci_suspend:
return ret;
}
static int sdhci_pci_init_wakeup(struct sdhci_pci_chip *chip)
{
mmc_pm_flag_t pm_flags = 0;
int i;
for (i = 0; i < chip->num_slots; i++) {
struct sdhci_pci_slot *slot = chip->slots[i];
if (slot)
pm_flags |= slot->host->mmc->pm_flags;
}
return device_init_wakeup(&chip->pdev->dev,
(pm_flags & MMC_PM_KEEP_POWER) &&
(pm_flags & MMC_PM_WAKE_SDIO_IRQ));
}
static int sdhci_pci_suspend_host(struct sdhci_pci_chip *chip)
{
int ret;
ret = __sdhci_pci_suspend_host(chip);
if (ret)
return ret;
sdhci_pci_init_wakeup(chip);
return 0;
}
int sdhci_pci_resume_host(struct sdhci_pci_chip *chip)
{
struct sdhci_pci_slot *slot;
@ -116,6 +103,28 @@ int sdhci_pci_resume_host(struct sdhci_pci_chip *chip)
return 0;
}
static int sdhci_cqhci_suspend(struct sdhci_pci_chip *chip)
{
int ret;
ret = cqhci_suspend(chip->slots[0]->host->mmc);
if (ret)
return ret;
return sdhci_pci_suspend_host(chip);
}
static int sdhci_cqhci_resume(struct sdhci_pci_chip *chip)
{
int ret;
ret = sdhci_pci_resume_host(chip);
if (ret)
return ret;
return cqhci_resume(chip->slots[0]->host->mmc);
}
#endif
#ifdef CONFIG_PM
@ -166,8 +175,48 @@ static int sdhci_pci_runtime_resume_host(struct sdhci_pci_chip *chip)
return 0;
}
static int sdhci_cqhci_runtime_suspend(struct sdhci_pci_chip *chip)
{
int ret;
ret = cqhci_suspend(chip->slots[0]->host->mmc);
if (ret)
return ret;
return sdhci_pci_runtime_suspend_host(chip);
}
static int sdhci_cqhci_runtime_resume(struct sdhci_pci_chip *chip)
{
int ret;
ret = sdhci_pci_runtime_resume_host(chip);
if (ret)
return ret;
return cqhci_resume(chip->slots[0]->host->mmc);
}
#endif
static u32 sdhci_cqhci_irq(struct sdhci_host *host, u32 intmask)
{
int cmd_error = 0;
int data_error = 0;
if (!sdhci_cqe_irq(host, intmask, &cmd_error, &data_error))
return intmask;
cqhci_irq(host->mmc, intmask, cmd_error, data_error);
return 0;
}
static void sdhci_pci_dumpregs(struct mmc_host *mmc)
{
sdhci_dumpregs(mmc_priv(mmc));
}
/*****************************************************************************\
* *
* Hardware specific quirk handling *
@ -583,6 +632,18 @@ static const struct sdhci_ops sdhci_intel_byt_ops = {
.voltage_switch = sdhci_intel_voltage_switch,
};
static const struct sdhci_ops sdhci_intel_glk_ops = {
.set_clock = sdhci_set_clock,
.set_power = sdhci_intel_set_power,
.enable_dma = sdhci_pci_enable_dma,
.set_bus_width = sdhci_set_bus_width,
.reset = sdhci_reset,
.set_uhs_signaling = sdhci_set_uhs_signaling,
.hw_reset = sdhci_pci_hw_reset,
.voltage_switch = sdhci_intel_voltage_switch,
.irq = sdhci_cqhci_irq,
};
static void byt_read_dsm(struct sdhci_pci_slot *slot)
{
struct intel_host *intel_host = sdhci_pci_priv(slot);
@ -612,15 +673,83 @@ static int glk_emmc_probe_slot(struct sdhci_pci_slot *slot)
{
int ret = byt_emmc_probe_slot(slot);
slot->host->mmc->caps2 |= MMC_CAP2_CQE;
if (slot->chip->pdev->device != PCI_DEVICE_ID_INTEL_GLK_EMMC) {
slot->host->mmc->caps2 |= MMC_CAP2_HS400_ES,
slot->host->mmc_host_ops.hs400_enhanced_strobe =
intel_hs400_enhanced_strobe;
slot->host->mmc->caps2 |= MMC_CAP2_CQE_DCMD;
}
return ret;
}
static void glk_cqe_enable(struct mmc_host *mmc)
{
struct sdhci_host *host = mmc_priv(mmc);
u32 reg;
/*
* CQE gets stuck if it sees Buffer Read Enable bit set, which can be
* the case after tuning, so ensure the buffer is drained.
*/
reg = sdhci_readl(host, SDHCI_PRESENT_STATE);
while (reg & SDHCI_DATA_AVAILABLE) {
sdhci_readl(host, SDHCI_BUFFER);
reg = sdhci_readl(host, SDHCI_PRESENT_STATE);
}
sdhci_cqe_enable(mmc);
}
static const struct cqhci_host_ops glk_cqhci_ops = {
.enable = glk_cqe_enable,
.disable = sdhci_cqe_disable,
.dumpregs = sdhci_pci_dumpregs,
};
static int glk_emmc_add_host(struct sdhci_pci_slot *slot)
{
struct device *dev = &slot->chip->pdev->dev;
struct sdhci_host *host = slot->host;
struct cqhci_host *cq_host;
bool dma64;
int ret;
ret = sdhci_setup_host(host);
if (ret)
return ret;
cq_host = devm_kzalloc(dev, sizeof(*cq_host), GFP_KERNEL);
if (!cq_host) {
ret = -ENOMEM;
goto cleanup;
}
cq_host->mmio = host->ioaddr + 0x200;
cq_host->quirks |= CQHCI_QUIRK_SHORT_TXFR_DESC_SZ;
cq_host->ops = &glk_cqhci_ops;
dma64 = host->flags & SDHCI_USE_64_BIT_DMA;
if (dma64)
cq_host->caps |= CQHCI_TASK_DESC_SZ_128;
ret = cqhci_init(cq_host, host->mmc, dma64);
if (ret)
goto cleanup;
ret = __sdhci_add_host(host);
if (ret)
goto cleanup;
return 0;
cleanup:
sdhci_cleanup_host(host);
return ret;
}
#ifdef CONFIG_ACPI
static int ni_set_max_freq(struct sdhci_pci_slot *slot)
{
@ -699,11 +828,20 @@ static const struct sdhci_pci_fixes sdhci_intel_byt_emmc = {
static const struct sdhci_pci_fixes sdhci_intel_glk_emmc = {
.allow_runtime_pm = true,
.probe_slot = glk_emmc_probe_slot,
.add_host = glk_emmc_add_host,
#ifdef CONFIG_PM_SLEEP
.suspend = sdhci_cqhci_suspend,
.resume = sdhci_cqhci_resume,
#endif
#ifdef CONFIG_PM
.runtime_suspend = sdhci_cqhci_runtime_suspend,
.runtime_resume = sdhci_cqhci_runtime_resume,
#endif
.quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
SDHCI_QUIRK2_CAPS_BIT63_FOR_HS400 |
SDHCI_QUIRK2_STOP_WITH_TC,
.ops = &sdhci_intel_byt_ops,
.ops = &sdhci_intel_glk_ops,
.priv_size = sizeof(struct intel_host),
};
@ -778,6 +916,8 @@ static int intel_mrfld_mmc_probe_slot(struct sdhci_pci_slot *slot)
slot->host->quirks2 |= SDHCI_QUIRK2_NO_1_8_V;
break;
case INTEL_MRFLD_SDIO:
/* Advertise 2.0v for compatibility with the SDIO card's OCR */
slot->host->ocr_mask = MMC_VDD_20_21 | MMC_VDD_165_195;
slot->host->mmc->caps |= MMC_CAP_NONREMOVABLE |
MMC_CAP_POWER_OFF_CARD;
break;
@ -955,7 +1095,7 @@ static int jmicron_suspend(struct sdhci_pci_chip *chip)
{
int i, ret;
ret = __sdhci_pci_suspend_host(chip);
ret = sdhci_pci_suspend_host(chip);
if (ret)
return ret;
@ -965,8 +1105,6 @@ static int jmicron_suspend(struct sdhci_pci_chip *chip)
jmicron_enable_mmc(chip->slots[i]->host, 0);
}
sdhci_pci_init_wakeup(chip);
return 0;
}
@ -1306,6 +1444,7 @@ static const struct pci_device_id pci_ids[] = {
SDHCI_PCI_DEVICE(O2, SDS1, o2),
SDHCI_PCI_DEVICE(O2, SEABIRD0, o2),
SDHCI_PCI_DEVICE(O2, SEABIRD1, o2),
SDHCI_PCI_DEVICE(ARASAN, PHY_EMMC, arasan),
SDHCI_PCI_DEVICE_CLASS(AMD, SYSTEM_SDHCI, PCI_CLASS_MASK, amd),
/* Generic SD host controller */
{PCI_DEVICE_CLASS(SYSTEM_SDHCI, PCI_CLASS_MASK)},
@ -1320,7 +1459,7 @@ MODULE_DEVICE_TABLE(pci, pci_ids);
* *
\*****************************************************************************/
static int sdhci_pci_enable_dma(struct sdhci_host *host)
int sdhci_pci_enable_dma(struct sdhci_host *host)
{
struct sdhci_pci_slot *slot;
struct pci_dev *pdev;
@ -1543,10 +1682,13 @@ static struct sdhci_pci_slot *sdhci_pci_probe_slot(
}
}
host->mmc->pm_caps = MMC_PM_KEEP_POWER | MMC_PM_WAKE_SDIO_IRQ;
host->mmc->pm_caps = MMC_PM_KEEP_POWER;
host->mmc->slotno = slotno;
host->mmc->caps2 |= MMC_CAP2_NO_PRESCAN_POWERUP;
if (device_can_wakeup(&pdev->dev))
host->mmc->pm_caps |= MMC_PM_WAKE_SDIO_IRQ;
if (slot->cd_idx >= 0) {
ret = mmc_gpiod_request_cd(host->mmc, NULL, slot->cd_idx,
slot->cd_override_level, 0, NULL);

drivers/mmc/host/sdhci-pci.h:

@ -55,6 +55,9 @@
#define PCI_SUBDEVICE_ID_NI_7884 0x7884
#define PCI_VENDOR_ID_ARASAN 0x16e6
#define PCI_DEVICE_ID_ARASAN_PHY_EMMC 0x0670
/*
* PCI device class and mask
*/
@ -170,11 +173,13 @@ static inline void *sdhci_pci_priv(struct sdhci_pci_slot *slot)
#ifdef CONFIG_PM_SLEEP
int sdhci_pci_resume_host(struct sdhci_pci_chip *chip);
#endif
int sdhci_pci_enable_dma(struct sdhci_host *host);
int sdhci_pci_o2_probe_slot(struct sdhci_pci_slot *slot);
int sdhci_pci_o2_probe(struct sdhci_pci_chip *chip);
#ifdef CONFIG_PM_SLEEP
int sdhci_pci_o2_resume(struct sdhci_pci_chip *chip);
#endif
extern const struct sdhci_pci_fixes sdhci_arasan;
#endif /* __SDHCI_PCI_H */

drivers/mmc/host/sdhci-spear.c:

@ -82,6 +82,10 @@ static int sdhci_probe(struct platform_device *pdev)
host->hw_name = "sdhci";
host->ops = &sdhci_pltfm_ops;
host->irq = platform_get_irq(pdev, 0);
if (host->irq <= 0) {
ret = -EINVAL;
goto err_host;
}
host->quirks = SDHCI_QUIRK_BROKEN_ADMA;
sdhci = sdhci_priv(host);

--- next file ---

@@ -230,7 +230,14 @@ static void xenon_set_power(struct sdhci_host *host, unsigned char mode,
mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd);
}
static void xenon_voltage_switch(struct sdhci_host *host)
{
/* Wait for 5ms after set 1.8V signal enable bit */
usleep_range(5000, 5500);
}
static const struct sdhci_ops sdhci_xenon_ops = {
.voltage_switch = xenon_voltage_switch,
.set_clock = sdhci_set_clock,
.set_power = xenon_set_power,
.set_bus_width = sdhci_set_bus_width,

--- next file ---

@@ -1434,6 +1434,13 @@ void sdhci_set_power_noreg(struct sdhci_host *host, unsigned char mode,
if (mode != MMC_POWER_OFF) {
switch (1 << vdd) {
case MMC_VDD_165_195:
/*
* Without a regulator, SDHCI does not support 2.0v
* so we only get here if the driver deliberately
* added the 2.0v range to ocr_avail. Map it to 1.8v
* for the purpose of turning on the power.
*/
case MMC_VDD_20_21:
pwr = SDHCI_POWER_180;
break;
case MMC_VDD_29_30:
@@ -2821,25 +2828,33 @@ static irqreturn_t sdhci_thread_irq(int irq, void *dev_id)
* sdhci_disable_irq_wakeups() since it will be set by
* sdhci_enable_card_detection() or sdhci_init().
*/
void sdhci_enable_irq_wakeups(struct sdhci_host *host)
static bool sdhci_enable_irq_wakeups(struct sdhci_host *host)
{
u8 mask = SDHCI_WAKE_ON_INSERT | SDHCI_WAKE_ON_REMOVE |
SDHCI_WAKE_ON_INT;
u32 irq_val = 0;
u8 wake_val = 0;
u8 val;
u8 mask = SDHCI_WAKE_ON_INSERT | SDHCI_WAKE_ON_REMOVE
| SDHCI_WAKE_ON_INT;
u32 irq_val = SDHCI_INT_CARD_INSERT | SDHCI_INT_CARD_REMOVE |
SDHCI_INT_CARD_INT;
if (!(host->quirks & SDHCI_QUIRK_BROKEN_CARD_DETECTION)) {
wake_val |= SDHCI_WAKE_ON_INSERT | SDHCI_WAKE_ON_REMOVE;
irq_val |= SDHCI_INT_CARD_INSERT | SDHCI_INT_CARD_REMOVE;
}
wake_val |= SDHCI_WAKE_ON_INT;
irq_val |= SDHCI_INT_CARD_INT;
val = sdhci_readb(host, SDHCI_WAKE_UP_CONTROL);
val |= mask ;
/* Avoid fake wake up */
if (host->quirks & SDHCI_QUIRK_BROKEN_CARD_DETECTION) {
val &= ~(SDHCI_WAKE_ON_INSERT | SDHCI_WAKE_ON_REMOVE);
irq_val &= ~(SDHCI_INT_CARD_INSERT | SDHCI_INT_CARD_REMOVE);
}
val &= ~mask;
val |= wake_val;
sdhci_writeb(host, val, SDHCI_WAKE_UP_CONTROL);
sdhci_writel(host, irq_val, SDHCI_INT_ENABLE);
host->irq_wake_enabled = !enable_irq_wake(host->irq);
return host->irq_wake_enabled;
}
EXPORT_SYMBOL_GPL(sdhci_enable_irq_wakeups);
static void sdhci_disable_irq_wakeups(struct sdhci_host *host)
{
@@ -2850,6 +2865,10 @@ static void sdhci_disable_irq_wakeups(struct sdhci_host *host)
val = sdhci_readb(host, SDHCI_WAKE_UP_CONTROL);
val &= ~mask;
sdhci_writeb(host, val, SDHCI_WAKE_UP_CONTROL);
disable_irq_wake(host->irq);
host->irq_wake_enabled = false;
}
int sdhci_suspend_host(struct sdhci_host *host)
@@ -2858,15 +2877,14 @@ int sdhci_suspend_host(struct sdhci_host *host)
mmc_retune_timer_stop(host->mmc);
if (!device_may_wakeup(mmc_dev(host->mmc))) {
if (!device_may_wakeup(mmc_dev(host->mmc)) ||
!sdhci_enable_irq_wakeups(host)) {
host->ier = 0;
sdhci_writel(host, 0, SDHCI_INT_ENABLE);
sdhci_writel(host, 0, SDHCI_SIGNAL_ENABLE);
free_irq(host->irq, host);
} else {
sdhci_enable_irq_wakeups(host);
enable_irq_wake(host->irq);
}
return 0;
}
@@ -2894,15 +2912,14 @@ int sdhci_resume_host(struct sdhci_host *host)
mmiowb();
}
if (!device_may_wakeup(mmc_dev(host->mmc))) {
if (host->irq_wake_enabled) {
sdhci_disable_irq_wakeups(host);
} else {
ret = request_threaded_irq(host->irq, sdhci_irq,
sdhci_thread_irq, IRQF_SHARED,
mmc_hostname(host->mmc), host);
if (ret)
return ret;
} else {
sdhci_disable_irq_wakeups(host);
disable_irq_wake(host->irq);
}
sdhci_enable_card_detection(host);
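
After this rework, sdhci_enable_irq_wakeups() reports whether arming actually succeeded, and the resume path keys off host->irq_wake_enabled instead of re-evaluating device_may_wakeup(), so suspend and resume can no longer pick mismatching teardown paths. A minimal sketch of platform glue that relies on that pairing; the example_* names are hypothetical, the sdhci_*_host() calls are the exported API:

#ifdef CONFIG_PM_SLEEP
static int example_sdhci_suspend(struct device *dev)
{
	struct sdhci_host *host = dev_get_drvdata(dev);

	/* Arms IRQ wakeups only if the hardware can deliver them;
	 * otherwise falls back to freeing the interrupt. */
	return sdhci_suspend_host(host);
}

static int example_sdhci_resume(struct device *dev)
{
	struct sdhci_host *host = dev_get_drvdata(dev);

	/* Undoes exactly what suspend armed, via irq_wake_enabled. */
	return sdhci_resume_host(host);
}
#endif

static SIMPLE_DEV_PM_OPS(example_sdhci_pm_ops, example_sdhci_suspend,
			 example_sdhci_resume);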

--- next file ---

@@ -484,6 +484,7 @@ struct sdhci_host {
bool bus_on; /* Bus power prevents runtime suspend */
bool preset_enabled; /* Preset is enabled */
bool pending_reset; /* Cmd/data reset is pending */
bool irq_wake_enabled; /* IRQ wakeup is enabled */
struct mmc_request *mrqs_done[SDHCI_MAX_MRQS]; /* Requests done */
struct mmc_command *cmd; /* Current command */
@@ -718,7 +719,6 @@ void sdhci_enable_sdio_irq(struct mmc_host *mmc, int enable);
#ifdef CONFIG_PM
int sdhci_suspend_host(struct sdhci_host *host);
int sdhci_resume_host(struct sdhci_host *host);
void sdhci_enable_irq_wakeups(struct sdhci_host *host);
int sdhci_runtime_suspend_host(struct sdhci_host *host);
int sdhci_runtime_resume_host(struct sdhci_host *host);
#endif

--- next file ---

@@ -10,9 +10,11 @@
* the Free Software Foundation, version 2 of the License.
*/
#include <linux/acpi.h>
#include <linux/err.h>
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/property.h>
#include <linux/clk.h>
@@ -146,7 +148,6 @@ static int sdhci_f_sdh30_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, host);
sdhci_get_of_property(pdev);
host->hw_name = "f_sdh30";
host->ops = &sdhci_f_sdh30_ops;
host->irq = irq;
@@ -158,26 +159,30 @@ static int sdhci_f_sdh30_probe(struct platform_device *pdev)
goto err;
}
priv->clk_iface = devm_clk_get(&pdev->dev, "iface");
if (IS_ERR(priv->clk_iface)) {
ret = PTR_ERR(priv->clk_iface);
goto err;
if (dev_of_node(dev)) {
sdhci_get_of_property(pdev);
priv->clk_iface = devm_clk_get(&pdev->dev, "iface");
if (IS_ERR(priv->clk_iface)) {
ret = PTR_ERR(priv->clk_iface);
goto err;
}
ret = clk_prepare_enable(priv->clk_iface);
if (ret)
goto err;
priv->clk = devm_clk_get(&pdev->dev, "core");
if (IS_ERR(priv->clk)) {
ret = PTR_ERR(priv->clk);
goto err_clk;
}
ret = clk_prepare_enable(priv->clk);
if (ret)
goto err_clk;
}
ret = clk_prepare_enable(priv->clk_iface);
if (ret)
goto err;
priv->clk = devm_clk_get(&pdev->dev, "core");
if (IS_ERR(priv->clk)) {
ret = PTR_ERR(priv->clk);
goto err_clk;
}
ret = clk_prepare_enable(priv->clk);
if (ret)
goto err_clk;
/* init vendor specific regs */
ctrl = sdhci_readw(host, F_SDH30_AHB_CONFIG);
ctrl |= F_SDH30_SIN | F_SDH30_AHB_INCR_16 | F_SDH30_AHB_INCR_8 |
@@ -226,16 +231,27 @@ static int sdhci_f_sdh30_remove(struct platform_device *pdev)
return 0;
}
#ifdef CONFIG_OF
static const struct of_device_id f_sdh30_dt_ids[] = {
{ .compatible = "fujitsu,mb86s70-sdhci-3.0" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, f_sdh30_dt_ids);
#endif
#ifdef CONFIG_ACPI
static const struct acpi_device_id f_sdh30_acpi_ids[] = {
{ "SCX0002" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(acpi, f_sdh30_acpi_ids);
#endif
static struct platform_driver sdhci_f_sdh30_driver = {
.driver = {
.name = "f_sdh30",
.of_match_table = f_sdh30_dt_ids,
.of_match_table = of_match_ptr(f_sdh30_dt_ids),
.acpi_match_table = ACPI_PTR(f_sdh30_acpi_ids),
.pm = &sdhci_pltfm_pmops,
},
.probe = sdhci_f_sdh30_probe,

--- next file ---

@@ -916,7 +916,7 @@ static void sh_mmcif_start_cmd(struct sh_mmcif_host *host,
struct mmc_request *mrq)
{
struct mmc_command *cmd = mrq->cmd;
u32 opc = cmd->opcode;
u32 opc;
u32 mask = 0;
unsigned long flags;

--- next file ---

@@ -3,7 +3,7 @@
* (C) Copyright 2007-2011 Reuuimlla Technology Co., Ltd.
* (C) Copyright 2007-2011 Aaron Maoye <leafy.myeh@reuuimllatech.com>
* (C) Copyright 2013-2014 O2S GmbH <www.o2s.ch>
* (C) Copyright 2013-2014 David Lanzend�rfer <david.lanzendoerfer@o2s.ch>
* (C) Copyright 2013-2014 David Lanzendörfer <david.lanzendoerfer@o2s.ch>
* (C) Copyright 2013-2014 Hans de Goede <hdegoede@redhat.com>
* (C) Copyright 2017 Sootech SA
*
@@ -1255,6 +1255,11 @@ static int sunxi_mmc_resource_request(struct sunxi_mmc_host *host,
goto error_assert_reset;
host->irq = platform_get_irq(pdev, 0);
if (host->irq <= 0) {
ret = -EINVAL;
goto error_assert_reset;
}
return devm_request_threaded_irq(&pdev->dev, host->irq, sunxi_mmc_irq,
sunxi_mmc_handle_manual_stop, 0, "sunxi-mmc", host);
@@ -1393,5 +1398,5 @@ module_platform_driver(sunxi_mmc_driver);
MODULE_DESCRIPTION("Allwinner's SD/MMC Card Controller Driver");
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("David Lanzend<EFBFBD>rfer <david.lanzendoerfer@o2s.ch>");
MODULE_AUTHOR("David Lanzendörfer <david.lanzendoerfer@o2s.ch>");
MODULE_ALIAS("platform:sunxi-mmc");

--- next file ---

@@ -92,14 +92,19 @@ static int tmio_mmc_probe(struct platform_device *pdev)
pdata->flags |= TMIO_MMC_HAVE_HIGH_REG;
host = tmio_mmc_host_alloc(pdev);
if (!host)
host = tmio_mmc_host_alloc(pdev, pdata);
if (IS_ERR(host)) {
ret = PTR_ERR(host);
goto cell_disable;
}
/* SD control register space size is 0x200, 0x400 for bus_shift=1 */
host->bus_shift = resource_size(res) >> 10;
ret = tmio_mmc_host_probe(host, pdata, NULL);
host->mmc->f_max = pdata->hclk;
host->mmc->f_min = pdata->hclk / 512;
ret = tmio_mmc_host_probe(host);
if (ret)
goto host_free;
@@ -128,15 +133,11 @@ out:
static int tmio_mmc_remove(struct platform_device *pdev)
{
const struct mfd_cell *cell = mfd_get_cell(pdev);
struct mmc_host *mmc = platform_get_drvdata(pdev);
struct tmio_mmc_host *host = platform_get_drvdata(pdev);
if (mmc) {
struct tmio_mmc_host *host = mmc_priv(mmc);
tmio_mmc_host_remove(host);
if (cell->disable)
cell->disable(pdev);
}
tmio_mmc_host_remove(host);
if (cell->disable)
cell->disable(pdev);
return 0;
}

--- next file ---

@@ -112,12 +112,6 @@
struct tmio_mmc_data;
struct tmio_mmc_host;
struct tmio_mmc_dma {
enum dma_slave_buswidth dma_buswidth;
bool (*filter)(struct dma_chan *chan, void *arg);
void (*enable)(struct tmio_mmc_host *host, bool enable);
};
struct tmio_mmc_dma_ops {
void (*start)(struct tmio_mmc_host *host, struct mmc_data *data);
void (*enable)(struct tmio_mmc_host *host, bool enable);
@@ -134,6 +128,7 @@ struct tmio_mmc_host {
struct mmc_request *mrq;
struct mmc_data *data;
struct mmc_host *mmc;
struct mmc_host_ops ops;
/* Callbacks for clock / power control */
void (*set_pwr)(struct platform_device *host, int state);
@@ -144,18 +139,15 @@ struct tmio_mmc_host {
struct scatterlist *sg_orig;
unsigned int sg_len;
unsigned int sg_off;
unsigned long bus_shift;
unsigned int bus_shift;
struct platform_device *pdev;
struct tmio_mmc_data *pdata;
struct tmio_mmc_dma *dma;
/* DMA support */
bool force_pio;
struct dma_chan *chan_rx;
struct dma_chan *chan_tx;
struct completion dma_dataend;
struct tasklet_struct dma_complete;
struct tasklet_struct dma_issue;
struct scatterlist bounce_sg;
u8 *bounce_buf;
@@ -174,7 +166,6 @@ struct tmio_mmc_host {
struct mutex ios_lock; /* protect set_ios() context */
bool native_hotplug;
bool sdio_irq_enabled;
u32 scc_tappos;
/* Mandatory callback */
int (*clk_enable)(struct tmio_mmc_host *host);
@@ -185,9 +176,6 @@ struct tmio_mmc_host {
void (*clk_disable)(struct tmio_mmc_host *host);
int (*multi_io_quirk)(struct mmc_card *card,
unsigned int direction, int blk_size);
int (*card_busy)(struct mmc_host *mmc);
int (*start_signal_voltage_switch)(struct mmc_host *mmc,
struct mmc_ios *ios);
int (*write16_hook)(struct tmio_mmc_host *host, int addr);
void (*hw_reset)(struct tmio_mmc_host *host);
void (*prepare_tuning)(struct tmio_mmc_host *host, unsigned long tap);
@@ -207,11 +195,10 @@ struct tmio_mmc_host {
const struct tmio_mmc_dma_ops *dma_ops;
};
struct tmio_mmc_host *tmio_mmc_host_alloc(struct platform_device *pdev);
struct tmio_mmc_host *tmio_mmc_host_alloc(struct platform_device *pdev,
struct tmio_mmc_data *pdata);
void tmio_mmc_host_free(struct tmio_mmc_host *host);
int tmio_mmc_host_probe(struct tmio_mmc_host *host,
struct tmio_mmc_data *pdata,
const struct tmio_mmc_dma_ops *dma_ops);
int tmio_mmc_host_probe(struct tmio_mmc_host *host);
void tmio_mmc_host_remove(struct tmio_mmc_host *host);
void tmio_mmc_do_data_irq(struct tmio_mmc_host *host);
@@ -240,26 +227,26 @@ int tmio_mmc_host_runtime_resume(struct device *dev);
static inline u16 sd_ctrl_read16(struct tmio_mmc_host *host, int addr)
{
return readw(host->ctl + (addr << host->bus_shift));
return ioread16(host->ctl + (addr << host->bus_shift));
}
static inline void sd_ctrl_read16_rep(struct tmio_mmc_host *host, int addr,
u16 *buf, int count)
{
readsw(host->ctl + (addr << host->bus_shift), buf, count);
ioread16_rep(host->ctl + (addr << host->bus_shift), buf, count);
}
static inline u32 sd_ctrl_read16_and_16_as_32(struct tmio_mmc_host *host,
int addr)
{
return readw(host->ctl + (addr << host->bus_shift)) |
readw(host->ctl + ((addr + 2) << host->bus_shift)) << 16;
return ioread16(host->ctl + (addr << host->bus_shift)) |
ioread16(host->ctl + ((addr + 2) << host->bus_shift)) << 16;
}
static inline void sd_ctrl_read32_rep(struct tmio_mmc_host *host, int addr,
u32 *buf, int count)
{
readsl(host->ctl + (addr << host->bus_shift), buf, count);
ioread32_rep(host->ctl + (addr << host->bus_shift), buf, count);
}
static inline void sd_ctrl_write16(struct tmio_mmc_host *host, int addr,
@@ -270,26 +257,26 @@ static inline void sd_ctrl_write16(struct tmio_mmc_host *host, int addr,
*/
if (host->write16_hook && host->write16_hook(host, addr))
return;
writew(val, host->ctl + (addr << host->bus_shift));
iowrite16(val, host->ctl + (addr << host->bus_shift));
}
static inline void sd_ctrl_write16_rep(struct tmio_mmc_host *host, int addr,
u16 *buf, int count)
{
writesw(host->ctl + (addr << host->bus_shift), buf, count);
iowrite16_rep(host->ctl + (addr << host->bus_shift), buf, count);
}
static inline void sd_ctrl_write32_as_16_and_16(struct tmio_mmc_host *host,
int addr, u32 val)
{
writew(val & 0xffff, host->ctl + (addr << host->bus_shift));
writew(val >> 16, host->ctl + ((addr + 2) << host->bus_shift));
iowrite16(val & 0xffff, host->ctl + (addr << host->bus_shift));
iowrite16(val >> 16, host->ctl + ((addr + 2) << host->bus_shift));
}
static inline void sd_ctrl_write32_rep(struct tmio_mmc_host *host, int addr,
const u32 *buf, int count)
{
writesl(host->ctl + (addr << host->bus_shift), buf, count);
iowrite32_rep(host->ctl + (addr << host->bus_shift), buf, count);
}
#endif
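
A note on the readw() to ioread16() conversion in the accessors above: the ioread*()/iowrite*() family accepts both ioremap()ed MMIO pointers and the cookies returned by ioport_map()/pci_iomap(), so the accessors stop assuming how host->ctl was mapped. The shape of such an accessor, as a self-contained hypothetical helper:

#include <linux/io.h>
#include <linux/types.h>

/* Works for both MMIO mappings and ioport_map() cookies. */
static inline u16 example_read16(void __iomem *base, int addr, int shift)
{
	return ioread16(base + (addr << shift));
}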

--- next file ---

@@ -806,7 +806,7 @@ static int tmio_mmc_execute_tuning(struct mmc_host *mmc, u32 opcode)
if (ret == 0)
set_bit(i, host->taps);
mdelay(1);
usleep_range(1000, 1200);
}
ret = host->select_tuning(host);
@@ -926,20 +926,6 @@ static void tmio_mmc_done_work(struct work_struct *work)
tmio_mmc_finish_request(host);
}
static int tmio_mmc_clk_enable(struct tmio_mmc_host *host)
{
if (!host->clk_enable)
return -ENOTSUPP;
return host->clk_enable(host);
}
static void tmio_mmc_clk_disable(struct tmio_mmc_host *host)
{
if (host->clk_disable)
host->clk_disable(host);
}
static void tmio_mmc_power_on(struct tmio_mmc_host *host, unsigned short vdd)
{
struct mmc_host *mmc = host->mmc;
@@ -958,7 +944,7 @@ static void tmio_mmc_power_on(struct tmio_mmc_host *host, unsigned short vdd)
* 100us were not enough. Is this the same 140us delay, as in
* tmio_mmc_set_ios()?
*/
udelay(200);
usleep_range(200, 300);
}
/*
* It seems, VccQ should be switched on after Vcc, this is also what the
@ -966,7 +952,7 @@ static void tmio_mmc_power_on(struct tmio_mmc_host *host, unsigned short vdd)
*/
if (!IS_ERR(mmc->supply.vqmmc) && !ret) {
ret = regulator_enable(mmc->supply.vqmmc);
udelay(200);
usleep_range(200, 300);
}
if (ret < 0)
@@ -1059,7 +1045,7 @@ static void tmio_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
}
/* Let things settle. delay taken from winCE driver */
udelay(140);
usleep_range(140, 200);
if (PTR_ERR(host->mrq) == -EINTR)
dev_dbg(&host->pdev->dev,
"%s.%d: IOS interrupted: clk %u, mode %u",
@@ -1076,15 +1062,9 @@ static int tmio_mmc_get_ro(struct mmc_host *mmc)
{
struct tmio_mmc_host *host = mmc_priv(mmc);
struct tmio_mmc_data *pdata = host->pdata;
int ret = mmc_gpio_get_ro(mmc);
if (ret >= 0)
return ret;
ret = !((pdata->flags & TMIO_MMC_WRPROTECT_DISABLE) ||
(sd_ctrl_read16_and_16_as_32(host, CTL_STATUS) & TMIO_STAT_WRPROTECT));
return ret;
return !((pdata->flags & TMIO_MMC_WRPROTECT_DISABLE) ||
(sd_ctrl_read16_and_16_as_32(host, CTL_STATUS) & TMIO_STAT_WRPROTECT));
}
static int tmio_multi_io_quirk(struct mmc_card *card,
@@ -1098,7 +1078,7 @@ static int tmio_multi_io_quirk(struct mmc_card *card,
return blk_size;
}
static struct mmc_host_ops tmio_mmc_ops = {
static const struct mmc_host_ops tmio_mmc_ops = {
.request = tmio_mmc_request,
.set_ios = tmio_mmc_set_ios,
.get_ro = tmio_mmc_get_ro,
@@ -1145,19 +1125,45 @@ static void tmio_mmc_of_parse(struct platform_device *pdev,
pdata->flags |= TMIO_MMC_WRPROTECT_DISABLE;
}
struct tmio_mmc_host*
tmio_mmc_host_alloc(struct platform_device *pdev)
struct tmio_mmc_host *tmio_mmc_host_alloc(struct platform_device *pdev,
struct tmio_mmc_data *pdata)
{
struct tmio_mmc_host *host;
struct mmc_host *mmc;
struct resource *res;
void __iomem *ctl;
int ret;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
ctl = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(ctl))
return ERR_CAST(ctl);
mmc = mmc_alloc_host(sizeof(struct tmio_mmc_host), &pdev->dev);
if (!mmc)
return NULL;
return ERR_PTR(-ENOMEM);
host = mmc_priv(mmc);
host->ctl = ctl;
host->mmc = mmc;
host->pdev = pdev;
host->pdata = pdata;
host->ops = tmio_mmc_ops;
mmc->ops = &host->ops;
ret = mmc_of_parse(host->mmc);
if (ret) {
host = ERR_PTR(ret);
goto free;
}
tmio_mmc_of_parse(pdev, pdata);
platform_set_drvdata(pdev, host);
return host;
free:
mmc_free_host(mmc);
return host;
}
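
The tmio_mmc_host_alloc() rework above replaces patching the shared static tmio_mmc_ops with a per-host copy (host->ops = tmio_mmc_ops; mmc->ops = &host->ops;), so two controllers that need different card_busy or voltage-switch hooks no longer overwrite each other's callbacks. The pattern in isolation, with hypothetical example_* types and handlers assumed declared elsewhere:

struct example_host {
	struct mmc_host *mmc;
	struct mmc_host_ops ops;		/* per-instance copy */
};

static const struct mmc_host_ops example_template_ops = {
	.request = example_request,		/* hypothetical handlers */
	.set_ios = example_set_ios,
};

static void example_setup_ops(struct example_host *host, bool has_ro_gpio)
{
	host->ops = example_template_ops;	/* struct assignment */
	if (has_ro_gpio)
		host->ops.get_ro = mmc_gpio_get_ro; /* per-device override */
	host->mmc->ops = &host->ops;
}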
@@ -1169,32 +1175,24 @@ void tmio_mmc_host_free(struct tmio_mmc_host *host)
}
EXPORT_SYMBOL_GPL(tmio_mmc_host_free);
int tmio_mmc_host_probe(struct tmio_mmc_host *_host,
struct tmio_mmc_data *pdata,
const struct tmio_mmc_dma_ops *dma_ops)
int tmio_mmc_host_probe(struct tmio_mmc_host *_host)
{
struct platform_device *pdev = _host->pdev;
struct tmio_mmc_data *pdata = _host->pdata;
struct mmc_host *mmc = _host->mmc;
struct resource *res_ctl;
int ret;
u32 irq_mask = TMIO_MASK_CMD;
tmio_mmc_of_parse(pdev, pdata);
/*
* Check the sanity of mmc->f_min to prevent tmio_mmc_set_clock() from
* looping forever...
*/
if (mmc->f_min == 0)
return -EINVAL;
if (!(pdata->flags & TMIO_MMC_HAS_IDLE_WAIT))
_host->write16_hook = NULL;
res_ctl = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res_ctl)
return -EINVAL;
ret = mmc_of_parse(mmc);
if (ret < 0)
return ret;
_host->pdata = pdata;
platform_set_drvdata(pdev, mmc);
_host->set_pwr = pdata->set_pwr;
_host->set_clk_div = pdata->set_clk_div;
@@ -1202,15 +1200,11 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host,
if (ret < 0)
return ret;
_host->ctl = devm_ioremap(&pdev->dev,
res_ctl->start, resource_size(res_ctl));
if (!_host->ctl)
return -ENOMEM;
tmio_mmc_ops.card_busy = _host->card_busy;
tmio_mmc_ops.start_signal_voltage_switch =
_host->start_signal_voltage_switch;
mmc->ops = &tmio_mmc_ops;
if (pdata->flags & TMIO_MMC_USE_GPIO_CD) {
ret = mmc_gpio_request_cd(mmc, pdata->cd_gpio, 0);
if (ret)
return ret;
}
mmc->caps |= MMC_CAP_4_BIT_DATA | pdata->capabilities;
mmc->caps2 |= pdata->capabilities2;
@@ -1233,7 +1227,10 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host,
}
mmc->max_seg_size = mmc->max_req_size;
_host->native_hotplug = !(pdata->flags & TMIO_MMC_USE_GPIO_CD ||
if (mmc_can_gpio_ro(mmc))
_host->ops.get_ro = mmc_gpio_get_ro;
_host->native_hotplug = !(mmc_can_gpio_cd(mmc) ||
mmc->caps & MMC_CAP_NEEDS_POLL ||
!mmc_card_is_removable(mmc));
@@ -1246,18 +1243,6 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host,
if (pdata->flags & TMIO_MMC_MIN_RCAR2)
_host->native_hotplug = true;
if (tmio_mmc_clk_enable(_host) < 0) {
mmc->f_max = pdata->hclk;
mmc->f_min = mmc->f_max / 512;
}
/*
* Check the sanity of mmc->f_min to prevent tmio_mmc_set_clock() from
* looping forever...
*/
if (mmc->f_min == 0)
return -EINVAL;
/*
* While using internal tmio hardware logic for card detection, we need
* to ensure it stays powered for it to work.
@@ -1293,7 +1278,6 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host,
INIT_WORK(&_host->done, tmio_mmc_done_work);
/* See if we also get DMA */
_host->dma_ops = dma_ops;
tmio_mmc_request_dma(_host, pdata);
pm_runtime_set_active(&pdev->dev);
@@ -1307,14 +1291,6 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host,
dev_pm_qos_expose_latency_limit(&pdev->dev, 100);
if (pdata->flags & TMIO_MMC_USE_GPIO_CD) {
ret = mmc_gpio_request_cd(mmc, pdata->cd_gpio, 0);
if (ret)
goto remove_host;
mmc_gpiod_request_cd_irq(mmc);
}
return 0;
remove_host:
@@ -1343,16 +1319,27 @@ void tmio_mmc_host_remove(struct tmio_mmc_host *host)
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
tmio_mmc_clk_disable(host);
}
EXPORT_SYMBOL_GPL(tmio_mmc_host_remove);
#ifdef CONFIG_PM
static int tmio_mmc_clk_enable(struct tmio_mmc_host *host)
{
if (!host->clk_enable)
return -ENOTSUPP;
return host->clk_enable(host);
}
static void tmio_mmc_clk_disable(struct tmio_mmc_host *host)
{
if (host->clk_disable)
host->clk_disable(host);
}
int tmio_mmc_host_runtime_suspend(struct device *dev)
{
struct mmc_host *mmc = dev_get_drvdata(dev);
struct tmio_mmc_host *host = mmc_priv(mmc);
struct tmio_mmc_host *host = dev_get_drvdata(dev);
tmio_mmc_disable_mmc_irqs(host, TMIO_MASK_ALL);
@@ -1372,8 +1359,7 @@ static bool tmio_mmc_can_retune(struct tmio_mmc_host *host)
int tmio_mmc_host_runtime_resume(struct device *dev)
{
struct mmc_host *mmc = dev_get_drvdata(dev);
struct tmio_mmc_host *host = mmc_priv(mmc);
struct tmio_mmc_host *host = dev_get_drvdata(dev);
tmio_mmc_reset(host);
tmio_mmc_clk_enable(host);

--- next file ---

@@ -324,6 +324,7 @@ struct mmc_host {
#define MMC_CAP_DRIVER_TYPE_A (1 << 23) /* Host supports Driver Type A */
#define MMC_CAP_DRIVER_TYPE_C (1 << 24) /* Host supports Driver Type C */
#define MMC_CAP_DRIVER_TYPE_D (1 << 25) /* Host supports Driver Type D */
#define MMC_CAP_DONE_COMPLETE (1 << 27) /* RW reqs can be completed within mmc_request_done() */
#define MMC_CAP_CD_WAKE (1 << 28) /* Enable card detect wake */
#define MMC_CAP_CMD_DURING_TFR (1 << 29) /* Commands during data transfer */
#define MMC_CAP_CMD23 (1 << 30) /* CMD23 supported. */
@@ -380,6 +381,7 @@ struct mmc_host {
unsigned int doing_retune:1; /* re-tuning in progress */
unsigned int retune_now:1; /* do re-tuning at next req */
unsigned int retune_paused:1; /* re-tuning is temporarily disabled */
unsigned int use_blk_mq:1; /* use blk-mq */
int rescan_disable; /* disable card detection */
int rescan_entered; /* used with nonremovable devices */
@@ -422,9 +424,6 @@ struct mmc_host {
struct dentry *debugfs_root;
struct mmc_async_req *areq; /* active async req */
struct mmc_context_info context_info; /* async synchronization info */
/* Ongoing data transfer that allows commands during transfer */
struct mmc_request *ongoing_mrq;

--- next file ---

@@ -33,5 +33,6 @@ void mmc_gpio_set_cd_isr(struct mmc_host *host,
irqreturn_t (*isr)(int irq, void *dev_id));
void mmc_gpiod_request_cd_irq(struct mmc_host *host);
bool mmc_can_gpio_cd(struct mmc_host *host);
bool mmc_can_gpio_ro(struct mmc_host *host);
#endif
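
mmc_can_gpio_ro(), like the existing mmc_can_gpio_cd(), reports whether a write-protect GPIO descriptor was actually bound to the slot; that is what lets the tmio probe above choose mmc_gpio_get_ro over its register-based fallback. A hypothetical probe excerpt, where the "wp" con_id, the error handling, and example_get_ro are assumptions:

static int example_setup_wp(struct mmc_host *mmc, struct mmc_host_ops *ops)
{
	int ret;

	/* Bind an optional "wp" GPIO to the slot; absence is not fatal. */
	ret = mmc_gpiod_request_ro(mmc, "wp", 0, false, 0, NULL);
	if (ret && ret != -ENOENT)
		return ret;

	if (mmc_can_gpio_ro(mmc))
		ops->get_ro = mmc_gpio_get_ro;	/* WP GPIO was found */
	/* else: keep the controller's register-based get_ro */

	return 0;
}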