During the cache sync test we verify that values we expect to have been
written only to the cache do not appear in the hardware. This works most
of the time, but since we randomly generate both the original and new
values there is a low probability that these values may actually be the
same. Wrap get_random_bytes() to ensure that the values are different;
there are other tests which should have similar verification that we
actually changed something.
While we're at it, refactor the test to use three changed values rather
than attempting to use one of them twice, which just complicates checking
that our new values are actually new.
We use random generation to try to avoid data dependencies in the tests.
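As an illustration only (the helper name and signature here are hypothetical,
not the actual test code), such a wrapper might look like:
  /* Sketch: keep generating until the value differs from the old one. */
  static void get_changed_random_bytes(void *buf, size_t len, const void *old)
  {
          do {
                  get_random_bytes(buf, len);
          } while (memcmp(buf, old, len) == 0);
  }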
Reported-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://msgid.link/r/20240211-regmap-kunit-random-change-v3-1-e387a9ea4468@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
The raw noinc write test places a known value in the register following
the noinc register to verify that it is not disturbed by the noinc
write. This test ensures this value is distinct by adding 100 to the
second element of the noinc write data.
The regmap registers are 16-bit, while the test value is stored in an
unsigned int. Therefore, adding 100 may cause the register to wrap while
the test value does not, causing the test to fail. This patch fixes this
by changing val_test and val_last from unsigned int to u16.
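Purely as an illustration of the wrap mismatch (values chosen arbitrarily,
not taken from the test):
  unsigned int val_test = 0xffc0 + 100;  /* stays 0x10024 in a 32-bit unsigned int */
  u16 reg_val = 0xffc0 + 100;            /* wraps to 0x0024 in a 16-bit register   */
  /* val_test != reg_val, so the comparison fails even though the hardware
   * behaved correctly; storing both sides as u16 makes them wrap identically. */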
Signed-off-by: Ben Wolsieffer <ben.wolsieffer@hefring.com>
Reported-by: Guenter Roeck <linux@roeck-us.net>
Closes: https://lore.kernel.org/linux-kernel/745d3a11-15bc-48b6-84c8-c8761c943bed@roeck-us.net/T/
Tested-by: Guenter Roeck <linux@roeck-us.net>
Link: https://lore.kernel.org/r/20240206151004.1636761-2-ben.wolsieffer@hefring.com
Signed-off-by: Mark Brown <broonie@kernel.org>
This was a very quiet release for regmap; we added kunit test coverage
for a noinc fix that was merged during v6.7 and a couple of other
trivial cleanups.
-----BEGIN PGP SIGNATURE-----
iQEzBAABCgAdFiEEreZoqmdXGLWf4p/qJNaLcl1Uh9AFAmWbHBMACgkQJNaLcl1U
h9BR8Af/RS9TYUkKvbBZGZSZIddhyENOOQ8EDGuxIPk+P2/gi1KaZeyTkxnUNUB/
noVNmHhcgfc0hATF/cBNuo7ASxjg58EadzR7FpJYyfCJ4rFFksEr3lKnuJqzU//8
nV91OBJf1iv2Y+UV1OdMTvcyo6Rew2K/5vcxMzRZ20RueIRaiMgBpZUEsgTgU6ol
Zc2FJefdw9VrMfAYcdvrle14xzKfQdUJNkt76pcBG+P6Sus/5aUKrZskzOh6rOso
MpQYvdtTfe6AxojWWZjOdzeeVJp1PMv+hLTNbIkMjJTr311FxTi9kqwbKdU2MVQL
wxaqNyIl/ucYc6BYYTlmErlN0QWDHg==
=QYK1
-----END PGP SIGNATURE-----
Merge tag 'regmap-v6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap
Pull regmap updates from Mark Brown:
"This was a very quiet release for regmap, we added kunit test coverage
for a noinc fix that was merged during v6.7 and a couple of other
trivial cleanups"
* tag 'regmap-v6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
regmap: fix kcalloc() arguments order
regmap: fix regmap_noinc_write() description
regmap: kunit: add noinc write test
regmap: ram: support noinc semantics
Many singleton patches against the MM code. The patch series which
are included in this merge do the following:
- Peng Zhang has done some mapletree maintenance work in the
series
"maple_tree: add mt_free_one() and mt_attr() helpers"
"Some cleanups of maple tree"
- In the series "mm: use memmap_on_memory semantics for dax/kmem"
Vishal Verma has altered the interworking between memory-hotplug
and dax/kmem so that newly added 'device memory' can more easily
have its memmap placed within that newly added memory.
- Matthew Wilcox continues folio-related work (including a few
fixes) in the patch series
"Add folio_zero_tail() and folio_fill_tail()"
"Make folio_start_writeback return void"
"Fix fault handler's handling of poisoned tail pages"
"Convert aops->error_remove_page to ->error_remove_folio"
"Finish two folio conversions"
"More swap folio conversions"
- Kefeng Wang has also contributed folio-related work in the series
"mm: cleanup and use more folio in page fault"
- Jim Cromie has improved the kmemleak reporting output in the
series "tweak kmemleak report format".
- In the series "stackdepot: allow evicting stack traces" Andrey
Konovalov permits clients (in this case KASAN) to cause
eviction of no longer needed stack traces.
- Charan Teja Kalla has fixed some accounting issues in the page
allocator's atomic reserve calculations in the series "mm:
page_alloc: fixes for high atomic reserve caluculations".
- Dmitry Rokosov has added to the samples/ directory some sample
code for a userspace memcg event listener application. See the
series "samples: introduce cgroup events listeners".
- Some mapletree maintenance work from Liam Howlett in the series
"maple_tree: iterator state changes".
- Nhat Pham has improved zswap's approach to writeback in the
series "workload-specific and memory pressure-driven zswap
writeback".
- DAMON/DAMOS feature and maintenance work from SeongJae Park in
the series
"mm/damon: let users feed and tame/auto-tune DAMOS"
"selftests/damon: add Python-written DAMON functionality tests"
"mm/damon: misc updates for 6.8"
- Yosry Ahmed has improved memcg's stats flushing in the series
"mm: memcg: subtree stats flushing and thresholds".
- In the series "Multi-size THP for anonymous memory" Ryan Roberts
has added a runtime opt-in feature to transparent hugepages which
improves performance by allocating larger chunks of memory during
anonymous page faults.
- Matthew Wilcox has also contributed some cleanup and maintenance
work against the buffer_head code in the series "More buffer_head
cleanups".
- Suren Baghdasaryan has done work on Andrea Arcangeli's series
"userfaultfd move option". UFFDIO_MOVE permits userspace heap
compaction algorithms to move userspace's pages around rather than
UFFDIO_COPY's alloc/copy/free.
- Stefan Roesch has developed a "KSM Advisor", in the series
"mm/ksm: Add ksm advisor". This is a governor which tunes KSM's
scanning aggressiveness in response to userspace's current needs.
- Chengming Zhou has optimized zswap's temporary working memory
use in the series "mm/zswap: dstmem reuse optimizations and
cleanups".
- Matthew Wilcox has performed some maintenance work on the
writeback code, both code and within filesystems. The series is
"Clean up the writeback paths".
- Andrey Konovalov has optimized KASAN's handling of alloc and
free stack traces for secondary-level allocators, in the series
"kasan: save mempool stack traces".
- Andrey also performed some KASAN maintenance work in the series
"kasan: assorted clean-ups".
- David Hildenbrand has gone to town on the rmap code. Cleanups,
more pte batching, folio conversions and more. See the series
"mm/rmap: interface overhaul".
- Kinsey Ho has contributed some maintenance work on the MGLRU
code in the series "mm/mglru: Kconfig cleanup".
- Matthew Wilcox has contributed lruvec page accounting code
cleanups in the series "Remove some lruvec page accounting
functions".
-----BEGIN PGP SIGNATURE-----
iHUEABYIAB0WIQTTMBEPP41GrTpTJgfdBJ7gKXxAjgUCZZyF2wAKCRDdBJ7gKXxA
jjWjAP42LHvGSjp5M+Rs2rKFL0daBQsrlvy6/jCHUequSdWjSgEAmOx7bc5fbF27
Oa8+DxGM9C+fwqZ/7YxU2w/WuUmLPgU=
=0NHs
-----END PGP SIGNATURE-----
Merge tag 'mm-stable-2024-01-08-15-31' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull MM updates from Andrew Morton:
"Many singleton patches against the MM code. The patch series which are
included in this merge do the following:
- Peng Zhang has done some mapletree maintenance work in the series
'maple_tree: add mt_free_one() and mt_attr() helpers'
'Some cleanups of maple tree'
- In the series 'mm: use memmap_on_memory semantics for dax/kmem'
Vishal Verma has altered the interworking between memory-hotplug
and dax/kmem so that newly added 'device memory' can more easily
have its memmap placed within that newly added memory.
- Matthew Wilcox continues folio-related work (including a few fixes)
in the patch series
'Add folio_zero_tail() and folio_fill_tail()'
'Make folio_start_writeback return void'
'Fix fault handler's handling of poisoned tail pages'
'Convert aops->error_remove_page to ->error_remove_folio'
'Finish two folio conversions'
'More swap folio conversions'
- Kefeng Wang has also contributed folio-related work in the series
'mm: cleanup and use more folio in page fault'
- Jim Cromie has improved the kmemleak reporting output in the series
'tweak kmemleak report format'.
- In the series 'stackdepot: allow evicting stack traces' Andrey
Konovalov permits clients (in this case KASAN) to cause eviction
of no longer needed stack traces.
- Charan Teja Kalla has fixed some accounting issues in the page
allocator's atomic reserve calculations in the series 'mm:
page_alloc: fixes for high atomic reserve caluculations'.
- Dmitry Rokosov has added to the samples/ directory some sample code
for a userspace memcg event listener application. See the series
'samples: introduce cgroup events listeners'.
- Some mapletree maintenance work from Liam Howlett in the series
'maple_tree: iterator state changes'.
- Nhat Pham has improved zswap's approach to writeback in the series
'workload-specific and memory pressure-driven zswap writeback'.
- DAMON/DAMOS feature and maintenance work from SeongJae Park in the
series
'mm/damon: let users feed and tame/auto-tune DAMOS'
'selftests/damon: add Python-written DAMON functionality tests'
'mm/damon: misc updates for 6.8'
- Yosry Ahmed has improved memcg's stats flushing in the series 'mm:
memcg: subtree stats flushing and thresholds'.
- In the series 'Multi-size THP for anonymous memory' Ryan Roberts
has added a runtime opt-in feature to transparent hugepages which
improves performance by allocating larger chunks of memory during
anonymous page faults.
- Matthew Wilcox has also contributed some cleanup and maintenance
work against the buffer_head code in the series 'More buffer_head
cleanups'.
- Suren Baghdasaryan has done work on Andrea Arcangeli's series
'userfaultfd move option'. UFFDIO_MOVE permits userspace heap
compaction algorithms to move userspace's pages around rather than
UFFDIO_COPY's alloc/copy/free.
- Stefan Roesch has developed a 'KSM Advisor', in the series 'mm/ksm:
Add ksm advisor'. This is a governor which tunes KSM's scanning
aggressiveness in response to userspace's current needs.
- Chengming Zhou has optimized zswap's temporary working memory use
in the series 'mm/zswap: dstmem reuse optimizations and cleanups'.
- Matthew Wilcox has performed some maintenance work on the writeback
code, both code and within filesystems. The series is 'Clean up the
writeback paths'.
- Andrey Konovalov has optimized KASAN's handling of alloc and free
stack traces for secondary-level allocators, in the series 'kasan:
save mempool stack traces'.
- Andrey also performed some KASAN maintenance work in the series
'kasan: assorted clean-ups'.
- David Hildenbrand has gone to town on the rmap code. Cleanups, more
pte batching, folio conversions and more. See the series 'mm/rmap:
interface overhaul'.
- Kinsey Ho has contributed some maintenance work on the MGLRU code
in the series 'mm/mglru: Kconfig cleanup'.
- Matthew Wilcox has contributed lruvec page accounting code cleanups
in the series 'Remove some lruvec page accounting functions'"
* tag 'mm-stable-2024-01-08-15-31' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (361 commits)
mm, treewide: rename MAX_ORDER to MAX_PAGE_ORDER
mm, treewide: introduce NR_PAGE_ORDERS
selftests/mm: add separate UFFDIO_MOVE test for PMD splitting
selftests/mm: skip test if application doesn't has root privileges
selftests/mm: conform test to TAP format output
selftests: mm: hugepage-mmap: conform to TAP format output
selftests/mm: gup_test: conform test to TAP format output
mm/selftests: hugepage-mremap: conform test to TAP format output
mm/vmstat: move pgdemote_* out of CONFIG_NUMA_BALANCING
mm: zsmalloc: return -ENOSPC rather than -EINVAL in zs_malloc while size is too large
mm/memcontrol: remove __mod_lruvec_page_state()
mm/khugepaged: use a folio more in collapse_file()
slub: use a folio in __kmalloc_large_node
slub: use folio APIs in free_large_kmalloc()
slub: use alloc_pages_node() in alloc_slab_page()
mm: remove inc/dec lruvec page state functions
mm: ratelimit stat flush from workingset shrinker
kasan: stop leaking stack trace handles
mm/mglru: remove CONFIG_TRANSPARENT_HUGEPAGE
mm/mglru: add dummy pmd_dirty()
...
commit 23baf831a3 ("mm, treewide: redefine MAX_ORDER sanely") has
changed the definition of MAX_ORDER to be inclusive. This has caused
issues with code that was not yet upstream and depended on the previous
definition.
To draw attention to the altered meaning of the define, rename MAX_ORDER
to MAX_PAGE_ORDER.
Link: https://lkml.kernel.org/r/20231228144704.14033-2-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
When compiling with gcc version 14.0.0 20231220 (experimental)
and W=1, I've noticed four similar warnings like:
drivers/base/regmap/regmap-ram.c: In function '__regmap_init_ram':
drivers/base/regmap/regmap-ram.c:68:37: warning: 'kcalloc' sizes specified with
'sizeof' in the earlier argument and not in the later argument [-Wcalloc-transposed-args]
68 | data->read = kcalloc(sizeof(bool), config->max_register + 1,
| ^~~~
Since the 'n' and 'size' arguments of 'kcalloc()' are multiplied to
calculate the final size, their actual order doesn't affect the
result and so this is not a bug. But it's still worth fixing.
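For the line quoted above, the fix is simply to pass the element count first
and the element size second (the flags argument is elided in the warning and
shown here as GFP_KERNEL only for illustration):
  data->read = kcalloc(config->max_register + 1, sizeof(bool),
                       GFP_KERNEL);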
Signed-off-by: Dmitry Antipov <dmantipov@yandex.ru>
Link: https://msgid.link/r/20231220175829.533700-1-dmantipov@yandex.ru
Signed-off-by: Mark Brown <broonie@kernel.org>
Since commit 0ec7731655 ("regmap: Ensure range selector registers
are updated after cache sync") opening pcm512x based soundcards fails
with EINVAL and dmesg shows sync cache and pm_runtime_get errors:
[ 228.794676] pcm512x 1-004c: Failed to sync cache: -22
[ 228.794740] pcm512x 1-004c: ASoC: error at snd_soc_pcm_component_pm_runtime_get on pcm512x.1-004c: -22
This is caused by the cache check result leaking out into the
regcache_sync return value.
Fix this by making the check local-only: as the comment above the
regcache_read call states, a non-zero return value means there's
nothing to do, so the return value should not be altered.
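A hedged sketch of the pattern (not the exact regcache code): the result of
the cache read is used only to decide whether to skip, and is never assigned
to the value that regcache_sync() returns.
  /* Sketch only: a non-zero result here just means there is nothing cached
   * to restore for this selector register, so skip it without touching the
   * error code that the sync will return. */
  if (regcache_read(map, range->selector_reg, &val) != 0)
          continue;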
Fixes: 0ec7731655 ("regmap: Ensure range selector registers are updated after cache sync")
Cc: stable@vger.kernel.org
Signed-off-by: Matthias Reichl <hias@horus.com>
Link: https://lore.kernel.org/r/20231203222216.96547-1-hias@horus.com
Signed-off-by: Mark Brown <broonie@kernel.org>
Add a test for writing to a noinc register, which verifies that the
write does not touch adjacent registers. This test succeeds with [1]
applied and fails without it.
[1] 984a4afdc8 ("regmap: prevent noinc writes from clobbering cache")
Signed-off-by: Ben Wolsieffer <ben.wolsieffer@hefring.com>
Link: https://lore.kernel.org/r/20231102203039.3069305-2-ben.wolsieffer@hefring.com
Signed-off-by: Mark Brown <broonie@kernel.org>
Support noinc semantics in RAM backed regmaps, for testing purposes. Add
a new callback that selects registers which should have noinc behavior.
Bulk writes to a noinc register will cause the last value in the buffer
to be assigned to the register, while bulk reads will copy the same
value repeatedly into the buffer.
This patch only adds support to regmap-raw-ram, since regmap-ram does
not support bulk operations.
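A rough sketch of the semantics described above, assuming a hypothetical
reg_noinc() predicate; this is illustrative rather than the actual
regmap-raw-ram code:
  if (reg_noinc(reg)) {                           /* hypothetical callback */
          if (write)
                  ram[reg] = buf[count - 1];      /* last value in the buffer wins */
          else
                  for (i = 0; i < count; i++)
                          buf[i] = ram[reg];      /* same value copied repeatedly */
  } else {
          for (i = 0; i < count; i++) {           /* normal incrementing access */
                  if (write)
                          ram[reg + i] = buf[i];
                  else
                          buf[i] = ram[reg + i];
          }
  }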
Signed-off-by: Ben Wolsieffer <ben.wolsieffer@hefring.com>
Link: https://lore.kernel.org/r/20231102203039.3069305-1-ben.wolsieffer@hefring.com
Signed-off-by: Mark Brown <broonie@kernel.org>
One fix here, for an interaction between noinc registers and caches - if
a device uses noinc registers (which is rare) then we could corrupt
registers after the noinc register in the cache.
-----BEGIN PGP SIGNATURE-----
iQEzBAABCgAdFiEEreZoqmdXGLWf4p/qJNaLcl1Uh9AFAmVKGw8ACgkQJNaLcl1U
h9C8Bwf/U2a00UtJqvg+deoXHQ/7QY8w093DnHxdTLyUcDvwffbfTMrPPOmPtRLh
1OYTtPPAgcF2PjyMaazHON2Y3a1/jPLEyjfRI3UmMWoVCVObUzh30ARASdRlvZj0
1bnj88v32W2hFu/TuUPVVii3g2qH3XccEvX/JaNm0yG9lBHfugPGbVOI2S+MzWL9
iFq+nYyqi6RzD7gq99ZaoA6UdacbCyPWgvHKxoZqhknTlX/KQQDdBlq2alhmbOoU
lW3t9dwtZlcoskSTmKDI7Ukz0EnRvK1fcrkhZ/3XrQl349eLmtqcG7rM/ZMoORqL
wx8eoxNCyCDnjxV+QfEL6SSQEX+8Mg==
=M81g
-----END PGP SIGNATURE-----
Merge tag 'regmap-fix-v6.7-merge-window' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap
Pull regmap fix from Mark Brown:
"One fix here, for an interaction between noinc registers and caches.
If a device uses noinc registers (which is rare) then we could corrupt
registers after the noinc register in the cache"
* tag 'regmap-fix-v6.7-merge-window' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
regmap: prevent noinc writes from clobbering cache
Currently, noinc writes are cached as if they were standard incrementing
writes, overwriting unrelated register values in the cache. Instead, we
want to cache the last value written to the register, as is done in the
accelerated noinc handler (regmap_noinc_readwrite).
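A hedged sketch of the intended cache behaviour (not the exact regmap code):
only the final element of a noinc bulk write reaches the cache, and only at
the noinc register itself.
  if (regmap_writeable_noinc(map, reg)) {
          /* Cache just the last value written, at the noinc register. */
          ret = regcache_write(map, reg, val[num - 1]);
  } else {
          for (i = 0; i < num; i++)
                  ret = regcache_write(map, reg + i, val[i]);
  }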
Fixes: cdf6b11daa ("regmap: Add regmap_noinc_write API")
Signed-off-by: Ben Wolsieffer <ben.wolsieffer@hefring.com>
Link: https://lore.kernel.org/r/20231101142926.2722603-2-ben.wolsieffer@hefring.com
Signed-off-by: Mark Brown <broonie@kernel.org>
When we sync the register cache we do so with the cache bypassed in order
to avoid overhead from writing the synced values back into the cache. If
the regmap has ranges and the selector register for those ranges is in a
register which is cached this has the unfortunate side effect of meaning
that the physical and cached copies of the selector register can be out of
sync after a cache sync. The cache will have whatever the selector was when
the sync started and the hardware will have the selector for the register
that was synced last.
Fix this by rewriting all cached selector registers after every sync,
ensuring that the hardware and cache have the same content. This will
result in extra writes that wouldn't otherwise be needed but is simple
so hopefully robust. We don't read from the hardware since not all
devices have physical read support.
Given that nobody noticed this until now it is likely that we are rarely if
ever hitting this case.
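A hedged sketch of the idea (the iterator and field names are illustrative,
not the actual implementation): after the main sync, write every cached
selector register back to the device so cache and hardware agree.
  for_each_register_range(map, range) {           /* hypothetical iterator */
          unsigned int val;

          if (regcache_read(map, range->selector_reg, &val) != 0)
                  continue;       /* nothing cached, nothing to restore */

          ret = _regmap_write(map, range->selector_reg, val);
          if (ret)
                  break;
  }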
Reported-by: Hector Martin <marcan@marcan.st>
Cc: stable@vger.kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20231026-regmap-fix-selector-sync-v1-1-633ded82770d@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
Hector Martin reports that, since we enable cache bypass when doing a cache
sync, if the selector register for a range is cached then we might leave the
physical selector register pointing to a different value than the one we
have in the cache. If we then try to write to the page that our cache tells
us is selected we will not update the selector register and will write to
the wrong page. Add a test case covering this.
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20231023-regmap-test-window-cache-v1-2-d8a71f441968@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
For some reason the regmap used for testing ranges was not including the
end of the range of paged registers as volatile since it found the end by
counting from the selector register rather than the base of the window.
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20231023-regmap-test-window-cache-v1-1-d8a71f441968@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
Not all regmaps have a name so make sure to check for that to avoid
dereferencing a NULL pointer when dev_get_regmap() is used to lookup a
named regmap.
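A hedged sketch of the kind of guard this describes in the lookup match
callback (not necessarily the exact code):
  /* If the caller asked for a named regmap, an anonymous regmap can never
   * match; check the name pointer before handing it to strcmp(). */
  if (data)
          return (*r)->name && !strcmp((*r)->name, data);
  return 1;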
Fixes: e84861fec3 ("regmap: dev_get_regmap_match(): fix string comparison")
Cc: stable@vger.kernel.org # 5.8
Cc: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
Link: https://lore.kernel.org/r/20231006082104.16707-1-johan+linaro@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
When regcache_rbtree_write() creates a new rbtree_node it was passing the
wrong bit number to regcache_rbtree_set_register(). The bit number is the
offset __in number of registers__, but in the case of creating a new block
regcache_rbtree_write() was not dividing by the address stride to get the
number of registers.
Fix this by dividing by map->reg_stride.
Compare with regcache_rbtree_read() where the bit is checked.
This bug meant that the wrong register was marked as present. The register
that was written to the cache could not be read from the cache because it
was not marked as cached. But a nearby register could be marked as having
a cached value even if it was never written to the cache.
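A hedged sketch of the corrected call (not verbatim): the index handed to
regcache_rbtree_set_register() is in units of registers, so the address
offset has to be divided by the stride.
  regcache_rbtree_set_register(map, rbnode,
                               (reg - rbnode->base_reg) / map->reg_stride,
                               value);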
Signed-off-by: Richard Fitzgerald <rf@opensource.cirrus.com>
Fixes: 3f4ff561bc ("regmap: rbtree: Make cache_present bitmap per node")
Link: https://lore.kernel.org/r/20230922153711.28103-1-rf@opensource.cirrus.com
Signed-off-by: Mark Brown <broonie@kernel.org>
Thanks to Dan and Guenter's very prompt updates of the rbtree and maple
caches to support GFP_ATOMIC allocations, and since the update shook out
a bunch of users, at least some of whom have been suitably careful about
ensuring that the cache is prepopulated so there are no dynamic
allocations after init, let's revert the warnings.
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230721-regmap-enable-kmalloc-v1-1-f78287e794d3@kernel.org
The kunit tests discovered a sleeping-in-atomic bug. The allocations
in the regcache-rbtree code should use map->alloc_flags instead of
GFP_KERNEL.
[ 5.005510] BUG: sleeping function called from invalid context at include/linux/sched/mm.h:306
[ 5.005960] in_atomic(): 1, irqs_disabled(): 128, non_block: 0, pid: 117, name: kunit_try_catch
[ 5.006219] preempt_count: 1, expected: 0
[ 5.006414] 1 lock held by kunit_try_catch/117:
[ 5.006590] #0: 833b9010 (regmap_kunit:86:(config)->lock){....}-{2:2}, at: regmap_lock_spinlock+0x14/0x1c
[ 5.007493] irq event stamp: 162
[ 5.007627] hardirqs last enabled at (161): [<80786738>] crng_make_state+0x1a0/0x294
[ 5.007871] hardirqs last disabled at (162): [<80c531ec>] _raw_spin_lock_irqsave+0x7c/0x80
[ 5.008119] softirqs last enabled at (0): [<801110ac>] copy_process+0x810/0x2138
[ 5.008356] softirqs last disabled at (0): [<00000000>] 0x0
[ 5.008688] CPU: 0 PID: 117 Comm: kunit_try_catch Tainted: G N 6.4.4-rc3-g0e8d2fdfb188 #1
[ 5.009011] Hardware name: Generic DT based system
[ 5.009277] unwind_backtrace from show_stack+0x18/0x1c
[ 5.009497] show_stack from dump_stack_lvl+0x38/0x5c
[ 5.009676] dump_stack_lvl from __might_resched+0x188/0x2d0
[ 5.009860] __might_resched from __kmem_cache_alloc_node+0x1dc/0x25c
[ 5.010061] __kmem_cache_alloc_node from kmalloc_trace+0x30/0xc8
[ 5.010254] kmalloc_trace from regcache_rbtree_write+0x26c/0x468
[ 5.010446] regcache_rbtree_write from _regmap_write+0x88/0x140
[ 5.010634] _regmap_write from regmap_write+0x44/0x68
[ 5.010803] regmap_write from basic_read_write+0x8c/0x270
[ 5.010980] basic_read_write from kunit_try_run_case+0x48/0xa0
Fixes: 28644c809f ("regmap: Add the rbtree cache support")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Closes: https://lore.kernel.org/all/ee59d128-413c-48ad-a3aa-d9d350c80042@roeck-us.net/
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Link: https://lore.kernel.org/r/58f12a07-5f4b-4a8f-ab84-0a42d1908cb9@moroto.mountain
Signed-off-by: Mark Brown <broonie@kernel.org>
REGCACHE_MAPLE needs to allocate memory for regmap operations.
This results in lockdep splats if used with fast_io since fast_io uses
spinlocks for locking.
BUG: sleeping function called from invalid context at include/linux/sched/mm.h:306
in_atomic(): 1, irqs_disabled(): 128, non_block: 0, pid: 167, name: kunit_try_catch
preempt_count: 1, expected: 0
1 lock held by kunit_try_catch/167:
#0: 838e9c10 (regmap_kunit:86:(config)->lock){....}-{2:2}, at: regmap_lock_spinlock+0x14/0x1c
irq event stamp: 146
hardirqs last enabled at (145): [<8078bfa8>] crng_make_state+0x1a0/0x294
hardirqs last disabled at (146): [<80c5f62c>] _raw_spin_lock_irqsave+0x7c/0x80
softirqs last enabled at (0): [<80110cc4>] copy_process+0x810/0x216c
softirqs last disabled at (0): [<00000000>] 0x0
CPU: 0 PID: 167 Comm: kunit_try_catch Tainted: G N 6.5.0-rc1-00028-gc4be22597a36-dirty #6
Hardware name: Generic DT based system
unwind_backtrace from show_stack+0x18/0x1c
show_stack from dump_stack_lvl+0x38/0x5c
dump_stack_lvl from __might_resched+0x188/0x2d0
__might_resched from __kmem_cache_alloc_node+0x1f4/0x258
__kmem_cache_alloc_node from __kmalloc+0x48/0x170
__kmalloc from regcache_maple_write+0x194/0x248
regcache_maple_write from _regmap_write+0x88/0x140
_regmap_write from regmap_write+0x44/0x68
regmap_write from basic_read_write+0x8c/0x27c
basic_read_write from kunit_generic_run_threadfn_adapter+0x1c/0x28
kunit_generic_run_threadfn_adapter from kthread+0xf8/0x120
kthread from ret_from_fork+0x14/0x3c
Exception stack(0x881a5fb0 to 0x881a5ff8)
5fa0: 00000000 00000000 00000000 00000000
5fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
5fe0: 00000000 00000000 00000000 00000000 00000013 00000000
Use map->alloc_flags instead of GFP_KERNEL for memory allocations to fix
the problem.
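A minimal sketch of the change (the allocation site and size are
illustrative, not verbatim):
  /* Honour the map's allocation flags so spinlock-protected (fast_io) maps
   * do not trigger sleeping allocations. */
  entry = kmalloc(sizeof(*entry), map->alloc_flags);      /* was GFP_KERNEL */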
Fixes: f033c26de5 ("regmap: Add maple tree based register cache")
Cc: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Link: https://lore.kernel.org/r/20230720172021.2617326-1-linux@roeck-us.net
Signed-off-by: Mark Brown <broonie@kernel.org>
REGCACHE_RBTREE and REGCACHE_MAPLE dynamically allocate memory for regmap
operations. This is incompatible with spinlock based locking which is used
for fast_io operations. Reject affected configurations.
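A hedged sketch of such a rejection during map initialisation (not
necessarily the exact condition or error handling used):
  if (config->fast_io &&
      (config->cache_type == REGCACHE_RBTREE ||
       config->cache_type == REGCACHE_MAPLE))
          return ERR_PTR(-EINVAL);        /* dynamic cache + spinlock locking */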
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Link: https://lore.kernel.org/r/20230720032848.1306349-2-linux@roeck-us.net
Signed-off-by: Mark Brown <broonie@kernel.org>
REGCACHE_RBTREE and REGCACHE_MAPLE dynamically allocate memory
for regmap operations. This is incompatible with spinlock based locking
which is used for fast_io operations. Disable locking for the associated
unit tests to avoid lockdep splats.
Fixes: f033c26de5 ("regmap: Add maple tree based register cache")
Fixes: 2238959b6a ("regmap: Add some basic kunit tests")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Link: https://lore.kernel.org/r/20230720032848.1306349-1-linux@roeck-us.net
Signed-off-by: Mark Brown <broonie@kernel.org>
Currently the regcache core unconditionally enables async I/O for all cache
types, causing problems for the maple tree cache which dynamically allocates
the buffers used to write registers to the device since async requires the
buffers to be kept around until the I/O has been completed.
This use of async I/O is mainly for the rbtree cache which stores data in
a format directly usable for regmap_raw_write(), though there is a special
case for single register writes which would also have allowed it to be used
with the flat cache. It is a bit of a landmine for other caches since it
implicitly converts sync operations to async, and with modern hardware it
is not clear that async I/O is actually a performance win as shown by the
performance work David Jander did with SPI. In multi core systems the cost
of managing concurrency ends up swamping the performance benefit and almost
all modern systems are multi core.
Address this by pushing the enablement of async I/O down into the rbtree
cache where it is actively used, avoiding surprises for other cache
implementations.
Reported-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Fixes: bfa0b38c14 ("regmap: maple: Implement block sync for the maple tree cache")
Reviewed-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Tested-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230719-regcache-async-rbtree-v1-1-b03d30cf1daf@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
The HDA driver has a use case for checking if a register is cached which
it bodges in awkwardly and unclearly. Provide an API which allows it to
directly do what it's trying to do.
Reviewed-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230717-regmap-cache-check-v1-1-73ef688afae3@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
The SMBus I2C buses have limits on the size of transfers they can do but
do not factor in the register length, meaning we may try to do a transfer
longer than our length limit; the core will not take care of this.
Future changes will factor this out into the core but there are a number
of users that assume current behaviour so let's just do something
conservative here.
This does not take account of padding bits, but practically speaking these
are very rarely if ever used on I2C buses given that they generally run
slowly enough to mean there's no issue.
Cc: stable@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Xu Yilun <yilun.xu@intel.com>
Link: https://lore.kernel.org/r/20230712-regmap-max-transfer-v1-2-80e2aed22e83@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
When problems were noticed with the register address not being taken
into account when limiting raw transfers with I2C devices we fixed this
in the core. Unfortunately it has subsequently been realised that a lot
of buses were relying on the prior behaviour, partly due to unclear
documentation not making it obvious what was intended in the core. This
is all more involved to fix than is sensible for a fix commit, so let's
just drop the original fixes; a separate commit will fix the originally
observed problem in an I2C-specific way.
Fixes: 3981514180 ("regmap: Account for register length when chunking")
Fixes: c8e796895e ("regmap: spi-avmm: Fix regmap_bus max_raw_write")
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Xu Yilun <yilun.xu@intel.com>
Cc: stable@kernel.org
Link: https://lore.kernel.org/r/20230712-regmap-max-transfer-v1-1-80e2aed22e83@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
Since apparently enabling all the KUnit tests shouldn't enable any new
subsystems it is hard to enable the regmap KUnit tests in normal KUnit
testing scenarios that don't enable any drivers. Add a Kconfig option
to help with this and include it in the KUnit all tests config.
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230712-regmap-kunit-enable-v1-1-13e296bd0204@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
When allocating the 2D array for handling IRQ type registers in
regmap_add_irq_chip_fwnode(), the intent is to allocate a matrix
with num_config_bases rows and num_config_regs columns.
This is currently handled by allocating a buffer to hold a pointer for
each row (i.e. num_config_bases). After that, the logic attempts to
allocate the memory required to hold the register configuration for
each row. However, instead of doing this allocation for each row
(i.e. num_config_bases allocations), the logic erroneously does this
allocation num_config_regs number of times.
This scenario can lead to out-of-bounds accesses when num_config_regs
is greater than num_config_bases. Fix this by updating the terminating
condition of the loop that allocates the memory for holding the register
configuration to allocate memory only for each row in the matrix.
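A hedged sketch of the corrected allocation (field names follow the
description above; error handling elided):
  d->config_buf = kcalloc(chip->num_config_bases,
                          sizeof(*d->config_buf), GFP_KERNEL);

  for (i = 0; i < chip->num_config_bases; i++)    /* was num_config_regs */
          d->config_buf[i] = kcalloc(chip->num_config_regs,
                                     sizeof(**d->config_buf), GFP_KERNEL);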
Amit Pundir reported a crash that was occurring on his db845c device
due to memory corruption (see "Closes" tag for Amit's report). The KASAN
report below helped narrow it down to this issue:
[ 14.033877][ T1] ==================================================================
[ 14.042507][ T1] BUG: KASAN: invalid-access in regmap_add_irq_chip_fwnode+0x594/0x1364
[ 14.050796][ T1] Write of size 8 at addr 06ffff8081021850 by task init/1
[ 14.242004][ T1] The buggy address belongs to the object at ffffff8081021850
[ 14.242004][ T1] which belongs to the cache kmalloc-8 of size 8
[ 14.255669][ T1] The buggy address is located 0 bytes inside of
[ 14.255669][ T1] 8-byte region [ffffff8081021850, ffffff8081021858)
Fixes: faa87ce919 ("regmap-irq: Introduce config registers for irq types")
Reported-by: Amit Pundir <amit.pundir@linaro.org>
Closes: https://lore.kernel.org/all/CAMi1Hd04mu6JojT3y6wyN2YeVkPR5R3qnkKJ8iR8if_YByCn4w@mail.gmail.com/
Tested-by: John Stultz <jstultz@google.com>
Tested-by: Amit Pundir <amit.pundir@linaro.org> # tested on Dragonboard 845c
Cc: stable@vger.kernel.org # v6.0+
Cc: Aidan MacDonald <aidanmacdonald.0x0@gmail.com>
Cc: Saravana Kannan <saravanak@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: "Isaac J. Manjarres" <isaacmanjarres@google.com>
Link: https://lore.kernel.org/r/20230711193059.2480971-1-isaacmanjarres@google.com
Signed-off-by: Mark Brown <broonie@kernel.org>
With an unsigned int type we can never pass a 64-bit value.
Remove code that never properly worked.
Note that there are no users in the kernel for this size of register
offsets or data.
This reverts commit afcc00b91f.
Also revert other 64-bit code in the regmap implementation that was
added under the false impression, created by the above mentioned change,
that this data size is supported.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20230622183613.58762-2-andriy.shevchenko@linux.intel.com
Signed-off-by: Mark Brown <broonie@kernel.org>
Another busy release for regmap with the second half of the maple tree
register cache implementation; there are some smaller optimisations that
could be done but this should now be able to replace the rbtree cache
for most devices.
We also had a followup from Aidan MacDonald's refactoring of some of the
regmap-irq interfaces, the conversion is complete so the old interfaces
are removed. This means that even with the new features for the maple
tree cache we'd have a nice negative diffstat were it not for the
addition of a bunch more KUnit coverage.
There's one GPIO patch in here, it was a dependency for a cleanup of an
API in the regmap-irq code for which the gpio-104-dio-48e driver was the
only user.
Highlights:
- The maple tree cache can now load in default values more efficiently,
and is capable of syncing multiple registers in a single write
during cache sync.
- More KUnit coverage, including some coverage for raw I/O and a dummy
RAM backed cache to support it.
- Removal of several old interfaces in regmap-irq now all the users
have been modernised.
-----BEGIN PGP SIGNATURE-----
iQEzBAABCgAdFiEEreZoqmdXGLWf4p/qJNaLcl1Uh9AFAmSZj0cACgkQJNaLcl1U
h9BNdAf+Kjw+WbqDj/glPTAfrth5aPOyheq9stklzY3mE0E+AxAsHd2PrvYwn1Rt
jiAb96CqsVNP3WFVj6vmM/kfonE7xiBzPng2QtIWKJjxM/PYyLJZkElF6VqZz4cz
ftS4GMAWJadOvIMZgtCFOOujTaBoN0ik2ryZYofVvI8e98W89ifLvCv4aRiJ3Qn8
2R4Wk37JvZtIc2q5kaZ+wo+JIXVijlt7gU+9ZMT7BvJu1ot/KXY3Q3npfSK2Bv5u
qDkMWNZoy9kNd035E8rXt2OTzMlxgUu766wpg3YGU2Hqt15N0n5rpgLc2uB24niG
0yiCYbA2NT5J6+P+/VsE/VTBmJwK8w==
=E+Tk
-----END PGP SIGNATURE-----
Merge tag 'regmap-v6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap
Pull regmap updates from Mark Brown:
"Another busy release for regmap with the second half of the maple tree
register cache implementation; there are some smaller optimisations that
could be done but this should now be able to replace the rbtree cache
for most devices.
We also had a followup from Aidan MacDonald's refactoring of some of
the regmap-irq interfaces, the conversion is complete so the old
interfaces are removed. This means that even with the new features for
the maple tree cache we'd have a nice negative diffstat were it not
for the addition of a bunch more KUnit coverage.
There's one GPIO patch in here, it was a dependency for a cleanup of
an API in the regmap-irq code for which the gpio-104-dio-48e driver
was the only user.
Highlights:
- The maple tree cache can now load in default values more
efficiently, and is capable of syncing multiple registers
in a single write during cache sync
- More KUnit coverage, including some coverage for raw I/O
and a dummy RAM backed cache to support it
- Removal of several old interfaces in regmap-irq now all
users have been modernised"
* tag 'regmap-v6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap: (23 commits)
regmap: Allow reads from write only registers with the flat cache
regmap: Drop early readability check
regmap: Check for register readability before checking cache during read
regmap: Add test to make sure we don't sync to read only registers
regmap: Add a test case for write only registers
regmap: Add test that writes to write only registers are prevented
regmap: Add debugfs file for forcing field writes
regmap: Don't check for changes in regcache_set_val()
regmap: maple: Implement block sync for the maple tree cache
regmap: Provide basic KUnit coverage for the raw register I/O
regmap: Provide a ram backed regmap with raw support
regmap: Add missing cache_only checks
regmap: regmap-irq: Move handle_post_irq to before pm_runtime_put
regmap: Load register defaults in blocks rather than register by register
regmap: mmio: Allow passing an empty config->reg_stride
regmap-irq: Drop backward compatibility for inverted mask/unmask
regmap-irq: Minor adjustments to .handle_mask_sync()
regmap-irq: Remove support for not_fixed_stride
regmap-irq: Remove type registers
regmap-irq: Remove virtual registers
...
The max_raw_write member of the regmap_spi_avmm_bus structure is defined
as:
.max_raw_write = SPI_AVMM_VAL_SIZE * MAX_WRITE_CNT
SPI_AVMM_VAL_SIZE == 4 and MAX_WRITE_CNT == 1 so this results in a
maximum write transfer size of 4 bytes which provides only enough space to
transfer the address of the target register. It provides no space for the
value to be transferred. This bug became an issue (divide-by-zero in
_regmap_raw_write()) after the following was accepted into mainline:
commit 3981514180 ("regmap: Account for register length when chunking")
Change max_raw_write to include space (4 additional bytes) for both the
register address and value:
.max_raw_write = SPI_AVMM_REG_SIZE + SPI_AVMM_VAL_SIZE * MAX_WRITE_CNT
Fixes: 7f9fb67358 ("regmap: add Intel SPI Slave to AVMM Bus Bridge support")
Reviewed-by: Matthew Gerlach <matthew.gerlach@linux.intel.com>
Signed-off-by: Russ Weight <russell.h.weight@intel.com>
Link: https://lore.kernel.org/r/20230620202824.380313-1-russell.h.weight@intel.com
Signed-off-by: Mark Brown <broonie@kernel.org>
We have some drivers that have a use case for cached write only
registers, doing read/modify/writes on write only registers in order to
work more easily with bitfields. Go back to trying the cache before we
check if we can read from the device.
Fixes: eab5abdeb7 ("regmap: Check for register readability before checking cache during read")
Reported-by: Konrad Dybcio <konradybcio@kernel.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230615-regmap-drop-early-readability-v1-1-8135094362de@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
Merge series from Mark Brown <broonie@kernel.org>:
Since Takashi found an issue with the maple tree syncing registers it
shouldn't, add some test cases that catch that case and some more
potential issues. Ideally we'd run through the combination of
readability with all possible I/O calls but that's lifting for another
day. We did find one issue with missing readability checks which will
be fixed separately.
`_regmap_update_bits()` checks if the current register value differs
from the new value, and only writes to the register if they differ. When
testing hardware drivers, it might be desirable to always force a
register write, for example when writing to a `regmap_field`. This
enables and simplifies testing and verification of the hardware
interaction. For example, when using a hardware mock/simulation model,
one can then more easily verify that the driver makes the correct
expected register writes during certain events.
Add a bool variable `force_write_field` and a corresponding debugfs
entry to enable this. Since this feature could interfere with driver
operation, guard it with a macro.
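A hedged sketch of how the flag slots into _regmap_update_bits() (not
verbatim; it assumes the flag lives on the map):
  if (force_write || (tmp != orig) || map->force_write_field) {
          ret = _regmap_write(map, reg, tmp);
          if (ret == 0 && change)
                  *change = true;
  }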
Signed-off-by: Waqar Hameed <waqar.hameed@axis.com>
Link: https://lore.kernel.org/r/pnd1qifa7sj.fsf@axis.com
Signed-off-by: Mark Brown <broonie@kernel.org>
regcache_maple_sync() tries to sync all cached values no matter
whether they are writable or not. OTOH, regcache_sync_val() does care
about writability and returns -EIO for a read-only register. This results in
an error message like:
snd_hda_codec_realtek hdaudioC0D0: Unable to sync register 0x2f0009. -5
and the sync loop is aborted incompletely.
This patch adds the writable register check to regcache_sync_val() for
addressing the bug above.
Note that, although we could add the check on the caller side
(regcache_maple_sync()), here we put it in regcache_sync_val(), so that a
similar case like this can be avoided in the future.
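A hedged sketch of the added check (not verbatim):
  /* In regcache_sync_val(): a read-only register has nothing to sync, so
   * skip it rather than letting the write fail and abort the whole sync. */
  if (!regmap_writeable(map, reg))
          return 0;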
Fixes: f033c26de5 ("regmap: Add maple tree based register cache")
Link: https://lore.kernel.org/r/877cs7g6f1.wl-tiwai@suse.de
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Link: https://lore.kernel.org/r/20230613112240.3361-1-tiwai@suse.de
Signed-off-by: Mark Brown <broonie@kernel.org>
Merge series from Mark Brown <broonie@kernel.org>:
Our existing coverage only deals with buses that provide single register
read and write operations; extend it to cover raw buses using a similar
approach with a RAM backed register map that the tests can inspect to
check operations. This coverage could be more complete but provides a
good start.
The only user of regcache_set_val() ignores the return value so we may as
well not bother checking if the value we are trying to set is the same as
the value already stored.
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230609-regcache-set-val-no-ret-v1-1-9a6932760cf8@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
For register maps where we can write multiple values in a single bus
operation it is generally much faster to do so. Improve the performance of
maple tree cache syncs on such devices by identifying blocks of adjacent
registers that need to be written out and combining them into a single
operation.
Combining writes does mean that we need to allocate a scratch buffer and
format the data into it but it is expected that for most cases where caches
are in use the cost of I/O will be much greater than the cost of doing the
allocation and format.
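A rough sketch of the approach (the helper names here are hypothetical, not
the maple cache internals):
  /* Walk the dirty registers, group contiguous runs, and push each run out
   * with a single raw write instead of one write per register. */
  while (find_next_dirty_block(map, &start, &end)) {      /* hypothetical */
          size_t len = (end - start + 1) * map->format.val_bytes;
          void *buf = kmalloc(len, map->alloc_flags);

          if (!buf)
                  return -ENOMEM;

          for (i = 0; i <= end - start; i++)
                  map->format.format_val(buf + i * map->format.val_bytes,
                                         cached_val(map, start + i), 0);

          ret = _regmap_raw_write(map, start, buf, len, false);
          kfree(buf);
          if (ret)
                  return ret;
  }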
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230609-regcache-maple-sync-raw-v1-1-8ddeb4e2b9ab@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>