path: root/tools/testing
Age  Commit message  Author  Files/Lines
14 days  selftests/bpf: Add stress test for timer async cancel  Mykyta Yatsenko  2 files, -4/+28
Extend BPF timer selftest to run stress test for async cancel. Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20260201025403.66625-6-alexei.starovoitov@gmail.com
14 days  selftests/bpf: Refactor timer selftests  Mykyta Yatsenko  1 file, -19/+36
Refactor timer selftests, extracting the stress test into a separate test. This makes it easier to debug test failures and to extend the tests. Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20260201025403.66625-5-alexei.starovoitov@gmail.com
14 days  selftests/bpf: Add selftests for stream functions under lock  Emil Tsalapatis  1 file, -0/+32
Add a selftest to ensure BPF stream functions can now be called while holding a lock. Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20260203180424.14057-5-emil@etsalapatis.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
14 days  selftests/bpf: Add selftests for bpf_stream_print_stack  Emil Tsalapatis  1 file, -0/+21
Add selftests for the new bpf_stream_print_stack kfunc. Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20260203180424.14057-3-emil@etsalapatis.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
14 days  selftests/bpf: Add a test for ids=0 to verifier_scalar_ids test  Puranjay Mohan  1 file, -0/+45
Test that two registers with their id=0 (unlinked) in the cached state can be mapped to a single id (linked) in the current state. Signed-off-by: Puranjay Mohan <puranjay@kernel.org> Link: https://lore.kernel.org/r/20260203165102.2302462-6-puranjay@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
14 days  bpf: Relax scalar id equivalence for state pruning  Puranjay Mohan  1 file, -3/+5
Scalar register IDs are used by the verifier to track relationships between registers and enable bounds propagation across those relationships. Once an ID becomes singular (i.e. only a single register/stack slot carries it), it can no longer contribute to bounds propagation and effectively becomes stale. The previous commit makes the verifier clear such ids before caching the state. When comparing the current and cached states for pruning, these stale IDs can cause technically equivalent states to be considered different and thus prevent pruning. For example, in the selftest added in the next commit, two registers, r6 and r7, are not linked to any other registers and get cached with id=0; in the current state, they are both linked to each other with id=A. Before this commit, check_scalar_ids() would give temporary ids to r6 and r7 (say tid1 and tid2), then check_ids() would map tid1->A, and when it saw tid2->A it would not consider these states equivalent. Relax scalar ID equivalence by treating rold->id == 0 as "independent": if the old state did not rely on any ID relationships for a register, then any ID/linking present in the current state only adds constraints and is always safe to accept for pruning. Implement this by returning true immediately in check_scalar_ids() when old_id == 0. Maintain correctness for the opposite direction (old_id != 0 && cur_id == 0) by still allocating a temporary ID for cur_id == 0. This avoids incorrectly allowing multiple independent current registers (id==0) to satisfy a single linked old ID during mapping. Signed-off-by: Puranjay Mohan <puranjay@kernel.org> Link: https://lore.kernel.org/r/20260203165102.2302462-5-puranjay@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
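The described check can be illustrated with a small, self-contained sketch; the structure and names below are simplified for illustration and are not the actual verifier code:

/* Toy model of pruning-time scalar ID comparison: an idmap records which
 * old-state ID corresponds to which current-state ID. */
#include <stdbool.h>

#define IDMAP_MAX 64

struct idmap {
    struct { unsigned int old_id, cur_id; } map[IDMAP_MAX];
    unsigned int cnt;
    unsigned int tmp_id_gen;   /* hands out unique temporary IDs, counts down */
};

/* Accept the (old_id, cur_id) pair only if it is consistent with the
 * mapping recorded so far. */
bool check_ids(unsigned int old_id, unsigned int cur_id, struct idmap *m)
{
    unsigned int i;

    for (i = 0; i < m->cnt; i++) {
        if (m->map[i].old_id == old_id)
            return m->map[i].cur_id == cur_id;
        if (m->map[i].cur_id == cur_id)
            return false;
    }
    if (m->cnt == IDMAP_MAX)
        return false;
    m->map[m->cnt].old_id = old_id;
    m->map[m->cnt].cur_id = cur_id;
    m->cnt++;
    return true;
}

bool check_scalar_ids(unsigned int old_id, unsigned int cur_id, struct idmap *m)
{
    /* rold->id == 0: the old state relied on no ID relationship for this
     * register, so any linking in the current state only adds constraints. */
    if (old_id == 0)
        return true;

    /* Old register was linked, the current one is not: hand out a unique
     * temporary ID so two independent id==0 registers cannot both satisfy
     * the same linked old ID. */
    if (cur_id == 0)
        cur_id = --m->tmp_id_gen;   /* initialise tmp_id_gen to e.g. ~0u */

    return check_ids(old_id, cur_id, m);
}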
14 days  selftests/net: packetdrill: add TCP Accurate ECN cases  Chia-Yu Chang  58 files, -0/+1363
Add Linux Accurate ECN test sets using ACE counters and AccECN options to cover several scenarios: connection teardown, different ACK conditions, counter wrapping, SACK space grabbing, fallback schemes, negotiation retransmission/reorder/loss, AccECN option drop/loss, different handshake reflectors, data with marking, and different sysctl values. The packetdrill version used is commit cbe405666c9c8698ac1e72f5e8ffc551216dfa56 of the repo https://github.com/minuscat/packetdrill/tree/upstream_accecn, and the corresponding patches have been sent to the google/packetdrill mailing list. Signed-off-by: Chia-Yu Chang <chia-yu.chang@nokia-bell-labs.com> Co-developed-by: Ilpo Järvinen <ij@kernel.org> Signed-off-by: Ilpo Järvinen <ij@kernel.org> Co-developed-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://patch.msgid.link/20260131222515.8485-16-chia-yu.chang@nokia-bell-labs.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
14 days  selftests/net: gro: add self-test for TCP CWR flag  Chia-Yu Chang  2 files, -24/+60
Currently, GRO does not flush packets when the CWR bit is set. A corresponding self-test is being added, in which the CWR flag is set for two consecutive packets, but the first packet with the CWR flag set will not be flushed immediately.

+===================+==========+===============+===========+
| Packet id         | CWR flag | Payload       | Flushing? |
+===================+==========+===============+===========+
| 0                 | 0        | PAYLOAD_LEN   | 0         |
| ...               | 0        | PAYLOAD_LEN   | 1         |
+-------------------+----------+---------------+-----------+
| NUM_PACKETS/2 - 1 | 1        | payload_len   | 0         |
| NUM_PACKETS/2     | 1        | payload_len   | 1         |
+-------------------+----------+---------------+-----------+
| ...               | 0        | PAYLOAD_LEN   | 0         |
| NUM_PACKETS       | 0        | PAYLOAD_LEN   | 1         |
+===================+==========+===============+===========+

Signed-off-by: Chia-Yu Chang <chia-yu.chang@nokia-bell-labs.com> Acked-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://patch.msgid.link/20260131222515.8485-4-chia-yu.chang@nokia-bell-labs.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2026-02-03  selftests/sched_ext: Add test for DL server total_bw consistency  Joel Fernandes  2 files, -0/+282
Add a new kselftest to verify that the total_bw value in /sys/kernel/debug/sched/debug remains consistent across all CPUs under different sched_ext BPF program states:

1. Before a BPF scheduler is loaded
2. While a BPF scheduler is loaded and active
3. After a BPF scheduler is unloaded

The test runs CPU stress threads to ensure DL server bandwidth values stabilize before checking consistency. This helps catch potential issues with DL server bandwidth accounting during sched_ext transitions. Co-developed-by: Andrea Righi <arighi@nvidia.com> Signed-off-by: Andrea Righi <arighi@nvidia.com> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Christian Loehle <christian.loehle@arm.com> Link: https://patch.msgid.link/20260126100050.3854740-8-arighi@nvidia.com
2026-02-03  selftests/sched_ext: Add test for sched_ext dl_server  Andrea Righi  3 files, -0/+264
Add a selftest to validate the correct behavior of the deadline server for the ext_sched_class. Co-developed-by: Joel Fernandes <joelagnelf@nvidia.com> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com> Signed-off-by: Andrea Righi <arighi@nvidia.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Emil Tsalapatis <emil@etsalapatis.com> Tested-by: Christian Loehle <christian.loehle@arm.com> Link: https://patch.msgid.link/20260126100050.3854740-7-arighi@nvidia.com
2026-02-03  Merge branch 'v6.19-rc8'  Peter Zijlstra  103 files, -525/+2273
Update to avoid conflicts with /urgent patches. Signed-off-by: Peter Zijlstra <peterz@infradead.org>
2026-02-02  selftests: mptcp: connect: cover splice mode  Geliang Tang  2 files, -0/+6
The "splice" alternate mode for mptcp_connect.sh/.c is now available; this patch adds mptcp_connect_splice.sh to test it in the MPTCP CI by default. Note that this mode is also supported by stable kernel versions, but it is optimised by this patch series. Suggested-by: Matthieu Baerts <matttbe@kernel.org> Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://patch.msgid.link/20260130-net-next-mptcp-splice-v2-6-31332ba70d7f@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2026-02-02  selftests: mptcp: add splice io mode  Geliang Tang  1 file, -1/+78
This patch adds a new 'splice' io mode for mptcp_connect to test the newly added read_sock() and splice_read() functions of MPTCP. do_splice() efficiently transfers data directly between two file descriptors (infd and outfd) without copying to userspace, using Linux's splice() system call. Usage: ./mptcp_connect.sh -m splice Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Mat Martineau <martineau@kernel.org> Co-developed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Link: https://patch.msgid.link/20260130-net-next-mptcp-splice-v2-5-31332ba70d7f@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
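For illustration, a do_splice()-style copy loop between two arbitrary file descriptors could look roughly like the sketch below; the intermediate pipe and the 64 KiB chunk size are assumptions for this example, and the selftest's actual helper may be structured differently:

/* Move data from infd to outfd entirely in kernel space. splice() needs a
 * pipe on one side of each call, hence the intermediate pipe. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

int copy_splice(int infd, int outfd)
{
    int pfd[2];
    ssize_t moved, out;

    if (pipe(pfd) < 0)
        return -1;

    for (;;) {
        /* Pull up to 64 KiB from infd into the pipe. */
        moved = splice(infd, NULL, pfd[1], NULL, 65536, SPLICE_F_MOVE);
        if (moved <= 0)
            break;          /* 0 = EOF, < 0 = error */

        /* Drain everything we just queued into outfd. */
        while (moved > 0) {
            out = splice(pfd[0], NULL, outfd, NULL, moved, SPLICE_F_MOVE);
            if (out <= 0) {
                moved = -1;
                break;
            }
            moved -= out;
        }
        if (moved < 0)
            break;
    }

    close(pfd[0]);
    close(pfd[1]);
    return moved == 0 ? 0 : -1;
}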
2026-02-02  selftests: drv-net: rss: validate min RSS table size  Jakub Kicinski  2 files, -0/+89
Add a test which checks that the RSS table is at least 4x the max queue count supported by the device. The original RSS spec from Microsoft stated that the RSS indirection table should be 2 to 8 times the CPU count, presumably assuming queue per CPU. If the CPU count is not a power of two, however, a power-of-2 table 2x larger than queue count results in a 33% traffic imbalance. Validate that the indirection table is at least 4x the queue count. This lowers the imbalance to 16% which empirically appears to be more acceptable to memcache-like workloads. Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20260131225454.1225151-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
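The 33% and ~16% figures come from how a power-of-two indirection table divides over a non-power-of-two queue count; here is a small worked example (the 6-queue value is purely illustrative, not taken from the patch or the test):

#include <stdio.h>

/* Imbalance between the busiest and least busy queue when table_size
 * indirection slots are spread round-robin over 'queues' queues. */
static double imbalance(unsigned int queues, unsigned int table_size)
{
    unsigned int max = (table_size + queues - 1) / queues; /* ceil */
    unsigned int min = table_size / queues;                /* floor */

    return (double)(max - min) / max;
}

int main(void)
{
    /* 6 queues, 2x rounded up to a power of two -> 16 slots: queues get
     * 3 or 2 slots, (3 - 2) / 3 ~= 33%. */
    printf("2x table: %.0f%% imbalance\n", imbalance(6, 16) * 100);

    /* 6 queues, 4x rounded up to a power of two -> 32 slots: queues get
     * 6 or 5 slots, (6 - 5) / 6 ~= 17%. */
    printf("4x table: %.0f%% imbalance\n", imbalance(6, 32) * 100);
    return 0;
}

How close the 4x case lands to the 16% quoted above depends on the exact queue count; the point is that the larger table roughly halves the imbalance.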
2026-02-02  selftests/sched_ext: Fix init_enable_count flakiness  Tejun Heo  1 file, -11/+23
The init_enable_count test is flaky. The test forks 1024 children before attaching the scheduler to verify that existing tasks get ops.init_task() called. The children were using sleep(1) before exiting. 7900aa699c34 ("sched_ext: Fix cgroup exit ordering by moving sched_ext_free() to finish_task_switch()") changed when tasks are removed from scx_tasks - previously when the task_struct was freed, now immediately in finish_task_switch() when the task dies. Before the commit, pre-forked children would linger on scx_tasks until freed regardless of when they exited, so the scheduler would always see them during iteration. The sleep(1) was unnecessary. After the commit, children are removed as soon as they die. The sleep(1) masks the problem in most cases but the test becomes flaky depending on timing. Fix by synchronizing properly using a pipe. All children block on read() and the parent signals them to exit by closing the write end after attaching the scheduler. The children are auto-reaped so there's no need to wait on them. Reported-by: Ihor Solodrai <ihor.solodrai@linux.dev> Cc: David Vernet <void@manifault.com> Cc: Andrea Righi <arighi@nvidia.com> Cc: Changwoo Min <changwoo@igalia.com> Cc: Emil Tsalapatis <emil@etsalapatis.com> Signed-off-by: Tejun Heo <tj@kernel.org>
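A simplified sketch of the pipe-based synchronization described above (error handling and the actual scheduler attach are omitted; the real test code differs in detail):

#include <signal.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

#define NR_CHILDREN 1024

int main(void)
{
    int pfd[2];
    char c;
    int i;

    /* Auto-reap children so no wait() is needed. */
    signal(SIGCHLD, SIG_IGN);

    if (pipe(pfd) < 0)
        return 1;

    for (i = 0; i < NR_CHILDREN; i++) {
        pid_t pid = fork();

        if (pid == 0) {
            close(pfd[1]);
            read(pfd[0], &c, 1);   /* block until the parent closes the pipe */
            _exit(0);
        }
    }

    close(pfd[0]);

    /* ... attach the scheduler and verify ops.init_task() counts here ... */

    close(pfd[1]);   /* read() in every child returns 0 -> children exit */
    return 0;
}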
2026-02-02  Merge branch 'for-7.0/cxl-aer-prep' into cxl-for-next  Dave Jiang  4 files, -65/+9
Fixup and refactor downstream port enumeration to prepare for CXL port protocol error handling. Main motivation is to move endpoint component register mapping to a port object.

cxl/port: Unify endpoint and switch port lookup
cxl/port: Move endpoint component register management to cxl_port
cxl/port: Map Port RAS registers
cxl/port: Move dport RAS setup to dport add time
cxl/port: Move dport probe operations to a driver event
cxl/port: Move decoder setup before dport creation
cxl/port: Cleanup dport removal with a devres group
cxl/port: Reduce number of @dport variables in cxl_port_add_dport()
cxl/port: Cleanup handling of the nr_dports 0 -> 1 transition
2026-02-02  cxl/port: Move dport RAS setup to dport add time  Dan Williams  2 files, -13/+0
Towards the end goal of making all CXL RAS capability handling uniform across host bridge ports, upstream switch ports, and endpoint ports, move dport RAS setup. Move it to cxl_switch_port_probe() context for switch / VH dports (via cxl_port_add_dport()) and cxl_endpoint_port_probe() context for an RCH dport. Rename the RAS setup helper to devm_cxl_dport_ras_setup() for symmetry with devm_cxl_switch_port_decoders_setup(). Only the RCH version needs to be exported and the cxl_test mocking can be deleted with a dev_is_pci() check on the dport_dev. Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com> Tested-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Link: https://patch.msgid.link/20260131000403.2135324-7-dan.j.williams@intel.com Signed-off-by: Dave Jiang <dave.jiang@intel.com>
2026-02-02  cxl/port: Move dport probe operations to a driver event  Dan Williams  4 files, -52/+9
In preparation for adding more register setup to the cxl_port_add_dport() path (for RAS register mapping), move the dport creation event to a driver callback. This achieves two goals: it puts driver operations logically where they belong, in a driver, and it obviates the gymnastics of DECLARE_TESTABLE(), which just makes a mess of grepping for CXL symbols. In other words, a driver callback is less of an ongoing maintenance burden than this DECLARE_TESTABLE arrangement, which does not scale and diminishes the grep-ability of the codebase. cxl_port_add_dport() moves mostly unmodified from drivers/cxl/core/port.c. The only deliberate change is that it now assumes that the device_lock is held on entry and the driver is attached (just like cxl_port_probe()). Reviewed-by: Terry Bowman <terry.bowman@amd.com> Tested-by: Terry Bowman <terry.bowman@amd.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Link: https://patch.msgid.link/20260131000403.2135324-6-dan.j.williams@intel.com Signed-off-by: Dave Jiang <dave.jiang@intel.com>
2026-01-31  selftests/mm: have the harness run each test category separately  Mark Brown  31 files, -1/+152
At present the mm selftests are integrated into the kselftest harness by having it run run_vmtests.sh and letting it pick its default set of tests to invoke, rather than by telling the kselftest framework about each test program individually as is more standard. This has some unfortunate interactions with the kselftest harness:

- If any of the tests hangs, the harness will kill the entire mm selftests run rather than just the individual test, meaning no further tests get run.
- The timeout applied by the harness is applied to the whole run rather than an individual test, which frequently leads to the suite not being completed in production testing.

Deploy a crude but effective mitigation for these issues by telling the kselftest framework to run each of the test categories that run_vmtests.sh has separately. Since kselftest really wants to run test programs, this is done by providing a trivial wrapper script for each category that invokes run_vmtests.sh; this is not a thing of great elegance, but it is clear and simple. Since run_vmtests.sh is doing runtime support detection, scenario enumeration and setup for many of the tests, we can't consistently tell the framework about the individual test programs. This has the side effect of reordering the tests; hopefully the testing is not overly sensitive to this. Link: https://lkml.kernel.org/r/20260123-selftests-mm-run-suites-separately-v2-1-3e934edacbfa@kernel.org Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: David Hildenbrand <david@kernel.org> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Leon Romanovsky <leon@kernel.org> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-01-31  selftests/damon/wss_estimation: deduplicate failed samples output  SeongJae Park  1 file, -1/+5
When the test fails, it shows whole sampled working set size measurements. The purpose is showing the distribution of the measured values, to let the tester know if it was just intermittent failure. Multiple same values on the output are therefore unnecessary. It was not a big deal since the test was failing only once in the past. But the test can now fail multiple times with increased working set size, until it passes or the working set size reaches a limit. Hence the noisy output can be quite long and annoying. Print only the deduplicated distribution information. Link: https://lkml.kernel.org/r/20260117020731.226785-6-sj@kernel.org Signed-off-by: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-01-31  selftests/damon/wss_estimation: ensure number of collected wss  SeongJae Park  1 file, -2/+4
DAMON selftest for working set size estimation collects DAMON's working set size measurements of the running artificial memory access generator program until the program is finished. Depending on how quickly the program finishes, and how quickly DAMON starts, the number of collected working set size measurements may vary, and make the test results unreliable. Ensure it collects 40 measurements by using the repeat mode of the artificial memory access generator program, and finish the measurements only after the desired number of collections are made. Link: https://lkml.kernel.org/r/20260117020731.226785-5-sj@kernel.org Signed-off-by: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-01-31  selftests/damon/access_memory: add repeat mode  SeongJae Park  1 file, -8/+21
'access_memory' is an artificial memory access generator program that is used for a few DAMON selftests. It accesses a given number of regions one by one only once, and exits. Depending on systems, the test workload may exit faster than expected, making the tests unreliable. For reliable control of the artificial memory access pattern, add a mode to make it repeat running. Link: https://lkml.kernel.org/r/20260117020731.226785-4-sj@kernel.org Signed-off-by: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
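Conceptually, the repeat mode turns the single pass over the regions into a loop, roughly as sketched below (the region count, region size, and option parsing are placeholders for illustration, not the real access_memory.c interface):

#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    const int nr_regions = 10;
    const size_t region_sz = 10 << 20;   /* 10 MiB per region */
    int repeat = argc > 1 && !strcmp(argv[1], "repeat");
    char *regions[10];
    int i;

    for (i = 0; i < nr_regions; i++) {
        regions[i] = malloc(region_sz);
        if (!regions[i])
            return 1;
    }

    /* One pass by default; with the repeat option, keep regenerating the
     * same access pattern until the controlling test kills the process. */
    do {
        for (i = 0; i < nr_regions; i++)
            memset(regions[i], i, region_sz);
    } while (repeat);

    return 0;
}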
2026-01-31  selftests/damon/wss_estimation: test for up to 160 MiB working set size  SeongJae Park  1 file, -6/+23
DAMON reads and writes Accessed bits of page tables without manual TLB flush for two reasons. First, it minimizes the overhead. Second, real systems that need DAMON are expected to be memory intensive enough to cause periodic TLB flushes. For test setups that use small test workloads, however, the system's TLB could be big enough to cover whole or most accesses of the test workload. In this case, no page table walk happens and DAMON cannot show any access from the test workload. The test workload for DAMON's working set size estimation selftest is such a case. It accesses only 10 MiB working set, and it turned out there are test setups that have TLBs large enough to cover the 10 MiB data accesses. As a result, the test fails depending on the test machine. Make it more reliable by trying larger working sets up to 160 MiB when it fails. Link: https://lkml.kernel.org/r/20260117020731.226785-3-sj@kernel.org Signed-off-by: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-01-31  selftests/damon/sysfs_memcg_path_leak.sh: use kmemleak  SeongJae Park  1 file, -12/+14
Patch series "selftests/damon: improve leak detection and wss estimation reliability". Two DAMON selftests, namely 'sysfs_memcg_leak' and 'sysfs_update_schemes_tried_regions_wss_estimation', frequently show intermittent failures due to their unreliable leak detection and working set size estimation. Make those more reliable. This patch (of 5): sysfs_memcg_path_leak.sh determines if the memory leak has happened by seeing if the Slab size in /proc/meminfo increases more than expected after an action. Depending on the system and background workloads, the reasonable expectation varies. For that reason, the test frequently shows intermittent failures. Use kmemleak, which is much more reliable and correct, instead. Link: https://lkml.kernel.org/r/20260117020731.226785-1-sj@kernel.org Link: https://lkml.kernel.org/r/20260117020731.226785-2-sj@kernel.org Signed-off-by: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-01-31  selftests/mm: remove virtual_address_range test  Lorenzo Stoakes  4 files, -276/+0
This self test is asserting internal implementation details and is highly vulnerable to internal kernel changes as a result. It is currently failing locally from at least v6.17, and it seems that it may have been failing for longer in many configurations/hardware as it skips if e.g. CONFIG_ANON_VMA_NAME is not specified. With these skips and the fact that run_vmtests.sh won't run the tests in certain configurations it is likely we have simply missed this test being broken in CI for a long while. I have tried multiple versions of these tests and am unable to find a working bisect as previous versions of the test fail also. The tests are essentially mmap()'ing a series of mappings with no hint and asserting what the get_unmapped_area*() functions will come up with, with seemingly few checks for what other mappings may already be in place. It then appears to be mmap()'ing with a hint, and making a series of similar assertions about the internal implementation details of the hinting logic. Commit 0ef3783d7558 ("selftests/mm: add support to test 4PB VA on PPC64"), commit 3bd6137220bb ("selftests/mm: virtual_address_range: avoid reading from VM_IO mappings"), and especially commit a005145b9c96 ("selftests/mm: virtual_address_range: mmap() without PROT_WRITE") are good examples of the whack-a-mole nature of maintaining this test. The last commit there being particularly pertinent as it was accounting for an internal implementation detail change that really should have no bearing on self-tests, that is commit e93d2521b27f ("x86/vdso: Split virtual clock pages into dedicated mapping"). The purpose of the mm self-tests are to assert attributes about the API exposed to users, and to ensure that expectations are met. This test is emphatically not doing this, rather making a series of assumptions about internal implementation details and asserting them. It therefore, sadly, seems that the best course is to remove this test altogether. Link: https://lkml.kernel.org/r/20260116132053.857887-1-lorenzo.stoakes@oracle.com Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Acked-by: SeongJae Park <sj@kernel.org> Cc: David Hildenbrand <david@kernel.org> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Mark Brown <broonie@kernel.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-01-31  selftests/mm: report SKIP in pfnmap if a check fails  Kevin Brodsky  1 file, -31/+53
pfnmap currently checks the target file in FIXTURE_SETUP(pfnmap), meaning once for every test, and skips the test if any check fails. The target file is the same for every test so this is a little overkill. More importantly, this approach means that the whole suite will report PASS even if all the tests are skipped because kernel configuration (e.g. CONFIG_STRICT_DEVMEM=y) prevented /dev/mem from being mapped, for instance. Let's ensure that KSFT_SKIP is returned as exit code if any check fails by performing the checks in pfnmap_init(), run once. That function also takes care of finding the offset of the pages to be mapped and saves it in a global. The file is now opened only once and the fd saved in a global, but it is still mapped/unmapped for every test, as some of them modify the mapping. Link: https://lkml.kernel.org/r/20260122170224.4056513-10-kevin.brodsky@arm.com Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org> Cc: Dev Jain <dev.jain@arm.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mark Brown <broonie@kernel.org> Cc: Paolo Abeni <pabeni@redhat.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Usama Anjum <Usama.Anjum@arm.com> Cc: wang lian <lianux.mm@gmail.com> Cc: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
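A hedged sketch of the "probe once, then skip the whole suite" idea (the /dev/mem path and exit code 4 for SKIP follow kselftest convention, but the real pfnmap.c init performs more checks and picks a suitable offset):

#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

#define KSFT_SKIP 4

static int devmem_fd = -1;
static off_t pfn_offset;   /* offset of the pages to be mapped */

void pfnmap_init(void)
{
    size_t pagesize = getpagesize();
    void *addr;

    devmem_fd = open("/dev/mem", O_RDONLY);
    if (devmem_fd < 0)
        exit(KSFT_SKIP);   /* e.g. CONFIG_STRICT_DEVMEM=y */

    /* Probe that the chosen offset is mappable at all; the individual
     * tests still map and unmap it themselves. */
    addr = mmap(NULL, pagesize, PROT_READ, MAP_SHARED, devmem_fd, pfn_offset);
    if (addr == MAP_FAILED)
        exit(KSFT_SKIP);
    munmap(addr, pagesize);
}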
2026-01-31  selftests/mm: fix exit code in pagemap_ioctl  Kevin Brodsky  1 file, -3/+3
Make sure pagemap_ioctl exits with an appropriate value:

* If the tests are run, call ksft_finished() to report the right status instead of reporting PASS unconditionally.
* Report SKIP if userfaultfd isn't available (in line with other tests).
* Report FAIL if we failed to open /proc/self/pagemap, as this file has been added a long time ago and doesn't depend on any CONFIG option (returning -EINVAL from main() is meaningless).

Link: https://lkml.kernel.org/r/20260122170224.4056513-9-kevin.brodsky@arm.com Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org> Acked-by: SeongJae Park <sj@kernel.org> Reviewed-by: wang lian <lianux.mm@gmail.com> Reviewed-by: Dev Jain <dev.jain@arm.com> Cc: Usama Anjum <Usama.Anjum@arm.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Paolo Abeni <pabeni@redhat.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-01-31  selftests/mm: fix faulting-in code in pagemap_ioctl test  Kevin Brodsky  1 file, -5/+4
One of the pagemap_ioctl tests attempts to fault in pages by memcpy()'ing them to an unused buffer. This probably worked originally, but since commit 46036188ea1f ("selftests/mm: build with -O2") the compiler is free to optimise away that unused buffer and the memcpy() with it. As a result there might not be any resident page in the mapping and the test may fail. We don't need to copy all that memory anyway. Just fault in every page. While at it also make sure to compute the number of pages once using simple integer arithmetic instead of ceilf() and implicit conversions. Link: https://lkml.kernel.org/r/20260122170224.4056513-8-kevin.brodsky@arm.com Fixes: 46036188ea1f ("selftests/mm: build with -O2") Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org> Reviewed-by: Dev Jain <dev.jain@arm.com> Reviewed-by: Muhammad Usama Anjum <usama.anjum@arm.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mark Brown <broonie@kernel.org> Cc: Paolo Abeni <pabeni@redhat.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Cc: wang lian <lianux.mm@gmail.com> Cc: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-01-31  selftests/mm: introduce helper to read every page  Kevin Brodsky  4 files, -19/+12
FORCE_READ(*addr) ensures that the compiler will emit a load from addr. Several tests need to trigger such a load for a range of pages, ensuring that every page is faulted in, if it wasn't already. Introduce a new helper force_read_pages() that does exactly that and replace existing loops with a call to it. The step size (regular/huge page size) is preserved for all loops, except in split_huge_page_test. Reading every byte is unnecessary; we now read every huge page, matching the following call to check_huge_file(). Link: https://lkml.kernel.org/r/20260122170224.4056513-7-kevin.brodsky@arm.com Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Reviewed-by: Dev Jain <dev.jain@arm.com> Reviewed-by: Muhammad Usama Anjum <usama.anjum@arm.com> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mark Brown <broonie@kernel.org> Cc: Paolo Abeni <pabeni@redhat.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Cc: wang lian <lianux.mm@gmail.com> Cc: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
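A sketch of what the macro and helper amount to (the exact definitions and signatures in the selftests' shared vm_util header may differ; this is illustrative):

#include <stddef.h>

/* Force the compiler to emit a real load from the given lvalue, even at
 * -O2 where an unused read would otherwise be optimised away. */
#define FORCE_READ(x) (*(const volatile typeof(x) *)&(x))

/* Touch one byte per page so that every page in the range is faulted in. */
void force_read_pages(const char *addr, size_t nr_pages, size_t pagesize)
{
    size_t i;

    for (i = 0; i < nr_pages; i++)
        FORCE_READ(addr[i * pagesize]);
}

As noted above, callers pick the step size: regular pages in most tests, huge pages in split_huge_page_test.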
2026-01-31  selftests/mm: check that FORCE_READ() succeeded  Kevin Brodsky  1 file, -10/+33
Many cow tests rely on FORCE_READ() to populate pages. Introduce a helper to make sure that the pages are actually populated, and fail otherwise. Link: https://lkml.kernel.org/r/20260122170224.4056513-6-kevin.brodsky@arm.com Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org> Cc: Dev Jain <dev.jain@arm.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mark Brown <broonie@kernel.org> Cc: Paolo Abeni <pabeni@redhat.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Usama Anjum <Usama.Anjum@arm.com> Cc: wang lian <lianux.mm@gmail.com> Cc: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
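One way such a population check could work is via mincore(2), as sketched below; this mechanism is an assumption for illustration, and the actual helper in the cow tests may rely on a different interface (e.g. pagemap):

#include <stdbool.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

/* Return true only if every page of the (page-aligned) range is resident. */
bool range_is_populated(void *addr, size_t size)
{
    size_t pagesize = getpagesize();
    size_t nr_pages = (size + pagesize - 1) / pagesize;
    unsigned char *vec = malloc(nr_pages);
    bool populated = true;
    size_t i;

    if (!vec || mincore(addr, size, vec)) {
        free(vec);
        return false;
    }
    for (i = 0; i < nr_pages; i++) {
        if (!(vec[i] & 1)) {
            populated = false;
            break;
        }
    }
    free(vec);
    return populated;
}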
2026-01-31  selftests/mm: fix usage of FORCE_READ() in cow tests  Kevin Brodsky  1 file, -8/+8
Commit 5bbc2b785e63 ("selftests/mm: fix FORCE_READ to read input value correctly") modified FORCE_READ() to take a value instead of a pointer. It also changed most of the call sites accordingly, but missed many of them in cow.c. In those cases, we ended up with the pointer itself being read, not the memory it points to. No failure occurred as a result, so it looks like the tests work just fine without faulting in. However, the huge_zeropage tests explicitly check that pages are populated, so those became skipped. Convert all the remaining FORCE_READ() to fault in the mapped page, as was originally intended. This allows the huge_zeropage tests to run again (3 tests in total). Link: https://lkml.kernel.org/r/20260122170224.4056513-5-kevin.brodsky@arm.com Fixes: 5bbc2b785e63 ("selftests/mm: fix FORCE_READ to read input value correctly") Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Acked-by: SeongJae Park <sj@kernel.org> Reviewed-by: wang lian <lianux.mm@gmail.com> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org> Reviewed-by: Dev Jain <dev.jain@arm.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mark Brown <broonie@kernel.org> Cc: Paolo Abeni <pabeni@redhat.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Usama Anjum <Usama.Anjum@arm.com> Cc: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-01-31  selftests/mm: pass down full CC and CFLAGS to check_config.sh  Kevin Brodsky  2 files, -3/+2
check_config.sh checks that liburing is available by running the compiler provided as its first argument. This makes two assumptions:

1. CC consists of only one word
2. No extra flag is required

Unfortunately, there are many situations where these assumptions don't hold. For instance:

- When using Clang, CC consists of multiple words
- When cross-compiling, extra flags may be required to allow the compiler to find headers

Remove these assumptions by passing down CC and CFLAGS as-is from the Makefile, so that the same command line is used as when actually building the tests. Link: https://lkml.kernel.org/r/20260122170224.4056513-4-kevin.brodsky@arm.com Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Dev Jain <dev.jain@arm.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Paolo Abeni <pabeni@redhat.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Usama Anjum <Usama.Anjum@arm.com> Cc: wang lian <lianux.mm@gmail.com> Cc: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-01-31  selftests/mm: remove flaky header check  Kevin Brodsky  1 file, -4/+0
Commit 96ed62ea0298 ("mm: page_frag: fix a compile error when kernel is not compiled") introduced a check to avoid attempting to build the page_frag module if <linux/page_frag_cache.h> is missing. Unfortunately this check only works if KDIR points to /lib/modules/... or an in-tree kernel build. It always fails if KDIR points to an out-of-tree build (i.e. when the kernel was built with O=... make) because only generated headers are present under $KDIR/include/ in that case. A recent commit switched KDIR to default to the kernel's build directory, so that check is no longer justified. Link: https://lkml.kernel.org/r/20260122170224.4056513-3-kevin.brodsky@arm.com Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Cc: Paolo Abeni <pabeni@redhat.com> Cc: Yunsheng Lin <linyunsheng@huawei.com> Cc: David Hildenbrand <david@kernel.org> Cc: Dev Jain <dev.jain@arm.com> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Usama Anjum <Usama.Anjum@arm.com> Cc: wang lian <lianux.mm@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-01-31  selftests/mm: default KDIR to build directory  Kevin Brodsky  2 files, -2/+2
Patch series "Various mm kselftests improvements/fixes", v3. Various improvements/fixes for the mm kselftests:

- Patch 1-3 extend support for more build configurations: out-of-tree $KDIR, cross-compilation, etc.
- Patch 4-7 fix issues related to faulting in pages, introducing a new helper for that purpose.
- Patch 8 fixes the value returned by pagemap_ioctl (PASS was always returned, which explains why the issue fixed in patch 6 went unnoticed).
- Patch 9 improves the exit code of pfnmap.

Net results:

- 1 test no longer fails (patch 7)
- 3 tests are no longer skipped (patch 4)
- More accurate return values for whole suites (patch 8, 9)
- Extra tests are more likely to be built (patch 1-3)

This patch (of 9): KDIR currently defaults to the running kernel's modules directory when building the page_frag module. The underlying assumption is that most users build the kselftests in order to run them against the system they're built on. This assumption seems questionable, and there is no guarantee that the module can actually be built against the running kernel. Switch the default value of KDIR to the kernel's build directory, i.e. $(O) if O= or KBUILD_OUTPUT= is used, and the source directory otherwise. This seems like the least surprising option: the test module is built against the kernel that has been previously built. Note: we can't use $(top_srcdir) in mm/Makefile because it is only defined once lib.mk is included. Link: https://lkml.kernel.org/r/20260122170224.4056513-1-kevin.brodsky@arm.com Link: https://lkml.kernel.org/r/20260122170224.4056513-2-kevin.brodsky@arm.com Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Cc: David Hildenbrand <david@kernel.org> Cc: Dev Jain <dev.jain@arm.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mark Brown <broonie@kernel.org> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Paolo Abeni <pabeni@redhat.com> Cc: SeongJae Park <sj@kernel.org> Cc: Usama Anjum <Usama.Anjum@arm.com> Cc: wang lian <lianux.mm@gmail.com> Cc: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-01-31  selftests: ublk: improve I/O ordering test with bpftrace  Ming Lei  4 files, -65/+54
Remove test_generic_01.sh since the block layer may reorder I/O, making the test prone to false positives. Apply the improvements to test_generic_02.sh instead, which is supposed to cover ublk dispatch I/O order. Rework test_generic_02 to verify that ublk dispatch doesn't reorder I/O by comparing request start order with completion order using bpftrace.

The bpftrace script now:
- Tracks each request's start sequence number in a map keyed by sector
- On completion, verifies the request's start order matches the expected completion order
- Reports any out-of-order completions detected

The test script:
- Waits until the bpftrace BEGIN code block has run
- Pins fio to CPU 0 for deterministic behavior
- Uses the block_io_start and block_rq_complete tracepoints
- Checks the bpftrace output for reordering errors

Reported-and-tested-by: Alexander Atanasov <alex@zazolabs.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-01-31  selftests: ublk: reorganize tests into integrity and recover groups  Ming Lei  7 files, -6/+8
Move integrity-focused tests into new 'integrity' group:

- test_null_04.sh -> test_integrity_01.sh
- test_loop_08.sh -> test_integrity_02.sh

Move recovery-focused tests into new 'recover' group:

- test_generic_04.sh -> test_recover_01.sh
- test_generic_05.sh -> test_recover_02.sh
- test_generic_11.sh -> test_recover_03.sh
- test_generic_14.sh -> test_recover_04.sh

Update Makefile to reflect the reorganization. Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-01-31  selftests: ublk: increase timeouts for parallel test execution  Ming Lei  1 file, -5/+5
When running tests in parallel with high JOBS count (e.g., JOBS=64), the existing timeouts can be insufficient due to system load:

- Increase state wait loops from 20/50 to 100 iterations in _recover_ublk_dev(), __ublk_quiesce_dev(), and __ublk_kill_daemon() to handle slower state transitions under heavy load
- Add --timeout=20 to udevadm settle calls to prevent indefinite hangs when udev event queue is overwhelmed by rapid device creation/deletion

Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-01-31  selftests: ublk: add _ublk_sleep helper for parallel execution  Ming Lei  3 files, -1/+12
Add _ublk_sleep() helper function that uses different sleep times depending on whether tests run in parallel or sequential mode. Usage: _ublk_sleep <normal_secs> <parallel_secs> Export JOBS variable from Makefile so test scripts can detect parallel execution, and use _ublk_sleep in test_part_02.sh to handle the partition scan delay (1s normal, 5s parallel). Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-01-31  selftests: ublk: add group-based test targets  Ming Lei  1 file, -0/+36
Add convenient Makefile targets for running specific test groups:

- run_generic, run_batch, run_null, run_loop, run_stripe, run_stress, etc.
- run_all for running all tests

Test groups are auto-detected from TEST_PROGS using pattern matching (test_<group>_<num>.sh -> group), and targets are generated dynamically using define/eval templates.

Supports parallel execution via JOBS variable:
- JOBS=1 (default): sequential with kselftest TAP output
- JOBS>1: parallel execution with xargs -P

Usage examples:
  make run_null            # Sequential execution
  make run_stress JOBS=4   # Parallel with 4 jobs
  make run_all JOBS=8      # Run all tests with 8 parallel jobs

With JOBS=8, running time of `make run_all` is reduced to 2m2s from 6m5s in my test VM. Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-01-31  selftests: ublk: track created devices for per-test cleanup  Ming Lei  1 file, -1/+12
Track device IDs in UBLK_DEVS array when created. Update _cleanup_test() to only delete devices created by this test instead of using 'del -a' which removes all devices. This prepares for running tests concurrently where each test should only clean up its own devices. Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-01-31  selftests: ublk: add _ublk_del_dev helper function  Ming Lei  4 files, -14/+15
Add _ublk_del_dev() to delete a specific ublk device by ID and use it in all test scripts instead of calling UBLK_PROG directly. Also remove unused _remove_ublk_devices() function. Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-01-31  selftests: ublk: refactor test_loop_08 into separate functions  Ming Lei  1 file, -84/+115
Encapsulate each test case in its own function for better organization and maintainability:

- _setup_device(): device and backfile initialization
- _test_fill_and_verify(): initial data population
- _test_corrupted_reftag(): reftag corruption detection test
- _test_corrupted_data(): data corruption detection test
- _test_bad_apptag(): apptag mismatch detection test

Also fix temp file creation to use ${UBLK_TEST_DIR}/fio_err_XXXXX instead of creating in current directory. Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-01-31  selftests: ublk: simplify UBLK_TEST_DIR handling  Ming Lei  1 file, -4/+1
Remove intermediate TDIR variable and set UBLK_TEST_DIR directly in _prep_test(). Remove default initialization since the directory is created dynamically when tests run. Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-01-31  selftests/bpf: Enable get_func_args and get_func_ip tests on arm64  Leon Hwang  2 files, -2/+2
Allow the get_func_args and get_func_ip fsession selftests to run on arm64. Acked-by: Puranjay Mohan <puranjay@kernel.org> Tested-by: Puranjay Mohan <puranjay@kernel.org> Signed-off-by: Leon Hwang <leon.hwang@linux.dev> Link: https://lore.kernel.org/r/20260131144950.16294-4-leon.hwang@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-01-31  bpf: Add bpf_jit_supports_fsession()  Leon Hwang  1 file, -8/