| field | value | |
|---|---|---|
| author | Linus Torvalds <torvalds@linux-foundation.org> | 2026-02-10 12:28:44 -0800 |
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2026-02-10 12:28:44 -0800 |
| commit | 0923fd0419a1a2c8846e15deacac11b619e996d9 | |
| tree | 7cc5fecc1680f5881f1d4183be400b51c81e6943 | /rust |
| parent | 4d84667627c4ff70826b349c449bbaf63b9af4e5 | |
| parent | 7a562d5d2396c9c78fbbced7ae81bcfcfa0fde3f | |
Merge tag 'locking-core-2026-02-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking updates from Ingo Molnar:
"Lock debugging:
- Implement compiler-driven static analysis locking context checking,
using the upcoming Clang 22 compiler's context analysis features
(Marco Elver)
We removed Sparse context analysis support because, prior to
removal, even a defconfig kernel produced 1,700+ context tracking
Sparse warnings, the overwhelming majority of which are false
positives. On an allmodconfig kernel the number of false positive
context tracking Sparse warnings grows to over 5,200... On the plus
side of the balance, actual locking bugs found by Sparse context
analysis are also rather ... sparse: I found only 3 such commits in
the last 3 years. So the rate of false positives and the
maintenance overhead are rather high, and there appears to be no
active policy in place to achieve a zero-warnings baseline that
would shift the annotation and fixing work to the developers who
introduce new code.
Clang context analysis is more complete and more aggressive in
trying to find bugs, at least in principle. It also has a different
model for enabling it: it's enabled subsystem by subsystem, which
results in zero warnings on all relevant kernel builds (as far as
our testing managed to cover it). That allowed us to enable it by
default, similar to other compiler warnings, with the expectation
that there are no warnings going forward. This enforces a
zero-warnings baseline on Clang 22+ builds (which are still limited
in distribution, admittedly).
Hopefully the Clang approach can lead to a more maintainable
zero-warnings status quo and policy, with more and more subsystems
and drivers enabling the feature. Context tracking can be enabled
for all kernel code via WARN_CONTEXT_ANALYSIS_ALL=y (default
disabled), but this will generate a lot of false positives.
( Having said that, Sparse support could still be added back,
if anyone is interested - the removal patch is still
relatively straightforward to revert at this stage. )
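The annotations the checker consumes are, broadly, the kernel's long-standing context-analysis macros such as `__must_hold()`, `__acquires()` and `__releases()`, which this series retargets at Clang. A minimal sketch of the bug class this catches, assuming those macro names and a context-analysis-enabled subsystem (illustrative, not code from this merge):

```c
#include <linux/spinlock.h>

struct counter {
	spinlock_t lock;
	u64 value;
};

/* Caller must already hold c->lock; the analysis warns at any call
 * site where it cannot prove the lock is held. */
static void counter_add_locked(struct counter *c, u64 n) __must_hold(&c->lock)
{
	c->value += n;
}

static void counter_add(struct counter *c, u64 n)
{
	spin_lock(&c->lock);
	counter_add_locked(c, n);	/* OK: lock is held here */
	spin_unlock(&c->lock);
	/* Calling counter_add_locked(c, n) here would be flagged at
	 * compile time on clang-22+ builds. */
}
```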
Rust integration updates (Alice Ryhl, Fujita Tomonori, Boqun Feng):
- Add support for Atomic<i8/i16/bool> and replace most Rust-native
AtomicBool usages with Atomic<bool> (see the sketch after this list)
- Clean up LockClassKey and improve its documentation
- Add missing Send and Sync trait implementation for SetOnce
- Make ARef Unpin as it is supposed to be
- Add __rust_helper to a few Rust helpers as a preparation for
helper LTO
- Inline various lock related functions to avoid additional function
calls
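As a usage sketch for the `Atomic<bool>` item above, based on the `atomic_bool_tests` added in this merge (ordering names per `kernel::sync::atomic::ordering`):

```rust
use kernel::sync::atomic::{
    ordering::{Acquire, Full, Relaxed, Release},
    Atomic,
};

let ready = Atomic::new(false);

ready.store(true, Release);             // publish a flag...
assert_eq!(true, ready.load(Acquire));  // ...and observe it with acquire semantics

// cmpxchg(expected_old, new, ordering) returns Ok(old) on success and
// Err(current) on failure, as exercised by the new tests:
assert_eq!(Ok(true), ready.cmpxchg(true, false, Full));
assert_eq!(false, ready.load(Relaxed));
```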
WW mutexes:
- Extend ww_mutex tests and other test-ww_mutex updates (John
Stultz)
Misc fixes and cleanups:
- rcu: Mark lockdep_assert_rcu_helper() __always_inline (Arnd
Bergmann)
- locking/local_lock: Include more missing headers (Peter Zijlstra)
- seqlock: fix scoped_seqlock_read kernel-doc (Randy Dunlap)
- rust: sync: Replace `kernel::c_str!` with C-Strings (Tamir
Duberstein)"
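For that last item, the replacement is mechanical; a rough before/after sketch (the variable name is illustrative):

```rust
// Before: kernel-specific macro building a NUL-terminated string.
let name = kernel::c_str!("my_lock");

// After: a native Rust C-string literal (stable since Rust 1.77), as
// used by the new static_lock_class! doc example (c"new_locked_int").
let name = c"my_lock";
```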
* tag 'locking-core-2026-02-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (90 commits)
locking/rwlock: Fix write_trylock_irqsave() with CONFIG_INLINE_WRITE_TRYLOCK
rcu: Mark lockdep_assert_rcu_helper() __always_inline
compiler-context-analysis: Remove __assume_ctx_lock from initializers
tomoyo: Use scoped init guard
crypto: Use scoped init guard
kcov: Use scoped init guard
compiler-context-analysis: Introduce scoped init guards
cleanup: Make __DEFINE_LOCK_GUARD handle commas in initializers
seqlock: fix scoped_seqlock_read kernel-doc
tools: Update context analysis macros in compiler_types.h
rust: sync: Replace `kernel::c_str!` with C-Strings
rust: sync: Inline various lock related methods
rust: helpers: Move #define __rust_helper out of atomic.c
rust: wait: Add __rust_helper to helpers
rust: time: Add __rust_helper to helpers
rust: task: Add __rust_helper to helpers
rust: sync: Add __rust_helper to helpers
rust: refcount: Add __rust_helper to helpers
rust: rcu: Add __rust_helper to helpers
rust: processor: Add __rust_helper to helpers
...
Diffstat (limited to 'rust')
| Mode | File | Lines changed |
|---|---|---|
| -rw-r--r-- | rust/helpers/atomic.c | 7 |
| -rw-r--r-- | rust/helpers/atomic_ext.c | 139 |
| -rw-r--r-- | rust/helpers/barrier.c | 6 |
| -rw-r--r-- | rust/helpers/blk.c | 4 |
| -rw-r--r-- | rust/helpers/completion.c | 2 |
| -rw-r--r-- | rust/helpers/cpu.c | 2 |
| -rw-r--r-- | rust/helpers/helpers.c | 3 |
| -rw-r--r-- | rust/helpers/mutex.c | 13 |
| -rw-r--r-- | rust/helpers/processor.c | 2 |
| -rw-r--r-- | rust/helpers/rcu.c | 4 |
| -rw-r--r-- | rust/helpers/refcount.c | 10 |
| -rw-r--r-- | rust/helpers/signal.c | 2 |
| -rw-r--r-- | rust/helpers/spinlock.c | 13 |
| -rw-r--r-- | rust/helpers/sync.c | 4 |
| -rw-r--r-- | rust/helpers/task.c | 24 |
| -rw-r--r-- | rust/helpers/time.c | 14 |
| -rw-r--r-- | rust/helpers/wait.c | 2 |
| -rw-r--r-- | rust/kernel/list/arc.rs | 14 |
| -rw-r--r-- | rust/kernel/sync.rs | 73 |
| -rw-r--r-- | rust/kernel/sync/aref.rs | 3 |
| -rw-r--r-- | rust/kernel/sync/atomic/internal.rs | 114 |
| -rw-r--r-- | rust/kernel/sync/atomic/predefine.rs | 55 |
| -rw-r--r-- | rust/kernel/sync/lock.rs | 7 |
| -rw-r--r-- | rust/kernel/sync/lock/global.rs | 2 |
| -rw-r--r-- | rust/kernel/sync/lock/mutex.rs | 5 |
| -rw-r--r-- | rust/kernel/sync/lock/spinlock.rs | 5 |
| -rw-r--r-- | rust/kernel/sync/set_once.rs | 8 |

27 files changed, 428 insertions(+), 109 deletions(-)
```diff
diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c
index cf06b7ef9a1c..4b24eceef5fc 100644
--- a/rust/helpers/atomic.c
+++ b/rust/helpers/atomic.c
@@ -11,11 +11,6 @@
 
 #include <linux/atomic.h>
 
-// TODO: Remove this after INLINE_HELPERS support is added.
-#ifndef __rust_helper
-#define __rust_helper
-#endif
-
 __rust_helper int
 rust_helper_atomic_read(const atomic_t *v)
 {
@@ -1037,4 +1032,4 @@ rust_helper_atomic64_dec_if_positive(atomic64_t *v)
 }
 
 #endif /* _RUST_ATOMIC_API_H */
-// 615a0e0c98b5973a47fe4fa65e92935051ca00ed
+// e4edb6174dd42a265284958f00a7cea7ddb464b1
diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
new file mode 100644
index 000000000000..7d0c2bd340da
--- /dev/null
+++ b/rust/helpers/atomic_ext.c
@@ -0,0 +1,139 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <asm/barrier.h>
+#include <asm/rwonce.h>
+#include <linux/atomic.h>
+
+__rust_helper s8 rust_helper_atomic_i8_read(s8 *ptr)
+{
+	return READ_ONCE(*ptr);
+}
+
+__rust_helper s8 rust_helper_atomic_i8_read_acquire(s8 *ptr)
+{
+	return smp_load_acquire(ptr);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_read(s16 *ptr)
+{
+	return READ_ONCE(*ptr);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_read_acquire(s16 *ptr)
+{
+	return smp_load_acquire(ptr);
+}
+
+__rust_helper void rust_helper_atomic_i8_set(s8 *ptr, s8 val)
+{
+	WRITE_ONCE(*ptr, val);
+}
+
+__rust_helper void rust_helper_atomic_i8_set_release(s8 *ptr, s8 val)
+{
+	smp_store_release(ptr, val);
+}
+
+__rust_helper void rust_helper_atomic_i16_set(s16 *ptr, s16 val)
+{
+	WRITE_ONCE(*ptr, val);
+}
+
+__rust_helper void rust_helper_atomic_i16_set_release(s16 *ptr, s16 val)
+{
+	smp_store_release(ptr, val);
+}
+
+/*
+ * xchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
+ * architecture providing xchg() support for i8 and i16.
+ *
+ * The architectures that currently support Rust (x86_64, armv7,
+ * arm64, riscv, and loongarch) satisfy these requirements.
+ */
+__rust_helper s8 rust_helper_atomic_i8_xchg(s8 *ptr, s8 new)
+{
+	return xchg(ptr, new);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_xchg(s16 *ptr, s16 new)
+{
+	return xchg(ptr, new);
+}
+
+__rust_helper s8 rust_helper_atomic_i8_xchg_acquire(s8 *ptr, s8 new)
+{
+	return xchg_acquire(ptr, new);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_xchg_acquire(s16 *ptr, s16 new)
+{
+	return xchg_acquire(ptr, new);
+}
+
+__rust_helper s8 rust_helper_atomic_i8_xchg_release(s8 *ptr, s8 new)
+{
+	return xchg_release(ptr, new);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_xchg_release(s16 *ptr, s16 new)
+{
+	return xchg_release(ptr, new);
+}
+
+__rust_helper s8 rust_helper_atomic_i8_xchg_relaxed(s8 *ptr, s8 new)
+{
+	return xchg_relaxed(ptr, new);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_xchg_relaxed(s16 *ptr, s16 new)
+{
+	return xchg_relaxed(ptr, new);
+}
+
+/*
+ * try_cmpxchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
+ * architecture providing try_cmpxchg() support for i8 and i16.
+ *
+ * The architectures that currently support Rust (x86_64, armv7,
+ * arm64, riscv, and loongarch) satisfy these requirements.
+ */
+__rust_helper bool rust_helper_atomic_i8_try_cmpxchg(s8 *ptr, s8 *old, s8 new)
+{
+	return try_cmpxchg(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i16_try_cmpxchg(s16 *ptr, s16 *old, s16 new)
+{
+	return try_cmpxchg(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i8_try_cmpxchg_acquire(s8 *ptr, s8 *old, s8 new)
+{
+	return try_cmpxchg_acquire(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i16_try_cmpxchg_acquire(s16 *ptr, s16 *old, s16 new)
+{
+	return try_cmpxchg_acquire(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i8_try_cmpxchg_release(s8 *ptr, s8 *old, s8 new)
+{
+	return try_cmpxchg_release(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i16_try_cmpxchg_release(s16 *ptr, s16 *old, s16 new)
+{
+	return try_cmpxchg_release(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i8_try_cmpxchg_relaxed(s8 *ptr, s8 *old, s8 new)
+{
+	return try_cmpxchg_relaxed(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i16_try_cmpxchg_relaxed(s16 *ptr, s16 *old, s16 new)
+{
+	return try_cmpxchg_relaxed(ptr, old, new);
+}
diff --git a/rust/helpers/barrier.c b/rust/helpers/barrier.c
index cdf28ce8e511..fed8853745c8 100644
--- a/rust/helpers/barrier.c
+++ b/rust/helpers/barrier.c
@@ -2,17 +2,17 @@
 
 #include <asm/barrier.h>
 
-void rust_helper_smp_mb(void)
+__rust_helper void rust_helper_smp_mb(void)
 {
 	smp_mb();
 }
 
-void rust_helper_smp_wmb(void)
+__rust_helper void rust_helper_smp_wmb(void)
 {
 	smp_wmb();
 }
 
-void rust_helper_smp_rmb(void)
+__rust_helper void rust_helper_smp_rmb(void)
 {
 	smp_rmb();
 }
diff --git a/rust/helpers/blk.c b/rust/helpers/blk.c
index cc9f4e6a2d23..20c512e46a7a 100644
--- a/rust/helpers/blk.c
+++ b/rust/helpers/blk.c
@@ -3,12 +3,12 @@
 #include <linux/blk-mq.h>
 #include <linux/blkdev.h>
 
-void *rust_helper_blk_mq_rq_to_pdu(struct request *rq)
+__rust_helper void *rust_helper_blk_mq_rq_to_pdu(struct request *rq)
 {
 	return blk_mq_rq_to_pdu(rq);
 }
 
-struct request *rust_helper_blk_mq_rq_from_pdu(void *pdu)
+__rust_helper struct request *rust_helper_blk_mq_rq_from_pdu(void *pdu)
 {
 	return blk_mq_rq_from_pdu(pdu);
 }
diff --git a/rust/helpers/completion.c b/rust/helpers/completion.c
index b2443262a2ae..0126767cc3be 100644
--- a/rust/helpers/completion.c
+++ b/rust/helpers/completion.c
@@ -2,7 +2,7 @@
 
 #include <linux/completion.h>
 
-void rust_helper_init_completion(struct completion *x)
+__rust_helper void rust_helper_init_completion(struct completion *x)
 {
 	init_completion(x);
 }
diff --git a/rust/helpers/cpu.c b/rust/helpers/cpu.c
index 824e0adb19d4..5759349b2c88 100644
--- a/rust/helpers/cpu.c
+++ b/rust/helpers/cpu.c
@@ -2,7 +2,7 @@
 
 #include <linux/smp.h>
 
-unsigned int rust_helper_raw_smp_processor_id(void)
+__rust_helper unsigned int rust_helper_raw_smp_processor_id(void)
 {
 	return raw_smp_processor_id();
 }
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 79c72762ad9c..a3c42e51f00a 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -7,7 +7,10 @@
  * Sorted alphabetically.
  */
 
+#define __rust_helper
+
 #include "atomic.c"
+#include "atomic_ext.c"
 #include "auxiliary.c"
 #include "barrier.c"
 #include "binder.c"
diff --git a/rust/helpers/mutex.c b/rust/helpers/mutex.c
index e487819125f0..1b07d6e64299 100644
--- a/rust/helpers/mutex.c
+++ b/rust/helpers/mutex.c
@@ -2,28 +2,29 @@
 
 #include <linux/mutex.h>
 
-void rust_helper_mutex_lock(struct mutex *lock)
+__rust_helper void rust_helper_mutex_lock(struct mutex *lock)
 {
 	mutex_lock(lock);
 }
 
-int rust_helper_mutex_trylock(struct mutex *lock)
+__rust_helper int rust_helper_mutex_trylock(struct mutex *lock)
 {
 	return mutex_trylock(lock);
 }
 
-void rust_helper___mutex_init(struct mutex *mutex, const char *name,
-			      struct lock_class_key *key)
+__rust_helper void rust_helper___mutex_init(struct mutex *mutex,
+					    const char *name,
+					    struct lock_class_key *key)
 {
 	__mutex_init(mutex, name, key);
 }
 
-void rust_helper_mutex_assert_is_held(struct mutex *mutex)
+__rust_helper void rust_helper_mutex_assert_is_held(struct mutex *mutex)
 {
 	lockdep_assert_held(mutex);
 }
 
-void rust_helper_mutex_destroy(struct mutex *lock)
+__rust_helper void rust_helper_mutex_destroy(struct mutex *lock)
 {
 	mutex_destroy(lock);
 }
diff --git a/rust/helpers/processor.c b/rust/helpers/processor.c
index d41355e14d6e..76fadbb647c5 100644
--- a/rust/helpers/processor.c
+++ b/rust/helpers/processor.c
@@ -2,7 +2,7 @@
 
 #include <linux/processor.h>
 
-void rust_helper_cpu_relax(void)
+__rust_helper void rust_helper_cpu_relax(void)
 {
 	cpu_relax();
 }
diff --git a/rust/helpers/rcu.c b/rust/helpers/rcu.c
index f1cec6583513..481274c05857 100644
--- a/rust/helpers/rcu.c
+++ b/rust/helpers/rcu.c
@@ -2,12 +2,12 @@
 
 #include <linux/rcupdate.h>
 
-void rust_helper_rcu_read_lock(void)
+__rust_helper void rust_helper_rcu_read_lock(void)
 {
 	rcu_read_lock();
 }
 
-void rust_helper_rcu_read_unlock(void)
+__rust_helper void rust_helper_rcu_read_unlock(void)
 {
 	rcu_read_unlock();
 }
diff --git a/rust/helpers/refcount.c b/rust/helpers/refcount.c
index d175898ad7b8..36334a674ee4 100644
--- a/rust/helpers/refcount.c
+++ b/rust/helpers/refcount.c
@@ -2,27 +2,27 @@
 
 #include <linux/refcount.h>
 
-refcount_t rust_helper_REFCOUNT_INIT(int n)
+__rust_helper refcount_t rust_helper_REFCOUNT_INIT(int n)
 {
 	return (refcount_t)REFCOUNT_INIT(n);
 }
 
-void rust_helper_refcount_set(refcount_t *r, int n)
+__rust_helper void rust_helper_refcount_set(refcount_t *r, int n)
 {
 	refcount_set(r, n);
 }
 
-void rust_helper_refcount_inc(refcount_t *r)
+__rust_helper void rust_helper_refcount_inc(refcount_t *r)
 {
 	refcount_inc(r);
 }
 
-void rust_helper_refcount_dec(refcount_t *r)
+__rust_helper void rust_helper_refcount_dec(refcount_t *r)
 {
 	refcount_dec(r);
 }
 
-bool rust_helper_refcount_dec_and_test(refcount_t *r)
+__rust_helper bool rust_helper_refcount_dec_and_test(refcount_t *r)
 {
 	return refcount_dec_and_test(r);
 }
diff --git a/rust/helpers/signal.c b/rust/helpers/signal.c
index 1a6bbe9438e2..85111186cf3d 100644
--- a/rust/helpers/signal.c
+++ b/rust/helpers/signal.c
@@ -2,7 +2,7 @@
 
 #include <linux/sched/signal.h>
 
-int rust_helper_signal_pending(struct task_struct *t)
+__rust_helper int rust_helper_signal_pending(struct task_struct *t)
 {
 	return signal_pending(t);
 }
diff --git a/rust/helpers/spinlock.c b/rust/helpers/spinlock.c
index 42c4bf01a23e..4d13062cf253 100644
--- a/rust/helpers/spinlock.c
+++ b/rust/helpers/spinlock.c
@@ -2,8 +2,9 @@
 
 #include <linux/spinlock.h>
 
-void rust_helper___spin_lock_init(spinlock_t *lock, const char *name,
-				  struct lock_class_key *key)
+__rust_helper void rust_helper___spin_lock_init(spinlock_t *lock,
+						const char *name,
+						struct lock_class_key *key)
 {
 #ifdef CONFIG_DEBUG_SPINLOCK
 # if defined(CONFIG_PREEMPT_RT)
@@ -16,22 +17,22 @@ void rust_helper___spin_lock_init(spinlock_t *lock, const char *name,
 #endif /* CONFIG_DEBUG_SPINLOCK */
 }
 
-void rust_helper_spin_lock(spinlock_t *lock)
+__rust_helper void rust_helper_spin_lock(spinlock_t *lock)
 {
 	spin_lock(lock);
 }
 
-void rust_helper_spin_unlock(spinlock_t *lock)
+__rust_helper void rust_helper_spin_unlock(spinlock_t *lock)
 {
 	spin_unlock(lock);
 }
 
-int rust_helper_spin_trylock(spinlock_t *lock)
+__rust_helper int rust_helper_spin_trylock(spinlock_t *lock)
 {
 	return spin_trylock(lock);
 }
 
-void rust_helper_spin_assert_is_held(spinlock_t *lock)
+__rust_helper void rust_helper_spin_assert_is_held(spinlock_t *lock)
 {
 	lockdep_assert_held(lock);
 }
diff --git a/rust/helpers/sync.c b/rust/helpers/sync.c
index ff7e68b48810..82d6aff73b04 100644
--- a/rust/helpers/sync.c
+++ b/rust/helpers/sync.c
@@ -2,12 +2,12 @@
 
 #include <linux/lockdep.h>
 
-void rust_helper_lockdep_register_key(struct lock_class_key *k)
+__rust_helper void rust_helper_lockdep_register_key(struct lock_class_key *k)
 {
 	lockdep_register_key(k);
 }
 
-void rust_helper_lockdep_unregister_key(struct lock_class_key *k)
+__rust_helper void rust_helper_lockdep_unregister_key(struct lock_class_key *k)
 {
 	lockdep_unregister_key(k);
 }
diff --git a/rust/helpers/task.c b/rust/helpers/task.c
index 2c85bbc2727e..c0e1a06ede78 100644
--- a/rust/helpers/task.c
+++ b/rust/helpers/task.c
@@ -3,60 +3,60 @@
 #include <linux/kernel.h>
 #include <linux/sched/task.h>
 
-void rust_helper_might_resched(void)
+__rust_helper void rust_helper_might_resched(void)
 {
 	might_resched();
 }
 
-struct task_struct *rust_helper_get_current(void)
+__rust_helper struct task_struct *rust_helper_get_current(void)
 {
 	return current;
 }
 
-void rust_helper_get_task_struct(struct task_struct *t)
+__rust_helper void rust_helper_get_task_struct(struct task_struct *t)
 {
 	get_task_struct(t);
 }
 
-void rust_helper_put_task_struct(struct task_struct *t)
+__rust_helper void rust_helper_put_task_struct(struct task_struct *t)
 {
 	put_task_struct(t);
 }
 
-kuid_t rust_helper_task_uid(struct task_struct *task)
+__rust_helper kuid_t rust_helper_task_uid(struct task_struct *task)
 {
 	return task_uid(task);
 }
 
-kuid_t rust_helper_task_euid(struct task_struct *task)
+__rust_helper kuid_t rust_helper_task_euid(struct task_struct *task)
 {
 	return task_euid(task);
 }
 
 #ifndef CONFIG_USER_NS
-uid_t rust_helper_from_kuid(struct user_namespace *to, kuid_t uid)
+__rust_helper uid_t rust_helper_from_kuid(struct user_namespace *to, kuid_t uid)
 {
 	return from_kuid(to, uid);
 }
 #endif /* CONFIG_USER_NS */
 
-bool rust_helper_uid_eq(kuid_t left, kuid_t right)
+__rust_helper bool rust_helper_uid_eq(kuid_t left, kuid_t right)
 {
 	return uid_eq(left, right);
 }
 
-kuid_t rust_helper_current_euid(void)
+__rust_helper kuid_t rust_helper_current_euid(void)
 {
 	return current_euid();
 }
 
-struct user_namespace *rust_helper_current_user_ns(void)
+__rust_helper struct user_namespace *rust_helper_current_user_ns(void)
 {
 	return current_user_ns();
 }
 
-pid_t rust_helper_task_tgid_nr_ns(struct task_struct *tsk,
-				  struct pid_namespace *ns)
+__rust_helper pid_t rust_helper_task_tgid_nr_ns(struct task_struct *tsk,
+						struct pid_namespace *ns)
 {
 	return task_tgid_nr_ns(tsk, ns);
 }
diff --git a/rust/helpers/time.c b/rust/helpers/time.c
index 67a36ccc3ec4..32f495970493 100644
--- a/rust/helpers/time.c
+++ b/rust/helpers/time.c
@@ -4,37 +4,37 @@
 #include <linux/ktime.h>
 #include <linux/timekeeping.h>
 
-void rust_helper_fsleep(unsigned long usecs)
+__rust_helper void rust_helper_fsleep(unsigned long usecs)
 {
 	fsleep(usecs);
 }
 
-ktime_t rust_helper_ktime_get_real(void)
+__rust_helper ktime_t rust_helper_ktime_get_real(void)
 {
 	return ktime_get_real();
 }
 
-ktime_t rust_helper_ktime_get_boottime(void)
+__rust_helper ktime_t rust_helper_ktime_get_boottime(void)
 {
 	return ktime_get_boottime();
 }
 
-ktime_t rust_helper_ktime_get_clocktai(void)
+__rust_helper ktime_t rust_helper_ktime_get_clocktai(void)
 {
 	return ktime_get_clocktai();
 }
 
-s64 rust_helper_ktime_to_us(const ktime_t kt)
+__rust_helper s64 rust_helper_ktime_to_us(const ktime_t kt)
 {
 	return ktime_to_us(kt);
 }
 
-s64 rust_helper_ktime_to_ms(const ktime_t kt)
+__rust_helper s64 rust_helper_ktime_to_ms(const ktime_t kt)
 {
 	return ktime_to_ms(kt);
 }
 
-void rust_helper_udelay(unsigned long usec)
+__rust_helper void rust_helper_udelay(unsigned long usec)
 {
 	udelay(usec);
 }
diff --git a/rust/helpers/wait.c b/rust/helpers/wait.c
index ae48e33d9da3..2dde1e451780 100644
--- a/rust/helpers/wait.c
+++ b/rust/helpers/wait.c
@@ -2,7 +2,7 @@
 
 #include <linux/wait.h>
 
-void rust_helper_init_wait(struct wait_queue_entry *wq_entry)
+__rust_helper void rust_helper_init_wait(struct wait_queue_entry *wq_entry)
 {
 	init_wait(wq_entry);
 }
diff --git a/rust/kernel/list/arc.rs b/rust/kernel/list/arc.rs
index d92bcf665c89..2282f33913ee 100644
--- a/rust/kernel/list/arc.rs
+++ b/rust/kernel/list/arc.rs
@@ -6,11 +6,11 @@
 
 use crate::alloc::{AllocError, Flags};
 use crate::prelude::*;
+use crate::sync::atomic::{ordering, Atomic};
 use crate::sync::{Arc, ArcBorrow, UniqueArc};
 use core::marker::PhantomPinned;
 use core::ops::Deref;
 use core::pin::Pin;
-use core::sync::atomic::{AtomicBool, Ordering};
 
 /// Declares that this type has some way to ensure that there is exactly one `ListArc` instance for
 /// this id.
@@ -469,7 +469,7 @@ where
 /// If the boolean is `false`, then there is no [`ListArc`] for this value.
 #[repr(transparent)]
 pub struct AtomicTracker<const ID: u64 = 0> {
-    inner: AtomicBool,
+    inner: Atomic<bool>,
     // This value needs to be pinned to justify the INVARIANT: comment in `AtomicTracker::new`.
     _pin: PhantomPinned,
 }
@@ -480,12 +480,12 @@ impl<const ID: u64> AtomicTracker<ID> {
         // INVARIANT: Pin-init initializers can't be used on an existing `Arc`, so this value will
        // not be constructed in an `Arc` that already has a `ListArc`.
         Self {
-            inner: AtomicBool::new(false),
+            inner: Atomic::new(false),
             _pin: PhantomPinned,
         }
     }
 
-    fn project_inner(self: Pin<&mut Self>) -> &mut AtomicBool {
+    fn project_inner(self: Pin<&mut Self>) -> &mut Atomic<bool> {
         // SAFETY: The `inner` field is not structurally pinned, so we may obtain a mutable
         // reference to it even if we only have a pinned reference to `self`.
         unsafe { &mut Pin::into_inner_unchecked(self).inner }
@@ -500,7 +500,7 @@ impl<const ID: u64> ListArcSafe<ID> for AtomicTracker<ID> {
 
     unsafe fn on_drop_list_arc(&self) {
         // INVARIANT: We just dropped a ListArc, so the boolean should be false.
-        self.inner.store(false, Ordering::Release);
+        self.inner.store(false, ordering::Release);
     }
 }
 
@@ -514,8 +514,6 @@ unsafe impl<const ID: u64> TryNewListArc<ID> for AtomicTracker<ID> {
     fn try_new_list_arc(&self) -> bool {
         // INVARIANT: If this method returns true, then the boolean used to be false, and is no
         // longer false, so it is okay for the caller to create a new [`ListArc`].
-        self.inner
-            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
-            .is_ok()
+        self.inner.cmpxchg(false, true, ordering::Acquire).is_ok()
     }
 }
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 5df87e2bd212..993dbf2caa0e 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -32,7 +32,9 @@ pub use locked_by::LockedBy;
 pub use refcount::Refcount;
 pub use set_once::SetOnce;
 
-/// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
+/// Represents a lockdep class.
+///
+/// Wraps the kernel's `struct lock_class_key`.
 #[repr(transparent)]
 #[pin_data(PinnedDrop)]
 pub struct LockClassKey {
@@ -40,20 +42,42 @@ pub struct LockClassKey {
     inner: Opaque<bindings::lock_class_key>,
 }
 
+// SAFETY: Unregistering a lock class key from a different thread than where it was registered is
+// allowed.
+unsafe impl Send for LockClassKey {}
+
 // SAFETY: `bindings::lock_class_key` is designed to be used concurrently from multiple threads and
 // provides its own synchronization.
 unsafe impl Sync for LockClassKey {}
 
 impl LockClassKey {
-    /// Initializes a dynamically allocated lock class key. In the common case of using a
-    /// statically allocated lock class key, the static_lock_class! macro should be used instead.
+    /// Initializes a statically allocated lock class key.
+    ///
+    /// This is usually used indirectly through the [`static_lock_class!`] macro. See its
+    /// documentation for more information.
+    ///
+    /// # Safety
+    ///
+    /// * Before using the returned value, it must be pinned in a static memory location.
+    /// * The destructor must never run on the returned `LockClassKey`.
+    pub const unsafe fn new_static() -> Self {
+        LockClassKey {
+            inner: Opaque::uninit(),
+        }
+    }
+
+    /// Initializes a dynamically allocated lock class key.
+    ///
+    /// In the common case of using a statically allocated lock class key, the
+    /// [`static_lock_class!`] macro should be used instead.
     ///
     /// # Examples
+    ///
     /// ```
-    /// # use kernel::alloc::KBox;
-    /// # use kernel::types::ForeignOwnable;
-    /// # use kernel::sync::{LockClassKey, SpinLock};
-    /// # use pin_init::stack_pin_init;
+    /// use kernel::alloc::KBox;
+    /// use kernel::types::ForeignOwnable;
+    /// use kernel::sync::{LockClassKey, SpinLock};
+    /// use pin_init::stack_pin_init;
     ///
     /// let key = KBox::pin_init(LockClassKey::new_dynamic(), GFP_KERNEL)?;
     /// let key_ptr = key.into_foreign();
@@ -71,7 +95,6 @@ impl LockClassKey {
     ///     // SAFETY: We dropped `num`, the only use of the key, so the result of the previous
     ///     // `borrow` has also been dropped. Thus, it's safe to use from_foreign.
     ///     unsafe { drop(<Pin<KBox<LockClassKey>> as ForeignOwnable>::from_foreign(key_ptr)) };
-    ///
     /// # Ok::<(), Error>(())
     /// ```
     pub fn new_dynamic() -> impl PinInit<Self> {
@@ -81,7 +104,10 @@ impl LockClassKey {
         })
     }
 
-    pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
+    /// Returns a raw pointer to the inner C struct.
+    ///
+    /// It is up to the caller to use the raw pointer correctly.
+    pub fn as_ptr(&self) -> *mut bindings::lock_class_key {
         self.inner.get()
     }
 }
@@ -89,27 +115,38 @@ impl LockClassKey {
 #[pinned_drop]
 impl PinnedDrop for LockClassKey {
     fn drop(self: Pin<&mut Self>) {
-        // SAFETY: self.as_ptr was registered with lockdep and self is pinned, so the address
-        // hasn't changed. Thus, it's safe to pass to unregister.
+        // SAFETY: `self.as_ptr()` was registered with lockdep and `self` is pinned, so the address
+        // hasn't changed. Thus, it's safe to pass it to unregister.
         unsafe { bindings::lockdep_unregister_key(self.as_ptr()) }
     }
 }
 
 /// Defines a new static lock class and returns a pointer to it.
-#[doc(hidden)]
+///
+/// # Examples
+///
+/// ```
+/// use kernel::sync::{static_lock_class, Arc, SpinLock};
+///
+/// fn new_locked_int() -> Result<Arc<SpinLock<u32>>> {
+///     Arc::pin_init(SpinLock::new(
+///         42,
+///         c"new_locked_int",
+///         static_lock_class!(),
+///     ), GFP_KERNEL)
+/// }
/// ```
 #[macro_export]
 macro_rules! static_lock_class {
     () => {{
         static CLASS: $crate::sync::LockClassKey =
-            // Lockdep expects uninitialized memory when it's handed a statically allocated `struct
-            // lock_class_key`.
-            //
-            // SAFETY: `LockClassKey` transparently wraps `Opaque` which permits uninitialized
-            // memory.
-            unsafe { ::core::mem::MaybeUninit::uninit().assume_init() };
+            // SAFETY: The returned `LockClassKey` is stored in static memory and we pin it. Drop
+            // never runs on a static global.
+            unsafe { $crate::sync::LockClassKey::new_static() };
         $crate::prelude::Pin::static_ref(&CLASS)
     }};
 }
+pub use static_lock_class;
 
 /// Returns the given string, if one is provided, otherwise generates one based on the source code
 /// location.
diff --git a/rust/kernel/sync/aref.rs b/rust/kernel/sync/aref.rs
index 0d24a0432015..0616c0353c2b 100644
--- a/rust/kernel/sync/aref.rs
+++ b/rust/kernel/sync/aref.rs
@@ -83,6 +83,9 @@ unsafe impl<T: AlwaysRefCounted + Sync + Send> Send for ARef<T> {}
 // example, when the reference count reaches zero and `T` is dropped.
 unsafe impl<T: AlwaysRefCounted + Sync + Send> Sync for ARef<T> {}
 
+// Even if `T` is pinned, pointers to `T` can still move.
+impl<T: AlwaysRefCounted> Unpin for ARef<T> {}
+
 impl<T: AlwaysRefCounted> ARef<T> {
     /// Creates a new instance of [`ARef`].
     ///
diff --git a/rust/kernel/sync/atomic/internal.rs b/rust/kernel/sync/atomic/internal.rs
index 6fdd8e59f45b..0dac58bca2b3 100644
--- a/rust/kernel/sync/atomic/internal.rs
+++ b/rust/kernel/sync/atomic/internal.rs
@@ -13,17 +13,22 @@ mod private {
     pub trait Sealed {}
 }
 
-// `i32` and `i64` are only supported atomic implementations.
+// The C side supports atomic primitives only for `i32` and `i64` (`atomic_t` and `atomic64_t`),
+// while the Rust side also layers atomic support for `i8` and `i16` on top of lower-level C
+// primitives.
+impl private::Sealed for i8 {}
+impl private::Sealed for i16 {}
 impl private::Sealed for i32 {}
 impl private::Sealed for i64 {}
 
 /// A marker trait for types that implement atomic operations with C side primitives.
 ///
-/// This trait is sealed, and only types that have directly mapping to the C side atomics should
-/// impl this:
+/// This trait is sealed, and only types that map directly to the C side atomics
+/// or can be implemented with lower-level C primitives are allowed to implement this:
 ///
-/// - `i32` maps to `atomic_t`.
-/// - `i64` maps to `atomic64_t`.
+/// - `i8` and `i16` are implemented with lower-level C primitives.
+/// - `i32` maps to `atomic_t`.
+/// - `i64` maps to `atomic64_t`.
 pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
     /// The type of the delta in arithmetic or logical operations.
     ///
@@ -32,6 +37,20 @@ pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
     type Delta;
 }
 
+// The current load/store helpers use `{WRITE,READ}_ONCE()`, hence atomicity against
+// read-modify-write operations is only guaranteed if the architecture supports native atomic RMW.
+#[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)]
+impl AtomicImpl for i8 {
+    type Delta = Self;
+}
+
+// The current load/store helpers use `{WRITE,READ}_ONCE()`, hence atomicity against
+// read-modify-write operations is only guaranteed if the architecture supports native atomic RMW.
+#[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)]
+impl AtomicImpl for i16 {
+    type Delta = Self;
+}
+
 // `atomic_t` implements atomic operations on `i32`.
 impl AtomicImpl for i32 {
     type Delta = Self;
@@ -156,16 +175,17 @@ macro_rules! impl_atomic_method {
         }
     }
 
-// Delcares $ops trait with methods and implements the trait for `i32` and `i64`.
-macro_rules! declare_and_impl_atomic_methods {
-    ($(#[$attr:meta])* $pub:vis trait $ops:ident {
-        $(
-            $(#[doc=$doc:expr])*
-            fn $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? {
-                $unsafe:tt { bindings::#call($($arg:tt)*) }
-            }
-        )*
-    }) => {
+macro_rules! declare_atomic_ops_trait {
+    (
+        $(#[$attr:meta])* $pub:vis trait $ops:ident {
+            $(
+                $(#[doc=$doc:expr])*
+                fn $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? {
+                    $unsafe:tt { bindings::#call($($arg:tt)*) }
+                }
+            )*
+        }
+    ) => {
         $(#[$attr])*
         $pub trait $ops: AtomicImpl {
             $(
@@ -175,21 +195,25 @@ macro_rules! declare_and_impl_atomic_methods {
                 );
             )*
         }
+    }
+}
 
-        impl $ops for i32 {
+macro_rules! impl_atomic_ops_for_one {
+    (
+        $ty:ty => $ctype:ident,
+        $(#[$attr:meta])* $pub:vis trait $ops:ident {
             $(
-                impl_atomic_method!(
-                    (atomic) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
-                        $unsafe { call($($arg)*) }
-                    }
-                );
+                $(#[doc=$doc:expr])*
+                fn $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? {
+                    $unsafe:tt { bindings::#call($($arg:tt)*) }
+                }
             )*
         }
-
-        impl $ops for i64 {
+    ) => {
+        impl $ops for $ty {
             $(
                 impl_atomic_method!(
-                    (atomic64) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+                    ($ctype) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
                         $unsafe { call($($arg)*) }
                     }
                 );
@@ -198,7 +222,47 @@ macro_rules! declare_and_impl_atomic_methods {
     }
 }
 
+// Declares $ops trait with methods and implements the trait.
+macro_rules! declare_and_impl_atomic_methods {
+    (
+        [ $($map:tt)* ]
+        $(#[$attr:meta])* $pub:vis trait $ops:ident { $($body:tt)* }
+    ) => {
+        declare_and_impl_atomic_methods!(
+            @with_ops_def
+            [ $($map)* ]
+            ( $(#[$attr])* $pub trait $ops { $($body)* } )
+        );
+    };
+
+    (@with_ops_def [ $($map:tt)* ] ( $($ops_def:tt)* )) => {
+        declare_atomic_ops_trait!( $($ops_def)* );
+
+        declare_and_impl_atomic_methods!(
+            @munch
+            [ $($map)* ]
+            ( $($ops_def)* )
+        );
+    };
+
+    (@munch [] ( $($ops_def:tt)* )) => {};
+
+    (@munch [ $ty:ty => $ctype:ident $(, $($rest:tt)*)? ] ( $($ops_def:tt)* )) => {
+        impl_atomic_ops_for_one!(
+            $ty => $ctype,
+            $($ops_def)*
+        );
+
+        declare_and_impl_atomic_methods!(
+            @munch
+            [ $($($rest)*)? ]
+            ( $($ops_def)* )
+        );
+    };
+}
+
 declare_and_impl_atomic_methods!(
+    [ i8 => atomic_i8, i16 => atomic_i16, i32 => atomic, i64 => atomic64 ]
     /// Basic atomic operations
     pub trait AtomicBasicOps {
         /// Atomic read (load).
@@ -216,6 +280,7 @@ declare_and_impl_atomic_methods!(
 );
 
 declare_and_impl_atomic_methods!(
+    [ i8 => atomic_i8, i16 => atomic_i16, i32 => atomic, i64 => atomic64 ]
     /// Exchange and compare-and-exchange atomic operations
     pub trait AtomicExchangeOps {
         /// Atomic exchange.
@@ -243,6 +308,7 @@ declare_and_impl_atomic_methods!(
 );
 
 declare_and_impl_atomic_methods!(
+    [ i32 => atomic, i64 => atomic64 ]
     /// Atomic arithmetic operations
     pub trait AtomicArithmeticOps {
         /// Atomic add (wrapping).
diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index 0fca1ba3c2db..67a0406d3ea4 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -5,6 +5,29 @@
 use crate::static_assert;
 use core::mem::{align_of, size_of};
 
+// Ensure size and alignment requirements are checked.
+static_assert!(size_of::<bool>() == size_of::<i8>());
+static_assert!(align_of::<bool>() == align_of::<i8>());
+
+// SAFETY: `bool` has the same size and alignment as `i8`, and Rust guarantees that `bool` has
+// only two valid bit patterns: 0 (false) and 1 (true). Those are valid `i8` values, so `bool` is
+// round-trip transmutable to `i8`.
+unsafe impl super::AtomicType for bool {
+    type Repr = i8;
+}
+
+// SAFETY: `i8` has the same size and alignment with itself, and is round-trip transmutable to
+// itself.
+unsafe impl super::AtomicType for i8 {
+    type Repr = i8;
+}
+
+// SAFETY: `i16` has the same size and alignment with itself, and is round-trip transmutable to
+// itself.
+unsafe impl super::AtomicType for i16 {
+    type Repr = i16;
+}
+
 // SAFETY: `i32` has the same size and alignment with itself, and is round-trip transmutable to
 // itself.
 unsafe impl super::AtomicType for i32 {
@@ -129,7 +152,7 @@ mod tests {
 
     #[test]
     fn atomic_basic_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
+        for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             assert_eq!(v, x.load(Relaxed));
@@ -137,8 +160,18 @@ mod tests {
     }
 
     #[test]
+    fn atomic_acquire_release_tests() {
+        for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
+            let x = Atomic::new(0);
+
+            x.store(v, Release);
+            assert_eq!(v, x.load(Acquire));
+        });
+    }
+
+    #[test]
     fn atomic_xchg_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
+        for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             let old = v;
@@ -151,7 +184,7 @@ mod tests {
 
     #[test]
     fn atomic_cmpxchg_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
+        for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             let old = v;
@@ -177,4 +210,20 @@ mod tests {
             assert_eq!(v + 25, x.load(Relaxed));
         });
     }
+
+    #[test]
+    fn atomic_bool_tests() {
+        let x = Atomic::new(false);
+
+        assert_eq!(false, x.load(Relaxed));
+        x.store(true, Relaxed);
+        assert_eq!(true, x.load(Relaxed));
+
+        assert_eq!(true, x.xchg(false, Relaxed));
+        assert_eq!(false, x.load(Relaxed));
+
+        assert_eq!(Err(false), x.cmpxchg(true, true, Relaxed));
+        assert_eq!(false, x.load(Relaxed));
+        assert_eq!(Ok(false), x.cmpxchg(false, true, Full));
+    }
 }
diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
index 46a57d1fc309..10b6b5e9b024 100644
--- a/rust/kernel/sync/lock.rs
+++ b/rust/kernel/sync/lock.rs
@@ -156,6 +156,7 @@ impl<B: Backend> Lock<(), B> {
     /// the whole lifetime of `'a`.
     ///
     /// [`State`]: Backend::State
+    #[inline]
     pub unsafe fn from_raw<'a>(ptr: *mut B::State) -> &'a Self {
         // SAFETY:
         // - By the safety contract `ptr` must point to a valid initialised instance of `B::State`
@@ -169,6 +170,7 @@
 
 impl<T: ?Sized, B: Backend> Lock<T, B> {
     /// Acquires the lock and gives the caller access to the data protected by it.
+    #[inline]
     pub fn lock(&self) -> Guard<'_, T, B> {
         // SAFETY: The constructor of the type calls `init`, so the existence of the object proves
         // that `init` was called.
@@ -182,6 +184,7 @@ impl<T: ?Sized, B: Backend> Lock<T, B> {
     /// Returns a guard that can be used to access the data protected by the lock if successful.
     // `Option<T>` is not `#[must_use]` even if `T` is, thus the attribute is needed here.
     #[must_use = "if unused, the lock will be immediately unlocked"]
+    #[inline]
     pub fn try_lock(&self) -> Option<Guard<'_, T, B>> {
         // SAFETY: The constructor of the type calls `init`, so the existence of the object proves
         // that `init` was called.
@@ -275,6 +278,7 @@ impl<'a, T: ?Sized, B: Backend> Guard<'a, T, B> {
 impl<T: ?Sized, B: Backend> core::ops::Deref for Guard<'_, T, B> {
     type Target = T;
 
+    #[inline]
     fn deref(&self) -> &Self::Target {
         // SAFETY: The caller owns the lock, so it is safe to deref the protected data.
         unsafe { &*self.lock.data.get() }
@@ -285,6 +289,7 @@ impl<T: ?Sized, B: Backend> core::ops::DerefMut for Guard<'_, T, B>
 where
     T: Unpin,
 {
+    #[inline]
     fn deref_mut(&mut self) -> &mut Self::Target {
         // SAFETY: The caller owns the lock, so it is safe to deref the protected data.
         unsafe { &mut *self.lock.data.get() }
@@ -292,6 +297,7 @@
 }
 
 impl<T: ?Sized, B: Backend> Drop for Guard<'_, T, B> {
+    #[inline]
     fn drop(&mut self) {
         // SAFETY: The caller owns the lock, so it is safe to unlock it.
         unsafe { B::unlock(self.lock.state.get(), &self.state) };
@@ -304,6 +310,7 @@ impl<'a, T: ?Sized, B: Backend> Guard<'a, T, B> {
     /// # Safety
     ///
     /// The caller must ensure that it owns the lock.
+    #[inline]
     pub unsafe fn new(lock: &'a Lock<T, B>, state: B::GuardState) -> Self {
         // SAFETY: The caller can only hold the lock if `Backend::init` has already been called.
         unsafe { B::assert_is_held(lock.state.get()) };
diff --git a/rust/kernel/sync/lock/global.rs b/rust/kernel/sync/lock/global.rs
index eab48108a4ae..aecbdc34738f 100644
--- a/rust/kernel/sync/lock/global.rs
+++ b/rust/kernel/sync/lock/global.rs
@@ -77,6 +77,7 @@ impl<B: GlobalLockBackend> GlobalLock<B> {
     }
 
     /// Lock this global lock.
+    #[inline]
     pub fn lock(&'static self) -> GlobalGuard<B> {
         GlobalGuard {
             inner: self.inner.lock(),
@@ -84,6 +85,7 @@ impl<B: GlobalLockBackend> GlobalLock<B> {
     }
 
     /// Try to lock this global lock.
+    #[inline]
     pub fn try_lock(&'static self) -> Option<GlobalGuard<B>> {
         Some(GlobalGuard {
             inner: self.inner.try_lock()?,
diff --git a/rust/kernel/sync/lock/mutex.rs b/rust/kernel/sync/lock/mutex.rs
index 581cee7ab842..cda0203efefb 100644
--- a/rust/kernel/sync/lock/mutex.rs
+++ b/rust/kernel/sync/lock/mutex.rs
@@ -102,6 +102,7 @@ unsafe impl super::Backend for MutexBackend {
     type State = bindings::mutex;
     type GuardState = ();
 
+    #[inline]
     unsafe fn init(
         ptr: *mut Self::State,
         name: *const crate::ffi::c_char,
@@ -112,18 +113,21 @@ unsafe impl super::Backend for MutexBackend {
         unsafe { bindings::__mutex_init(ptr, name, key) }
     }
 
+    #[inline]
     unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState {
         // SAFETY: The safety requirements of this function ensure that `ptr` points to valid
         // memory, and that it has been initialised before.
         unsafe { bindings::mutex_lock(ptr) };
     }
 
+    #[inline]
     unsafe fn unlock(ptr: *mut Self::State, _guard_state: &Self::GuardState) {
         // SAFETY: The safety requirements of this function ensure that `ptr` is valid and that the
         // caller is the owner of the mutex.
         unsafe { bindings::mutex_unlock(ptr) };
     }
 
+    #[inline]
     unsafe fn try_lock(ptr: *mut Self::State) -> Option<Self::GuardState> {
         // SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use.
         let result = unsafe { bindings::mutex_trylock(ptr) };
@@ -135,6 +139,7 @@ unsafe impl super::Backend for MutexBackend {
         }
     }
 
+    #[inline]
     unsafe fn assert_is_held(ptr: *mut Self::State) {
         // SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use.
         unsafe { bindings::mutex_assert_is_held(ptr) }
diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spinlock.rs
index d7be38ccbdc7..ef76fa07ca3a 100644
--- a/rust/kernel/sync/lock/spinlock.rs
+++ b/rust/kernel/sync/lock/spinlock.rs
@@ -101,6 +101,7 @@ unsafe impl super::Backend for SpinLockBackend {
     type State = bindings::spinlock_t;
     type GuardState = ();
 
+    #[inline]
     unsafe fn init(
         ptr: *mut Self::State,
         name: *const crate::ffi::c_char,
@@ -111,18 +112,21 @@ unsafe impl super::Backend for SpinLockBackend {
         unsafe { bindings::__spin_lock_init(ptr, name, key) }
     }
 
+    #[inline]
     unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState {
         // SAFETY: The safety requirements of this function ensure that `ptr` points to valid
         // memory, and that it has been initialised before.
         unsafe { bindings::spin_lock(ptr) }
     }
 
+    #[inline]
     unsafe fn unlock(ptr: *mut Self::State, _guard_state: &Self::GuardState) {
         // SAFETY: The safety requirements of this function ensure that `ptr` is valid and that the
         // caller is the owner of the spinlock.
         unsafe { bindings::spin_unlock(ptr) }
     }
 
+    #[inline]
     unsafe fn try_lock(ptr: *mut Self::State) -> Option<Self::GuardState> {
         // SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use.
         let result = unsafe { bindings::spin_trylock(ptr) };
@@ -134,6 +138,7 @@ unsafe impl super::Backend for SpinLockBackend {
         }
     }
 
+    #[inline]
     unsafe fn assert_is_held(ptr: *mut Self::State) {
         // SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use.
         unsafe { bindings::spin_assert_is_held(ptr) }
diff --git a/rust/kernel/sync/set_once.rs b/rust/kernel/sync/set_once.rs
index bdba601807d8..139cef05e935 100644
--- a/rust/kernel/sync/set_once.rs
+++ b/rust/kernel/sync/set_once.rs
@@ -123,3 +123,11 @@ impl<T> Drop for SetOnce<T> {
         }
     }
 }
+
+// SAFETY: `SetOnce` can be transferred across thread boundaries iff the data it contains can.
+unsafe impl<T: Send> Send for SetOnce<T> {}
+
+// SAFETY: `SetOnce` synchronises access to the inner value via atomic operations,
+// so shared references are safe when `T: Sync`. Since the inner `T` may be dropped
+// on any thread, we also require `T: Send`.
+unsafe impl<T: Send + Sync> Sync for SetOnce<T> {}
```
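The `__rust_helper` annotations added throughout the helpers above are currently a no-op (`helpers.c` defines the macro empty) and exist so a later helper-LTO series can change the helpers' linkage in a single place. A purely hypothetical sketch of what that switch point could look like — the config symbol `CONFIG_RUST_INLINE_HELPERS` is an assumption for illustration, not part of this merge:

```c
/* rust/helpers/helpers.c -- hypothetical follow-up, sketched for
 * illustration only; not the actual upstream patch. */
#ifdef CONFIG_RUST_INLINE_HELPERS
/* Emit helpers so LTO can fold them into their Rust callers
 * across the FFI boundary. */
#define __rust_helper static __always_inline
#else
/* Today's behaviour: the marker expands to nothing and the
 * helpers remain ordinary extern functions. */
#define __rust_helper
#endif

#include "atomic.c"
#include "atomic_ext.c"
/* ... remaining helper translation units ... */
```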
