author	Petr Mladek <pmladek@suse.com>	2026-03-25 13:34:18 +0100
committer	Tejun Heo <tj@kernel.org>	2026-03-25 05:51:02 -1000
commit	e398978ddf18fe5a2fc8299c77e6fe50e6c306c4 (patch)
tree	17a507284a400d63f19fa33d11890980e3ea2942 /kernel/workqueue.c
parent	c7f27a8ab9f2f43570f0725256597a0d7abe2c5b (diff)
workqueue: Better describe stall check
Try to be more explicit about why the workqueue watchdog does not take
pool->lock by default. Spin locks are full memory barriers, and taking
one would delay anything running under it; most notably, it would
primarily delay operations on the related worker pools.

Explain why it is enough to prevent the false positive by re-checking
the timestamp under pool->lock. Finally, make it clear what the
alternative solution in __queue_work(), which is a hotter path,
would be.

Signed-off-by: Petr Mladek <pmladek@suse.com>
Acked-by: Song Liu <song@kernel.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
Diffstat (limited to 'kernel/workqueue.c')
-rw-r--r--	kernel/workqueue.c	15
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index ff97b705f25e..eda756556341 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -7702,13 +7702,14 @@ static void wq_watchdog_timer_fn(struct timer_list *unused)
/*
* Did we stall?
*
- * Do a lockless check first. On weakly ordered
- * architectures, the lockless check can observe a
- * reordering between worklist insert_work() and
- * last_progress_ts update from __queue_work(). Since
- * __queue_work() is a much hotter path than the timer
- * function, we handle false positive here by reading
- * last_progress_ts again with pool->lock held.
+	 * Do a lockless check first to avoid disturbing the system.
+	 *
+	 * Prevent false positives by double-checking the timestamp
+	 * under pool->lock. The lock ensures that this check reads
+	 * an updated pool->last_progress_ts when this CPU saw
+	 * an already updated pool->worklist above. This seems better
+	 * than adding another barrier into __queue_work(), which
+	 * is a hotter path.
*/
if (time_after(now, ts + thresh)) {
scoped_guard(raw_spinlock_irqsave, &pool->lock) {