The first time I reached for tokio::task::block_in_place, I was pretty happy. The problem was an old rusqlite call deep inside an async handler — a synchronous database operation I couldn’t easily make async. The docs for block_in_place say, roughly, “when you have blocking code in an async function, wrap it in this and Tokio will move the rest of your runtime’s work onto another thread.” Perfect, right?

Six months later, I’ve pulled block_in_place out of most of our code. This is my attempt to explain why.

The naive usage looks like:

use rusqlite::Connection;
use tokio::task;

async fn handle_request(db: &Connection, id: i64) -> Result<User, Error> {
    let user = task::block_in_place(|| {
        let mut stmt = db.prepare_cached("SELECT * FROM users WHERE id = ?")?;
        let user = stmt.query_row([id], User::from_row)?;
        Ok::<_, Error>(user)
    })?;
    Ok(user)
}

This works. The blocking SQLite call happens, and Tokio handles the rest of its scheduling around it. The problem is what “handles it” actually means.

block_in_place only works on the multi-threaded runtime. What it does is repurpose the current worker thread to run your blocking code, first moving any tasks queued on that worker elsewhere; Tokio then spawns a fresh worker thread to pick up the slack. If you call block_in_place a lot, Tokio ends up growing its thread count.

The first problem: this interacts badly with scale. The thread that's now running your blocking code is NOT a thread from the blocking pool — it's still owned by the async runtime. While it's busy, the remaining workers have to cover for it. If you have n workers and all n are inside block_in_place at once, the runtime has to spin up n replacements. If those also call block_in_place, you're up to 3n threads, and so on. I watched a service's thread count climb to 112 threads in a staging stress test because of nested block_in_place calls.

The second problem: it’s a silent correctness hazard when your blocking code holds async resources. If, inside block_in_place, you synchronously wait on the .lock() of a tokio::sync::Mutex, block on a channel receive, or otherwise wait for something that needs the runtime to make progress, you can deadlock the runtime. The whole point of block_in_place is to say “this chunk of work is not async-friendly.” But if that chunk reaches back into async-world resources, you’re violating the contract.

The third problem: block_in_place doesn’t work on the current-thread runtime. If you write a library that uses it internally, you’ve just made your library fail in users’ tests, which usually run on the current-thread runtime (the #[tokio::test] default) for speed. This has bitten me personally. There’s no compile-time signal, either: you find out with a panic at runtime.

The alternative I mostly use now is spawn_blocking:

use std::sync::Arc;

async fn handle_request(db: Arc<Pool>, id: i64) -> Result<User, Error> {
    let user = tokio::task::spawn_blocking(move || {
        let conn = db.get()?;
        let mut stmt = conn.prepare_cached("SELECT * FROM users WHERE id = ?")?;
        let user = stmt.query_row([id], User::from_row)?;
        Ok::<_, Error>(user)
    })
    .await??;
    Ok(user)
}

spawn_blocking runs on a dedicated blocking thread pool (default size 512 threads — yes really), which is configured to grow and shrink as needed. The downside is you have to move everything you need into the closure, because it runs on a different thread that doesn’t inherit from your async context. That’s actually good discipline — it forces you to be explicit about what your blocking code needs.

The deeper lesson I’ve learned is that “block_in_place vs spawn_blocking” isn’t really the choice. The real choice is “does this operation need to be on the async runtime at all?” If you’ve got a true CPU-bound or I/O-blocking operation, it’s not async-shaped — it belongs on a blocking pool, and spawn_blocking is the right tool. block_in_place is an escape hatch for very specific cases: code that can’t be moved off the runtime’s thread for some ownership reason, and where you’ve carefully verified none of the async contract is violated.

A short list of things I now check when I see block_in_place in a PR:

  • Is the blocking work bounded? A call that might take ten seconds under load needs a thread pool, not a runtime worker.
  • Does the closure touch any Tokio types (channels, mutexes, JoinHandles)? If yes, reconsider.
  • Is this library code, or binary code? Library code should prefer spawn_blocking because it’s portable to any runtime setup.
  • How often is this code path hit? Hot-path use of block_in_place can silently expand the worker pool.

For database code specifically, the real answer is usually “use an async driver.” sqlx, tokio-postgres, redis-rs with the async feature — they all let you stay on the runtime. rusqlite is a special case because the SQLite library itself is blocking (it touches the filesystem), so the pattern there is deadpool with spawn_blocking per operation. There’s also sqlx’s SQLite backend, which does this wrapping for you.

I still reach for block_in_place maybe once a year, in very constrained cases — usually a migration from sync to async where I want to make a small change work immediately. But when I do, I leave a comment in the code, and I measure thread counts in staging before I ship. It’s not a forbidden tool, just a loaded one.

See also the post on tokio::sync vs std::sync for more on when to use which synchronization primitive in async code.