All of this is true: reducers are an interesting pit stop on the journey to reusable, composable transformations ultimately embodied in transducers. BUT it's not why reducers were created or why they are (still) interesting.
Reducers are interesting because they were intended to address the problem that sequence transformation stacks are not (by themselves) suitable for parallelization. The driver here is that, in the longer term, the languages that will remain interesting (to managers and architects) are those that can automatically take advantage of parallelism when applying transformations.
Reducers replace nested sequence transformations with a functional representation of the stack of transformations (as do transducers) and provide a native way to do parallel reduce on those transformation stacks (specifically for maps and vectors). A perfect intro to these ideas is Guy Steele's talk "How to Think About Parallel Programming: NOT!", especially the second half.
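As a rough sketch (the collection and the inc/even? pipeline here are invented for illustration), the nested sequence version and its reducers counterpart might look like:

```clojure
(require '[clojure.core.reducers :as r])

(def v (vec (range 10)))

;; Sequential version: each step realizes an intermediate lazy seq.
(reduce + (map inc (filter even? v)))        ;; => 25

;; Reducers version: r/filter and r/map build a functional
;; representation of the transformation stack (no intermediate
;; collections), and r/fold can reduce a vector or map in parallel
;; via fork/join.
(r/fold + (r/map inc (r/filter even? v)))    ;; => 25
```

The key shift is that `r/map` and `r/filter` return a recipe for transforming a reduction, not a new collection, so the whole stack can be handed to a parallel fold.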
Transducers are MUCH easier to write, much more composable, and much easier to apply in more contexts. However, transducers have not (yet) fulfilled the parallelism goals laid down by reducers, and that is still seen as an important and achievable goal. Using transducers in the reduce stage of reducers is a partial win, but there is more that can be done.
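A minimal sketch of that partial win (assuming a simple summing pipeline): applying a transducer to a reducing function yields a new reducing function, which `r/fold` can use in its per-partition reduce stage while still parallelizing the combine stage.

```clojure
(require '[clojure.core.reducers :as r])

;; (map inc) is a transducer; applying it to + yields a reducing fn.
;; r/fold uses that fn for the per-partition reduce, and plain +
;; (whose zero-arity call supplies the identity, 0) to combine
;; partitions. Note this only composes safely with stateless
;; transducers like (map inc); stateful ones don't partition cleanly.
(def v (vec (range 1000)))

(r/fold + ((map inc) +) v)   ;; => 500500
```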
To circle back to the original question (what are they for?), the thing reducers are great at is applying transformation stacks to large collections with fine-grained data parallelism (where you have many independent elements to be transformed, as opposed to coarse-grained task parallelism, where ExecutorService-like things are a pretty good answer). Anything described as "embarrassingly parallel" is going to be a good match, like applying the same fn to every pixel in an image. Since reducers were created, a lot has happened in the hardware/GPU space around parallel computation, so maybe that changes the implementation details.
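As an illustration of that embarrassingly parallel case (the toy "image" and brighten fn here are invented for the example), r/foldcat applies the same fn to every element, in parallel for large vectors, and concatenates the per-partition results:

```clojure
(require '[clojure.core.reducers :as r])

;; A toy "image": a vector of grayscale pixel values 0-255.
(def pixels (vec (range 256)))

(defn brighten [p]
  (min 255 (+ p 40)))

;; r/foldcat is (r/fold r/cat r/append! ...): it partitions the
;; vector, maps brighten over each partition, and concatenates the
;; results into a reducible, seqable collection.
(def result (into [] (r/foldcat (r/map brighten pixels))))

(count result)   ;; => 256
(first result)   ;; => 40
(last result)    ;; => 255
```

Every pixel is independent, so the work divides cleanly across partitions with no coordination between them, which is exactly the shape of problem reducers were built for.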