Got the archived directory structure working better, but the split version is causing odd problems, so it's shelved for now (or until inspiration strikes).
Tried out xz for compression: whilst it reduces backup size by a useful ~12%, it also consumes more CPU at runtime. Additionally, the CWP file manager doesn't recognise the result as something that can be "descompressed" (yes, another of their spelling mistakes). I'll steer clear of pigz/zstd because they aren't available by default on CWP.
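For anyone weighing that trade-off themselves, a minimal sketch - the sample data and compression levels here are illustrative, not the backup script's actual settings:

```shell
#!/bin/sh
# Compare gzip and xz size/CPU cost on the same input.
# seq output is just an easily compressible stand-in for real backup data.
sample=$(mktemp)
seq 1 100000 > "$sample"

time gzip -k -9 "$sample"   # writes $sample.gz, keeps the original (-k)
time xz   -k -6 "$sample"   # writes $sample.xz, keeps the original

ls -l "$sample" "$sample.gz" "$sample.xz"
rm -f "$sample" "$sample.gz" "$sample.xz"
```

On real site data the ratio gap is usually smaller than on this synthetic sample, but the CPU-time gap between the two tools shows up the same way.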
Optimal I/O is where home, tmp_bak and backup are all on different storage - only likely in larger systems. Certainly, keeping root, home and backup on separate partitions allows for better-suited filesystem types (e.g. ext2 for backup) and allocation of quotas. With anything above, say, 10GB of storage, /home should never share a partition with root - that's a basic noob/Windoze-blinkered mistake.
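A quick way to check that layout, assuming /home and /backup are the mount points in question (adjust to your own): stat -c %d prints the backing device ID, so differing IDs mean separate filesystems.

```shell
#!/bin/sh
# Compare device IDs: equal IDs mean the two paths share a filesystem,
# so backup I/O will contend with live site I/O.
# /home and /backup are assumed mount points - substitute your own.
dev_of() { stat -c %d "$1"; }

if [ "$(dev_of /home)" = "$(dev_of /backup)" ]; then
    echo "same filesystem: backup I/O will contend with /home"
else
    echo "separate filesystems: good"
fi
```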
During testing - with partitioned drives, of course - I could see no temporary files being created during compression. However, that was with archives only up to about 120MB; I imagine some streaming to disc does happen with large data sets. Changing directory to start the archive at another location might help, though that could negate the trimming of the archive's file structure. When a tar is created from within the current directory, files are stored with relative paths.
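That last point can be sketched with tar's -C flag (the temp paths and user1 directory are made up for the demo): archiving from the parent directory stores members with relative paths, so the absolute /home/... prefix never enters the archive.

```shell
#!/bin/sh
# Demonstrate relative paths via tar -C: members are stored as
# "user1/..." rather than "/home/user1/...". All paths are demo-only.
demo=$(mktemp -d)
mkdir -p "$demo/home/user1"
echo "hello" > "$demo/home/user1/file.txt"

# -C changes directory before adding files, trimming the leading path
tar -czf "$demo/backup.tar.gz" -C "$demo/home" user1
tar -tzf "$demo/backup.tar.gz"    # lists user1/ and user1/file.txt

rm -rf "$demo"
```

The same effect comes from cd-ing into the directory first, but -C avoids changing the script's working directory.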
(I don't trust CWP to run my main/larger client sites - only one live e-commerce site is on CWP, though I have 3 Pro licences! I still put up with punitive WHM/cPanel costs for most sites and the unintuitive DirectAdmin for some.)