My explanation, not sure if useful: say you have to send 32000 bytes with error correction, with a block size of 32 bytes (that part is specific to the ECC), where the ECC can fix up to 2 bad bytes per block. Then a burst of noise hits, and, for example, 100 bytes in a row get wiped out. The problem with all error correction schemes is that they can only fix a very limited amount of damage. In that case almost all the data in the affected blocks is lost: the ECC can fix 2 bytes in a 32-byte block, but not when every byte is screwed up :)
So instead of encoding blocks one by one, the data (bits) is interleaved ("mixed") over a much larger block, for example 3200 bytes.
Of course the DSLAM (ISP side) needs to collect all that data first, then add the error correction and do the mixing, and that is what adds the latency. This way, even though those 100 bytes were consecutive on the wire, after the data is reassembled they end up spread over the whole block: 100/3200 = 3.12%, or about 1 byte per 32-byte ECC block, which is easily correctable (see the sketch below).
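A minimal Python sketch of that interleaving idea, just to show the arithmetic. The ECC itself isn't implemented; the block size, block count, and burst position are made-up example values, and we only count how many corrupted bytes land in each 32-byte ECC block after deinterleaving, assuming the code can fix up to 2 bad bytes per block:

```python
BLOCK_SIZE = 32          # bytes per ECC block (assumed)
NUM_BLOCKS = 100         # 100 blocks -> 3200-byte interleaving span
TOTAL = BLOCK_SIZE * NUM_BLOCKS

def interleave(data: bytes) -> bytes:
    """Treat data as NUM_BLOCKS rows of BLOCK_SIZE bytes; read it out column by column."""
    return bytes(data[row * BLOCK_SIZE + col]
                 for col in range(BLOCK_SIZE)
                 for row in range(NUM_BLOCKS))

def deinterleave(data: bytes) -> bytes:
    """Inverse of interleave(): put each received byte back into its original row/column."""
    out = bytearray(TOTAL)
    i = 0
    for col in range(BLOCK_SIZE):
        for row in range(NUM_BLOCKS):
            out[row * BLOCK_SIZE + col] = data[i]
            i += 1
    return bytes(out)

# Original payload and the interleaved stream that actually goes over the wire.
payload = bytes(i % 256 for i in range(TOTAL))
on_wire = bytearray(interleave(payload))

# A noise burst corrupts 100 consecutive bytes on the wire (position is arbitrary).
for i in range(500, 600):
    on_wire[i] ^= 0xFF

received = deinterleave(bytes(on_wire))

# Count corrupted bytes per 32-byte ECC block after deinterleaving.
errors_per_block = [
    sum(a != b for a, b in zip(payload[i:i + BLOCK_SIZE], received[i:i + BLOCK_SIZE]))
    for i in range(0, TOTAL, BLOCK_SIZE)
]
print(max(errors_per_block))   # 1 -> well within the 2-bytes-per-block correction limit
```

Without the interleave/deinterleave step, the same 100-byte burst would hit just a few adjacent blocks and overwhelm them completely.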
So what all that means is: we have a choice here, latency vs. an error-prone setup.