Current standards recommend a one-way delay of no more than 150 milliseconds and allow a maximum of 400 milliseconds. The overall delay is the sum of several components.
Codec Delay:
The coder/decoder, or codec, at the voice source implements the algorithm used to encode and decode the voice. Depending on the algorithm, the codec can introduce a delay of 10 to 50 milliseconds or more.
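As a rough illustration, a frame-based codec's algorithmic delay can be estimated as its frame size plus any look-ahead the encoder needs. The following is a minimal Python sketch, assuming commonly published frame and look-ahead figures for a few well-known codecs; the values are illustrative, not taken from the text above.

    # Sketch: a frame-based codec's algorithmic delay, estimated as
    # frame size plus look-ahead. The figures below are commonly
    # published values, assumed here for illustration only.
    CODECS = {
        # name: (frame_ms, lookahead_ms)
        "G.711": (0.125, 0.0),   # sample-based; essentially no framing delay
        "G.729": (10.0, 5.0),
        "G.723.1": (30.0, 7.5),
    }

    def codec_delay_ms(name):
        frame_ms, lookahead_ms = CODECS[name]
        return frame_ms + lookahead_ms

    for name in CODECS:
        print(f"{name}: {codec_delay_ms(name):.3f} ms")

Note that this captures only the algorithmic delay; actual codecs also need processing time to run the algorithm, which pushes the total toward the higher end of the range.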
Transmission Delay:
The transmission delay is the time it takes for the signal to propagate, at nearly the speed of light, over the circuit. Normally this delay is small, but on a satellite circuit it can be 250 milliseconds.
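The underlying arithmetic is just distance divided by propagation speed. A minimal sketch, assuming a signal speed of roughly two-thirds the speed of light in optical fiber and the usual geostationary-satellite altitude:

    # Sketch: propagation delay = distance / propagation speed.
    C = 299_792_458          # speed of light in a vacuum, meters per second
    FIBER = 0.67 * C         # rough figure for signal speed in optical fiber

    def propagation_delay_ms(distance_m, speed_m_per_s):
        return distance_m / speed_m_per_s * 1000.0

    # A 4,000 km terrestrial fiber route: roughly 20 milliseconds.
    print(f"fiber:     {propagation_delay_ms(4_000_000, FIBER):.1f} ms")

    # A geostationary satellite orbits about 35,786 km up; the signal
    # travels up and back down, which is where the ~250 ms figure comes
    # from (actual slant paths are longer than the straight-down distance).
    print(f"satellite: {propagation_delay_ms(2 * 35_786_000, C):.1f} ms")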
Insertion Delay:
Insertion delay is the amount of time it takes to clock the bits onto the line. An 8000-bit frame takes 800 microseconds to transmit at 10 megabits per second. The same frame takes 125 milliseconds to transmit on a 64-kilobit-per-second line.
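The arithmetic is simply frame size divided by line rate; the short sketch below reproduces both figures from the paragraph above.

    # Sketch: insertion delay = frame size in bits / line rate in bits/second.
    def insertion_delay_s(frame_bits, line_bps):
        return frame_bits / line_bps

    FRAME_BITS = 8000
    print(f"{insertion_delay_s(FRAME_BITS, 10_000_000) * 1e6:.0f} microseconds")  # 800
    print(f"{insertion_delay_s(FRAME_BITS, 64_000) * 1e3:.0f} milliseconds")      # 125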
Jitter Buffer:
Queueing delays introduce variation in the delay, known as jitter. A jitter buffer must be used at the receiving codec to smooth out the jitter before the sound is re-created. A jitter buffer can add 80 milliseconds or more to the delay.
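A jitter buffer trades added delay for smooth playout: it must hold packets long enough that even late-arriving ones are on hand at their playout time. One simple way to size it is from observed per-packet transit times, buffering deeply enough to cover the worst observed variation. A minimal sketch, using hypothetical transit times:

    # Sketch: sizing a jitter buffer from observed per-packet transit times.
    # The transit times below are hypothetical, for illustration only.
    transit_ms = [40, 42, 48, 41, 95, 43, 60, 44, 42, 47]

    fastest = min(transit_ms)                   # best-case transit time
    jitter = [t - fastest for t in transit_ms]  # variation above best case

    # Buffer deeply enough to cover the worst observed jitter; any packet
    # arriving later than this would be discarded as too late to play out.
    buffer_ms = max(jitter)
    print(f"jitter buffer depth: {buffer_ms} ms added to the one-way delay")

In practice a buffer sized to a high percentile rather than the maximum gives a smaller delay at the cost of occasionally discarding a very late packet.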
User Perception:
If the total of all these delays is less than 150 milliseconds, users will probably not notice it. If the delay is between 150 and 400 milliseconds, users will notice the delay but will still find the line usable. Many users would consider a line with a delay of more than 400 milliseconds unusable.
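Putting the components together, a one-way delay budget can be totaled and checked against the 150- and 400-millisecond thresholds. The component values in this sketch are illustrative assumptions, not measurements:

    # Sketch: totaling a one-way delay budget against the 150 ms and
    # 400 ms thresholds. Component values are illustrative assumptions.
    budget_ms = {
        "codec": 15,           # e.g. a 10 ms frame plus 5 ms look-ahead
        "insertion": 1,
        "propagation": 25,
        "jitter_buffer": 80,
    }

    total = sum(budget_ms.values())
    if total < 150:
        verdict = "probably unnoticed by users"
    elif total <= 400:
        verdict = "noticeable, but the line is still usable"
    else:
        verdict = "considered unusable by many users"

    print(f"total one-way delay: {total} ms ({verdict})")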