
Practical guide to predicting latency effects?

I'm looking for a practical guide (i.e. specifically NOT an academic paper, thanks anyway) to predicting the effect of increased (or decreased) latency on my users' applications.

Specifically, I want to estimate how much improvement there will be in {bandwidth, application XYZ responsiveness, protocol ABC goodput, whatever} if I decrease the RTT between the user and the server by 10 ms, by 20 ms, or by 40 ms.

My googling has come up with lots of research articles discussing theoretical frameworks for figuring this out, but nothing concrete in terms of a calculator or even a rule-of-thumb.
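For concreteness, here is the sort of rule of thumb I mean. The widely cited Mathis et al. model for loss-limited TCP says steady-state throughput scales roughly as MSS / (RTT * sqrt(loss)), which implies a single bulk flow speeds up roughly in proportion to the RTT reduction. A back-of-the-envelope sketch (the loss rate, MSS, and RTT numbers below are purely illustrative, not measurements):

```python
from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate, c=sqrt(1.5)):
    """Rough upper bound on steady-state TCP throughput (bits/sec),
    per the Mathis et al. (1997) model: rate ~ C * MSS / (RTT * sqrt(p))."""
    return (mss_bytes * 8 * c) / (rtt_s * sqrt(loss_rate))

def improvement(rtt_before_s, rtt_after_s):
    """Relative throughput gain from an RTT cut, loss held constant."""
    return rtt_before_s / rtt_after_s - 1.0

# Example: shaving 20 ms off a 60 ms path, assuming 0.1% loss, 1460-byte MSS
before = mathis_throughput_bps(1460, 0.060, 0.001)
after = mathis_throughput_bps(1460, 0.040, 0.001)
print(f"{before/1e6:.1f} Mbps -> {after/1e6:.1f} Mbps "
      f"({improvement(0.060, 0.040):.0%} faster)")
# -> 7.5 Mbps -> 11.3 Mbps (50% faster)
```

Of course this only covers one bulk TCP flow; interactive responsiveness and multi-connection workloads behave differently, which is exactly why I'm hoping someone has something more complete.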

Ultimately, this goes into MY calculator: we have the usual North American duopoly on last-mile consumer internet here, and I'm connected directly to only one of the two.  There's a cost $X to improve connectivity so I'm peered with both; how do I tell if it will be worthwhile?

Anyone got anything at all that might help me?

Thanks in advance,

Adam Thompson
Consultant, Infrastructure Services
MERLIN <https://www.merlin.mb.ca/>
100 - 135 Innovation Drive
Winnipeg, MB, R3T 6A8
(204) 977-6824 or 1-800-430-6404 (MB only)
athompson at merlin.mb.ca
