Vonage and Comcast troubleshooting notes

I had a client recently get Vonage Business.  We’re using pfSense, which is, in my honest opinion, “Enterprise Class”.  It scales like no other for the price!

I’m going to document my troubleshooting here; more to come.  My client was experiencing voice jitter, voice fading, and occasional dropped calls.
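Before blaming anyone, it helps to put a number on the jitter.  Here’s a rough sketch that estimates it as the mean absolute difference between consecutive ping RTTs — the function name `jitter_ms` is my own, not part of any tool, and it assumes standard Linux `ping` output:

```shell
# Rough jitter estimate: mean absolute difference between consecutive
# ping round-trip times.  Reads standard Linux `ping` output on stdin.
jitter_ms() {
  grep -o 'time=[0-9.]*' | cut -d= -f2 |
    awk 'NR > 1 { d = $1 - p; if (d < 0) d = -d; s += d; n++ }
         { p = $1 }
         END { if (n) printf "%.1f\n", s / n }'
}

# Example (needs network): ping -c 20 prov.vocalocity.com | jitter_ms
```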

The first thing I did was a traceroute.  On Linux the command is:  traceroute hostname; on Windows it’s:  tracert hostname.  In my example I traced to the SIP provisioning IP / host name of Vonage, which at the time of writing is:  prov.vocalocity.com.
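When you’re comparing a pile of traceroute logs, it helps to count the dead hops automatically.  A tiny sketch — the `count_timeouts` function name and the log-on-stdin approach are my own, assuming Linux `traceroute` output where a fully timed-out hop shows as three asterisks:

```shell
# count_timeouts: given a Linux traceroute log on stdin, count hops where
# all three probes timed out (lines like "14  * * *").
count_timeouts() {
  grep -cE '^ *[0-9]+  *\* \* \*'
}

# Example (needs network): traceroute prov.vocalocity.com | count_timeouts
```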

I did the traceroute on CTS Telecom (fiber), Charter, and Comcast.  The interesting thing is that on CTS and Charter Business it completed from start to finish!

I’m removing the first 5 hops so you don’t know where I live :P  Stalkers!

Charter HOME connection trace:

traceroute to prov.vocalocity.com (, 30 hops max, 60 byte packets

6  lag-200.ear3.Chicago2.Level3.net (  50.797 ms  42.233 ms *
7  ae-1-3501.edge4.Chicago3.Level3.net (  41.505 ms ae-14-3604.edge4.Chicago3.Level3.net (  20.111 ms  20.375 ms
8 (  21.629 ms  21.607 ms  21.618 ms
9  cr2-tengig-0-5-3-0.chd.savvis.net (  19.501 ms  50.857 ms  57.366 ms
10 (  76.101 ms  74.702 ms  75.813 ms
11  hr1-te-1-0-0.atlantaat1.savvis.net (  74.757 ms  72.764 ms  74.133 ms
12  das1-v3006.at1.savvis.net (  64.993 ms  64.200 ms  64.097 ms
13 (  62.473 ms  52.542 ms  51.231 ms
14  * * *
15  * * *
16  * * *
17  * * *^C
…it never finishes!

Notice I prefaced this with a HOME connection.  Why?  Because Charter business service is DIFFERENT!

Charter BUSINESS trace:

Tracing route to prov.vocalocity.com []
over a maximum of 30 hops:

6    18 ms    15 ms    15 ms  bbr01aldlmi-bue-1.aldl.mi.charter.com []
7     *        *        *     Request timed out.
8    30 ms    22 ms    22 ms  ae-2-3601.edge4.Chicago3.Level3.net []
9    20 ms    22 ms    32 ms
10    21 ms    22 ms    21 ms  cr2-tengig-0-5-3-0.chd.savvis.net []
11    38 ms    38 ms    38 ms
12    37 ms    38 ms    38 ms  hr1-te-1-0-0.atlantaat1.savvis.net []
13    50 ms    45 ms    49 ms  das1-v3005.at1.savvis.net []
14    40 ms    44 ms    55 ms
15    40 ms    38 ms    38 ms
16    37 ms    41 ms    38 ms

Trace complete.

It finishes and the ROUTES are different!

Charter must route their traffic differently depending on whether you’re a home or business customer.


Comcast was much like the Charter failures for home users:

te-0-4-0-19-ar02.pontiac.mi.michigan.comcast.net (  26.269 ms
te-0-4-0-18-ar02.pontiac.mi.michigan.comcast.net (  18.524 ms  17.141 ms
be-33668-cr01.350ecermak.il.ibone.comcast.net (  29.193 ms  24.221 ms  27.240 ms
c-eth-0-3-0-pe05.350ecermak.il.ibone.comcast.net (  22.148 ms  25.057 ms  22.213 ms
ber1-tengig3-3.chicagoequinix.savvis.net (  24.499 ms  23.066 ms  27.972 ms
cr1-tengig-0-5-3-0.chd.savvis.net (  31.775 ms  31.013 ms  31.473 ms
10 (  52.204 ms  55.201 ms  48.346 ms
11  hr1-te-1-0-0.atlantaat1.savvis.net (  46.763 ms  58.040 ms  48.552 ms
12  das1-v3006.at1.savvis.net (  49.561 ms  49.876 ms  56.578 ms
13 (  53.821 ms  58.421 ms  46.651 ms
14  * * *
15  * * *
16  * * *
17  * * *


I had to actually call Comcast tier 1 tech support – yes, the people who just tell you to reboot your router.  After we discussed it, the tier 1 tech logged into the modem I was on and ran his own traceroute.  He said, “It works for me.  It finishes.”  I said, “Really – what IP address did you resolve to and reach?”  He told me, and it was an IP address in the Netherlands!  I then insisted he get this escalated, because who in their right mind would do VoIP from the Midwest to servers across the Atlantic in the Netherlands?  That makes no sense.

You have to push these guys to THINK!

After submitting my ticket, the traceroutes FINALLY completed like they should.

I can’t say this fixed my call quality issues – I’m still troubleshooting.  These things take time, BUT don’t EVER rule out the ISP’s network as a possible cause.  This is the 2nd time an ISP has had a routing problem on their backbone that affected service.  The other time I dealt with Frontier about their backbone in Muskegon, MI – they were great about it, and I actually talked to the guy out of New York who fixed it.  I did an IP whois lookup, which lists the maintainer’s email (that never works) and the maintainer’s telephone number – which told me to call another number, and after a brief phone tree battle I got to the guy who fixed it!
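For the whois step, you can filter the contact lines out of the noise instead of reading the whole record.  A sketch assuming ARIN-style output — the field names I grep for and the `contact_info` function name are my own, and the IP in the example is a placeholder, not a real provider address:

```shell
# contact_info: pull tech/abuse contact lines out of `whois` output on
# stdin (assumes ARIN-style "OrgTech*" / "OrgAbuse*" field names).
contact_info() {
  grep -Ei '^(OrgTechPhone|OrgTechEmail|OrgAbusePhone|OrgAbuseEmail)'
}

# Example (needs network; placeholder IP): whois | contact_info
```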

Moving on…

I also updated / forced the phones on the network to use Google’s public DNS servers, rather than the local DNS server or Comcast’s.  Comcast’s DNS servers (believe it or not) have 150 ms+ ping times – OMG!  Google’s DNS servers answer in something like 20 ms or less, maybe a touch over, BUT they’re fast.  I don’t know if that will make a difference to the phone quality (probably not), but it’s another thing I’m touching and changing.
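If you want to compare resolvers yourself, `dig` reports the lookup latency directly on its “;; Query time” line.  Here’s how I’d pull just that number out – the `query_ms` function name is mine, and the example uses, the well-known Google Public DNS resolver:

```shell
# query_ms: extract the query time in milliseconds from `dig` output
# on stdin (the ";; Query time: 23 msec" line).
query_ms() {
  awk -F'[: ]+' '/Query time/ { print $4 }'
}

# Example (needs network): dig @ prov.vocalocity.com | query_ms
```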

QoS might be next, BUT once your traffic hits the public Internet all bets for QoS are off.  QoS is really for your local network and routing, and my client is barely touching their WAN – they’re too busy working, and nobody streams or uses the Internet for more than work.

I’ll update this as I move forward.  I’ve thought about running a SIP proxy on the pfSense box with QoS applied to that proxy, per some tutorials on the pfSense forum, but I haven’t gone there yet.