Jay Taylor's notes

back to listing index

Why do UDP packets get dropped? | Hacker News

Original source (news.ycombinator.com)
Tags: udp network-programming news.ycombinator.com
Clipped on: 2016-08-31

Why do UDP packets get dropped? (jvns.ca)
50 points by sebg 1 hour ago | 40 comments





Her category 'lost in transit' is perhaps the biggest cause of UDP drops. No matter how big your buffers on the send/receive side, if an intermediary carrier decides UDP is not important or a D/DoS threat to their network, bye bye packet... or in some cases of rate-limiting, see you in a while perhaps.

A few rabbit-holes to dive down:

https://www.us-cert.gov/ncas/alerts/TA14-017A

http://www.christian-rossow.de/articles/Amplification_DDoS.p...

https://tools.ietf.org/id/draft-byrne-opsec-udp-advisory-00....

https://datatracker.ietf.org/wg/taps/charter/


The app can be at fault too. Some traffic is bursty (video frames, large files, images) and some client stacks have tragically small IP buffers (as little as 128K in some OSs). Apps must be prepared to read to exhaustion without pausing to process in those situations, then process once the buffer-storm is over.
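The "read to exhaustion" idea can be sketched in Python (a hypothetical helper, not from the article): drain every queued datagram with a non-blocking socket first, and only process once the kernel buffer is empty.

```python
import socket

def drain_socket(sock, max_datagrams=10000):
    """Read every queued datagram without pausing to process them.

    Processing only starts after the kernel buffer is empty, so a
    burst can't overflow a small receive buffer while we're busy.
    """
    sock.setblocking(False)
    datagrams = []
    for _ in range(max_datagrams):
        try:
            data, addr = sock.recvfrom(65535)
        except BlockingIOError:
            break  # kernel buffer drained
        datagrams.append((data, addr))
    return datagrams
```

After `drain_socket` returns, iterate over `datagrams` and do the expensive work; during the next burst, call it again.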

I was excited when I saw the title -- UDP is the workhorse for data transfers on many projects I work on. The info, though, was very basic.

TL;DR version: packets get dropped when some buffer -- at your local computer or at a router between here and there -- gets full.

I do not want to sound too critical -- the info is good for someone who has never heard of UDP.

But I was hoping for more information: more substantiation that full buffers are the main source of UDP drops (e.g., can smart throttling take some or most of the blame? given the need to drop a packet, dropping UDP is usually less painful than dropping TCP), any quantitative numbers from a sample network or hardware, etc.


Dropping TCP is normally preferable, I'd have thought, as it'll cause the TCP socket to back off. Dropping UDP is less likely to lead to such behaviour.

It's more because UDP is designed to be an unreliable protocol: packet losses are expected to happen and applications will have to deal with them. Meanwhile, dropping a TCP packet may cause the congestion control algorithm to back off, but you're guaranteed to waste more traffic while those TCP sessions figure out WTF happened and retransmit.

What we found when working with UDP (multicast) at a previous gig (it's been a while):

The messages can be up to around 64KB. I say "around" because, in our experiments, different OSes did different things (this caused us much confusion). I think HP-UX would drop them silently if they got to 62KB; at 64KB it would return an error. Keep them below 60KB to be safe.

Multicast means the router has to be set up correctly. When they muck with the network and add a hop, make sure your TTL (time to live) is set correctly on send. One of our OSs had a strange default for this.
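Setting the multicast TTL explicitly looks like this in Python (a minimal sketch; the TTL value of 4 is an arbitrary illustration, not a recommendation):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Most stacks default IP_MULTICAST_TTL to 1, so multicast traffic dies
# at the first router hop. When a hop is added, raise it explicitly.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 4)
ttl = sock.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL)
print(ttl)
```

Any `sendto()` to a multicast group address on this socket then carries the configured TTL.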

All in all it's really a good protocol, though: you have no idea if the message got to where it was sent, but sometimes you don't need that. Very low overhead.

Multicast was the selling point for us. Send and everyone attached to the group gets the message. You can subscribe to the groups and see all the messages which makes debugging easier.


This is not an original opinion, but I love her style of writing. I can feel the pure unadulterated joy at learning these things. Sometimes I learn along with her, sometimes I am seeing an old subject through new eyes. Always, it's worth the read.

If so many packets get lost in buffer overflows, can you somehow make the system wait for the buffer to empty?

If sending a packet synchronously, the system call would just block until the buffers have drained enough.

How would you do it while receiving? Does the outside network just bang bits through the cable like a radio station? Then you'd have no choice but to save everything as fast as it comes in, lest you lose packets. Or is there, deep down on the lowest levels of the stack, actually some kind of request/response going on, even for UDP? For example at the level of Ethernet frames, or even individual bits? Like "here are some bytes. got them? - yes. - here are some more. got them? - (waiiiiting....) yes." Then you could just let the next router in the system wait while you drain your input buffer.

Even if there is no request/response going on, you could still view your incoming traffic as the next router's outgoing. Configure the router to hold off sending until your client has drained the router's output buffer enough. (That would require the router to know how much data you can take.)


Buffers are the putative reason. We'd be tempted to suggest bigger buffers would help, but that contributes to the famous buffer-bloat problem that tanked the internet some years back.

The practical solution is to meter traffic (don't send faster than packets can be processed on the receive end, or faster than the tightest congestion bottleneck en route). At the same time, process as quickly as possible on the receive side -- don't let a single buffer sit in the IP stack longer than absolutely necessary. E.g., receive on one higher-priority thread and queue to a processing thread.
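The receive-thread-plus-queue pattern can be sketched like this (hypothetical helper names; the `b"__stop__"` sentinel is just for the example's shutdown):

```python
import queue
import socket
import threading

def start_receiver(sock, work_queue):
    """Drain the kernel's socket buffer on a dedicated fast thread and
    hand datagrams to a slower worker via an in-process queue, so a
    burst never sits in the IP stack waiting on processing."""
    def rx_loop():
        while True:
            data, addr = sock.recvfrom(65535)
            if data == b"__stop__":   # sentinel used here for shutdown
                break
            work_queue.put((data, addr))
    t = threading.Thread(target=rx_loop, daemon=True)
    t.start()
    return t
```

The in-process queue can grow far beyond the kernel's socket buffer, which is exactly the point: the cheap thread keeps the small, loss-prone buffer empty.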


The guy also misses the point that packets can be dropped because of integrity check failures: if the packet has been altered in some way (cosmic ray, a microwave interfering with Wi-Fi, or whatever), then the checksum fails and the packet is dropped.

Actually, the causes of packet dropping are (AFAIK) exactly the same as in TCP. But in TCP, dropped packets are sent again.

One exception though: in UDP, delayed packets won't be waited for long before being dropped.


The ways in which packets are accidentally lost is the same between TCP and UDP, but if you have 150Mbps total coming in from four different ports and you need to route that data over a single 100Mbps port, then the router needs to choose which packets to drop. I bet that TCP packets take priority over UDP in many routers.

> So if you have a network card that's too slow or something, it's possible that it will not be able to send the packets as fast as you put them in! So you will drop packets. I have no idea how common this is.

This is extremely uncommon on 1Gbps NICs but is much more common on 10Gbps+ NICs. Also this type of dropping can happen on both RX and TX.

EDIT:

Also the author missed a set of buffers. The RX/TX rings on the NIC! These store packets on the NIC before they are moved to RAM (for RX) or sent on the wire (for TX). You can see them and configure their size using ethtool on Linux.

   $ ethtool -g ens6
   Ring parameters for ens6: 
   Pre-set maximums:
   RX:		4096
   RX Mini:	0
   RX Jumbo:	0
   TX:		4096
   Current hardware settings:
   RX:		512
   RX Mini:	0
   RX Jumbo:	0
   TX:		512

"This is extremely uncommon on 1Gbps NICs but is much more common on 10Gbps+ NICs. Also this type of dropping can happen on both RX and TX."

Is this backwards, or does the faster NIC really drop more packets? That seems like it would be unfortunate.


Not at "low speeds", but at line rate it requires a lot more CPU power to process all of those packets, which often leads to drops. These drops specifically are called RX ring/FIFO overflows/overruns, and they happen when the NIC enqueues more than rx_ring_size packets before the OS initiates a DMA of incoming packets from the NIC to RAM.

I suppose if you point a 10Gbps UDP stream at a 1Gbps NIC there will be drops, but those drops will happen at the switch, not at the interface, which is a different type of dropping.
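Ring overruns show up in driver stats (`ethtool -S`), while the kernel's own UDP drop counters live in `/proc/net/snmp` on Linux and can be read directly. A small parser (hypothetical helper, assuming the standard two-row `Udp:` layout of that file):

```python
def udp_counters(path="/proc/net/snmp"):
    """Parse Linux's /proc/net/snmp for the UDP counter row.

    RcvbufErrors counts datagrams dropped because a socket receive
    buffer was full; InErrors includes those plus checksum failures
    and other receive-side drops.
    """
    with open(path) as f:
        rows = [line.split() for line in f if line.startswith("Udp:")]
    header, values = rows[0], rows[1]  # first row names, second row values
    return dict(zip(header[1:], (int(v) for v in values[1:])))
```

Watching `RcvbufErrors` climb while your app is busy is the clearest sign that the receive side, not the network, is dropping.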


Interesting fact: there's an ARP buffer in the Linux kernel that holds outbound packets waiting for ARP resolution, and it holds 3 packets (not configurable, last time I checked).

No. Linux uses the socket buffer to buffer packets while ARP is in progress (the only sane thing to do, IMO). I believe the "buffers 3 packets" behavior was how it was done in the early 2.0 kernels.

Windows [1], the BSDs [2], and OS X [3] only buffer 1 packet per socket while it's ARPing.

    [1] Empirical testing.
    [2] https://developer.apple.com/legacy/library/documentation/Darwin/Reference/ManPages/man4/arp.4.html
    [3] http://www.unix.com/man-page/FreeBSD/4/arp/ and similar for other BSDs

So the root cause is always buffers? That's it? Nothing ever gets lost for some other reason, maybe some sort of collision?

Mostly not. But she wrote that she has no idea about how the internet (or probably communications in general) works. Communication channels can be described by a certain channel capacity as well as a signal-to-noise ratio (SNR). The lower the SNR, the higher the chance of losing information (packets or parts of them). Encoding and error correction can help to a certain extent, but will lower the throughput. Whatever you do, there's always a chance > 0% of losing something -- on wireless connections, much higher. If the internet on your mobile doesn't work properly, it's much more likely because of this and not because of overloaded buffers. The extreme case would be a cut cable, which leads to an SNR of 0. Depending on the medium and topology, other errors can happen, e.g. routing errors.

I'm surprised that nothing like this is mentioned -- isn't that undergraduate university stuff?


It could also be data corruption. Some bits flip in the header, and the UDP packet either fails checksum validation or gets misrouted.

There are also software reasons. A network stack is configured for some maximum UDP size and a datagram exceeds that size. On Mac OS X, the default maximum is just a little over 9200 bytes; you have to

   sudo sysctl -w net.inet.udp.maxdgram=65535
The buffer space is there; this is an administrative policy reason for packets being tossed.
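As an illustration (not from the article): Python's `sendto` surfaces this kind of administrative rejection as an `OSError` (typically `EMSGSIZE`), without the packet ever reaching the network. The port number below is arbitrary.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    # 70,000 bytes exceeds UDP's hard 65,507-byte IPv4 payload limit,
    # and on OS X even smaller datagrams can hit net.inet.udp.maxdgram.
    sock.sendto(b"x" * 70000, ("127.0.0.1", 9))
    print("sent")
except OSError as err:
    print("rejected before leaving the host:", err)
```

The datagram is "dropped" locally: no buffer was involved, only a policy limit in the stack.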

Datagrams can be dropped due to being unroutable. Some router gets rebooted, and so its peers drop packets. TCP connections keep going after a little pause; UDP sessions lose datagrams.

Lastly, IP datagrams are dropped if their TTL (time to live) decrements to zero as they cross a router, even if they are otherwise routable, not too big for any buffer, and with a correct checksum.
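The initial TTL is under the sender's control; a sketch of setting it per socket (the value 3 is arbitrary, for illustration):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Each router decrements TTL by one; at zero the packet is discarded
# and an ICMP Time Exceeded is usually sent back -- traceroute's trick.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 3)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TTL))
```

With TTL 3, any datagram from this socket that needs more than three router hops is guaranteed to be dropped in transit.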


There's all kinds of other reasons:

- Packet gets corrupted. A bit in the right place could cause it to be dropped or delivered to the wrong destination.

- Something goes wrong on the PHY layer, say a dodgy fiber connection between two nodes. It'll then get dropped due to Ethernet CRC mismatches.

- There are network nodes whose primary reason for existing is selectively dropping packets. For example traffic shapers, firewalls.


I think there are other IP/routing reasons why an IP packet could get dropped -- for example, if it needs fragmenting but has the don't-fragment (DF) bit set. Also if the TTL runs out from the packet being forwarded too many times.

I imagine all the fancy routing protocols people use could end up dropping a packet. Maybe that can be lumped in with traffic shapers and firewalls?

Fun story about randomly dropping UDP: Comcast did that when they first started doing traffic shaping, and it ruined my Team Fortress Classic games. I assume there must be a better way than dropping packets. Maybe buffers :) ?


Queuing tends not to be a great traffic shaping strategy. The problem is that the sender will almost certainly not react to the extra queuing delay in any way (nobody runs RTT-sensitive congestion control in practice). And if the sender doesn't react to the queuing, it'll fill up the buffer and the same number of packets will be dropped anyway. So you get increased RTTs and no reduction in packet loss.

There is a better way of doing shaping for TCP traffic, by making the sender reduce the rate of transmission by other means than dropping packets. For example via manipulating the receive window. (And other, more sophisticated variations on that). But these methods aren't applicable to UDP in general, at best to some small subset of UDP-based protocols.


They explicitly don't go into what happens on the routers between sender and receiver. Congestion control is a huge part of TCP's design. Don't know how common it is these days, but that is another place where packets will just get dropped. TCP packets get dropped too, but detecting that, resending, and then throttling back is built into the protocol. With UDP you either don't care or you handle that differently higher in the stack.

Then their article should be titled "because they do". Pointing out one tiny cause that is seldom the reason for UDP packet drops is literally worse than useless.


They didn't go into the number of ways packets can be lost between the sender and receiver. In my experience it is generally some small corruption at some point that causes the IP or Ethernet checksum/CRC to fail, and the packet gets dropped. That is extremely uncommon nowadays and generally due to failing hardware; buffers are by far the most common cause.

"lost in transit"

"It's possible that you send a UDP packet in the internet, and it gets lost along the way for some reason. I am not an expert on what happens on the seas of the internet, and I am not going to go into this."

Well. Ok.


There aren't a lot of true bus topologies out there any more for collisions to occur on. And Wi-Fi retransmits.

So does wired ethernet.

Well, wired Ethernet isn't a bus topology anymore, so there aren't collisions.

> So the root cause is always buffers? That's it? Nothing ever gets lost for some other reason, maybe some sort of collision?

Indirectly; lots of collisions put stress on the buffer, so the probability that the buffer will be exhausted will increase.

Of course, if an intermediary router fails after sending the message, a UDP packet will also be silently dropped.


[I realize this comment will be Dead. I disagreed with one of Dang's friends once, and via the brilliant moderation of HN...]

There are countless ways UDP packets get dropped, and this submission, currently sitting on top of HN, demonstrates the utter decline of this site: a veneer of information that doesn't even remotely cover the topic, can be consumed in seconds, and makes everyone feel smarter, despite not being smarter at all. Eh.


UDP is just a thin wrapper around an IP packet that adds ports and a checksum. This in turn is just a thin wrapper around whatever underlying network packets you use.

So UDP packets are as reliable as TCP packets in principle. TCP just hides the unreliability with flow control and retransmits.
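"Thin wrapper" is literal: the entire UDP header is 8 bytes. A sketch of building one by hand (hypothetical helper; port numbers are arbitrary examples):

```python
import struct

def udp_header(src_port, dst_port, payload, checksum=0):
    """UDP's whole header: source port, destination port,
    length (header + payload), checksum (0 = not computed, IPv4 only),
    each a 16-bit big-endian field."""
    return struct.pack("!HHHH", src_port, dst_port, 8 + len(payload), checksum)

hdr = udp_header(5353, 53, b"query")
print(len(hdr), struct.unpack("!HHHH", hdr))
```

Everything else -- ordering, retransmission, congestion control -- is what TCP's much larger header and state machine add on top.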


There is also UDT as a reliable UDP-based alternative to TCP. Some day I'd like to hear from someone who used it whether it's worth it and when it makes sense.

"when a few dropped packets here and there don't mean the end of the Universe. Sample applications: tftp (trivial file transfer protocol, a little brother to FTP), dhcpcd (a DHCP client), multiplayer games, streaming audio, video conferencing, etc."

http://beej.us/guide/bgnet/output/html/singlepage/bgnet.html...


You can't put 10 pounds of shit in a 5 pound bag

Though Windows Vista was once installed on 1GB laptops and sold that way to unsuspecting customers.

Throw shit on the wall and leave to dry for a few hours first.

You can, but it's not generally advisable.

(I wish authors would date articles.)

URL has the date of 2016 08 24 ...


