I have a receiver-side NDIS ETL capture that shows every packet present and in order. An application is receiving the same data off a socket, but it misses some of the data and occasionally receives it in the wrong order.
The application is receiving multicast UDP over a LAN. The loss is approximately 0.01% of the data received. I know UDP doesn't guarantee delivery or ordering, and the throughput isn't extreme, but I want to understand why the OS would drop the data.
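For reference, here is a simplified sketch of the receiver setup (the group address, port, and buffer size below are placeholders, not my actual values). One thing I've already verified this way is whether the OS actually honors a larger `SO_RCVBUF` request, since it may silently clamp it:

```cpp
// Simplified multicast UDP receiver (Winsock). Error checks mostly
// omitted for brevity; group/port/buffer values are placeholders.
#include <winsock2.h>
#include <ws2tcpip.h>
#include <cstdio>
#pragma comment(lib, "Ws2_32.lib")

int main() {
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    // Request a larger kernel receive buffer, then read back what the
    // OS actually granted.
    int requested = 8 * 1024 * 1024;  // 8 MB, placeholder
    setsockopt(s, SOL_SOCKET, SO_RCVBUF,
               reinterpret_cast<const char*>(&requested), sizeof(requested));
    int granted = 0;
    int len = sizeof(granted);
    getsockopt(s, SOL_SOCKET, SO_RCVBUF,
               reinterpret_cast<char*>(&granted), &len);
    printf("SO_RCVBUF: requested %d, granted %d\n", requested, granted);

    sockaddr_in local{};
    local.sin_family = AF_INET;
    local.sin_port = htons(5000);               // placeholder port
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(s, reinterpret_cast<sockaddr*>(&local), sizeof(local));

    ip_mreq mreq{};
    inet_pton(AF_INET, "239.1.1.1", &mreq.imr_multiaddr);  // placeholder group
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP,
               reinterpret_cast<const char*>(&mreq), sizeof(mreq));

    char buf[65536];
    for (;;) {
        int n = recv(s, buf, sizeof(buf), 0);
        if (n < 0) break;
        // ... hand the datagram off to the application ...
    }
    closesocket(s);
    WSACleanup();
}
```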
It doesn't need to be lossless, but I first want to identify where the packets are being lost (e.g. in some finite kernel buffer), and then find the threshold at which drops start (e.g. after 64 MB of queued data).
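As a first probe, I was thinking of snapshotting the system-wide UDP statistics around a run. If I understand the counters correctly, `dwInErrors` from `GetUdpStatisticsEx` is what `netstat -s` reports as UDP "Receive Errors", and it should climb when the socket receive buffer overflows, so a nonzero delta that matches the application's gaps would point at the kernel buffer:

```cpp
// Snapshot UDP receive-error counters before and after a run.
// Assumes Windows; links against Iphlpapi.lib.
#include <winsock2.h>
#include <iphlpapi.h>
#include <cstdio>
#pragma comment(lib, "Iphlpapi.lib")
#pragma comment(lib, "Ws2_32.lib")

// dwInErrors counts datagrams the stack received but could not deliver
// to a socket (the counter netstat -s shows as "Receive Errors").
DWORD UdpInErrors() {
    MIB_UDPSTATS stats{};
    if (GetUdpStatisticsEx(&stats, AF_INET) != NO_ERROR) return 0;
    return stats.dwInErrors;
}

int main() {
    DWORD before = UdpInErrors();
    // ... run the receive workload here ...
    DWORD after = UdpInErrors();
    printf("UDP InErrors delta: %lu\n", after - before);
}
```

Is that the right counter to watch, or is there a better way to attribute drops to a specific buffer and find its limit?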
Any tips or ideas on steps to take would be greatly appreciated.