Jima:
You might be surprised how far you can get a 9000-byte packet.
I did some experiments a few years back, when I was an IT Dictator — I mean, IT Director — over an ASN...I wonder if I can figure out where I wrote down the findings. 🤔
silverwizard:
@Jima :ivoted: @Alyx :neocat_flag_ace: I had a wonderful situation where I had users in one country able to get jumbo frames to AWS, and then, across the border, no jumbo frames to AWS. This led to a giant argument about how to speed up traffic, because we needed different solutions in different countries.
And despite that, I was getting jumbo frames from Colombia to US-East-2 and it was wild!
Sven:
@jima is it still noticeable extra effort or cost? Because to me it sounds like a "why not, if someone wants to do something unusual at least we aren't going to be the problem" thing, with little reason not to do it if your equipment supports it?
Jima:
@HeNeArXn In my experience, it's more about risk than anything else.
By opening your network to MTU 9000, you create a hypothetical, potentially unknowable bottleneck somewhere else, where a packet would need to step down to MTU 1500. For IPv4, that's not the hugest concern (assuming the DF — don't fragment — bit isn't set), but for IPv6, intermediate routers can't fragment a too-large packet.
@HeNeArXn Instead, they have to send an ICMP "packet too big" response, and assuming no idiots have firewalled that specific ICMP type (or worse, all of them!), the sending host "should" receive the response and send a smaller re-attempt.
If everyone is doing what they should (allowing the ICMP traffic through)? No particular risk. If not... 😑
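(A minimal sketch of watching that mechanism from a Linux host, assuming Python on Linux; the destination and the 9000-byte figure are placeholders, and the socket option values are the Linux ones from <linux/in.h>, used as numeric fallbacks in case this socket module build doesn't export them by name:)

import socket

# Linux-only socket options; fall back to the <linux/in.h> values if unexported.
IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)
IP_MTU = getattr(socket, "IP_MTU", 14)

DEST = ("example.net", 9)  # placeholder destination

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Set the DF bit and refuse local fragmentation, so too-big sends fail instead of fragmenting.
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
s.connect(DEST)

payload = 9000 - 20 - 8  # a 9000-byte IP packet minus the 20-byte IP and 8-byte UDP headers
try:
    s.send(b"\x00" * payload)
    print("kernel accepted a jumbo-sized datagram with DF set")
except OSError as err:
    print("send refused: something between here and the wire is smaller:", err)

# When an ICMP "fragmentation needed" comes back from a hop that can't carry it,
# the kernel caches the lower path MTU for this destination; on a connected
# socket the current value can be read back directly:
print("kernel's current idea of the path MTU:", s.getsockopt(socket.IPPROTO_IP, IP_MTU))

The IPv6 flavour works the same way but depends entirely on ICMPv6 "packet too big" replies, since routers can't fragment on their own; if those replies are filtered somewhere along the way, the cached value never steps down and the oversized datagrams just silently vanish in transit, which is the PMTUD black-hole failure mode described above.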
Sven:
Right, but for an exchange it is good to have a bigger value, so whatever limit the peer networks have chosen, the exchange network will never be that bottleneck?
And as a network operator I'd rather have the limit happen close to the user edge too, vs. after having hauled the traffic to the exchange?
Jima:
@HeNeArXn I would absolutely agree with that assessment.
Peer networks can choose, based on their appetite for risk, and the intermediary network DGAF. 😀
...although granted, the other peer in the exchange may well be the bottleneck, and hopefully they didn't fuck around with ICMP... 😒
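(To put a number on the "never the bottleneck" argument: the usable end-to-end MTU is simply the smallest MTU configured on any hop along the path, so an exchange fabric set well above 9000 can never be the hop that drags it down; only the peers' own choices can. A toy illustration in Python, with entirely made-up hop names and values:)

# Toy model only: the effective path MTU is the minimum across every hop on the path.
hops = {
    "customer edge": 1500,
    "peer A core": 9000,
    "exchange fabric": 9216,
    "peer B core": 9000,
}
path_mtu = min(hops.values())
bottleneck = min(hops, key=hops.get)
print(f"effective path MTU: {path_mtu} (set by the {bottleneck})")

Which is the point above about keeping the limit near the user edge: with the exchange comfortably above everyone else, the constraint lands at whichever edge chose it.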