
> https://en.wikipedia.org/wiki/UTF-7 exists, but was rarely used.

UTF-7 was possible because there was an out-of-band mechanism to signal its use, "Content-Type: text/plain; charset=UTF-7":

* https://datatracker.ietf.org/doc/html/rfc2152

What's the OOB signalling in IP packet transmission between two random nodes on the Internet?
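To make the contrast concrete, here's a minimal Python sketch (with a made-up header value) of how that out-of-band signal works for text: the charset parameter in the Content-Type header tells the receiver which decoder to apply, so the same bytes can be read either way.

```python
from email.message import EmailMessage

# Hypothetical header carrying the out-of-band charset signal.
msg = EmailMessage()
msg["Content-Type"] = "text/plain; charset=UTF-7"
charset = msg.get_content_charset()  # -> "utf-7"

# The same bytes decode differently depending on that signal:
body = b"caf+AOk-"
print(body.decode(charset))   # UTF-7: "café"
print(body.decode("ascii"))   # ASCII: "caf+AOk-" (the escape stays opaque)
```

IP has no equivalent header around it at the internetwork layer itself, which is the point of the question.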




The first thing in the IP header is the version number.

> The first thing in the IP header is the version number.

So you just change the version number… like was done with IPv6?

How would this be any different? All hosts, firewalls, routers, etc., would have to be updated… like with IPv6. So would all application code that handles (e.g.) connection logging… like with IPv6.


I was addressing the narrow claim that you cannot distinguish ASCII from UTF-7. You can distinguish IPv4 from IPv6 by looking at the version field (and I forgot to mention the L2 protocol field is out of band from IP's perspective). Obviously if the receiver doesn't support UTF-7 or IPv6 then it won't be understood. Forward compatibility isn't possible in this case.
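As a sketch of that version-field check (the header bytes here are made up for illustration): the version is the high nibble of the first byte, 4 for IPv4 and 6 for IPv6.

```python
def ip_version(packet: bytes) -> int:
    """Read the IP version field: the high nibble of the first byte."""
    return packet[0] >> 4

# Hypothetical first header bytes: 0x45 = IPv4 with IHL 5; 0x60 = IPv6.
assert ip_version(bytes([0x45])) == 4
assert ip_version(bytes([0x60])) == 6
```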

Weirdly, the version field is actually irrelevant. You can't determine the type of a packet by looking at its first byte; you must look at the EtherType header in the Ethernet frame, or whatever equivalent your L2 protocol uses. It's redundant, possibly even to the point of being a mistake.

I mean, yes, in practice you can peek at the first byte if you know you're looking at an IP packet, but down that route lies expensive datacenter switches that can't switch packets sent to a destination MAC that starts with a 04 or 06 (looking at you, Cisco and Brocade: https://seclists.org/nanog/2016/Dec/29).
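A sketch of the demultiplexing path described above, using a synthetic Ethernet frame: the receiver dispatches on the EtherType field at bytes 12–13 of an untagged Ethernet II frame, rather than peeking at the payload's first byte.

```python
import struct

# Registered EtherType values for the two IP versions.
ETHERTYPE_IPV4 = 0x0800
ETHERTYPE_IPV6 = 0x86DD

def l3_protocol(frame: bytes) -> str:
    """Dispatch on the EtherType field (bytes 12-13 of an untagged
    Ethernet II frame); a VLAN tag would shift this offset."""
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    if ethertype == ETHERTYPE_IPV4:
        return "ipv4"
    if ethertype == ETHERTYPE_IPV6:
        return "ipv6"
    return f"other (0x{ethertype:04x})"

# Synthetic frame: zeroed dst/src MACs, then EtherType 0x86DD.
frame = bytes(12) + struct.pack("!H", ETHERTYPE_IPV6) + bytes(4)
print(l3_protocol(frame))  # -> "ipv6"
```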



