I.e. ToR (top-of-rack) equipment.
You normally have a hot and a cold aisle - and, funnily enough, because you do not want to mix the air, running cables from back to front is not that easy.
It is standard to mount switches in reverse - patch cables to the back, power supplies to the front. Equipment intended for data centers is often also offered in a reverse configuration - basically with the airflow going back to front, so cold air is pulled in from the back.
And yes, unless you get suppliers to provide servers with the network cards in front (which will not happen), you basically need rack-internal patching to the servers in the back - without creating an air loop.
Cisco and many other manufacturers offer switches with reverse airflow as an option - please do the same for basically everything that is enterprise level. 48-port gigabit with multiple 25G uplinks -> that class of switch is also used in ToR configurations, and right now those are problematic, pulling in hot air and exhausting it into the cold aisle.
An optimal solution would be configurable airflow - a software setting for the chassis fans, alternate power supply options, or a small physical switch on the side of the power supplies.