- option to have external power so it can run even when the server is off (possibility for a failsafe, or for ring networking that doesn't go down with the server)
Having a PoE-IN Gigabit port, instead of a non-PoE one, would be awesome.
Regarding putting it into a Linux/BSD box instead of a 10G NIC, or into a server used for virtualization (Proxmox or just Debian): how would a Linux host + CCR2004 card combo compare to a single server host running Open vSwitch?
The first question I had when I saw that fascinating product in the latest newsletter was how it compares to putting a dumb 2x SFP28 card into the machine and attaching a CHR to it? With hardware virtualization, the PCIe card can be dedicated to the VM, so the overhead should be negligible. Given that, why would you bother putting a captive copy of RouterOS onto the card itself?
Someone up-thread mentioned taking the load off the host CPU, but surely the host CPU is hugely more capable. This CCR2004-on-a-card is basically a Raspberry Pi 4 with better Ethernet I/O. Powerful, yes, but not Xeon powerful.
EDIT: Minor factual errors. Question stands regardless.
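For context, on a plain Linux/KVM host the "dumb NIC + CHR" route usually means VFIO passthrough of the NIC to the guest. Below is a minimal sketch of the host-side rebinding step (Python, run as root); the PCI address is hypothetical, and it assumes the IOMMU is enabled and the vfio-pci module is loaded. This is only to illustrate the comparison being made, not anything specific to the CCR2004 card.

from pathlib import Path

PCI_ADDR = "0000:41:00.0"   # hypothetical slot; find the real one with lspci -nn
DEV = Path("/sys/bus/pci/devices") / PCI_ADDR

def rebind_to_vfio(dev: Path) -> None:
    # Tell the PCI core which driver should claim this device on the next probe.
    (dev / "driver_override").write_text("vfio-pci")
    # Detach it from whatever host driver currently owns it (if any).
    if (dev / "driver").exists():
        (dev / "driver" / "unbind").write_text(dev.name)
    # Re-probe the device; vfio-pci now claims it, so QEMU/libvirt can take over.
    Path("/sys/bus/pci/drivers_probe").write_text(dev.name)

if __name__ == "__main__":
    rebind_to_vfio(DEV)
    print("now bound to:", (DEV / "driver").resolve().name)

After that, the function can be handed to the CHR VM (for example as a hostdev entry in libvirt), and the guest talks to the NIC directly with essentially no hypervisor overhead.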
I have some questions about the product that aren't answered on the product page:
To enable independent operation between the host and the "router", it would be nice to have an option of providing external power to it via a common barrel jack, as on other MikroTik products.
This would also solve the problem of fast hosts booting faster than the router and having to wait for it.
Isolated DC/DC converters exist exactly for this reason.
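Until external or standby power is an option, the "host boots faster than the router" problem mentioned above can at least be worked around in software: hold back network-dependent services until the card's virtual interface reports a carrier. A rough sketch, assuming a hypothetical interface name and that your network config has already set the interface administratively up:

import sys
import time
from pathlib import Path

IFACE = "ens2f0"        # hypothetical name of the card's PCIe virtual NIC
TIMEOUT_S = 120         # guess at how long RouterOS on the card needs to boot
CARRIER = Path("/sys/class/net") / IFACE / "carrier"

def wait_for_link(timeout: float) -> bool:
    # Poll the kernel's carrier flag until the card brings the link up.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if CARRIER.read_text().strip() == "1":
                return True
        except OSError:
            pass        # interface not present yet, or still admin-down
        time.sleep(1)
    return False

if __name__ == "__main__":
    sys.exit(0 if wait_for_link(TIMEOUT_S) else 1)

Wrapped in something like a oneshot service ordered before whatever depends on that uplink, this makes the host wait for the card instead of racing it.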
A big part of the smartnic appeal, today, is to do router-on-the-host type operations without giving up any of the host cores. I have 128 cores in the box, and I'd like to be able to sell all of them. They're capable and pricey. For all intents and purposes, the four cores in the NIC are "free", assuming the thing doesn't have an obscene power budget.
Well, we are running a CHR as a VM in an ESXi host where we initially also had it do the management routing, and that is a pain.
Are there drivers on the way for VMware and Hyper-V/Windows?
Is a boot delay required for the BIOS to see the card and then present it to the OS, or can Linux detect the card later when it comes online (maybe as hotplug)?
As far as I know, the router needs to boot before the server in order for the PCIe virtual interfaces to be recognized by the *nix system. I haven't tested this extensively yet, as my servers are mostly Windows with the odd TrueNAS box; under TrueNAS the interfaces DID appear, but they did not seem to work as intended, presumably because they were not present at system boot. I do plan to test this further when I have my new SFP modules next week.
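If the card simply isn't ready at POST, one thing worth testing on the Linux side is a PCI bus rescan after RouterOS on the card has finished booting. The rescan itself is a standard kernel interface; whether the card's virtual-NIC driver is happy being probed that late is exactly the open question here. A small sketch (Python, run as root):

from pathlib import Path

def pci_rescan() -> None:
    # Ask the kernel to walk the PCI bus again and probe anything newly present.
    Path("/sys/bus/pci/rescan").write_text("1")

def network_functions() -> list[str]:
    # PCI class 0x02xxxx = network controller; list addresses so you can see
    # whether the card's functions showed up after the rescan.
    found = []
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        if (dev / "class").read_text().startswith("0x02"):
            found.append(dev.name)
    return found

if __name__ == "__main__":
    pci_rescan()
    print("network-class PCI functions:", network_functions())

If the functions enumerate but the interfaces still misbehave, that points at the driver rather than at PCI enumeration.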
I remember the days when the BIOS would not properly recognize certain disk devices, yet it was no problem to mount them later in Linux; you just could not boot from them. Is it the same here? (Maybe you want to netboot the server, which would of course be impossible then.)
Mounting them later is on my to-test list.
Also, it seems the card can reset the server, but it would be interesting to know whether the server can reset the card, and whether that can be inhibited. I.e., when you reboot the server OS, will RouterOS keep running or will it be restarted too? And when the server is 'shut down' from its remote management (iLO, DRAC, IMM), is it still possible to keep the card running (on 'standby power'), or will it power down with the server?
In my testing the router always shuts down with the server (I haven't actually checked the logs to see whether it gets a graceful shutdown or not). I imagine that with the right server hardware and appropriate PCIe device power settings in the BIOS it may be possible to keep the router up across a reboot or shutdown. I haven't dug into that yet, but it's also on my to-test list.
Ok, good luck with the experiments; looking forward to seeing your findings. Of course, the use case would be to have such a card in a colocated server, connected to the internet only via a fast link on one of the fiber modules, and then to plug a short cable between the Ethernet port and the dedicated iLO/DRAC/IMM port on the server, so you can remotely access the server management via a VPN running on the card.
That's an interesting scenario. It's got me thinking about how to fit management of the router via its MGMT port into my topology.
You could also try adding boot-from-network, just to add a delay...
That will probably not work, because boot from network is done as the last step, well after PCI initialisation.