This card uses the old fasteoi interrupts:
irq=17 owner="[0 0 IO-APIC-fasteoi eth1]"
irq=19 owner="[0 0 IO-APIC-fasteoi eth2]"
# dmesg
Intel(R) Gigabit Ethernet Network Driver - version 1.3.8.6
Copyright (c) 2007-2008 Intel Corporation.
PCI: Enabling device 0000:01:00.0 (0000 -> 0003)
ACPI: PCI Interrupt 0000:01:00.0[A] -> GSI 16 (level, low) -> IRQ 169
PCI: Setting latency timer of device 0000:01:00.0 to 64
igb: 0000:01:00.0: igb_validate_option: Interrupt Mode set to 3
igb: eth0: igb_probe: Intel(R) Gigabit Ethernet Network Connection
igb: eth0: igb_probe: (PCIe:2.5Gb/s:Width x4) 00:1b:21:2e:9c:a4
igb: eth0: igb_probe: Using MSI-X interrupts. 4 rx queue(s), 1 tx queue(s)
PCI: Enabling device 0000:01:00.1 (0000 -> 0003)
ACPI: PCI Interrupt 0000:01:00.1 -> GSI 17 (level, low) -> IRQ 193
PCI: Setting latency timer of device 0000:01:00.1 to 64
igb: 0000:01:00.1: igb_validate_option: Interrupt Mode set to 3
igb: eth1: igb_probe: Intel(R) Gigabit Ethernet Network Connection
igb: eth1: igb_probe: (PCIe:2.5Gb/s:Width x4) 00:1b:21:2e:9c:a5
igb: eth1: igb_probe: Using MSI-X interrupts. 4 rx queue(s), 1 tx queue(s)
# cat /proc/interrupts | grep eth
140: 23 63 0 0 PCI-MSI-X eth0-Q0
148: 23 0 0 63 PCI-MSI-X eth0-Q1
156: 23 43 20 0 PCI-MSI-X eth0-Q2
164: 23 0 0 63 PCI-MSI-X eth0-Q3
172: 2 0 0 0 PCI-MSI-X eth0
180: 21 67 0 0 PCI-MSI-X eth1-Q0
188: 21 0 67 0 PCI-MSI-X eth1-Q1
196: 21 20 0 47 PCI-MSI-X eth1-Q2
204: 21 0 67 0 PCI-MSI-X eth1-Q3
212: 1 0 0 0 PCI-MSI-X eth1
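(For context, and not a RouterOS feature: on a plain Linux box each of those per-queue MSI-X vectors can be pinned to its own core by writing a CPU bitmask into /proc/irq/<n>/smp_affinity, which is how multiqueue actually spreads packet processing across CPUs. A minimal sketch, assuming the IRQ numbers from the listing above and a 4-core system:)
# echo 1 > /proc/irq/140/smp_affinity   # eth0-Q0 -> CPU0 (mask 0x1)
# echo 2 > /proc/irq/148/smp_affinity   # eth0-Q1 -> CPU1 (mask 0x2)
# echo 4 > /proc/irq/156/smp_affinity   # eth0-Q2 -> CPU2 (mask 0x4)
# echo 8 > /proc/irq/164/smp_affinity   # eth0-Q3 -> CPU3 (mask 0x8)
(irqbalance can do roughly the same thing automatically.)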
> I'd even say that for ten years nobody seemed to notice any problems
Clearly you must have skipped over those countless reports of multi-core systems using only one core for packet routing with those drivers. Enabling multiple queues would let multi-core systems use all available CPU power for routing. It should help a lot with packets-per-second throughput, especially with 10G interfaces.
> No one is interested in router performance.
I thought you said everyone needs it?
It's funny that, until this thread, there were no problems.
> I'd even say that for ten years nobody seemed to notice any problems, so your statement doesn't make any sense (i.e. nothing changed)
Well, what I have dealt with is "oops... one core of our quad system is overloaded... sh*t! we should add one more router =("
> - updated SMP support for multicore
What does that exactly mean?
> We have no plans to support multiqueue right now.
I am sad to hear this.
> We have no plans to support multiqueue right now. However, we are working on the feature where there is one core per interface.
Huh... all our routers have 1 Ethernet interface %) and I don't think VLAN interfaces will work on a per-core basis.
> > We have no plans to support multiqueue right now. However, we are working on the feature where there is one core per interface.
> Huh... all our routers have 1 Ethernet interface %) and I don't think VLAN interfaces will work on a per-core basis.
You might want to have more than one interface then, and distribute your VLANs over them. At least it is a solution that is usable to some extent.
> Huh... all our routers have 1 Ethernet interface %) and I don't think VLAN interfaces will work on a per-core basis.
Me too.
> I'm guessing the developers will look at implementing their own solution; if it doesn't work they will find out and implement standard Intel configuration switches. Or their solution might be better ;) I'm sure they will get feedback and it will either be good or bad.
I agree. They implement their own strange solutions. What is the matter with a configuration string or some parameters?
My preference, even without multiqueues, would be to just have a string field to allow driver configuration options on Intel NICs right on the interface window in winbox. Let us decide what the best performance parameters are for our situation.
> implement standard intel configuration switches
> have a string field to allow driver configuration options on Intel NICs right on the interface window in winbox
Please tell me which options exactly you want to see and how these options will help. Martini, Changeip, omidkosari, hedele, Eising, please share your opinion on what exactly should be turned on, as you say.
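(Not speaking for MikroTik here, just to illustrate what "standard intel configuration switches" usually means: Intel's out-of-tree igb driver takes module-load parameters, one value per port, for things like the number of RSS queues, the interrupt mode and interrupt throttling. The names below are from Intel's igb documentation as I remember it; exact values and their meaning differ between driver versions - the dmesg above shows "Interrupt Mode set to 3" alongside MSI-X - so treat this as a sketch, not a recipe:)
# modprobe igb RSS=4,4 IntMode=2,2 InterruptThrottleRate=8000,8000
#   RSS                   - number of RX queues per port
#   IntMode               - legacy / MSI / MSI-X interrupt selection
#   InterruptThrottleRate - maximum interrupts per second per vector
A string field on the interface would simply pass options like these through to the driver.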
> I hope there will be multi-core-per-Ethernet support in the next RouterOS version.
The new Intel driver in v5 beta 2 will use as many cores as are available, as already written above.
> The new Intel driver in v5 beta 2 will use as many cores as are available, as already written above.
Has somebody tested it?
> Well... after reading this thread I understood clearly that MT customers have to get down on their knees and beg and cry for a new feature to even be considered by MT as a "needed feature". LOL
Well, maybe we will even have to bend over - we will see.
> Well, maybe we will even have to bend over - we will see.
Hahaha, are your fingers touching the ground?
> It's already supported; it's automatically used if it's supported by the card. See previous posts.
Ah OK, I wasn't quite clear. Thanks!
> Normis, how do you know that MSI-X works? What messages/logs do you see?
You won't see it; it's an internal driver feature, not a RouterOS feature.
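(For the curious: you can't check this from within RouterOS, but on a generic Linux box with the same hardware there are two quick ways to see whether MSI-X is actually in use; 01:00.0 is assumed to be the igb port from the dmesg above.)
# lspci -vv -s 01:00.0 | grep -i msi-x    # "Enable+" on the MSI-X capability line means the driver enabled it
# grep eth /proc/interrupts               # per-queue PCI-MSI-X lines, like the listing earlier in the thread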
And another problem: on Intel NICs, simple queues and queue trees lead the router to crash (kernel panic).
> On v5 beta4, queues on Intel NICs work better; if you enable a queue tree the router just reboots, but doesn't kernel panic.
I can confirm that my x86 box rebooted the next second after I applied the first simple queue rule.
> Strange, here I have about 270 simple queues and no reboot yet (x86, Intel NICs).
It's only for 5B4. Believe me.
> > Strange, here I have about 270 simple queues and no reboot yet (x86, Intel NICs).
> It's only for 5B4. Believe me.
Still strange...
> Still strange...
Maybe it's only for some specific Intel chips (for example, those with MSI support).
> On v5 beta4, queues on Intel NICs work better; if you enable a queue tree the router just reboots, but doesn't kernel panic.
Any feedback for 5B5?
> Yes, it still drops the links after a while, like beta 4.
Does it just drop links, or does it also reboot the router?
> To mangust: yes, but this magic command is not needed on a heavily loaded router, because CPU load will be 100%. MSI needs many CPU cores; otherwise you can use v4.
a) v4 does not support the new Intel chipsets (igb driver).
> Hello,
> I've just got a new Intel mobo (S1200BTL - http://www.intel.com/products/server/mo ... ations.htm) for testing.
> It would be great for routing (and the price, too). It has 2 onboard NICs (as listed in the specs): i82574L and i82578M.
> The first NIC is recognized, the last one is not. The tech guy at the distributor said it should work with the same driver, so please include the vendor/device ID in the source!
> Details:
> Vendor: Intel Corporation
> Vendor ID: 0x8086
> Name: unknown device (rev: 4)
> Device ID: 0x1502
> I would be pleased to test a beta ROS version, thanks!
Really??? Please export the PCI devices list... It is very bad, if so...
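(Side note, in case it helps anyone debugging the same board on a generic Linux system: lspci -nn prints the [vendor:device] ID pair for every PCI device, and modinfo shows which PCI IDs a driver module claims. e1000e is only my assumption here - the 8257x-class ports are normally handled by e1000e rather than igb on a stock kernel - so substitute whichever module you want to check.)
# lspci -nn | grep -i ethernet    # shows each NIC with its [8086:xxxx] vendor:device pair
# modinfo e1000e | grep -i 1502   # does the module's PCI alias table list device ID 0x1502?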
> Confirmed. The second built-in Ethernet controller doesn't work on the latest ROS v5.6.
> Edit: the one that doesn't work is the i82578M.
Wow... too bad...
> Have you tried 6.0 beta2?
Using a beta version is not recommended by our company policy.
> Have you tried 6.0 beta2?
Just upgraded to 6.0 beta2. Yes, it can detect the i82578M. Great!!!
> Using a beta version is not recommended by our company policy.
In that case, you have to forget about using MikroTik, I think. In many cases, the new version is worse than the previous ones.
> But I cannot use a beta in real life. I really hope the MikroTik team can support the i82578M in a stable version...
v6.0 will be 'stable' =) soon...
> Sorry for the offtopic, but is MT going to support the igb driver in ROS 4.x?
Did you test this card (E1G44ETBLK)?
Looks like it's better to use a stable ROS version. And I've bought the card (E1G44ETBLK) already.
I'm ready to provide this card to the MT team for testing (if needed).
> Does this new driver support the Intel 82580EB chipset?
Did you ever figure this out?