It has been running for months without reboots or issues. A rare thing.
> If you don't need Router functionality you could switch to SwOS; I now have 3 devices running since the 30th of July without problems.
Issue still persists in 6.48.4, ports 17-23 got stuck.
I noticed something in the latest 6.49rc2 changelog that may be related to this:
*) crs3xx - fixed bridge controller and extender packet forwarding for CRS312, CRS326-24S+2Q+ and CRS354 devices
Does anyone have any information if this really fixes the issue?
Hello, still running without issue with 6.48.2 stable?
> CRS354-48P-4S+2Q+
> uptime 13d 16:39:19
> 6.48.2 (stable)
> so far so good
Hello, I'm running 7 of them with SwOS without problems (the version without PoE).
We are considering using the ones we have for non-critical stuff (like BMC / IPMI).
Also, can anyone confirm that there is no issue when using SwOS?
MikroTik staff, maybe some official feedback?
No. MikroTik continues to irresponsibly ignore this problem. No information or any ETA from them on a fix.
> Is this good now?
Any news?
I still see random Ethernet ports turn off. One thing I noticed: it can happen after a power outage when the switch reboots. After that it works unstably until I power the switch off, wait 10 minutes and start it again. Then it seems to be stable.
It's not port flapping.
> I've installed a CRS354 recently, not fully loaded yet, just a couple of Ethernet ports and an SFP+ port.
> I've not seen any port flapping...
> Version 6.49.6.
> VLANs are used on the CRS...
> OK.
You will see groups of ports stop forwarding traffic AT ALL until a reboot.
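Until there is a real fix, one stopgap (not from MikroTik, just a rough sketch) is a scheduler entry that pings a host sitting behind the affected port block and reboots the switch when it stops answering. The address and interval below are placeholders for your own network, and it will of course also reboot on any genuine outage of that host:

/system scheduler add name=port-watchdog interval=5m policy=read,write,test,reboot on-event=":if ([/ping 192.168.88.10 count=5] = 0) do={/system reboot}"

Crude, but it beats driving out for a manual power cycle every time a block locks up.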
I read every post in this thread, and it looks like the only working solution is to use SwOS.
> Is it certain that there is indeed a problem, one that is not fixed in recent RouterOS releases?
> Has anyone made a support ticket?
That is a general description.
> But we are having a lot of problems with local devices staying connected.
Etcoplasmosis: I dealt with an install last month that was unfortunate enough to have three 354s... They require daily reboots to work as basic layer 2 switches.
> Any official, or unofficial, news on this behavior?
Problem solved with ROS 7?
Can anyone confirm they are absolutely reliable when used with SwOS?
I have to buy some PoE units, and as a MikroTik fan I wouldn't want to switch to UniFi or some other brand because of this; it's NOT just a price matter...
Thank you
I did update the switches to whatever was current that day, i.e. both RouterOS and board firmware.
> @gotsprings what is the revision version under /system routerboard?
Revision is not the same as firmware version.
My mistake.
The revision, I think, changes when there is a hardware change on the device.
For example, my CRS354 shows Revision: r2.
I didn't notice if there was a revision at the time.
> So, yours is r2? Or is the revision not mentioned?
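For reference, the hardware revision (on units that report one) and the currently loaded vs. available board firmware show up in the routerboard details; a quick check from the CLI (field names vary a little between models and RouterOS versions):

/system routerboard print

Look for the revision, current-firmware and upgrade-firmware fields; /system resource print shows the RouterOS version itself.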
I have an install with a 328 and 326 linked with an SFP+ DAC.
> Pfff, so the problem also exists on SwOS after all. Can something be done soon?
So it's not an OS issue (SwOS or ROS) after all?
If it broke right away... it wouldn't have been a problem for so long.
> I think the problem on mine is on ports eth9-eth16. The moment it boots they work, and after a while, possibly once some data has passed through, they stop communicating. PoE is still working on those ports, but data is not passing.
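When a block freezes like that, it is worth confirming from the CLI whether the link itself drops or only the forwarding stops; something along these lines (standard RouterOS commands, the port name is just an example matching the report above) shows the negotiated link state, the MAC addresses the bridge has learned on the port, and the port counters:

/interface ethernet monitor ether9 once
/interface bridge host print where interface=ether9
/interface ethernet print stats where name=ether9

That matches what people describe here: link and PoE stay up while data simply stops passing.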
Still on my...
> Same issue here... almost 3 years later, not fixed...
Replace it with two 24-port units.
> Hello, has MikroTik already found a solution for the port blocks that stop switching? I'm having trouble with ports 9-16.
Buy 4 of the CRS328-24P.
> Does this problem still exist, even with SwitchOS?
We are in the process of buying two 48-port PoE switches with 10G uplinks, and the CRS354-48P-4S+2Q+RM is one of the candidates, but this bug is a showstopper.
Regards
Mauricio
What is your support ticket number?
> MIKROTIK, SHAME ON YOU!!! IT'S BEEN 3 YEARS AND YET STILL SILENCE ON YOUR END?
In the US... it's an Undocumented Feature.
> They are still selling that switch as if it's perfectly fine, without any shame. This needs to be reported to regulators; other companies have paid fines in the millions and billions for such behavior in the EU market. What they are doing is a criminal offense.
Regarding revision 3 of this switch, this is what support said:
"Revision 3 comes with larger flash storage - 32MB."
So I'm sure it's not fixed if that's the only thing that has changed on it.
So what happened with your switch purchase?
> And now the switch is not in stock at the suppliers in Denmark...
> I'm thinking of buying 2 pcs. I don't think I will have any problem returning a switch if it turns out that some of the ports have the issue.
But why do you ask here?
> I've ordered 2 pcs two weeks ago; I still haven't received confirmation of a delivery date.
Are you going to replace them because of port problems?
I mean... I did ask.
Contact your distributor, instead of asking these useless questions on the user forum.
They are still not in stock. I've tried searching all the major suppliers in Denmark, strange... Our main supplier can't say anything about a delivery date, because the import company doesn't have a date... I wonder if it's related to the issues, or if it's just shortages.
And yes, they want the switches replaced due to the reboots required to make them pass packets again.
Yes, the PoE version is also available in Denmark, but for edge switching / APs we are in the UniFi ecosystem at the moment, so we don't need PoE. The CRS354's QSFP+ and SFP+ ports combined with the RJ45 ports match exactly what we need for the 2x "server / distribution" switches in our setup.
> The MikroTik EU Store has 5 pcs in stock:
> https://www.mikrotik-store.eu/en/mikrot ... 48p-4s2qrm
> No need to buy from a distributor that gets them from an importer, both putting their own margin on top of the price for doing nothing other than forwarding your order.
Can you paste a link to the other thread?
> Since there are 2 threads out there about people who have just bought this switch and are having issues...
> Time to bump this thread to the top again.
viewtopic.php?t=202646
Maybe they will release a new model instead of fixing the old one.
> This thread is 1341 days old.
Other threads and Facebook posts come up from time to time.
> Hard to have sympathy reading this thread, as Rextended alluded to... where are the 1000s of supout reports...???
Not fixed: today (a couple of days after updating to 7.13.1) the problem struck again.
> Have you guys tried 7.13.1? Those switch changes should fix the CRS354 issues described in the topic.
> rushlife, could I please see your configuration on these switches? Does this happen to all of them or just specific ones? And what is usually your uptime without the failure? Thank you.
Hi, here is my export:
/interface bridge add admin-mac=macFROMether1 auto-mac=no name=bridge1
/interface ethernet set [ find default-name=ether1 ] loop-protect=on
/interface ethernet set [ find default-name=ether2 ] loop-protect=on
/interface ethernet set [ find default-name=ether3 ] loop-protect=on
/interface ethernet set [ find default-name=ether4 ] loop-protect=on
/interface ethernet set [ find default-name=ether5 ] loop-protect=on
/interface ethernet set [ find default-name=ether6 ] loop-protect=on
/interface ethernet set [ find default-name=ether7 ] loop-protect=on
/interface ethernet set [ find default-name=ether8 ] loop-protect=on
/interface ethernet set [ find default-name=ether9 ] loop-protect=on
/interface ethernet set [ find default-name=ether10 ] loop-protect=on
/interface ethernet set [ find default-name=ether11 ] loop-protect=on
/interface ethernet set [ find default-name=ether12 ] loop-protect=on
/interface ethernet set [ find default-name=ether13 ] loop-protect=on
/interface ethernet set [ find default-name=ether14 ] loop-protect=on
/interface ethernet set [ find default-name=ether15 ] loop-protect=on
/interface ethernet set [ find default-name=ether16 ] loop-protect=on
/interface ethernet set [ find default-name=ether17 ] loop-protect=on
/interface ethernet set [ find default-name=ether18 ] loop-protect=on
/interface ethernet set [ find default-name=ether19 ] loop-protect=on
/interface ethernet set [ find default-name=ether20 ] loop-protect=on
/interface ethernet set [ find default-name=ether21 ] loop-protect=on
/interface ethernet set [ find default-name=ether22 ] loop-protect=on
/interface ethernet set [ find default-name=ether23 ] loop-protect=on
/interface ethernet set [ find default-name=ether24 ] loop-protect=on
/interface ethernet set [ find default-name=ether25 ] loop-protect=on
/interface ethernet set [ find default-name=ether26 ] loop-protect=on
/interface ethernet set [ find default-name=ether27 ] loop-protect=on
/interface ethernet set [ find default-name=ether28 ] loop-protect=on
/interface ethernet set [ find default-name=ether29 ] loop-protect=on
/interface ethernet set [ find default-name=ether30 ] loop-protect=on
/interface ethernet set [ find default-name=ether31 ] loop-protect=on
/interface ethernet set [ find default-name=ether32 ] loop-protect=on
/interface ethernet set [ find default-name=ether33 ] loop-protect=on
/interface ethernet set [ find default-name=ether34 ] loop-protect=on
/interface ethernet set [ find default-name=ether35 ] loop-protect=on
/interface ethernet set [ find default-name=ether36 ] loop-protect=on
/interface ethernet set [ find default-name=ether37 ] loop-protect=on
/interface ethernet set [ find default-name=ether38 ] loop-protect=on
/interface ethernet set [ find default-name=ether39 ] loop-protect=on
/interface ethernet set [ find default-name=ether40 ] loop-protect=on
/interface ethernet set [ find default-name=ether41 ] loop-protect=on
/interface ethernet set [ find default-name=ether42 ] loop-protect=on
/interface ethernet set [ find default-name=ether43 ] loop-protect=on
/interface ethernet set [ find default-name=ether44 ] loop-protect=on
/interface ethernet set [ find default-name=ether45 ] loop-protect=on
/interface ethernet set [ find default-name=ether46 ] loop-protect=on
/interface ethernet set [ find default-name=ether47 ] loop-protect=on
/interface ethernet set [ find default-name=ether48 ] loop-protect=on
/interface ethernet set [ find default-name=ether49 ] loop-protect=on
/interface ethernet set [ find default-name=qsfpplus1-1 ] loop-protect=on
/interface ethernet set [ find default-name=qsfpplus1-2 ] loop-protect=on
/interface ethernet set [ find default-name=qsfpplus1-3 ] loop-protect=on
/interface ethernet set [ find default-name=qsfpplus1-4 ] loop-protect=on
/interface ethernet set [ find default-name=qsfpplus2-1 ] loop-protect=on
/interface ethernet set [ find default-name=qsfpplus2-2 ] loop-protect=on
/interface ethernet set [ find default-name=qsfpplus2-3 ] loop-protect=on
/interface ethernet set [ find default-name=qsfpplus2-4 ] loop-protect=on
/interface ethernet set [ find default-name=sfp-sfpplus1 ] loop-protect=on
/interface ethernet set [ find default-name=sfp-sfpplus2 ] loop-protect=on
/interface ethernet set [ find default-name=sfp-sfpplus3 ] loop-protect=on
/interface ethernet set [ find default-name=sfp-sfpplus4 ] loop-protect=on
/interface ethernet switch set 0 l3-hw-offloading=yes
/interface list add name=mgmtPORT
/interface list add exclude=mgmtPORT include=all name=withoutMGMTport
/interface bridge port add bridge=bridge1 interface=withoutMGMTport
/ip neighbor discovery-settings set discover-interface-list=all
/interface list member add interface=ether49 list=mgmtPORT
/ip address add address=192.168.xx.xxx/24 interface=bridge1 network=192.168.xx.0
/ip dns set servers=192.168.xx.xxx
/ip route add disabled=no dst-address=0.0.0.0/0 gateway=192.168.xx.xxx routing-table=main suppress-hw-offload=no
/ip service set telnet disabled=yes
/ip service set ftp disabled=yes
/ip service set www disabled=yes
/ip service set api disabled=yes
/ip service set api-ssl disabled=yes
/snmp set enabled=yes
/system clock set time-zone-name=Europe/Prague
/system identity set name=crs354-xxx
/system note set show-at-login=no
/system ntp client set enabled=yes
You have no concrete method of comparison.
> It's ridiculous that these are still being sold with so many problems.
As I posted in the thread... I was brought in on a job where they already had the switches, and they were having to reboot them DAILY! As soon as they told me the model of the switch, I described exactly what was going wrong.
Only those who complain write here on the forum,
not all the satisfied users who had no problems...
Also, may I know your topology and what kind of traffic mostly goes through your switches (unicast, multicast, broadcast, etc.)?
> Two of my units (I have around 15 pcs in total) are affected by this issue, and both fail on ether1-8, I can confirm that; no other port shows the problem...
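On the traffic-type question: RouterOS keeps per-interface broadcast/multicast counters, so a rough picture of what dominates on a given port can be had straight from the CLI (ether1 here is only an example, and the exact counter names differ a bit between models and versions):

/interface ethernet print stats where name=ether1

Watching how the broadcast and multicast counters grow relative to the total packet counts over a few minutes gives at least a crude answer without setting up SNMP graphing.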
I have been connecting CRS328-24P+4S-RMs together with DAC cables.
> Hello,
> any progress with version 7.13.4, or are there still problems?
> We are thinking of buying 2 switches for the company, and after reading this thread I have no confidence in the CRS354.
It's a sick joke from the MikroTik guys:
> Is it because the 24-port one is ARM, and this one is not?
https://mikrotik.com/product/crs354_48p_4s_2q_rm
CPU: QCA9531 (MIPSBE)
Switch chip model: 98DX3257
Size of RAM: 64 MB
Storage size: 16 MB
https://mikrotik.com/product/crs328_24p_4s_rm
CPU: 98DX3236 (ARM 32bit)
Switch chip model: 98DX3236
Size of RAM: 512 MB
Storage size: 16 MB
We have heard that prior firmwares corrected this.
> I saw a mention that 7.14 fixed the issue; can someone confirm?
Looking for "does", not interested in "should".
> Have you guys tried 7.13.1? Those switch changes should fix the CRS354 issues described in the topic.
Everybody has already looked at alternatives, even used equipment from eBay (other vendors' equipment "works" as it should).
I experienced the exact same problem you're describing. This issue has been resolved in RouterOS, but not yet in SwOS. MikroTik support informed me that they are currently working on implementing the same fix in SwOS as well.
> I just found this topic, read everything that was written before me, and realized that I'm not alone :)
> At my work I have 2x CRS354-48P-4S+2Q+ installed running SwOS, one with firmware 2.13 and the other with 2.16. For half a year there were no problems, and then a month ago problems similar to those described here began: sometimes a block of ports, and sometimes all ports, freeze at the same time. Only a reboot helps, and not always right away; sometimes after a reboot the traffic is up for a few seconds and then down again. Sometimes the switch does not freeze for weeks, sometimes it freezes several times a day. When the problem started, I installed a new switch of the same type: same problem :(
> The problem is specifically on the switch with SwOS 2.16 firmware, where 17x PoE access points and 1x PoE camera are connected. The switch with firmware 2.13 has no problems yet, but maybe that is because it is less loaded: 13x PoE access points and 1x PoE camera.
When is this going to be fixed?
> We bought 8 of these switches and had the same issue on one of them. We RMA'd it and the replacement did not have this problem with an identical config. I'd contact support in the first instance.
Why does nobody want to tell us whether it is a software or hardware problem?
Still waiting for problems: no problems yet...
> After reading this topic, I was expecting similar problems, but I've been waiting for 3 years for something to stop working...
> Probably the users who have problems let the device overheat, because it sits somewhere with little airflow or is covered by other equipment...
So we are sure THIS TIME?
> I can confirm that the issue has been fully resolved. It took some time, though...