[user@Router] > /system resource print
uptime: 2w10h57m37s
version: 6.4
build-time: Sep/12/2013 13:52:41
free-memory: 95.3MiB
total-memory: 128.0MiB
cpu: MIPS 74Kc V4.12
cpu-count: 1
cpu-frequency: 600MHz
cpu-load: 1%
free-hdd-space: 110.6MiB
total-hdd-space: 128.0MiB
write-sect-since-reboot: 24631
write-sect-total: 106765
bad-blocks: 0%
architecture-name: mipsbe
board-name: RB2011UAS
platform: MikroTik
[user@Router] > ip route cache print
cache-size: 16383
max-cache-size: 16384
[user@Router] > ping 8.8.8.8
HOST          SIZE  TTL  TIME  STATUS
8.8.8.8                        timeout
8.8.8.8                        timeout
8.8.8.8                        timeout
              132              (No buffer space available)
              132              (No buffer space available)
              132              (No buffer space available)
sent=6 received=0 packet-loss=100%
[user@Router] >
[user@Router] > /ip route cache print
cache-size: 16383
max-cache-size: 16384
[user@Router] > /system resource print
uptime: 1w4d14h11m49s
version: 6.6
build-time: Nov/07/2013 13:04:08
free-memory: 96.3MiB
total-memory: 128.0MiB
cpu: MIPS 74Kc V4.12
cpu-count: 1
cpu-frequency: 600MHz
cpu-load: 0%
free-hdd-space: 108.6MiB
total-hdd-space: 128.0MiB
write-sect-since-reboot: 54179
write-sect-total: 168056
bad-blocks: 0%
architecture-name: mipsbe
board-name: RB2011UAS
platform: MikroTik
uptime: 2d20h16m
version: 6.10
cache-size: 16384
max-cache-size: 16384

uptime: 2w4d17h15m15s
version: 6.10
cache-size: 716
max-cache-size: 16384

uptime: 2w3d15h19m13s
version: 6.10
cache-size: 53
max-cache-size: 16384
> Recently I experienced this issue on the CCR1009-7G and the CCR1009-8G. I had a number of tunnels flapping due to service-provider instability. I am running OSPF so that routes come and go automatically. The cache grows quickly when this happens, and the router stops responding on all ports. Disabling the IP route cache resolves the issue. I tried different RouterOS versions with no change; I am currently running the latest bugfix version. With the tunnels up and the cache enabled, the cache sits at around 90 entries; if the tunnels flap, it quickly grows. I disabled the cache again once I saw it go above 1,000 in less than 30 seconds. It would be great if a solution could be found for this.

I now have this problem on a RouterBoard that provides backhaul to a motel via 5 GHz. It's running OSPF just like the rest of our equipment, including devices on other buildings doing the same thing with the same configuration.
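For reference, the workaround mentioned above comes down to a couple of console commands; a minimal sketch for RouterOS 6.x (the route-cache setting is the one discussed later in this thread, and on some builds a reboot may be needed before the change takes effect):

# check current usage, then disable the route cache (RouterOS 6.x)
/ip route cache print
/ip settings set route-cache=no
/ip settings print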
> My routers with OSPF max out at around 154-200 routes cached, with the IP route cache disabled. How many devices do you have on either side sharing OSPF routes?

We have a couple hundred MT devices with OSPF enabled, which means a couple hundred examples working perfectly well. With the exception of our edge routers, all our MTs have <1K routes cached (some <20) with the route cache enabled. And this one isn't even an especially busy one. The motel is at the end of its particular branch, so the link isn't carrying traffic for any other location, just for the motel itself. A couple of Mbps each way, nothing much. (It's directly on the beach, so most guests aren't there for the Wi-Fi.)
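If it helps to compare numbers like these, the counts can be read straight from the console; a sketch (print count-only is standard, while the "where ospf" flag filter is an assumption about the v6 print syntax):

# total routes, OSPF-learned routes, and current cache usage
/ip route print count-only
/ip route print count-only where ospf
/ip route cache print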
> With /ip settings set route-cache=no, why is the cache-size not always zero?

In fact, in my experience setting it to "no" tends to make the route cache fill faster.
/ip route cache clear
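A quick before-and-after check around that command could look like this; a sketch using only commands already quoted in this thread:

# confirm the clear actually empties the cache
/ip route cache print
/ip route cache clear
/ip route cache print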
> If you change the hardware, keeping the same configuration, do you have the same issue?
> Can you reproduce this in a VM?

Replacing the hardware with similar or identical devices has not eliminated the problem. It's also unclear whether this problem started as soon as the devices were installed, or whether it developed over time. We also have devices that used to have this problem and no longer do, without any hardware or configuration changes. For these reasons, we can't rule out interactions with other devices nearby (e.g., via OSPF) as a cause.
> If you change the hardware, keeping the same configuration, do you have the same issue?
> Can you reproduce this in a VM?

My idea for addressing the issue was to move from a VM to physical hardware, but I have the issue in both cases.
# Gather system and route-cache statistics for the report
:local uptime [/system resource get uptime];
:local date [/system clock get date];
:local time [/system clock get time];
:local identity [/system identity get name];
:local freememory [/system resource get free-memory];
:local cpuload [/system resource get cpu-load];
:local freehdspace [/system resource get free-hdd-space];
:local cache [/ip route cache get cache-size];
:local maxcache [/ip route cache get max-cache-size];

# Percentage of the route cache currently in use
:local percentused ((100 * $cache) / $maxcache);
:log info "RouteCacheUsed: $percentused %";

# Above the threshold: e-mail the report, then reboot to flush the cache
:if ($percentused > 17) do={
    /tool e-mail send server="smtp.gmail.com" port=587 start-tls=yes user="XXXXXXXXX" password="XXXXXXX" from="XXXXXX@gmail.com" to="XXXXXXX@gmail.com" subject="Cache $identity server full" body="Date: $date, Time: $time\n\nResources\n\nuptime: $uptime\nfree memory: $freememory\ncpu load: $cpuload\nfree hdd space: $freehdspace\ncache: $cache\nmax cache: $maxcache\nPercent used: $percentused %";
    :log info "reboot";
    /system reboot;
};
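To make a watchdog like this run periodically, it can be saved under /system script and triggered from the scheduler; a sketch, where the script name route-cache-watch is made up for illustration:

# assumes the script above was saved as "route-cache-watch" (hypothetical name)
/system scheduler add name=route-cache-watch-job interval=1m on-event=route-cache-watch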
> I have several routers, including CCR1009, operating in various different scenarios which do lots of routing and also VPN (GRE and L2TP), and I have never seen this problem. Checking the CCRs in operation, the use of the route cache is very small compared to its size. I think there must be something particular to the configuration or usage of the affected routers.

Our other 150+ ROS devices don't have this problem either. Most of them operate for weeks and months at a time, and their route caches never come close to 1,000.
> Currently it is known that OVPN interface reconnects are responsible for route cache leaks.

This is very good news; I was not aware of this. Thanks.
> Currently it is known that OVPN interface reconnects are responsible for route cache leaks.

Unfortunately, in our case we don't use OVPN anywhere.
> Currently it is known that OVPN interface reconnects are responsible for route cache leaks.

Hello, how can this issue be fixed?
> Currently it is known that OVPN interface reconnects are responsible for route cache leaks.

Are there any plans to resolve it in the near future?