12. Raspberry Pi replacement
I know a lot of IT enthusiasts who use RPis at home as some sort of low-performance server:
IoT server, web server, data logging, authentication (FreeRADIUS), DNS (Pi-hole), etc...
This is what I am planning on using container support for: a small VoIP PBX.
With an OpenWrt install, you could choose from Asterisk, FreeSWITCH, Kamailio, Siproxd, Yate, and a few more... (I hope they are all available on ARM64.)
Best of both worlds... Can WinBox talk to, and be used to configure, OpenWrt? Enlighten me, please...
Why would one run OpenWrt in Docker on RouterOS when it can be run natively on lots of 'Tik hardware?
Native performance should be better?
No plans myself, just wondering why.
You keep talking about some limitations, but even in your blog post and here you haven't mentioned ONE limitation that MikroTik should remove(?).
unifi?
Ubiquiti controller.
Heretics!
So.. what's wrong with the RB5009? Also, you seem to have overlooked this quote: "If MikroTik ever releases my dream device — an ARM-based multi-core hEX S+, including a microSD slot and an SFP+ port…" That addresses points 1 and 4 on the list above, and by being newer than the current hEX line, it should also address point 3.
"So.. what's wrong with the RB5009?"
Apart from the fact that it is nowadays rarer than Kryptonite?
I have successfully launched the UniFi controller.
Half a 4011 + USB = hAP ac3???? (ARM: OK, storage: OK, 1/2 price: OK, disable the WLAN.) No SFP. No L5 license.
"so what would be the thing you will use Docker support for on a router and why?"
A Docker container, which runs Linux, which runs VirtualBox, which runs ESXi, and on top of this setup, PhotonOS and a Docker host...
To bring us back on topic, look at this proposal as a badass Raspberry Pi replacement with strong networking, to replace all those headless Pi boards sitting around doing networky stuff despite the crappy networking subsystem they're saddled with. (Thus Docker.)
Maybe a redo of the hEX S with the same CPU as the hAP ac² and 256 MB of RAM,
or
maybe a redo of the RB450Gx4 with 1 x SFP and a USB port for external storage (perhaps removing the console and/or one RJ-45 Ethernet port to make room), reduced to 512 MB of RAM and only 128 MB of NAND.
I think these ideas could lead to what could be seen as an RB750Gr4, or maybe a hEX Sr2.
The Marvell SoC from the nRay would be much better suited to the RB750Gr4, or, even better, one of the newer 64-bit Qualcomm IPQs.
The nRay CPU is a dual-core ARM Cortex-A53 at 1 GHz, an in-order-execution light core; I think it does not provide much advantage over the IPQ-4018/4019, which is a quad-core ARM Cortex-A7 at 712 MHz (works OK at 896 MHz).
"Stateful DHCPv6 server? Kea-based?"
Please tell us if you build this.
https://hub.docker.com/r/andrius/asterisk should run on the RB5009/RB4011 and similar ARM/ARM64 MikroTik devices, but I did not try it yet.
I'll have to give this one a shot. I could not get the 3CX images started on here. I should probably just build a new FusionPBX image.
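For anyone wanting to try, a hedged sketch of pulling that image straight from Docker Hub on the router; the veth1/disk1 names mirror the Samba example later in this thread and are assumptions, not requirements:

/container config
set registry-url=https://registry-1.docker.io tmpdir=disk1/pull
/container
add remote-image=andrius/asterisk:latest interface=veth1 root-dir=disk1/asterisk logging=yes start-on-boot=yes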
"I am able to build my Debian-based Asterisk and migrate the existing SIP settings into the Docker container. It works without issues."
That's awesome!
+1 for an OpenWrt minimal container. It's very lightweight, and there is already a package manager and other good features available for all the architectures RouterOS runs on.
Also, maybe make minimal containers with just BusyBox for different architectures: something that can easily be used for scripting or port-mapping jobs that RouterOS can't do, with little impact on memory/flash (see the sketch below).
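A hedged sketch of that BusyBox idea, reusing the keep-alive trick from the Samba walkthrough further down (library/busybox is the official Docker Hub image; veth1 and disk1 are example names):

/container
add remote-image=library/busybox:latest interface=veth1 root-dir=disk1/busybox \
    cmd="/bin/sh -c \"while true; do sleep 3600; done\"" logging=yes
# Then /container start 0 and /container shell 0 give you an ad-hoc BusyBox shell.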
Interesting. I was looking into the documentation, and I didn't see how one would do port mapping with containers in ROS 7.
Most arguments that I read boil down to "something is missing/not good enough on the MikroTik device and I want to add/replace it".
The cost argument is not valid either, as cheap and power-efficient devices can be found on the market.
It lowers MikroTik's bandwidth for fixing real issues (BGP, for instance, or PIM-SM).
I really see no "mind-breaking" argument so far.
"It is a bit sad that RouterOS does not let you use the resources... e.g. I have an RB4011 with 1 GB of RAM which is sitting 90% unused."
Some of us have RouterOS devices with enough free RAM, flash, and CPU to do this for "free." Buying another device to run the container is not free.
If you do not yet possess a device that will do this in a sensible fashion, there's a fair chance your next RouterOS device will. What will you do with your spare slice of free compute power?
"something is missing/not good enough on the mikrotik device and I want to add/replace it"
"I was looking into the documentation, and I didn't see how one would do port mapping with containers in ROS 7."
How did you miss this? It's precisely the same thing as "docker create --publish 80:80".
Then what do you do with two containers both using port 80? They're both on the same VETH.
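For the record, a hedged sketch of what "--publish 80:80" looks like in RouterOS terms: a plain dst-nat rule toward the container's veth address (172.17.0.2 is only an example address):

/ip firewall nat
add chain=dstnat action=dst-nat protocol=tcp dst-port=80 \
    to-addresses=172.17.0.2 to-ports=80 comment="publish container port 80"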
"Most arguments that I read boil down to 'something is missing/not good enough on the MikroTik device and I want to add/replace it'."
What's wrong with that argument? It's a perfectly valid use case.
"The cost argument is not valid either, as cheap and power-efficient devices can be found on the market."
Even when cost is no object, the ability to deliver a complete, integrated, custom solution may be compelling.
"It lowers MikroTik's bandwidth for fixing real issues (BGP, for instance, or PIM-SM)."
Development talent isn't fungible. The people working on containers likely weren't pulled off your pet projects, nor could they be reassigned to them and be immediately productive.
Furthermore, most of this container stuff is out there, off the shelf, ready to repurpose. A good bit of it lives in the kernel they had to update to produce v7 already; initial development on what would become the modern container infrastructure goes back to kernel 3.10.
It's part of my job to work on skills development, so I know that you don't "turn a pig into a dog just by telling him to bark". The problem here is that these people were hired in the first place; they didn't appear out of nowhere... There was a decision taken at some point in time to hire people to work on what appear to me to be the "wrong priorities" (away from what the voice of the customer would ask for).
"I really see no 'mind-breaking' argument so far."
Perhaps it is not for you, then.
That doesn't mean it isn't for anyone else, though.
Indeed; but apart from your "integrated solution", which is not really precise either, I still haven't read what makes it a "must-have" for ROS users. If only I knew, maybe it would be for me as well, don't you think?
"Then what do you do with two containers both using port 80? They're both on the same VETH."
One vETH for EACH container!
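Spelled out as a hedged sketch (the addresses and outside ports are arbitrary examples): each container gets its own veth on a shared container bridge, so both can listen on port 80 internally while being published on different outside ports:

/interface veth
add name=veth1 address=172.17.0.2/24 gateway=172.17.0.1
add name=veth2 address=172.17.0.3/24 gateway=172.17.0.1
/interface bridge
add name=containers
/ip address
add address=172.17.0.1/24 interface=containers
/interface bridge port
add bridge=containers interface=veth1
add bridge=containers interface=veth2
/ip firewall nat
add chain=dstnat action=dst-nat protocol=tcp dst-port=8080 to-addresses=172.17.0.2 to-ports=80
add chain=dstnat action=dst-nat protocol=tcp dst-port=8081 to-addresses=172.17.0.3 to-ports=80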
It is not very interesting (read: of no interest at all) what you would or would not install on your router or elsewhere.
Everyone is free to decide what service they want to deploy where, and what method they want to use for it.
The possibility of containers is useful in many use cases, even if you cannot imagine one of them.
"That's my point, please share them."
Well, I think several of them have already been mentioned above: anything that is network-related and that you could run on a router.
AdGuard Home.
"It is not very interesting (read: of no interest at all) what you would or would not install on your router or elsewhere."
That is not what my post was about. It was about asking why *other people* find it useful, to figure out whether it could be useful *for me*.
Everyone is free to decide what service they want to deploy where, and what method they want to use for it.
Sure. Asking why they do it this way helps one climb out of ignorance.
The possibility of containers is useful in many use cases, even if you cannot imagine one of them.
That's my point, please share them.
If you consider that as a customer you should "help yourself", that's indeed fine.
fixing / adding what's missing
OK, but how much free RAM/CPU, and to do what exactly? For instance, I don't see a single useful service among the 20+ VMs and containers deployed on my home network which would be a candidate for deployment on a MikroTik device.
Security, performance, increased heat, separation of concerns: I won't deploy DMZ services on a network infrastructure appliance, nor security-related services, nor potentially high RAM/CPU/I/O-consuming services, nor critical services.
Am I the only one in this situation?
Even when cost is no object, the ability to deliver a complete, integrated, custom solution may be compelling.
This is an argument which may be of interest. But again, what services are you talking about?
And "integrated" with external storage? Hmm...
The ability to declare a MikroTik device as a NUT client would help rationalize UPS provisioning.
Yeah, I made the same wrong assumption some time ago.
And with the different vETHs, you have full flexibility with things like DNAT if you want to expose something to the outside world.
Curious: is there more detailed documentation other than this page? I've searched around, but I haven't found any details on the expected behavior of MikroTik's implementation.
No, you're right: the current RouterOS docs on containers positively suck compared to what you get for other container platforms.
Your next-best option is SSHing into a box running the containers.npk package, typing "/container", and then pressing the F1 and Tab keys a lot. Between that, the Docker docs, and a general understanding of the facilities of RouterOS, you can often piece together how it must work and how you can make it do what you want.
But if you want cookie-cutter guides, it's way too early to be expecting that.
I am also testing RouterOS 7 with a level 1 license.
I did not make any assertion that the documentation "sucks."
"It's far better to either run another VM on the same host."
For a small user like me, paying for an extra VM is overhead I don't need.
"Docker Engine is far lighter than a single VM."
I understand the argument you made; furthermore, I agree with you. However, where should one install the Docker Engine? Doesn't it require another Linux/Windows VM, or perhaps I misunderstand you? Recently I have played a lot with Docker to build my own image; it is still a work in progress.
"For another, you will notice that the current implementation requires NAT, not allowing direct access to the host's bridge. That's a sensible default, though I hope MikroTik eventually lifts it, as there are services you can only provide when bound to real hardware."
My AdGuard Home runs fine with an IP from the subnet sitting on the main bridge, and a veth added to that bridge. You're not forced to use NAT; you should probably take another look into it.
"+1 for OpenWrt as a container guest in MikroTik, with lots of possibilities regarding routing; very lightweight, and best for OpenVPN configurations. I think MikroTik doesn't like the "open source of things" much, because of the complicated implementation of OVPN in MikroTik."
Now that WireGuard is supported... why bother with OpenVPN?
@gotsprings
I live in Iran. I don't know how much you are familiar with our current government. Due to the latest movements, 2/3 of the internet is down, including the WireGuard protocol, but OpenVPN is working.
Not familiar at all.
"Or, do the same thing to a disused RB3011, running a separate netinstall container behind each port, one for each RouterOS NPK type you have in use, and you've got a benchtop version of the same facility. Someone brings in a flatlined router, you pull over a labeled cable matching the CPU type of the victim, call "clear!", and zap the patient back to life. The only thing is, I suspect netinstall won't work through NAT, so even if they do produce ARM builds of the program for us, it still won't work. Still, it's a nice dream, and it's within reach."
I actually just built an image to run netinstall on ARM/ARM64/x86 'Tiks for this exact purpose.
"Now that WireGuard is supported... why bother with OpenVPN?"
Because I am using an OpenVPN server on Ubuntu and a lot of clients on different devices, including Android, Windows, and iOS. Setting up WireGuard for each individual, with public keys etc., is a hectic task. OpenVPN uses preconfigured files: just import and bump...
"Home Assistant. If anyone has got it right, let me know."
I tried this on a hAP ac3 with an external USB drive, but it failed every time. It is a really big container (over 1 GB), so I suspect you will need a powerful 'Tik to handle it. If I could get access to an RB5009 or something more powerful, I could test further.
To reply to the 'why would we' people, one good reason is that this is how we learn.
But my reason is to have fewer devices running off an inverter to prolong battery life during loadshedding in South Africa.
What's loadshedding? - Eskomplicated.
"A container to run RouterOS v6 on v7, for all the missing features."
Well, if it were possible to have a container (or even a package) with the v6 BGP+BFD code on v7, it would certainly be useful here!
"But my reason is to have fewer devices running off an inverter to prolong battery life during loadshedding in South Africa."
Awe bru, the struggle is real!
"One missing feature: there is no auto-restart if the container stops for any reason, which could be pretty bad for said service."
A simple scheduler script could solve this (see the sketch below), but a watchdog option alongside the start-on-boot functionality is a good suggestion.
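A minimal sketch of such a scheduler-based watchdog, assuming the container item's status property can be matched with "find where" the way other RouterOS menus allow; verify the property name on your ROS version:

/system scheduler
add name=container-watchdog interval=1m \
    on-event=":foreach id in=[/container find where status=\"stopped\"] do={ /container start \$id }"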
"I'm going to evaluate over the coming days... luckily an RB5009 has 1 GB, so there is some headroom... but still..."
If I read your chart right, the memory consumption increased by about 4 MB in about 6 hours and seems to stabilize towards the end of the available data.
RAM usage can be limited by using:
/container/config/set ram-high=200M
This will soft-limit RAM usage: if RAM usage goes over the high boundary, the processes of the cgroup are throttled and put under heavy reclaim pressure.
"I looked at my ESXi server and the VMs it's running, and I'm considering moving what I can over to my CCR2116:
- Pi-hole (already moved)
- Asterisk/FreePBX
And with a 2-4 TB NVMe SSD, I could do:
- Beta UniFi/UISP servers
- OwnTone (DAAPd) to replace macOS 12 running iTunes 24/7
- ownCloud/NextCloud
- NAS (NFS, SMB, or worst-case scenario WebDAV)"
There are limitations on containers that restrict them for NFS, and if you were to use a user-space implementation, you would lose a lot in terms of performance.
For NFS, this user-space implementation looks promising (needs to be containerized): https://github.com/unfs3/unfs3
"An Nmap container? It's something missing in RouterOS: a way to easily scan the client LAN (check whether a port is open on a device or not). Or does someone have an easy way to do this (an SSH tunnel?)"
A very simple SSH container can do that.
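If you do want Nmap itself, a hedged sketch of pulling a ready-made image; instrumentisto/nmap is one small Nmap image on Docker Hub (an assumption here; any similar image should do), and the veth1/disk1 names follow the examples elsewhere in this thread:

/container
add remote-image=instrumentisto/nmap:latest interface=veth1 root-dir=disk1/nmap \
    logging=yes cmd="-sT -p 22,80,443 192.168.88.0/24"
# The image's entrypoint is nmap, so cmd= holds only the scan arguments.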
"You're not forced to use NAT; you should probably take another look into it."
They only recommend keeping containers on another subnet, you know, for... containerization purposes.
I have a separate bridge and use a separate IP block with various vETHs in that range.
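A hedged sketch of that bridged, NAT-free setup; the interface names and the 192.168.88.0/24 LAN are example assumptions:

/interface veth
add name=veth-adguard address=192.168.88.53/24 gateway=192.168.88.1
/interface bridge port
add bridge=bridge interface=veth-adguard
# The container now answers on 192.168.88.53 like any other LAN host; no dst-nat needed.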
Does the network setup of the containers (bridge, VLAN, subnet, etc.) have any special considerations compared to a bare-metal server running the same container's services but plugged into an Ethernet port?
Apart from the normal considerations with hosts sharing a broadcast domain, are there any container-implementation-specific downsides to putting the containers on a separate VLAN/subnet but on the same bridge as all my other subnets/VLANs?
Or to putting the containers on the same bridge/VLAN/subnet as other trusted network-services hosts?
## Use your preferred veth settings here; I chose this subnet because 172.17.x.x is in use elsewhere
/interface veth
add address=172.16.0.2/24 gateway=172.16.0.1 name=veth1
/container mounts
add dst=/etc/samba/ name=samba-etc src=/disk1/samba-etc
add dst=/Shared/ name=samba-share src=/disk1/samba-share
/container envs
add key=TZ name=samba value=America/Denver
/container config
set registry-url=https://registry-1.docker.io tmpdir=disk1/docker-pull
/container
add cmd="/bin/sh -c \"while true; do sleep 1; done\"" envlist=samba interface=veth1 logging=yes mounts=samba-etc,samba-share root-dir=disk1/samba-root start-on-boot=yes remote-image=library/alpine:latest
## The cmd above keeps the container running so I can shell into it. In production, it would start the SMB daemon:
## smbd --foreground --no-process-group
/container
start 0
shell 0
apk add --no-cache --update samba-common-tools samba-client samba-server
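The minimal smb.conf below then goes into the samba-etc mount (/disk1/samba-etc on the router, which the container sees as /etc/samba):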
[global]
map to guest = Bad User
log file = /var/log/samba/%m
log level = 10
[guest]
path = /Shared/
read only = no
guest ok = yes
# Run this inside the container
smbd --foreground --no-process-group --debug-stdio --debug-level <whatever level you want for testing, or don't use this flag>
I wasn't successful at building that container on my Mac and uploading it, so instead...
$ git clone https://github.com/fschuindt/docker-smb
$ cd docker-smb
$ docker build -t smb --platform=linux/arm/v7 . # for 32-bit ARM
$ docker build -t smb --platform=linux/arm64 . # for 64-bit ARM
$ docker create --publish 139:139 --publish 445:445 --name smb smb
$ docker export smb > smb.tar
$ scp smb.tar myrouter:
FROM alpine:3.12.0
RUN apk add --no-cache --update \
samba-common-tools \
samba-client \
samba-server
COPY smb.conf /etc/samba/smb.conf
EXPOSE 139/tcp 445/tcp
CMD ["smbd", "--foreground", "--log-stdout", "--no-process-group"]
$ git clone https://github.com/fschuindt/docker-smb
$ cd docker-smb
# Make any desired changes to Dockerfile and smb.conf
$ docker buildx build -t samba-arm --platform=linux/arm/v7 . # for 32-bit ARM targets, built on any PC/Mac hosts
$ docker save samba-arm > samba-arm.tar
$ docker buildx build -t samba-arm64 --platform=linux/arm64 . # for 64-bit ARM targets, built on non-M1/M2 hosts
$ docker build -t samba-arm64 --platform=linux/arm64 . # for 64-bit ARM targets built on M1/M2 (the buildx example above works too, honestly)
$ docker save samba-arm64 > samba-arm64.tar
# Copy to your ARM router, like a hAP AC2/AC3
$ scp samba-arm.tar hAPac3:/path/to/disk/
# Copy to your ARM64 router, like CCR2216, CCR2116, CCR2004, RB5009
$ scp samba-arm64.tar ccrARM64:/path/to/disk/
# Configure your veth1 however you like; mine is successfully bridged to the router's main bridge, putting the container in the same LAN as the other hosts.
# This alleviates the need for NAT to the container
#
/container mounts
add dst=/etc/samba/ name=samba-etc src=/disk1/samba-etc
add dst=/Shared/ name=samba-share src=/disk1/samba-share
/container config
set registry-url=https://registry-1.docker.io tmpdir=disk1/docker-pull
/container envs
add key=TZ name=samba value=America/Denver
/container
# My USB/NVMe disks are labeled "disk1"
add envlist=samba interface=veth1 logging=yes mounts=samba-etc,samba-share root-dir=disk1/samba-root start-on-boot=yes file=disk1/samba-arm64.tar
# Assuming this is the first container
start 0
Well... as "hacky" and seemingly silly as that sounds, it could in fact be a (quite dirty, yes... but) workaround until MikroTik finally addresses the importance of a router being able to properly do BGP (especially on a device named "CLOUD CORE ROUTER").
E.g., run FRRouting in a container, run it as a route reflector or just a big digester of all the IX/transit sessions, etc., then establish a BGP session with the host router.
But how would one expose the container to BGP partners?
VETH can be bridged to any of the other ports. The containers don't have to have their own bridge.
Even if they did, you can make the subnet anything you like. Then you use OSPF to announce the container subnet and loopbacks.
Thanks, that I know. I have two containers running (AdGuard and FRRouting) on veth1 and veth2, which are bridged to "br0" (the main bridge) with PVID 10.
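A minimal sketch of that OSPF announcement in RouterOS v7 syntax, assuming the 172.16.0.0/24 container subnet from the example above (the instance and area names are arbitrary):

/routing ospf instance
add name=v2inst version=2 router-id=10.255.0.1
/routing ospf area
add name=backbone instance=v2inst area-id=0.0.0.0
/routing ospf interface-template
add area=backbone networks=172.16.0.0/24 interfaces=veth1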
I now have a project for the afternoon...
"Big one for us is a lightweight Zabbix proxy for remote sites, without needing to add another device on-site."
This would be a great fit if the Zabbix proxy would just query and forward the SNMP data; however, there's a DB in there taking up a bit of storage space.
"So when 7.1rc3 with Docker support came out, I instantly jumped at the obvious use and ran a standalone DNS server, a feature that is missing in RouterOS itself. But I must admit I struggle to find any other use for this feature on a router; most of the things I would run in a container, I would run on an x64 server, which has much more resources available. So what would be the thing you will use Docker support for on a router, and why?"
As it stands, the containers seem to be running in some kind of unprivileged mode, which limits what one can do. For my part, I'd like to be able to control the kernel capabilities a container can use, to enable more use cases. I'd also like access to the underlying host's network configuration (interfaces, route tables, netns, etc.) through something like Zebra. This would enable advanced use cases that recent upstream Linux kernels support natively, namely IOAM6 (upstreamed in v5.16). Additional use cases would be enabling SR-MPLS/SRv6, IS-IS, and EVPN.
"WINS is a 31-year-old, obsolete Microsoft legacy service."
Agreed. But I'm still really curious what drives any use case for WINS...
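For what it's worth, if someone really does have legacy clients, the Samba container from the walkthrough above can serve WINS too; this is a standard smb.conf option added to the [global] section shown earlier (nmbd must also be running in the container for it to take effect):

[global]
    # Let nmbd act as a WINS server for legacy NetBIOS name resolution
    wins support = yes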