> Which version is this?
I have specified it, 7.18beta4.
/file/remove [ find where name="file/with/path" ];
> Changelog tells you about how to do it, use the new parameters (not a bug)
Wait... What!?
:local FileID [ /file/find where name="file/with/path" ];
/file/remove path="file/with" $FileID;
/file/remove file/with/path
/file/remove [ /file/find where name="file/with/path" ]
> - Swap
Swap appears to be working great, since I'm now able to spin up a Minecraft server on my ax3 :D
container/envs/add key=EULA name=mcsrv value=TRUE
container/envs/add key=MEMORY name=mcsrv value=512M
container/add remote-image=itzg/minecraft-server:latest interface=veth1 envlist=mcsrv root-dir=/usb1-part1/containers/store/mcsrv comment=mcsrv logging=yes
container/start [find comment=mcsrv]
ip/firewall/nat/add action=dst-nat chain=dstnat disabled=yes dst-address=192.168.41.1 dst-port=25565 protocol=tcp to-addresses=172.17.0.2 to-ports=25565
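For anyone following along, a quick way to confirm the container actually came up before enabling the (still disabled) NAT rule above - a minimal sketch reusing the comment and port from this example:
# should show status=running once the image has been extracted and started
/container/print detail where comment=mcsrv
# then enable the port-forward added above
/ip/firewall/nat/enable [find dst-port=25565 protocol=tcp]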
I'll take a guess... you want to enter the NAS market? Tell the truth :D
Why are all these features there? Well, soon you will find out, but if you can test some of them, that would help, thanks
> Minecraft in a container on a network security gateway... is it a joke?
Yes, and you can definitely run more useful things on a router if you so desire.
> need the physical button press to be changed. When routers are far away, it's a great feature, for sure!
Don't want to sound rude, and sorry for the offtopic, but having hardware at a remote location without additional devices (like GSM sockets or something similar) that allow you to remotely power-cycle it is very strange, to say the least. Anything can happen, and power-cycling often helps. In the case of device-mode changing it would also solve the problem. So, to me, having such devices is mandatory, and I don't know how to explain it when people don't have them. Maybe they like to drive (or even fly) to their remote sites each time something happens...
> You can run Graylog or Grafana.
I wouldn't dare, any more than you would dare to run it on Junos :> Possible? Yes, absolutely! Good idea? Hell, no.
> SWAP for memory-heavy containers is all I need.
SWAP over USB is a horrible idea and a recipe for kernel deadlocks and freezes - definitely not something you want on a device that is also processing your network traffic. Will it work? Yes, and it's also likely you won't see any issues... until you hit them hard and waste days debugging it.
My bet: fast stable shared resource to provide true HA....
Why are all these features there? Well, soon you will find out, but if you can test some of them, that would help, thanks
Please read:
> - Storage on RAM (tmpFS) when you don't have storage space, but have free RAM
> I got the impression that their SWAP implementation is only for containers and is otherwise unused by (and isolated from) network functionality.
Since containers share the kernel with the host, there is no distinction here. Only consider it for devices which have PCI-E lanes exposed over a physical slot (the RBM33G does, for example, but it's hardly a good base for it anyway because of MMIPS and 16M of storage).
/disk/add slot=tmp type=tmpfs;
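A sketch of how that RAM disk might then be consumed; the verification command is standard, and the tmpdir parameter is the one described in the container documentation (adjust the slot name and path to your setup):
# confirm the RAM disk exists (it also appears as a top-level folder named after the slot)
/disk/print detail where slot=tmp
# optional: if the container package is installed, point image extraction at the RAM disk
/container/config/set tmpdir=tmp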
> yes, please make a ticket with details how to replicate, how many files you have, which file you removed etc.
Decided against. While this breaks my scripts, I cannot find a really good reproducer. This drove me crazy - every time I thought I was near, it changed behavior. 🤪
/file/add type=directory name=$dir;
/file/add name=$dir/file; /file/add name=$dir/file;
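For reference, the snippet above assumes $dir is defined beforehand; a simplified, self-contained version of the same sketch (the directory name is just a placeholder, and the final remove is where the misbehaviour was reported) might look like:
:local dir "testdir";
/file/add type=directory name=$dir;
/file/add name=($dir . "/file");
/file/remove [ /file/find where name=($dir . "/file") ];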
> This is RouterOS, not StorageOS. If you desire to add file management and make a NAS out of a router, then just spin off ROS into another branch that is developed separately. A storage device does not need BGP and MPLS, and a router/switch does not need NFS, Btrfs support, RAID, NVMe over TCP, or a fancy UI for collaborative file sharing. You already have a strong base with basic networking; just cut the unnecessary slack, integrate rose-storage into the base, and sell it as a separate OS to be run on a new class of devices.
I 1000% agree with you and echo this. MikroTik, please LISTEN - cut this bullshit of bolting storage onto RouterOS, a network appliance OS used for routing. Move it to a StorageOS fork, develop it separately, and then let people test the features there.
The naming of ROSE is also pretty poor, "RouterOS Enterprise" - 100% enterprise storage, 0% enterprise networking.
---
> You can run Graylog or Grafana.
> I wouldn't dare, any more than you would dare to run it on Junos :> Possible? Yes, absolutely! Good idea? Hell, no.
> network storage protocols don't belong on a router
So long as these features are modularized, I don't see the problem.
> Decided against. While this breaks my scripts, I cannot find a really good reproducer. This drove me crazy - every time I thought I was near, it changed behavior. 🤪
> yes, please make a ticket with details how to replicate, how many files you have, which file you removed etc.
As this still broke with my workarounds in place, I dug deeper. I think I have a reproducer now... Please have a look at SUP-179200 for details. Thanks!
> The latest beta does have support for swap. 😉
Swap is evil, and in a ROUTER it largely indicates device abuse.
> Swap is evil, and in a ROUTER it largely indicates device abuse.
> The latest beta does have support for swap.
Swap is not for the host system. Please read the manual about it before complaining: https://help.mikrotik.com/docs/spaces/R ... -Swapspace
Exposing some swap for *containers*, and the applications hosted therein, is one thing -
if those crash, who cares.
But if the host system (which is a ROUTER) is forced into swapping,
then through device abuse and feature creep it's no longer a router, and the network topology needs to be reviewed... (or a beefier router with actual RAM needs to be purchased).
> Swap is not for the host system. Please read the manual about it before complaining: https://help.mikrotik.com/docs/spaces/R ... -Swapspace
@normis, thank you, and please thank Druvis Timma for making that update on Feb 07, 2025 10:17. IMO a super addition.
> We found the issue you describe, it was triggered if the file was in a subdirectory, looks like the next release will have a fix
This still happens with RouterOS 7.18beta6. Is that version supposed to have the fix?
> Swap is not for the host system. Please read the manual about it before complaining: https://help.mikrotik.com/docs/spaces/R ... -Swapspace
This is not how it is described in the documentation:
> ... This is useful when using containers on RouterOS to be able to run containers that require much more RAM than your RouterOS device has. ...
It is stated as "useful when using containers" - ROS will not go OOM if containers consume more memory than is physically available - but it can still leave the reader wondering whether memory pages of other system processes are swapped or not.
> Swap is not for the host system. Please read the manual about it before complaining: https://help.mikrotik.com/docs/spaces/R ... -Swapspace
Please explain how you achieve separation of SWAP usage between host and containers, given that both share the same kernel via namespaces and cgroups. For a container to use SWAP, this space must be physically present to the kernel on the host. Similarly, you cannot adjust the vm.swappiness value for a container without affecting the host.
> If it is really dedicated only to containers, it should be stated like that; I guess it would then be implemented with a Linux kernel per-cgroup swap file only for the container ROS process.
I don't see how this would work given the ROS documentation, since there is no ability to configure a hard memory limit for a container, just a soft limit. There also does not seem to be any way to dedicate a specific swapfile to a given container, which would rule out a per-cgroup swapfile too. My best guess so far is that any SWAP created is simply attached to the host and is shared with containers in the same way host memory is.
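For what it's worth, on a stock Linux kernel the per-cgroup knob for this is memory.swap.max in cgroup v2; whether RouterOS wires anything like it up for its container cgroups is unknown, so the following is purely an illustration and the cgroup path is hypothetical:
# plain Linux cgroup v2 shell commands, not RouterOS CLI; the cgroup path is made up
echo 0 > /sys/fs/cgroup/container-mcsrv/memory.swap.max     # this cgroup gets no swap at all
echo max > /sys/fs/cgroup/container-mcsrv/memory.swap.max   # swap allowed again; the host-wide vm.swappiness still applies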
https://help.mikrotik.com/docs/spaces/R ... -Swapspace
What's new in 7.18 (2025-Feb-24 10:47):
*) disk - allow to add swap space without container package;
Swap is not for the host system. Please read the manual about it before complaining: https://help.mikrotik.com/docs/spaces/R ... -Swapspace
Model                    Price   Strg-MB  RAM-GB  Ports
========================================================================
CCR2116-12G-4S+          $995    128      16      12xGbE, 4x10GbE SFP+
RDS2216-2XG-4S+4XS-2XQ   $1950   128      32      4x25G SFP28, 2x100G QSFP28, 4xSFP+, 2x10GbE
CCR2216-1G-12XS-2XQ      $2795   128      16      12x25G SFP28, 2x100G QSFP28
> Well, this new "cool toy" RDS2216 isn't even playing the same sport as WAFL, let alone competing in 24x7x365 business-critical operations.
They don't call this enterprise storage for no reason. Let them cook this to their heart's desire and let the early adopters decide the fate of this product; who knows, one of these days, when they are done experimenting, they may figure they would rather go back and focus on routing and switching again.
I’d say MikroTik is in way over its head on this one and in a different galaxy than NetApp. 😉
When talking about "next-gen" things in a corporate space then any third-party network/security audit has one sure question in it - "What UTM/NGFW solution is in use and does it have HA?"Honestly, those resources would’ve been way better spent on developing the new controller or a proper next-gen network management and monitoring tool.
Larsa, you are confusing this with a home NAS. It's not
> Larsa, you are confusing this with a home NAS. It's not.
Normis, it does indeed not! 😉
Just to be clear, I'm comparing the RouterOS v7 special ROSE edition, running on the ROSE Enterprise Data Server RDS2216 ("designed for enterprise environments. Secure, scalable, and under your control"), with SMB entry-level NAS solutions like DSM or QTS. From that perspective, the special ROSE edition doesn’t even come close. Do you want me to put together a comparison table of the missing key features in RouterOS v7 special ROSE edition?
Have a nice weekend!
P.S.
According to the product page, you can still run virtual machines on the RDS2216 ("Run VMs, automate workloads...") and take advantage of "mission-critical workloads with near-instant failover and future-proof scalability as NVMe technology advances." 😁
> Larsa, you are confusing this with a home NAS. It's not.
Even vanilla Btrfs RAID 1 can be a real headache, for example, if a disk intermittently disconnects or fails for some reason and then gets marked as unreliable. Restoring a Btrfs RAID 1 is a pretty complicated process and requires expert knowledge, as @Petch1 pointed out in another thread. In other words, Btrfs RAID 1 is nothing like a normal RAID 1 with automatic resync. There are plenty of examples of this online.
I've mentioned it multiple times and no one cares. It will never work normally without fully changing the approach to working with files.
The current 'Files' option in Winbox tries to download the complete list of files first, then creates a treeview of it.
But with millions of files, this takes ages and I've never actually seen it finish.
In the CLI, something similar happens, you just get a dump of all the files.
I suppose you (MikroTik) experience this as well.
The obvious solution is to change this behavior so that it first fetches only the list of files/dirs in the current directory; then, when you click a plus sign, it fetches the list for that folder, and so on - like a real file explorer on macOS or Windows.
I hope this will become a reality in the future.
> This is offtopic, but I've mentioned it multiple times in other topics and no one cares. It will never work normally without fully changing the approach to working with files.
Maybe now is the time for them to start taking this issue seriously?
> Maybe now is the time for them to start taking this issue seriously?
+1
It makes more sense now, right?
> Post about ROSE by sszbv moved to the right topic.
Sorry for posting it in the wrong topic; I searched for a ROSE topic but couldn't easily find it.
> I, for one, welcome our Storage overlords.
+1 for your idea!
When ROSE came out, I put an NVMe to SATA adapter in a 2116, punched a hole, ran some thin SATA cables out, and hooked up six SATA drives in a bay the size of a CDROM. It works pretty well.
The advantage MikroTik has over a PC is wirespeed switching/routing (on Marvell platforms). I run a bunch of containers on my 2116. Going to have to add the Minecraft one (didn't know it existed for ARM64).
> Sorry for posting it in the wrong topic; I searched for a ROSE topic but couldn't easily find it.
> Post about ROSE by sszbv moved to the right topic.
Not a problem. I should have written "Moved to the dedicated ROSE topic."
Second best was Winbox 4 in my opinion.....
> I already tried the CCR2004 PCIe card on the CCR2116 with an M.2 to PCIe converter :)
> That works fine
At the risk of being off topic, could you describe that further? Does it work fine just in terms of getting power, or does it also appear as network interfaces on the CCR2116? Any details on the physical setup?
> At the risk of being off topic, could you describe that further? Does it work fine just in terms of getting power, or does it also appear as network interfaces on the CCR2116? Any details on the physical setup?
> I already tried the CCR2004 PCIe card on the CCR2116 with an M.2 to PCIe converter :)
> That works fine
It shows up in the interfaces. Just use an M.2/NVMe to PCIe x16 slot converter from AliExpress plus a power supply.
> Even vanilla Btrfs RAID 1 can be a real headache, for example, if a disk intermittently disconnects or fails for some reason and then gets marked as unreliable. Restoring a Btrfs RAID 1 is a pretty complicated process and requires expert knowledge, as @Petch1 pointed out in another thread. In other words, Btrfs RAID 1 is nothing like a normal RAID 1 with automatic resync. There are plenty of examples of this online.
I used normal RAID first and then used Btrfs as the filesystem, because I thought it would be safe enough. But now that the example has changed...
They have updated the docs and changed the file system in the RAID examples to ext4:
https://help.mikrotik.com/docs/pages/di ... ersions=91
I'm starting to think that the SWAP partition is not only used by containers.
https://help.mikrotik.com/docs/spaces/R ... -Swapspace
What's new in 7.18 (2025-Feb-24 10:47):
*) disk - allow to add swap space without container package;
Sorry, it's not clear to me if I can use the SWAP partition only for containers or also with ROS.
I'm asking because I have several hAP ac2 units that can't use SMB (the router freezes and restarts itself when transfers are large) or containers, due to low RAM.
Before changing disk partitions, I would like to make sure it is useful...
Thanks
EDIT: I found the answer, Normis wrote:
> Swap is not for the host system. Please read the manual about it before complaining: https://help.mikrotik.com/docs/spaces/R ... -Swapspace
> Even Druvis mentions the SWAP can be used not only for containers: https://youtu.be/wJw50I7STck?t=131
Thanks, I hadn't seen the video...
> I tried setting up a swap file
> /disk add type=file file-path=usb1/swapfile file-size=1G swap=yes
A SWAP file is much slower (I only tried it on a hAP ac2); if you can, add a small partition to use as SWAP instead.
> I'm preparing an external SSD for use as storage (ext4) and swap in RouterOS.
> What's the correct way to set up the swap partition? Should I format it as Linux swap (gdisk type 8200) and run mkswap?
> I'm confused because RouterOS doesn't seem to recognize swap partitions.
Just format the partition as "wipe" on ROS and use something like:
disk/set usb1-part2 swap=yes
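To double-check that RouterOS picked the flag up (the slot name is the one from the example above), something like this should do:
/disk/print detail where slot=usb1-part2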
> Managing storage requires administrative tools for restructuring, error handling, and backup/restore. All of these are missing in ROS. In other words, this solution is not suitable for mission-critical applications.
Can you clarify what you mean by "restructuring" and "error handling"?
> Is there any hope we will be able to use external storage to offload local storage? I'm thinking about routers with only 16MB.
I am not sure about that, but it seems that adding all these features increased the RouterOS package substantially, so many of us won't be able to use 16MB devices at all...
+---------------+------------+------------+------------+-------+------+----------------+-----------------+
| Solution | Filesystem | HA Cluster | Docker/K8s | iSCSI | RoCE | Backup Options | Best Suited For |
+---------------+------------+------------+------------+-------+------+----------------+-----------------+
| TrueNAS Scale | ZFS | Limited | Yes | Yes | No | Snapshots, S3 | SMB, SOHO, Pros |
| OpenMediaVault| ext4, XFS | No | Yes (addon)| Yes | No | Rsync,Snapshots| SMB, Home Users |
| Ceph | CephFS | Yes | Yes | Yes | Yes | Multi-layered | Enterprise, HPC |
| BeeGFS | BeeGFS | Yes | Limited | No | Yes | External Tools | HPC,AI Workloads|
| Rockstor | Btrfs | No | Yes | Yes | No | Snapshots | SMB, SOHO |
+---------------+------------+------------+------------+-------+------+----------------+-----------------+
> @Normis I believe @Larsa makes some good points.
> [...]
> I see @Larsa's suggestion as the best chance MikroTik has to salvage the RDS2216 development investment. Opening the RDS2216 to hosting the existing, ready-today solutions @Larsa identified allows returning limited software developer resources back to the core networking competency.
While I 100% agree on the release management and docs, which are frustrating - and the current docs on the ROSE storage features border on engineering malfeasance - I do like the green and the work of their graphics artist/firm, but they needed a doc writer, not a designer.
> @Normis I believe @Larsa makes some good points.
> [...]
> I see @Larsa's suggestion as the best chance MikroTik has to salvage the RDS2216 development investment. Opening the RDS2216 to hosting the existing, ready-today solutions @Larsa identified allows returning limited software developer resources back to the core networking competency.
> While I 100% agree on the release management and docs, which are frustrating - and the current docs on the ROSE storage features border on engineering malfeasance - I do like the green and the work of their graphics artist/firm, but they needed a doc writer, not a designer.
Great points. Yes, I also feel MikroTik has a good relationship with their hardware supply chain - and good engineers designing the hardware.
I did watch @dru's video about the RDS with some YouTuber. Once again, a ton of good information that should have been written down in the docs - not in a 30-minute video. But it is good to hear that Druvis seems to acknowledge the UI needs improvement - it is just an absolute mess and very poorly organized. Improving the PCI bus was also discussed - that was annoying to hear, since I'd rather they focus on making the software usable and well tested before even thinking about making more "storage" models; the speed will be what it is, and one model seems enough to test the concept.
Now, I agree with other commentary that with storage, losing data is an unforgivable sin, and the lack of attention to detail doesn't give one the "warm fuzzies" that their critical data will be safe.
But I think it is WAY too early to suggest the need to "salvage" their investment. And I actually don't think they invested much... which may be part of the complaints, I suspect. To me, it looks like they largely wrapped the Linux drivers/APIs for "storage" into the RouterOS schema and called it ROSE. And MikroTik has a good supply chain for the hardware.
And for the internal needs like Dude, UM, or /container... snapshots and RAID 0/1/5 seem like the main features one would need beyond what was already there. They seem to like Btrfs - I don't know enough to say whether that's good or bad, but it supports snapshots, which I think was the only thing the "old" ext4-only ROSE was missing for "basic" needs. What I think would be bad is trying to support MULTIPLE "high-end" volume management schemes. If they want to focus on making Btrfs work, that seems like a better idea than worrying about ZFS vs XXXFS.
Plus, if one ignores the over-focus on "storage" in the RDS and looks at it more like a low-cost, network-centric ARM server... it doesn't seem stupid. It does seem like a good deal as a next step up if you're beyond a couple of RB5009s or need disks for containers or even a home media server/NVR. And I've seen plenty of Dell/IBM/HP/etc. boxes with a shit ton of unused drive bays, and the RDS will likely end up the same in 90% of cases, would be my bet. But folks, including me, can easily justify it with "well, we don't know if we'll need more storage".
Anyway, think of it not as a classic "NAS"/storage server, but rather as a RouterOS device that has a ton of disk available - and they already have features that use storage. Now, if you do want to use it as a generic storage server for critical applications in a data center, it's clearly not designed for that, nor should it be, for all the reasons above about MT's core capacities. But wrapping an LVM-like API into RouterOS does seem within their wheelhouse.
> Would be interesting to see the same tests run on the RDS2216. Any chance you could try that?
Do you mean from RDS2216 <-> RDS2216?
/Shared/tests # fio --name=fiotest --ioengine=sync --rw=randwrite --bs=4k --numjobs=1 --size=5G --runtime=1m --time_based
fiotest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.33
Starting 1 process
fiotest: Laying out IO file (1 file / 5120MiB)
Jobs: 1 (f=1): [w(1)][100.0%][w=192MiB/s][w=49.1k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=1): err= 0: pid=365: Mon Mar 24 18:28:07 2025
write: IOPS=43.6k, BW=170MiB/s (179MB/s)(9.99GiB/60001msec); 0 zone resets
clat (usec): min=12, max=1003, avg=16.56, stdev= 2.66
lat (usec): min=13, max=1003, avg=16.88, stdev= 2.67
clat percentiles (nsec):
| 1.00th=[13632], 5.00th=[14144], 10.00th=[14528], 20.00th=[15168],
| 30.00th=[15552], 40.00th=[15936], 50.00th=[16192], 60.00th=[16512],
| 70.00th=[17024], 80.00th=[17792], 90.00th=[18816], 95.00th=[20352],
| 99.00th=[22656], 99.50th=[23680], 99.90th=[27520], 99.95th=[31360],
| 99.99th=[43776]
bw ( KiB/s): min=99888, max=226752, per=100.00%, avg=197555.43, stdev=17473.05, samples=105
iops : min=24972, max=56688, avg=49388.86, stdev=4368.26, samples=105
lat (usec) : 20=94.18%, 50=5.82%, 100=0.01%, 250=0.01%, 500=0.01%
lat (usec) : 1000=0.01%
lat (msec) : 2=0.01%
cpu : usr=17.38%, sys=79.70%, ctx=248524, majf=0, minf=17
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,2618329,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=170MiB/s (179MB/s), 170MiB/s-170MiB/s (179MB/s-179MB/s), io=9.99GiB (10.7GB), run=60001-60001msec
/Shared/tests # fio --name=fiotest --ioengine=sync --rw=randread --bs=4k --numjobs=1 --size=5G --runtime=1m --time_based
fiotest: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=35.0MiB/s][r=8967 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=1): err= 0: pid=384: Mon Mar 24 18:30:31 2025
read: IOPS=8528, BW=33.3MiB/s (34.9MB/s)(1999MiB/60001msec)
clat (usec): min=86, max=10078, avg=108.36, stdev=27.63
lat (usec): min=87, max=10078, avg=108.68, stdev=27.64
clat percentiles (usec):
| 1.00th=[ 92], 5.00th=[ 100], 10.00th=[ 101], 20.00th=[ 102],
| 30.00th=[ 102], 40.00th=[ 103], 50.00th=[ 103], 60.00th=[ 106],
| 70.00th=[ 117], 80.00th=[ 119], 90.00th=[ 120], 95.00th=[ 122],
| 99.00th=[ 125], 99.50th=[ 126], 99.90th=[ 133], 99.95th=[ 176],
| 99.99th=[ 1483]
bw ( KiB/s): min= 7248, max=35968, per=100.00%, avg=35300.31, stdev=2654.96, samples=115
iops : min= 1812, max= 8992, avg=8825.08, stdev=663.74, samples=115
lat (usec) : 100=5.47%, 250=94.49%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%, 20=0.01%
cpu : usr=4.20%, sys=27.12%, ctx=511865, majf=0, minf=19
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=511745,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=33.3MiB/s (34.9MB/s), 33.3MiB/s-33.3MiB/s (34.9MB/s-34.9MB/s), io=1999MiB (2096MB), run=60001-60001msec
/Shared/tests # fio --name=fiotest --ioengine=sync --rw=write --bs=4k --numjobs=1 --size=5G --runtime=1m --time_based
fiotest: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=234MiB/s][w=60.0k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=1): err= 0: pid=406: Mon Mar 24 18:35:09 2025
write: IOPS=54.5k, BW=213MiB/s (223MB/s)(12.5GiB/60001msec); 0 zone resets
clat (usec): min=12, max=994391, avg=13.79, stdev=551.02
lat (usec): min=12, max=994392, avg=14.12, stdev=551.02
clat percentiles (nsec):
| 1.00th=[12736], 5.00th=[12736], 10.00th=[12864], 20.00th=[12864],
| 30.00th=[12992], 40.00th=[12992], 50.00th=[12992], 60.00th=[13120],
| 70.00th=[13248], 80.00th=[13376], 90.00th=[14144], 95.00th=[17024],
| 99.00th=[17792], 99.50th=[18304], 99.90th=[20352], 99.95th=[21376],
| 99.99th=[24960]
bw ( KiB/s): min= 2960, max=254616, per=100.00%, avg=239736.44, stdev=42952.28, samples=108
iops : min= 740, max=63654, avg=59934.13, stdev=10738.07, samples=108
lat (usec) : 20=99.88%, 50=0.12%, 100=0.01%, 250=0.01%, 500=0.01%
lat (usec) : 750=0.01%
lat (msec) : 4=0.01%, 100=0.01%, 1000=0.01%
cpu : usr=19.13%, sys=78.71%, ctx=104546, majf=0, minf=23
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,3267120,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=213MiB/s (223MB/s), 213MiB/s-213MiB/s (223MB/s-223MB/s), io=12.5GiB (13.4GB), run=60001-60001msec
/Shared/tests # fio --name=fiotest --ioengine=sync --rw=read --bs=4k --numjobs=1 --size=5G --runtime=1m --time_based
fiotest: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=732MiB/s][r=187k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=1): err= 0: pid=425: Mon Mar 24 18:36:44 2025
read: IOPS=147k, BW=573MiB/s (600MB/s)(33.5GiB/60001msec)
clat (nsec): min=1506, max=5270.1k, avg=4363.37, stdev=105224.35
lat (nsec): min=1613, max=5270.2k, avg=4481.51, stdev=105224.40
clat percentiles (nsec):
| 1.00th=[ 1800], 5.00th=[ 1848], 10.00th=[ 1864],
| 20.00th=[ 1912], 30.00th=[ 1928], 40.00th=[ 1944],
| 50.00th=[ 1976], 60.00th=[ 2008], 70.00th=[ 2024],
| 80.00th=[ 2096], 90.00th=[ 2224], 95.00th=[ 2320],
| 99.00th=[ 2608], 99.50th=[ 3120], 99.90th=[ 12352],
| 99.95th=[ 31104], 99.99th=[4751360]
bw ( KiB/s): min=637600, max=770048, per=100.00%, avg=748540.22, stdev=27540.44, samples=93
iops : min=159400, max=192512, avg=187135.05, stdev=6885.13, samples=93
lat (usec) : 2=59.32%, 4=40.29%, 10=0.17%, 20=0.16%, 50=0.01%
lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.05%
cpu : usr=17.64%, sys=82.19%, ctx=246, majf=0, minf=21
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=8794147,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=573MiB/s (600MB/s), 573MiB/s-573MiB/s (600MB/s-600MB/s), io=33.5GiB (36.0GB), run=60001-60001msec
/Shared/fiotests # fio --name=fiotest --ioengine=sync --rw=randwrite --bs=4k --numjobs=1 --size=5G --runtime=1m --time_based
fiotest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.33
Starting 1 process
fiotest: Laying out IO file (1 file / 5120MiB)
Jobs: 1 (f=1): [w(1)][100.0%][eta 00m:00s]
fiotest: (groupid=0, jobs=1): err= 0: pid=307: Mon Mar 24 19:22:08 2025
write: IOPS=84.9k, BW=332MiB/s (348MB/s)(20.0GiB/61733msec); 0 zone resets
clat (usec): min=2, max=6470.0k, avg= 9.69, stdev=4865.35
lat (usec): min=2, max=6470.0k, avg= 9.76, stdev=4865.35
clat percentiles (nsec):
| 1.00th=[ 3312], 5.00th=[ 3568], 10.00th=[ 3696], 20.00th=[ 3792],
| 30.00th=[ 3920], 40.00th=[ 4016], 50.00th=[ 4128], 60.00th=[ 4192],
| 70.00th=[ 4320], 80.00th=[ 4448], 90.00th=[ 4704], 95.00th=[ 5024],
| 99.00th=[ 6368], 99.50th=[ 7200], 99.90th=[11072], 99.95th=[11584],
| 99.99th=[14016]
bw ( KiB/s): min= 8, max=884432, per=100.00%, avg=665762.54, stdev=299519.35, samples=63
iops : min= 2, max=221108, avg=166440.63, stdev=74879.84, samples=63
lat (usec) : 4=36.00%, 10=63.79%, 20=0.21%, 50=0.01%, 100=0.01%
lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : >=2000=0.01%
cpu : usr=6.78%, sys=46.42%, ctx=6386, majf=0, minf=11
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,5242881,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=332MiB/s (348MB/s), 332MiB/s-332MiB/s (348MB/s-348MB/s), io=20.0GiB (21.5GB), run=61733-61733msec
/Shared/fiotests # fio --name=fiotest --ioengine=sync --rw=randread --bs=4k --numjobs=1 --size=5G --runtime=1m --time_based
fiotest: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=20.4MiB/s][r=5227 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=1): err= 0: pid=375: Mon Mar 24 19:32:25 2025
read: IOPS=5674, BW=22.2MiB/s (23.2MB/s)(1330MiB/60001msec)
clat (usec): min=128, max=3369, avg=171.98, stdev=35.87
lat (usec): min=128, max=3369, avg=172.13, stdev=35.88
clat percentiles (usec):
| 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 149], 20.00th=[ 151],
| 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 176],
| 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 204],
| 99.00th=[ 277], 99.50th=[ 318], 99.90th=[ 367], 99.95th=[ 392],
| 99.99th=[ 1516]
bw ( KiB/s): min= 5120, max=26320, per=100.00%, avg=22910.17, stdev=2371.42, samples=118
iops : min= 1280, max= 6580, avg=5727.54, stdev=592.86, samples=118
lat (usec) : 250=98.64%, 500=1.32%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.04%, 4=0.01%
cpu : usr=0.89%, sys=8.22%, ctx=340549, majf=0, minf=9
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=340496,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=22.2MiB/s (23.2MB/s), 22.2MiB/s-22.2MiB/s (23.2MB/s-23.2MB/s), io=1330MiB (1395MB), run=60001-60001msec
fio --name=fiotest --ioengine=sync --rw=write --bs=4k --numjobs=1 --size=5G --runtime=1m --time_based
fiotest: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=430MiB/s][w=110k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=1): err= 0: pid=442: Mon Mar 24 19:38:23 2025
write: IOPS=120k, BW=469MiB/s (491MB/s)(27.6GiB/60359msec); 0 zone resets
clat (usec): min=2, max=549718, avg= 6.96, stdev=739.55
lat (usec): min=2, max=549718, avg= 7.03, stdev=739.55
clat percentiles (nsec):
| 1.00th=[ 3216], 5.00th=[ 3376], 10.00th=[ 3536], 20.00th=[ 3696],
| 30.00th=[ 3792], 40.00th=[ 3888], 50.00th=[ 4016], 60.00th=[ 4128],
| 70.00th=[ 4256], 80.00th=[ 4512], 90.00th=[ 4704], 95.00th=[ 4960],
| 99.00th=[ 6112], 99.50th=[ 6688], 99.90th=[ 9024], 99.95th=[10048],
| 99.99th=[18560]
bw ( KiB/s): min= 3224, max=909424, per=100.00%, avg=512688.71, stdev=187898.83, samples=113
iops : min= 806, max=227356, avg=128172.16, stdev=46974.77, samples=113
lat (usec) : 4=48.49%, 10=51.46%, 20=0.04%, 50=0.01%, 100=0.01%
lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%
lat (msec) : 20=0.01%, 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%
lat (msec) : 750=0.01%
cpu : usr=7.56%, sys=58.17%, ctx=1413, majf=0, minf=12
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,7241729,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=469MiB/s (491MB/s), 469MiB/s-469MiB/s (491MB/s-491MB/s), io=27.6GiB (29.7GB), run=60359-60359msec
/Shared/fiotests # fio --name=fiotest --ioengine=sync --rw=read --bs=4k --numjobs=1 --size=5G --runtime=1m --time_based
fiotest: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=477MiB/s][r=122k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=1): err= 0: pid=509: Mon Mar 24 19:40:46 2025
read: IOPS=141k, BW=552MiB/s (579MB/s)(32.3GiB/60001msec)
clat (nsec): min=480, max=37171k, avg=5864.05, stdev=118007.43
lat (nsec): min=520, max=37171k, avg=5938.61, stdev=118018.51
clat percentiles (nsec):
| 1.00th=[ 524], 5.00th=[ 524], 10.00th=[ 564],
| 20.00th=[ 564], 30.00th=[ 564], 40.00th=[ 604],
| 50.00th=[ 604], 60.00th=[ 604], 70.00th=[ 604],
| 80.00th=[ 644], 90.00th=[ 724], 95.00th=[ 884],
| 99.00th=[ 3152], 99.50th=[ 37120], 99.90th=[2768896],
| 99.95th=[2899968], 99.99th=[3981312]
bw ( KiB/s): min=48000, max=950928, per=100.00%, avg=580760.56, stdev=183028.69, samples=116
iops : min=12000, max=237732, avg=145190.10, stdev=45757.14, samples=116
lat (nsec) : 500=0.44%, 750=90.74%, 1000=4.50%
lat (usec) : 2=3.18%, 4=0.19%, 10=0.25%, 20=0.10%, 50=0.21%
lat (usec) : 100=0.10%, 250=0.06%, 500=0.02%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.05%, 4=0.12%, 10=0.01%, 20=0.01%, 50=0.01%
cpu : usr=5.07%, sys=22.61%, ctx=114176, majf=0, minf=11
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=8476501,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=552MiB/s (579MB/s), 552MiB/s-552MiB/s (579MB/s-579MB/s), io=32.3GiB (34.7GB), run=60001-60001msec
/Shared/fiotests # fio --name=fiotest --ioengine=sync --rw=randwrite --bs=4k --numjobs=1 --size=5G --runtime=1m --time_based
fiotest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.33
Starting 1 process
fiotest: Laying out IO file (1 file / 5120MiB)
Jobs: 1 (f=1): [w(1)][100.0%][w=581MiB/s][w=149k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=1): err= 0: pid=75: Mon Mar 24 19:45:27 2025
write: IOPS=104k, BW=405MiB/s (425MB/s)(23.7GiB/60001msec); 0 zone resets
clat (usec): min=3, max=1920.2k, avg= 7.36, stdev=1410.73
lat (usec): min=3, max=1920.2k, avg= 7.43, stdev=1410.73
clat percentiles (nsec):
| 1.00th=[ 4048], 5.00th=[ 4384], 10.00th=[ 4576], 20.00th=[ 4832],
| 30.00th=[ 5024], 40.00th=[ 5216], 50.00th=[ 5472], 60.00th=[ 5664],
| 70.00th=[ 6112], 80.00th=[ 6560], 90.00th=[ 7328], 95.00th=[ 8032],
| 99.00th=[10176], 99.50th=[11584], 99.90th=[15808], 99.95th=[19328],
| 99.99th=[34048]
bw ( KiB/s): min= 8, max=763840, per=100.00%, avg=528835.61, stdev=210957.88, samples=93
iops : min= 2, max=190960, avg=132208.90, stdev=52739.47, samples=93
lat (usec) : 4=0.63%, 10=98.24%, 20=1.09%, 50=0.04%, 100=0.01%
lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 50=0.01%, 100=0.01%, 250=0.01%
lat (msec) : 500=0.01%, 750=0.01%, 1000=0.01%, 2000=0.01%
cpu : usr=8.49%, sys=72.24%, ctx=201540, majf=0, minf=11
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,6220993,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=405MiB/s (425MB/s), 405MiB/s-405MiB/s (425MB/s-425MB/s), io=23.7GiB (25.5GB), run=60001-60001msec
/Shared/fiotests # fio --name=fiotest --ioengine=sync --rw=randread --bs=4k --numjobs=1 --size=5G --runtime=1m --time_based
fiotest: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=39.8MiB/s][r=10.2k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=1): err= 0: pid=142: Mon Mar 24 19:47:56 2025
read: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(2357MiB/60001msec)
clat (usec): min=75, max=320, avg=96.94, stdev=17.98
lat (usec): min=75, max=321, avg=97.04, stdev=17.98
clat percentiles (usec):
| 1.00th=[ 80], 5.00th=[ 89], 10.00th=[ 89], 20.00th=[ 90],
| 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 91], 60.00th=[ 92],
| 70.00th=[ 105], 80.00th=[ 106], 90.00th=[ 108], 95.00th=[ 109],
| 99.00th=[ 131], 99.50th=[ 255], 99.90th=[ 293], 99.95th=[ 302],
| 99.99th=[ 314]
bw ( KiB/s): min= 9424, max=41128, per=100.00%, avg=40572.75, stdev=2893.22, samples=118
iops : min= 2356, max=10282, avg=10143.19, stdev=723.31, samples=118
lat (usec) : 100=66.00%, 250=33.44%, 500=0.56%
cpu : usr=1.21%, sys=11.20%, ctx=603539, majf=0, minf=9
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=603499,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=2357MiB (2472MB), run=60001-60001msec
/Shared/fiotests # fio --name=fiotest --ioengine=sync --rw=write --bs=4k --numjobs=1 --size=5G --runtime=1m --time_based
fiotest: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=982MiB/s][w=251k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=1): err= 0: pid=209: Mon Mar 24 19:52:57 2025
write: IOPS=185k, BW=721MiB/s (757MB/s)(42.3GiB/60001msec); 0 zone resets
clat (usec): min=2, max=47866, avg= 3.63, stdev=14.39
lat (usec): min=2, max=47866, avg= 3.70, stdev=14.39
clat percentiles (nsec):
| 1.00th=[ 2992], 5.00th=[ 3088], 10.00th=[ 3120], 20.00th=[ 3248],
| 30.00th=[ 3312], 40.00th=[ 3440], 50.00th=[ 3600], 60.00th=[ 3696],
| 70.00th=[ 3760], 80.00th=[ 3856], 90.00th=[ 4048], 95.00th=[ 4384],
| 99.00th=[ 5600], 99.50th=[ 6368], 99.90th=[ 7904], 99.95th=[ 8160],
| 99.99th=[ 9152]
bw ( KiB/s): min=39648, max=1051880, per=100.00%, avg=885232.73, stdev=229704.93, samples=99
iops : min= 9912, max=262970, avg=221308.14, stdev=57426.23, samples=99
lat (usec) : 4=87.88%, 10=12.12%, 20=0.01%, 50=0.01%, 100=0.01%
lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%
lat (msec) : 50=0.01%
cpu : usr=11.85%, sys=87.56%, ctx=93042, majf=0, minf=11
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,11082206,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=721MiB/s (757MB/s), 721MiB/s-721MiB/s (757MB/s-757MB/s), io=42.3GiB (45.4GB), run=60001-60001msec
/Shared/fiotests # fio --name=fiotest --ioengine=sync --rw=read --bs=4k --numjobs=1 --size=5G --runtime=1m --time_based
fiotest: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=1168MiB/s][r=299k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=1): err= 0: pid=276: Mon Mar 24 19:54:58 2025
read: IOPS=377k, BW=1474MiB/s (1546MB/s)(86.4GiB/60002msec)
clat (nsec): min=480, max=2754.5k, avg=1644.97, stdev=35804.10
lat (nsec): min=520, max=2754.5k, avg=1695.82, stdev=35804.16
clat percentiles (nsec):
| 1.00th=[ 524], 5.00th=[ 564], 10.00th=[ 564],
| 20.00th=[ 604], 30.00th=[ 604], 40.00th=[ 604],
| 50.00th=[ 604], 60.00th=[ 644], 70.00th=[ 644],
| 80.00th=[ 644], 90.00th=[ 684], 95.00th=[ 724],
| 99.00th=[ 1080], 99.50th=[ 3696], 99.90th=[ 191488],
| 99.95th=[ 493568], 99.99th=[1597440]
bw ( MiB/s): min= 42, max= 1969, per=100.00%, avg=1635.02, stdev=552.67, samples=107
iops : min=10778, max=504266, avg=418564.39, stdev=141483.76, samples=107
lat (nsec) : 500=0.06%, 750=95.36%, 1000=3.33%
lat (usec) : 2=0.44%, 4=0.41%, 10=0.14%, 20=0.07%, 50=0.04%
lat (usec) : 100=0.02%, 250=0.04%, 500=0.03%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.05%, 4=0.01%
cpu : usr=17.03%, sys=74.23%, ctx=61584, majf=0, minf=11
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=22640469,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=1474MiB/s (1546MB/s), 1474MiB/s-1474MiB/s (1546MB/s-1546MB/s), io=86.4GiB (92.7GB), run=60002-60002msec
/disk test nvme1,nvme2,nvme3,nvme4,nvme5,nvme6 block-size=4K direction=read thread-count=2 type=device
AVG TOT 54.9Gbps of disk traffic over a few minutes
/interface/ethernet/set qsfp28-1-1 l2mtu=9570 mtu=9000
/ip address add address=172.16.3.2/24
/disk
set nvme1 nvme-tcp-export=yes
set nvme2 nvme-tcp-export=yes
set nvme3 nvme-tcp-export=yes
set nvme4 nvme-tcp-export=yes
set nvme5 nvme-tcp-export=yes
set nvme6 nvme-tcp-export=yes
/disk
add nvme-tcp-address=172.16.3.2 nvme-tcp-host-name=MikroTik nvme-tcp-name=nvme1 type=nvme-tcp
add nvme-tcp-address=172.16.3.2 nvme-tcp-host-name=MikroTik nvme-tcp-name=nvme2 type=nvme-tcp
add nvme-tcp-address=172.16.3.2 nvme-tcp-host-name=MikroTik nvme-tcp-name=nvme3 type=nvme-tcp
add nvme-tcp-address=172.16.3.2 nvme-tcp-host-name=MikroTik nvme-tcp-name=nvme4 type=nvme-tcp
/disk test block-size=64K direction=read type=device disk=nvme-tcp-172-16-3-2-nvme1,nvme-tcp-172-16-3-2-nvme2,nvme-tcp-172-16-3-2-nvme3,nvme-tcp-172-16-3-2-nvme4,nvme-tcp-172-16-3-2-nvme5,nvme-tcp-172-16-3-2-nvme6 thread-count=4
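For comparison, a plain Linux initiator should also be able to attach these exports with nvme-cli; a rough sketch, assuming the default NVMe/TCP port 4420 and taking the subsystem NQN from the discovery output (neither value is from the RouterOS docs):
# discover subsystems exported by the RDS at 172.16.3.2
nvme discover -t tcp -a 172.16.3.2 -s 4420
# connect to one of the discovered subsystems (replace the NQN with one from the output above)
nvme connect -t tcp -a 172.16.3.2 -s 4420 -n <subsystem-nqn>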
> Thank you for the effort, but about the SMB, something is wrong in your setup.
Show me your config and I'll show you mine.
/ip smb users
add name=bscott
/ip smb
set enabled=yes
/ip smb shares
add directory=nvme-raid name=Shared valid-users=bscott
/disk
set nvme5 raid-master=nvme-raid raid-role=0
set nvme6 raid-master=nvme-raid raid-role=1
add media-interface=*2000000 raid-chunk-size=128K raid-device-count=2 raid-type=0 slot=nvme-raid type=raid
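As a client-side sanity check, the share defined above should be mountable from any Linux box with cifs-utils; a rough sketch (the server address is a placeholder, and the mount point must exist first):
mkdir -p /mnt/shared
mount -t cifs //<rds-address>/Shared /mnt/shared -o user=bscott,vers=3.0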
/ # iperf3 -c 192.168.3.24
Connecting to host 192.168.3.24, port 5201
[ 5] local 192.168.3.232 port 36242 connected to 192.168.3.24 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.09 GBytes 9.38 Gbits/sec 18 1.38 MBytes
[ 5] 1.00-2.00 sec 1.09 GBytes 9.32 Gbits/sec 58 850 KBytes
[ 5] 2.00-3.00 sec 1.06 GBytes 9.12 Gbits/sec 108 1.30 MBytes
[ 5] 3.00-4.00 sec 1.09 GBytes 9.39 Gbits/sec 4 1.16 MBytes
[ 5] 4.00-5.00 sec 1.09 GBytes 9.38 Gbits/sec 32 1.33 MBytes
[ 5] 5.00-6.00 sec 1.09 GBytes 9.37 Gbits/sec 21 925 KBytes
[ 5] 6.00-7.00 sec 1.09 GBytes 9.38 Gbits/sec 23 1.34 MBytes
[ 5] 7.00-8.00 sec 1.09 GBytes 9.37 Gbits/sec 18 1.28 MBytes
[ 5] 8.00-9.00 sec 1.09 GBytes 9.38 Gbits/sec 24 1.11 MBytes
[ 5] 9.00-10.00 sec 1.09 GBytes 9.37 Gbits/sec 49 1.36 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 10.9 GBytes 9.35 Gbits/sec 355 sender
[ 5] 0.00-10.00 sec 10.9 GBytes 9.35 Gbits/sec receiver
iperf Done.
/ # iperf3 -c 192.168.3.24 -R
Connecting to host 192.168.3.24, port 5201
Reverse mode, remote host 192.168.3.24 is sending
[ 5] local 192.168.3.232 port 36250 connected to 192.168.3.24 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 746 MBytes 6.25 Gbits/sec
[ 5] 1.00-2.00 sec 604 MBytes 5.07 Gbits/sec
[ 5] 2.00-3.00 sec 854 MBytes 7.16 Gbits/sec
[ 5] 3.00-4.00 sec 632 MBytes 5.30 Gbits/sec
[ 5] 4.00-5.00 sec 906 MBytes 7.60 Gbits/sec
[ 5] 5.00-6.00 sec 901 MBytes 7.56 Gbits/sec
[ 5] 6.00-7.00 sec 634 MBytes 5.32 Gbits/sec
[ 5] 7.00-8.00 sec 879 MBytes 7.37 Gbits/sec
[ 5] 8.00-9.00 sec 659 MBytes 5.53 Gbits/sec
[ 5] 9.00-10.00 sec 870 MBytes 7.30 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 7.51 GBytes 6.45 Gbits/sec sender
[ 5] 0.00-10.00 sec 7.50 GBytes 6.45 Gbits/sec receiver
iperf Done.
> disk test provides similar functionality to the dd utility in Linux.
> when testing locally, CPU may be a bit capped due to generation of data (random or zeroes)
> with 6 NVMe disks (entry-level U.2/U.3 drives, Micron_7400 - 960GB):
> .....
> speeds should be around 55Gbps or higher in shorter bursts
That's fine and all, but these disks are rated for much higher numbers, and I'm not getting anywhere near that when running fio directly on the RDS2216 itself, where there should be just raw PCIe speeds on sequential reads and writes.
[admin@rds2216-office] /disk> test nvme-raid block-size=4K direction=read thread-count=2 type=device
Flags: R - RUNNING
Columns: SEQ, RATE, IOPS, BYTES, DISK, THREAD, TYPE, PATTERN, DIR, BSIZE, STATE
SEQ RATE IOPS BYTES DISK THREAD TYPE PATTERN DIR BSIZE STATE
R TOT 9.2Gbps 282 511 171.4GiB nvme-raid 0 device sequential read 4096 run
R TOT 9.4Gbps 287 560 174.4GiB nvme-raid 1 device sequential read 4096 run
TOT 18.6Gbps 570 072 345.8GiB nvme-raid TOT device sequential read 4096
> Just a quick note on the short bursts: those are most likely hitting cached data rather than reflecting sustained transfer speeds from storage to user space.
The /disk test utility clears caches before running tests.
> The NVMe-TCP throughput is actually pretty impressive with the 100G interfaces and a performant x86 server. That said, real-world results will likely depend heavily on tuning, available CPU headroom and network conditions (e.g. mtu=9000). It might help to add a bit more context here, so others don't assume they'll see the same results on something like the RDS2216.
A larger MTU is there to lower CPU usage a bit, and on a direct device-to-device connection there is no reason not to increase the MTU. For 50Gbps+ transfers over NVMe-TCP, CPU use will be around 30% with larger MTUs and around 25-40% with 1500, so there is enough headroom for background router tasks.
[admin@rocket80-ros] /disk> test block-size=4K direction=read thread-count=2 type=device tcp-raid-1
Flags: R - RUNNING
Columns: SEQ, RATE, IOPS, BYTES, DISK, THREAD, TYPE, PATTERN, DIR, BSIZE, STATE
SEQ RATE IOPS BYTES DISK THREAD TYPE PATTERN DIR BSIZE STATE
R TOT 4.7Gbps 144 506 86.5GiB tcp-raid-1 0 device sequential read 4096 run
R TOT 4.7Gbps 144 773 86.7GiB tcp-raid-1 1 device sequential read 4096 run
TOT 9.4Gbps 289 279 173.3GiB tcp-raid-1 TOT device sequential read 4096
> I ran a disk test from the Ampere ROS 7 to the RDS2216, with two disks exported. The results are exactly half of what I get when running it locally.
Increase thread-count to give the device more parallel streams; each stream is bound to a single core.
The /disk test utility clears caches before running tests.
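For example, re-running the same test with more threads (same syntax as the tests above; the absolute numbers will of course vary):
/disk test block-size=4K direction=read thread-count=4 type=device tcp-raid-1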
[admin@rocket80-ros] /disk> test block-size=4K direction=read thread-count=1 type=device nvme4
SEQ RATE IOPS BYTES DISK THREAD TYPE PATTERN DIR BSIZE STATE
R TOT 24.1Gbps 738 172 1289.7GiB nvme4 0 device sequential read 4096 run
[admin@rds2216-office] /disk> test block-size=4K direction=read thread-count=1 type=device nvme1
R TOT 11.4Gbps 349 301 704.9GiB nvme1 0 device sequential read 4096 run
You should be able to get 12-16Gbps on the RDS - the PCIe connection is just to the NVMe controller, not the memory chip - it does not mean that it's automatically half speed.
If these PM963 drives are capable of up to 1800-2000MB/s (14-16Gbps) with 4 PCIe lanes, that means that I should see roughly 7-8Gbps in straight reads from one drive with 2 lanes on the RDS2216.
> edit: not sure why you are getting such good results on the Ampere, but the test seems to be running correctly. Is it the same 960GB disk? Larger ones usually have much better performance.
> A 1.92TB disk usually has double the performance of its 960GB counterpart.
I bought 8 960s. Two are in the Ampere and six are in the RDS.
/ip smb users
add name=normis
/ip smb
set enabled=yes
/ip smb shares
set [ find default=yes ] valid-users=guest
add directory=raid1 name=shrek valid-users=normis
/disk print
23 BM raid1 raid1 RAID6 2-parity-disks
> I bought 8 960s. Two are in the Ampere and six are in the RDS.
I have the same hardware; I will check what's going on. On the RDS the test seems to be correct and provides the expected results.
> there is not much to configure for SMB:
> [...]
> I know macOS has a bug where you can't set MTU above 8000, and if your adapter says it needs 9000, it will glitch in various ways. I have a Sonnettech Twin25 and have set the MTU to 8000.
Where might one find that information in your documentation? If it's a bug in macOS, do you have an open case with Apple about the issue?
> make sure your macOS computer has the correct MTU for the adapter you are using. I know macOS has a bug where you can't set MTU above 8000, and if your adapter says it needs 9000, it will glitch in various ways. I have a Sonnettech Twin25 and have set the MTU to 8000.
The MTU is 1500. But that doesn't appear to be the problem in this case.
> Can you test NFS performance instead of SMB? At times, Windows SMB has its own performance limitations.
I tested it earlier and I believe I posted some results. It works similarly (i.e. works fine), depending on which machine I'm connected to and which set of drives I'm testing.
> Why doesn't MikroTik investigate creating a partnership with the folks at "unRAID" and work to create a solution...... Deploy "unRAID" as a container or managing overlay
Unraid is a subscription, which would go against the "you buy it, you own it" way of MikroTik.
> I'm still old school, where a router is a router and a switch is a switch and storage is storage and a NAS is a NAS ...
It's always good to listen to a wise and experienced person.
> If your functionality is spread across multiple devices and one stops working, the rest will still work. No nightmares and pissed-off users.
Well... for the RDS, one random idea would be to have some small-form versions of the CCR2004-1G-2XS-PCIe full-sized card (perhaps with different specs / switch / CPU / etc. offerings) act as one of the RDS "disks" on the PCIe bus, i.e. the RDS acts as a "chassis" for a bunch of individual router (or switch) "line cards".