Reduce RAM fragmentation
In four weeks of uptime, the Inactive RAM on my FreeNAS system has steadily increased from ~700 MB to over 55 GB. My ARC has correspondingly shrunk from over 100 GB to under 60 GB, with a hit ratio of 12-13%. I didn't put 128 GB of RAM in this system just for half of it to sit unused. What's going on here?
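For reference, a hit ratio like the 12-13% above is derived from the hit and miss counters that ZFS exposes (on FreeBSD, under the `kstat.zfs.misc.arcstats` sysctl tree). A minimal sketch of the arithmetic, using made-up counter values for illustration:

```python
def arc_hit_ratio(hits: int, misses: int) -> float:
    """Return the ARC hit ratio as a percentage of total lookups."""
    total = hits + misses
    return 100.0 * hits / total if total else 0.0

# Hypothetical counters chosen to land in the range reported above.
print(round(arc_hit_ratio(hits=1_250_000, misses=8_750_000), 1))  # 12.5
```

A healthy ARC on a dedicated storage box is usually well above this, which is why the shrinkage is worth investigating.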
#3 Updated by Dan Brown over 1 year ago
I have a test system on 11-U1, but it doesn't have enough uptime or usage yet for me to say whether it's affected. The most recent posts in this thread, however, indicate that the problem continues in 11: https://forums.freenas.org/index.php?threads/swap-with-9-10.42749/page-4
#4 Updated by Kris Moore over 1 year ago
- Assignee changed from Release Council to Alexander Motin
The inactive memory counter works a bit differently in FreeBSD. Inactive memory is technically available/free; it just hasn't been reclaimed by the kernel yet, and won't be until it is needed.
I'm unsure how this relates to your issue with ARC shrinkage, but it may. Or perhaps ARC is evicting inactive pages, and those get tossed into the "inactive" bucket until they are needed again.
Sending this over to Alexander Motin to confirm my suspicions here.
#5 Updated by Alexander Motin over 1 year ago
Inactive memory should not create pressure on ARC. Something else probably did, and inactive memory just filled the gap. My guess is a high level of kernel address space fragmentation, which does not allow ARC to allocate sufficiently large blocks of kernel address space. Could you try increasing the KVA size to 1.5-2x your physical RAM (by adding the loader tunable vm.kmem_size=206158430208 -- 192 GB) and report whether it helps?
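For anyone following along, the suggested tunable would look something like the fragment below. On FreeNAS, loader tunables are normally added through the web UI (System → Tunables, type "loader") rather than by editing files; the 192 GiB value assumes a 128 GB machine, so scale it to 1.5-2x your own RAM:

```
# Loader tunable (configuration fragment, not a shell script).
# 192 GiB = 192 * 1024^3 bytes = 206158430208
vm.kmem_size="206158430208"
```

A reboot is required for loader tunables to take effect.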
#6 Updated by Dan Brown about 1 year ago
I haven't forgotten about this, but the increase in inactive memory takes place over the space of weeks. I recently upgraded my server to 11.0-U2, and will update this further once I've had a chance to see what happens. Right now, the system reports 119.4G wired, 4.0G inactive, and an ARC size of 107.8G.
#8 Updated by Alexander Motin about 1 year ago
- Status changed from 46 to Ready For Release
- Priority changed from No priority to Important
- Target version set to 11.1
My best guess is still KVA fragmentation. FreeNAS 11.1 should get a recent ZFS improvement that stores ARC content in 4 KB chunks, which should radically reduce the fragmentation problem. Dan, if you have any more important input, please let us know; otherwise, let's hope 11.1 will help.
#16 Updated by Marco Pfeiffer about 2 months ago
I see the problem in 11.1-U6. I have just 12 GB of RAM (for 9 TB of storage), and my ARC shrinks over about two days until it reaches arc_min of 1.5 GB, while inactive RAM sits at about 10 GB.
Restarting the NAS makes it really fast for about a day, after which the ARC is back at arc_min.
I defined the tunable "vfs.zfs.arc_min" and set it to 8589934592 (8 GB), which resolves the problem for me, but I shouldn't need to do that.
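As a configuration fragment, that workaround looks like this (on FreeNAS it would again go in as a loader tunable via System → Tunables; raising arc_min this way only forces a floor under the ARC, it doesn't address the underlying fragmentation):

```
# Loader tunable (configuration fragment, not a shell script).
# 8 GiB = 8 * 1024^3 bytes = 8589934592
vfs.zfs.arc_min="8589934592"
```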