Bug #34486

11.2 Wired Memory maxed while idle

Added by Joe Langa over 1 year ago. Updated over 1 year ago.

Status:
Closed
Priority:
No priority
Assignee:
Marcelo Araujo
Category:
OS
Target version:
Severity:
Low Medium
Reason for Closing:
Duplicate Issue
Reason for Blocked:
Other: make a note in comments
Needs QA:
Yes
Needs Doc:
Yes
Needs Merging:
Yes
Needs Automation:
No
Support Suite Ticket:
n/a
Hardware Configuration:
ChangeLog Required:
No

Description

Rebooted the machine before bed; wired memory was at 4 GB. When I woke up, free RAM was down to 0.25 GB.

Setup: 3 x 2 TB drives in a ZFS RAIDZ1 pool
16 GB RAM
Intel Core i5-4590S

Plex is installed in an iocage jail. A VM has been created but not started.
The issue only started after moving to the 11.2 nightlies and upgrading the pool.


Related issues

Related to FreeNAS - Bug #26480: Add a seatbelt for the amount of memory on the host machine available for VM guests (Done)

History

#1 Updated by Joe Langa over 1 year ago

Update 6-4: The issue continued and FreeNAS became unresponsive overnight. I ran hard drive tests on my storage pool and all tests passed. However, I feel something changed when I upgraded my pool after going to 11.2. On my share I created a new folder (via Windows Explorer); when I delete that folder, FreeNAS crashes completely and hard-reboots the machine. Attempting rmdir in the Shell does the same thing. I am going to delete the pool and recreate it to see if that fixes the issue.

#2 Updated by Dru Lavigne over 1 year ago

  • Private changed from No to Yes
  • Reason for Blocked set to Need additional information from Author

Joe: please attach a debug (System -> Advanced -> Save Debug) to this ticket.

#3 Updated by Joe Langa over 1 year ago

  • File debug.tgz added

#4 Updated by Joe Langa over 1 year ago

Dru Lavigne wrote:

Joe: please attach a debug (System -> Advanced -> Save Debug) to this ticket.

Thanks. I added a debug.

Please note I have reinstalled a nightly via the ISO onto a new USB boot drive (mirror), ran a hard drive test on each disk (short / conveyance), and ran a RAM test. I went as far as removing my SSD pool and recreating it, to make sure none of the jails I had created were responsible.
I am trying to rule out hardware, and the issue still persists.

#5 Updated by Dru Lavigne over 1 year ago

  • Assignee changed from Release Council to Alexander Motin

#6 Updated by Alexander Motin over 1 year ago

  • Assignee changed from Alexander Motin to Marcelo Araujo

What I see in the debug is that, out of your 16 GB of memory, the ZFS ARC has consumed almost 12 GB for data plus some more for metadata, which explains the 15 GB of wired memory. That pushed the bhyve VM (which wants 4 GB), the not-so-small Plex, and other things out to swap, which then overflowed, and the system started killing random applications.

Forcing ZFS to shrink its caches under low memory is unfortunately a known pain point. That is what I guess could have changed in 11.2; it is worth investigating. Meanwhile, Marcelo, do I remember correctly that we were going to automatically reduce the maximum ARC size on VM startup? It is something of a workaround, but it would help us for some time.
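
For reference, a minimal sketch of how one might compare these numbers on the host, assuming a FreeBSD/FreeNAS box with the standard ZFS sysctl nodes (the sysctl names below are the stock FreeBSD ones; this is not FreeNAS middleware code):

# Minimal diagnostic sketch: compare total RAM, wired memory, and ARC size.
# Assumes a FreeBSD/FreeNAS host with the standard ZFS sysctl nodes.
import subprocess

def sysctl_bytes(name):
    """Read a numeric sysctl value and return it as an int."""
    out = subprocess.run(["sysctl", "-n", name],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

page_size = sysctl_bytes("hw.pagesize")
total     = sysctl_bytes("hw.physmem")
wired     = sysctl_bytes("vm.stats.vm.v_wire_count") * page_size
arc_size  = sysctl_bytes("kstat.zfs.misc.arcstats.size")
arc_max   = sysctl_bytes("vfs.zfs.arc_max")

gib = 1024 ** 3
print(f"RAM total: {total / gib:5.1f} GiB")
print(f"Wired    : {wired / gib:5.1f} GiB")
print(f"ZFS ARC  : {arc_size / gib:5.1f} GiB (cap {arc_max / gib:.1f} GiB)")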

#7 Updated by Marcelo Araujo over 1 year ago

  • Status changed from Unscreened to Ready for Testing
  • Target version changed from Backlog to 11.2-RC2
  • Severity changed from New to Low Medium
  • Reason for Blocked changed from Need additional information from Author to Other: make a note in comments

Alexander Motin wrote:

What I see in the debug is that, out of your 16 GB of memory, the ZFS ARC has consumed almost 12 GB for data plus some more for metadata, which explains the 15 GB of wired memory. That pushed the bhyve VM (which wants 4 GB), the not-so-small Plex, and other things out to swap, which then overflowed, and the system started killing random applications.

Forcing ZFS to shrink its caches under low memory is unfortunately a known pain point. That is what I guess could have changed in 11.2; it is worth investigating. Meanwhile, Marcelo, do I remember correctly that we were going to automatically reduce the maximum ARC size on VM startup? It is something of a workaround, but it would help us for some time.

Yes, that is correct, Alexander: there is a seat-belt for VMs vs. the ZFS ARC. It is in MASTER already, but if the ARC is already using the memory and we tune 'vfs.zfs.arc_max' down to a lower value to give memory to a VM, it takes a bit of time for ZFS to give that memory back to the system.

But we do have that workaround in place.
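
As a rough illustration of that seat-belt idea (lowering 'vfs.zfs.arc_max' by the guest's memory size before a VM starts), here is a hedged sketch; the function names, the 1 GiB floor, and the direct sysctl calls are assumptions for illustration, not the actual FreeNAS implementation:

# Hedged sketch of the "seat-belt" described above: before starting a bhyve
# guest, lower vfs.zfs.arc_max so the guest's memory does not collide with
# the ARC. Needs root; names and the floor value are illustrative only.
import subprocess

MIN_ARC = 1 * 1024 ** 3  # illustrative floor: keep at least 1 GiB of ARC

def sysctl_get(name):
    out = subprocess.run(["sysctl", "-n", name],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

def sysctl_set(name, value):
    subprocess.run(["sysctl", f"{name}={value}"], check=True)

def shrink_arc_for_vm(vm_bytes):
    """Lower vfs.zfs.arc_max by the VM's memory size, with a floor.

    Note: as described above, the ARC only shrinks gradually after the cap
    is lowered, so the freed memory is not available to the guest immediately.
    """
    current = sysctl_get("vfs.zfs.arc_max")
    new_max = max(current - vm_bytes, MIN_ARC)
    if new_max < current:
        sysctl_set("vfs.zfs.arc_max", new_max)
    return new_max

# Example: a guest configured with 4 GiB of RAM
shrink_arc_for_vm(4 * 1024 ** 3)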

#8 Updated by Dru Lavigne over 1 year ago

  • Related to Bug #26480: Add a seatbelt for the amount of memory on the host machine available for VM guests added

#9 Updated by Dru Lavigne over 1 year ago

  • File deleted (debug.tgz)

#10 Updated by Dru Lavigne over 1 year ago

  • Status changed from Ready for Testing to Closed
  • Target version changed from 11.2-RC2 to N/A
  • Private changed from Yes to No
  • Reason for Closing set to Duplicate Issue
