Prevent large file delete from starving out write operations
We are definitely going to want this once it's in OpenZFS.
#3 Updated by Alexander Motin about 3 years ago
We are indeed just a few commits behind illumos at the moment, but this commit is not one of them. It is still at the stage of an open pull request awaiting review. The review process has become terribly slow lately. My own prefetcher patch has been hanging in the queue since about autumn. :(
#5 Updated by Alexander Motin almost 3 years ago
- Status changed from 15 to Closed: Third party to resolve
Kris Moore wrote:
Is this merged in?
No. There is nothing to merge. It was recently closed on GitHub: "alek-p commented 17 days ago: Looks like the async delete part needs more work. I'll try to upstream the rest in pieces, hopefully this week."
#6 Updated by Josh Paetzel almost 3 years ago
- Status changed from Closed: Third party to resolve to Investigation
- Assignee changed from Alexander Motin to Josh Paetzel
- Target version changed from 9.10.1-U1 to 9.10.2
We really need this. If it won't be coming from openzfs we need to investigate a solution.
Doing an rm -rf on a large directory on an NFS server will bring it to its knees, as the txgs get immediately filled with frees and write I/O gets starved.
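A minimal sketch of reproducing the contention described above, assuming fio is available and the pool is mounted at a hypothetical /mnt/tank; paths, sizes, and runtimes are illustrative test parameters, not values from this ticket:

```shell
# Terminal 1: sustained async random-write load against the pool.
# Watch the reported write latency/IOPS while the delete runs.
fio --name=writer --directory=/mnt/tank/fio \
    --rw=randwrite --bs=128k --size=4g --ioengine=posixaio \
    --iodepth=16 --time_based --runtime=300 --group_reporting

# Terminal 2: large recursive delete on the same pool.
# The resulting frees land in the same txgs as the writes above,
# which is where the starvation shows up.
rm -rf /mnt/tank/big_directory
```

If the starvation occurs, the fio write bandwidth should drop sharply for the duration of the delete rather than degrading gracefully.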
#23 Updated by Nick Wolff over 1 year ago
Started working on testing this and am having trouble creating a workload that shows the behavior. Deleting a few hundred thousand small files of random data (100m) doesn't seem to make a noticeable difference to an async random-write fio test. This is true even with vfs.zfs.per_txg_dirty_frees_percent=0, which, as I understand it, should restore the older behavior.
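For reference, a sketch of the test setup on FreeBSD: the tunable above caps how much of the dirty-data limit each txg may spend on frees, and setting it to 0 disables the throttle. The dataset path, file count, and file size below are hypothetical test parameters:

```shell
# Inspect the current free-throttle setting (percent of the dirty-data
# limit each txg may spend on frees; 0 disables the throttle, i.e. the
# older, unthrottled behavior):
sysctl vfs.zfs.per_txg_dirty_frees_percent

# Disable the throttle for the duration of a test run:
sysctl vfs.zfs.per_txg_dirty_frees_percent=0

# Create a few hundred thousand small files of random data to delete
# later while the fio write test is running:
mkdir -p /mnt/tank/deltest
for i in $(seq 1 300000); do
    dd if=/dev/urandom of=/mnt/tank/deltest/f$i bs=1k count=1 2>/dev/null
done
```

One possible reason the effect is hard to reproduce with files this small is that the free throttle is driven by the amount of data being freed per txg, so many tiny files may not generate enough frees to crowd out the concurrent writes.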