Bug #23893

Enhance compressed ARC performance

Added by Jason Keller about 4 years ago. Updated almost 3 years ago.

Priority: Nice to have
Assignee: Alexander Motin


It seems that with the merge of compressed ARC into FreeBSD 10, the maximum IOPS that can be achieved on a LUN, regardless of CPU power, is now roughly 130,000 8k read IOPS in FreeNAS 9.10 and higher. For high-transaction databases like Elasticsearch this could have a negative impact. It can currently be worked around by setting vfs.zfs.compressed_arc_enabled=0 in /boot/loader.conf in 11-RC (due to a bug in 9.10.2, this workaround does not function there).
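For reference, the workaround mentioned above is a boot-loader tunable, so it only takes effect after a reboot. A minimal fragment (assuming FreeNAS 11-RC, where the tunable works):

```
# /boot/loader.conf -- disable compressed ARC (requires reboot)
vfs.zfs.compressed_arc_enabled=0
```

After rebooting, `sysctl vfs.zfs.compressed_arc_enabled` can be used to confirm the value took effect.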

More details here...

I didn't know which category to put this in exactly so I put it in iSCSI - please move as necessary.

builder-labstor02.svg (1.1 MB) - Nick Wolff, 03/12/2018 12:45 PM

Related issues

Copied to FreeNAS - Bug #32736: Change default ZFS indirect block size from 128 to 32 for new files and zvols - Done

Associated revisions

Revision 1878a3c6 (diff)
Added by Alexander Motin about 3 years ago

MFC r331711: MFV 331710:
9188 increase size of dbuf cache to reduce indirect block decompression
illumos/illumos-gate@268bbb2a2fa79c36d4695d13a595ba50a7754b76

With compressed ARC (6950) we use up to 25% of our CPU to decompress indirect blocks, under a workload of random cached reads. To reduce this decompression cost, we would like to increase the size of the dbuf cache so that more indirect blocks can be stored uncompressed.

If we are caching entire large files of recordsize=8K, the indirect blocks use 1/64th as much memory as the data blocks (assuming they have the same compression ratio). We suggest making the dbuf cache be 1/32nd of all memory, so that in this scenario we should be able to keep all the indirect blocks decompressed in the dbuf cache. (We want it to be more than the 1/64th that the indirect blocks would use because we need to cache other stuff in the dbuf cache as well.)

In real world workloads, this won't help as dramatically as the example above, but we think it's still worth it because the risk of decreasing performance is low. The potential negative performance impact is that we will be slightly reducing the size of the ARC (by ~3%).

Reviewed by: Dan Kimmel <>
Reviewed by: Prashanth Sreenivasa <>
Reviewed by: Paul Dagnelie <>
Reviewed by: Sanjay Nadkarni <>
Reviewed by: Allan Jude <>
Reviewed by: Igor Kozhukhov <>
Approved by: Garrett D'Amore <>
Author: George Wilson <>
Ticket: #23893
(cherry picked from commit 3b7774b01772fe050d4f69bf97497815f3010af9)
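The 1/64th figure in the commit message above follows from simple arithmetic; a quick sketch of that sizing argument (my own calculation, not the actual ZFS code):

```python
# With recordsize=8K, a 128K indirect block holds 1024 block pointers
# (128 bytes each), so it maps 1024 * 8K = 8 MB of data -- indirect
# metadata is therefore 1/64th the size of the data it maps.

KB = 1024

def indirect_to_data_ratio(recordsize, indirect_block=128 * KB, bp_size=128):
    """Memory used by one level of indirect blocks relative to the
    data blocks they point to."""
    pointers_per_indirect = indirect_block // bp_size   # 1024 pointers
    data_mapped = pointers_per_indirect * recordsize    # 8 MB per indirect block
    return indirect_block / data_mapped

print(indirect_to_data_ratio(8 * KB))   # 0.015625, i.e. 1/64
```

This is why the commit picks 1/32nd of memory for the dbuf cache: double the 1/64th minimum, leaving room for other dbuf cache users.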

Revision a8c26e0e (diff)
Added by Dru Lavigne about 3 years ago

Mention OS and ZFS improvements. Ticket: #28544 Ticket: #23893


#1 Updated by Alexander Motin about 4 years ago

  • Tracker changed from Umbrella to Feature
  • Subject changed from Enhance compressed ARC / iSCSI locking to Enhance compressed ARC performance
  • Category changed from 89 to 200
  • Status changed from Unscreened to Screened
  • Target version set to 11.1
  • % Done set to 0

From your data I haven't seen significant lock contention, so let's narrow the topic down to testing compressed ARC performance.

#2 Updated by Alexander Motin over 3 years ago

  • Priority changed from No priority to Nice to have
  • Target version changed from 11.1 to 11.2-BETA1

Compressed ARC performance is more a question for upstream FreeBSD or even OpenZFS; it is out of scope for FreeNAS, but I'll leave this ticket open, hoping to play with it later.

#3 Updated by Dru Lavigne over 3 years ago

  • Status changed from Screened to Not Started
  • Target version changed from 11.2-BETA1 to 11.3
  • Reason for Blocked set to Dependent on a related task to be completed

#4 Updated by Nick Wolff over 3 years ago

The CTL source code lists a potential performance improvement (see below): pulling data directly from the ARC to avoid a second buffer. This gets complicated by compressed ARC, but we need to make sure we are not adding additional memcpys into the code path, and ideally it would be nice to do this when the ARC isn't compressed.

ZFS ARC backend for CTL. Since ZFS copies all I/O into the ARC
(Adaptive Replacement Cache), running the block/file backend on top of a
ZFS-backed zdev or file will involve an extra set of copies. The
optimal solution for backing targets served by CTL with ZFS would be to
allocate buffers out of the ARC directly, and DMA to/from them directly.
That would eliminate an extra data buffer allocation and copy.

Attached is a flamegraph of a system doing about 400 MB/s of sequential reads (light load). The entire left half can be ignored, as it's all local VMs, but the memcpy above icl_sof_conn is related either to this ticket or to a memcpy that can be removed by the iSCSI DMA offload of Chelsio cards, as being worked on in #17698.
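The cost of the extra copy discussed above can be ballparked: each avoidable memcpy in the data path touches the payload twice (one read, one write). A rough estimate using the 400 MB/s figure from this comment (my arithmetic, assumed copy count, not a measurement):

```python
# Estimate the extra memory bandwidth consumed by avoidable buffer
# copies in a storage data path, on top of the unavoidable DMA traffic.

def extra_copy_bandwidth(throughput_mb_s, copies=1):
    """Each copy costs one read plus one write of the payload."""
    return throughput_mb_s * 2 * copies

# At 400 MB/s of iSCSI reads, one extra memcpy burns ~800 MB/s of
# memory bandwidth.
print(extra_copy_bandwidth(400))   # 800
```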

#5 Updated by Alexander Motin about 3 years ago

  • Tracker changed from Feature to Bug
  • Status changed from Not Started to In Progress
  • Target version changed from 11.3 to 11.2-RC2
  • Severity set to Low
  • Reason for Blocked deleted (Dependent on a related task to be completed)
  • Seen in set to 11.1-U4
  • ChangeLog Required set to No

I've merged a patch (it will be in 11.1-U5 and 11.2) removing the 100MB limit on the maximum amount of decompressed ARC data, allowing it to reach up to 3% of the ARC. It should reduce the decompression overhead for metadata and for some small part of very hot data.
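To put the change above in perspective, a quick comparison of the old fixed cap against the new 3%-of-ARC limit for a few example ARC sizes (illustrative numbers of my own, not from the patch):

```python
# Old behavior: decompressed (dbuf) cache capped at a fixed 100 MB.
# New behavior: cap scales to ~3% of the ARC, per comment #5.

MB = 1024 ** 2
GB = 1024 ** 3

OLD_CAP = 100 * MB
NEW_FRACTION = 0.03

for arc_bytes in (16 * GB, 64 * GB, 256 * GB):
    new_limit = arc_bytes * NEW_FRACTION
    print(f"ARC {arc_bytes // GB:>3} GB: cap {OLD_CAP // MB} MB -> {new_limit / MB:.0f} MB")
```

On any reasonably large system the scaled limit dwarfs the old 100 MB cap, which is why the patch helps hot-metadata workloads.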

We may also consider tuning the default indirect block size, but that is still being investigated.
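The indirect block size tuning mentioned here became ticket #32736 (128K to 32K). A rough sketch of the trade-off (my own arithmetic, assuming the standard 128-byte block pointer):

```python
# A smaller indirect block means less data to decompress per random
# read, but each indirect block maps less file data, so more of them
# are needed overall.

KB = 1024
BP_SIZE = 128  # bytes per ZFS block pointer

def pointers_per_indirect(indirect_block):
    return indirect_block // BP_SIZE

for size in (128 * KB, 32 * KB):
    n = pointers_per_indirect(size)
    print(f"{size // KB}K indirect block: {n} pointers, "
          f"maps {n * 8 * KB // (1024 * KB)} MB of 8K records")
```

A 128K indirect block maps 8 MB of 8K records; a 32K one maps 2 MB, so a single random read only forces decompression of a quarter as much indirect metadata.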

#7 Updated by Dru Lavigne about 3 years ago

  • Status changed from In Progress to Ready for Testing
  • Target version changed from 11.2-RC2 to 11.2-BETA1

#8 Updated by Dru Lavigne about 3 years ago

  • Copied to Bug #32736: Change default ZFS indirect block size from 128 to 32 for new files and zvols added

#9 Updated by Dru Lavigne about 3 years ago

  • Needs Doc changed from Yes to No
  • Needs Merging changed from Yes to No

#10 Updated by Eric Turgeon almost 3 years ago

  • Status changed from Ready for Testing to Passed Testing
  • Needs QA changed from Yes to No

Looks to work as expected.

#11 Updated by Dru Lavigne almost 3 years ago

  • Status changed from Passed Testing to Done
