Bug #32736: Change default ZFS indirect block size from 128 to 32 for new files and zvols
Measure performance effects of different indirect block sizes
We need to know how different indirect block sizes affect random rewrite, sequential rewrite, and large delete/unmap performance. Since the introduction of compressed ARC, an IBS of 128KB to optimize delete performance may be too high a price to pay for random access; 64KB or 32KB may be more reasonable.
#2 Updated by Nick Principe about 2 years ago
- Status changed from Unscreened to Blocked
- Reason for Blocked set to Dependent on a related task to be completed
Blocked pending completion of either:
- spectre/meltdown testing
- m-series write wall investigations
Mav, do you think all-flash testing would be illuminating enough for this? If so, I can use my AFA FreeNAS and not wait on a gap in TrueNAS testing availability.
#5 Updated by Nick Principe about 2 years ago
- Status changed from In Progress to Blocked
- Reason for Blocked changed from Dependent on a related task to be completed to On hold
Initial transactional SMB performance testing showed little variation across IBS settings. I've lost my testbed for a week or so while new hardware is shipped. Testing will resume once the new hardware is set up.
#7 Updated by Nick Principe about 2 years ago
- Status changed from Blocked to In Progress
- Reason for Blocked deleted (On hold)
During some other testing, we evaluated random 4K write IOPS across default_ibs settings, as well as unmap performance after that random write workload. It appears that going to default_ibs=15 (32KB) has few downsides.
Performance after 1 hour of 4K random writes:

         Ops/sec    R/T (ms)
ibs=14   38601.98   0.613
ibs=15   35033.48   0.676
ibs=16   27600.02   0.860
ibs=17   18524.93   1.284
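For reference, the ibs values in the table above are base-2 shifts, so the indirect block size in bytes is 2^ibs; this is an inference from the 128KB default corresponding to ibs=17 and the proposed 32KB corresponding to ibs=15. A minimal sketch of the mapping:

```python
# Sketch: map the ibs shift values used in this thread to block sizes.
# Assumption: block size = 2**ibs bytes (ibs=17 -> 128 KB default,
# ibs=15 -> proposed 32 KB).
for ibs in (14, 15, 16, 17):
    size_kb = (1 << ibs) // 1024
    print(f"ibs={ibs} -> {size_kb} KB indirect blocks")
```

Under that assumption, each step down in ibs halves the indirect block size, which is why the IOPS and response-time deltas between adjacent rows are a useful signal.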
LUN unmap performance, one LUN at a time