Bug #23386

Even Out Stripe After Pool Expansion

Added by Kevan Brown over 3 years ago. Updated about 3 years ago.

Status:
Closed: Third party to resolve
Priority:
No priority
Assignee:
-
Category:
OS
Target version:
Seen in:
Severity:
New
Reason for Closing:
Reason for Blocked:
Needs QA:
Yes
Needs Doc:
Yes
Needs Merging:
Yes
Needs Automation:
No
Support Suite Ticket:
n/a
Hardware Configuration:
ChangeLog Required:
No

Description

After adding another pair of disks to a ZFS RAID10 pool, is there a way to "even out" the stripe for existing data across all of the drives? I see uneven usage across the disks after the expansion, which leads me to believe I'm not getting the full performance of the RAID10 configuration. The pool was originally 6 disks and is now 8.
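
The uneven allocation described here can be confirmed from the command line. A minimal sketch, assuming a pool named tank; the pool name and polling interval are placeholders, not taken from this ticket:

    # Show space allocated per vdev; after adding a mirror, the new
    # vdev reports far less ALLOC than the original mirrors.
    zpool list -v tank

    # Watch per-vdev I/O; reads of existing data keep hitting the
    # original mirrors, so the new vdev sees little read traffic.
    zpool iostat -v tank 5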

History

#1 Updated by Kevan Brown over 3 years ago

  • File debug-nas-20170414132502.txz added

#2 Updated by Sean Fagan over 3 years ago

  • Status changed from Unscreened to Closed: Third party to resolve

No. ZFS cannot migrate blocks around; data that is already written stays on the vdevs it was originally allocated to.
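
A common workaround, since existing blocks cannot be rebalanced in place, is to rewrite the data so that the new copies stripe across all of the vdevs. A rough sketch using send/receive, assuming a dataset named tank/data and enough free space for a second copy (the names here are hypothetical, not from this ticket):

    # Snapshot and replicate the dataset so its blocks are rewritten
    # across all of the pool's mirrors, then swap it into place.
    zfs snapshot tank/data@rebalance
    zfs send tank/data@rebalance | zfs receive tank/data-new
    zfs rename tank/data tank/data-old
    zfs rename tank/data-new tank/data
    # Once the new copy is verified, the old dataset can be removed:
    # zfs destroy -r tank/data-old

Only data that is actually rewritten gets redistributed; the pool layout itself is unchanged.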

#3 Updated by Kris Moore about 3 years ago

  • Seen in changed from Unspecified to N/A

#4 Updated by Dru Lavigne about 3 years ago

  • File deleted (debug-nas-20170414132502.txz)

#5 Updated by Dru Lavigne about 3 years ago

  • Target version set to N/A
  • Private changed from Yes to No
