Bug #8079

Replacing a RAID-Z disk using the Web Interface is not working.

Added by Dimitar Boyn over 5 years ago. Updated about 4 years ago.

Status:
Resolved
Priority:
Nice to have
Assignee:
William Grzybowski
Category:
-
Target version:
Seen in:
Severity:
New
Reason for Closing:
Reason for Blocked:
Needs QA:
Yes
Needs Doc:
Yes
Needs Merging:
Yes
Needs Automation:
No
Support Suite Ticket:
n/a
Hardware Configuration:
ChangeLog Required:
No

Description

This looks a lot like what is described in https://bugs.freenas.org/issues/4017

I will attach screenshots and additional info from the shell.
I have 3 spares configured, but the replace procedure in the GUI does not seem able to pick the gptid of the failed drive (which is available in the shell) and/or the spare drives.

One recommendation, if I may: we should be more consistent and stick to a single naming convention throughout (glabels, gptids, etc.).
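
For reference, a minimal Python sketch (not part of FreeNAS; the function name and parsing details are illustrative) of how the gptid labels that zpool status reports can be mapped back to device nodes by parsing the three-column (Name, Status, Components) output of glabel status:

import subprocess

def gptid_to_device():
    # Map gptid/... labels to their backing /dev nodes by parsing
    # "glabel status", whose output has three columns:
    # Name, Status, Components.
    out = subprocess.check_output(["glabel", "status"], text=True)
    mapping = {}
    for line in out.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 3 and fields[0].startswith("gptid/"):
            mapping[fields[0]] = "/dev/" + fields[2]
    return mapping

# Example: look up the failed member from the status output below.
# gptid_to_device().get("gptid/4beaf14b-6608-11e4-8ec2-0060dd459a62")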

Failed Drive Screen.JPG (124 KB): failed drive placement in the GUI. Dimitar Boyn, 02/18/2015 12:54 PM
Impossible to Replace.JPG (92.6 KB): empty list of drives to use for replacement. Dimitar Boyn, 02/18/2015 12:55 PM
Pool in GUI after shell replace.JPG (117 KB): the spare disk in use is now shown as "spare5" and a number. Dimitar Boyn, 02/18/2015 01:35 PM

Associated revisions

Revision 23de9af1 (diff)
Added by William Grzybowski over 5 years ago

Allow use of spares to replace a disk
Ticket: #8079

Revision 97506c3b (diff)
Added by William Grzybowski over 5 years ago

Allow use of spares to replace a disk
Ticket: #8079
(cherry picked from commit 23de9af149810c9f95eb05de4d0513ab03adaced)

Revision aac3bdb7 (diff)
Added by William Grzybowski over 5 years ago

Allow use of spares to replace a disk
Ticket: #8079

History

#2 Updated by Dimitar Boyn over 5 years ago

[root@xxx] ~# zpool status
  pool: xpool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: scrub repaired 0 in 0h0m with 0 errors on Sun Jan 25 00:00:02 2015
config:

        NAME                                            STATE     READ WRITE CKSUM
        dpool                                           DEGRADED     0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/43d5067f-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/447ca76d-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/45270b9b-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/45d25972-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/467cd7e5-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/47281d07-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/47d4bcfc-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
          raidz1-1                                      DEGRADED     0     0     0
            gptid/48958267-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/493f55ad-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/49ec58dd-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/4a96e59a-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/4b41bd42-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            14782184810721386656                        UNAVAIL      6     4     0  was /dev/gptid/4beaf14b-6608-11e4-8ec2-0060dd459a62
            gptid/4c94b3b9-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
          raidz1-2                                      ONLINE       0     0     0
            gptid/4d567427-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/4dfd8eca-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/4ea8879f-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/4f54d4d0-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/5000886c-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/50ab25b8-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/5154b9aa-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
        spares
          gptid/5214024e-6608-11e4-8ec2-0060dd459a62    AVAIL
          gptid/52bd6ae8-6608-11e4-8ec2-0060dd459a62    AVAIL
          gptid/c1e45861-6629-11e4-a476-0060dd459a62    AVAIL

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Fri Feb  6 14:50:17 2015
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada0p2    ONLINE       0     0     0

errors: No known data errors

#3 Updated by Dimitar Boyn over 5 years ago

Doing the replace from the shell seemed to work, but it confuses the WebGUI report even more:

[root@xxx] ~# zpool replace xpool gptid/4beaf14b-6608-11e4-8ec2-0060dd459a62 gptid/5214024e-6608-11e4-8ec2-0060dd459a62
[root@xxx] ~# zpool status xpool
  pool: xpool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: resilvered 480K in 0h0m with 0 errors on Wed Feb 18 12:51:38 2015
config:

        NAME                                              STATE     READ WRITE CKSUM
        dpool                                             DEGRADED     0     0     0
          raidz1-0                                        ONLINE       0     0     0
            gptid/43d5067f-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            gptid/447ca76d-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            gptid/45270b9b-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            gptid/45d25972-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            gptid/467cd7e5-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            gptid/47281d07-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            gptid/47d4bcfc-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
          raidz1-1                                        DEGRADED     0     0     0
            gptid/48958267-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            gptid/493f55ad-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            gptid/49ec58dd-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            gptid/4a96e59a-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            gptid/4b41bd42-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            spare-5                                       UNAVAIL      0     0     0
              14782184810721386656                        UNAVAIL      6     4     0  was /dev/gptid/4beaf14b-6608-11e4-8ec2-0060dd459a62
              gptid/5214024e-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/4c94b3b9-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
          raidz1-2                                        ONLINE       0     0     0
            gptid/4d567427-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            gptid/4dfd8eca-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            gptid/4ea8879f-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            gptid/4f54d4d0-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            gptid/5000886c-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            gptid/50ab25b8-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
            gptid/5154b9aa-6608-11e4-8ec2-0060dd459a62    ONLINE       0     0     0
        spares
          5280964285202710711                             INUSE     was /dev/gptid/5214024e-6608-11e4-8ec2-0060dd459a62
          gptid/52bd6ae8-6608-11e4-8ec2-0060dd459a62      AVAIL
          gptid/c1e45861-6629-11e4-a476-0060dd459a62      AVAIL

errors: No known data errors

#4 Updated by Dimitar Boyn over 5 years ago

An additional detach in the shell is needed to normalize the situation in both the GUI and the shell:

[root@xxx] ~# zpool detach xpool 14782184810721386656
[root@xxx] ~# zpool status
  pool: xpool
 state: ONLINE
  scan: resilvered 480K in 0h0m with 0 errors on Wed Feb 18 12:51:38 2015
config:

        NAME                                            STATE     READ WRITE CKSUM
        dpool                                           ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/43d5067f-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/447ca76d-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/45270b9b-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/45d25972-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/467cd7e5-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/47281d07-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/47d4bcfc-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
          raidz1-1                                      ONLINE       0     0     0
            gptid/48958267-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/493f55ad-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/49ec58dd-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/4a96e59a-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/4b41bd42-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/5214024e-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/4c94b3b9-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
          raidz1-2                                      ONLINE       0     0     0
            gptid/4d567427-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/4dfd8eca-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/4ea8879f-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/4f54d4d0-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/5000886c-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/50ab25b8-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
            gptid/5154b9aa-6608-11e4-8ec2-0060dd459a62  ONLINE       0     0     0
        spares
          gptid/52bd6ae8-6608-11e4-8ec2-0060dd459a62    AVAIL
          gptid/c1e45861-6629-11e4-a476-0060dd459a62    AVAIL

errors: No known data errors

#5 Updated by William Grzybowski over 5 years ago

This is working as designed as far as I can see.

The replace dialog only shows disks that are not currently used by the pool. The spare disks are attached to the pool, and it should be the job of zfsd (?) to auto-replace with them.

I am not 100% sure the spare drives of a pool should show up in the replace disk dialog.

#6 Updated by Jordan Hubbard over 5 years ago

  • Category set to 21
  • Assignee set to Xin Li
  • Target version set to Unspecified

#7 Updated by Xin Li over 5 years ago

  • Category changed from 21 to 91
  • Assignee changed from Xin Li to William Grzybowski

Spare drives of a pool should absolutely show up as candidates in the replace disk dialog unless they are 'INUSE' (for instance, when a spare is already being used to replace another disk).
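
A minimal sketch of the filtering rule described here, with hypothetical helper and field names rather than the actual FreeNAS middleware API: data-vdev members are always excluded, while spares are hidden only while zpool reports them INUSE.

from dataclasses import dataclass

@dataclass
class Disk:
    gptid: str
    state: str = "AVAIL"  # zpool spare state: AVAIL or INUSE

def replacement_candidates(all_disks, data_member_gptids, spares):
    # Disks eligible to appear in the replace dialog: anything not in
    # the pool's data vdevs, plus spares that are still AVAIL. Only a
    # spare that is INUSE (already replacing another disk) is excluded.
    excluded = set(data_member_gptids)
    excluded |= {s.gptid for s in spares if s.state == "INUSE"}
    return [d for d in all_disks if d.gptid not in excluded]

With the pool state from comment #3, the two AVAIL spares would still be offered as replacements, while gptid/5214024e-6608-11e4-8ec2-0060dd459a62 (INUSE after the shell replace) would be hidden.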

#8 Updated by William Grzybowski over 5 years ago

  • Status changed from Unscreened to Screened

Why do we have spare disks at all if they are not actually used for anything other than holding the device?

#9 Updated by William Grzybowski over 5 years ago

  • Status changed from Screened to Ready For Release

I would appreciate some testing in the nightlies.

#10 Updated by Jordan Hubbard over 5 years ago

  • Status changed from Ready For Release to Resolved

#11 Updated by Dimitar Boyn over 5 years ago

Hey William,
Was any work actually done on this?
I see discussion here on what the actual requirements would be, and my point is the same as Xin Li explains.
I fixed my problem by hacking in the shell, which should not be expected of users, as a lot can go wrong...

#12 Updated by Kris Moore about 4 years ago

  • Target version changed from Unspecified to N/A
