Add hot spare description to Guide
Setup (general): 10 x 3TB drives in 5 x mirrored vdevs (production), 2 x 1TB SSD in 1 x mirrored vdev (production), 5 x 3TB in 1 x raidz vdev (backups), 1 x 3TB spare, 1 x 400GB PCIe SSD (SLOG), 1 x 240GB SSD (L2ARC)
Motherboard: Tyan S7012
CPU: 2 x E5620
Controller: 3 x IBM M1015 (16 x SATA III & 3 SSD)
Case: Norco RPC-2212
Drives: 16 x 7.2K SATA III (3TB)
1 x SSD (240GB L2ARC)
1 x Intel SSD 750 (400GB SLOG)
After a failed drive and a successful resilver, the failed/unavailable drive is still attached to the volume and the volume status is degraded. The system alert notification does not show any failed drives, and the spare drive is still attached to the vdev.
Related FreeNAS forum thread: https://forums.freenas.org/index.php?threads/not-sure-what-hard-drive-has-failed.57133/
#6 Updated by Alexander Motin about 1 year ago
I haven't used spare drives for some time, but if my memory serves me well, that may be correct operation. The spare drive may just be borrowed by the pool until the user makes a final decision. You can detach the lost drive, and the spare will then become a permanent part of the pool; or you can replace the failed drive with another, and after another resilver the spare will become a spare again. Though somebody should check that.
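The two outcomes described above can be sketched with standard zpool subcommands. This is a minimal sketch; the pool name (tank) and device names (da3, da9) are hypothetical placeholders, not taken from this system:

```shell
# Option 1: detach the failed drive. The hot spare that took its
# place is promoted to a permanent member of the vdev.
zpool detach tank da3

# Option 2: replace the failed drive with a new disk. After the
# resilver onto the new disk completes, the hot spare returns to
# the AVAIL state and is again available to the pool.
zpool replace tank da3 da9

# In either case, check progress and final state with:
zpool status tank
```

Until one of these is done, `zpool status` continues to show the pool as DEGRADED with the spare listed as INUSE, which matches the behavior reported in this ticket.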
#7 Updated by Alexander Motin about 1 year ago
I think my memory was right. Here is a guide to how it works on Solaris (it should be all the same aside from device naming): http://docs.oracle.com/cd/E19253-01/819-5461/6n7ht6qvt/index.html
#10 Updated by Dru Lavigne about 1 year ago
- Category changed from 129 to Documentation
- Status changed from Closed: Behaves correctly to Unscreened
- Assignee changed from William Grzybowski to Warren Block
- Target version set to 11.1
- Private changed from Yes to No
Reopening as this could be clearer in the documentation.
- Status changed from Unscreened to Resolved
- Target version changed from 11.1 to 11.1-BETA1