Bug #34056

Unmount datasets before deletion

Added by Brandon Schneider 11 months ago. Updated 6 months ago.

Status:
Done
Priority:
No priority
Assignee:
Brandon Schneider
Category:
Middleware
Target version:
Severity:
Medium
Reason for Closing:
Reason for Blocked:
Needs QA:
No
Needs Doc:
No
Needs Merging:
No
Needs Automation:
No
Support Suite Ticket:
n/a
Hardware Configuration:
ChangeLog Required:
No

Description

Not unmounting datasets before deletion results in a 'dataset is busy' exception if they are mounted.
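
A minimal sketch of the fix, assuming the py-libzfs bindings the v2 middleware uses; the wrapper function and the dataset name are illustrative, not the actual middleware code:

import libzfs

def delete_dataset(name):
    # Deleting a mounted dataset directly raises a libzfs "dataset is busy"
    # error, so unmount it first, which is what this ticket implements.
    with libzfs.ZFS() as zfs:
        ds = zfs.get_dataset(name)
        if ds.mountpoint:  # only unmount if currently mounted
            ds.umount()    # plain unmount, no force
        ds.delete()        # libzfs destroy

delete_dataset('tank/mydata')  # 'tank/mydata' is a hypothetical dataset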

cannot umount device busy.png (34.8 KB) Rishabh Chauhan, 06/06/2018 07:03 AM
dataset deleted.png (42 KB) Rishabh Chauhan, 06/06/2018 07:03 AM

Related issues

Related to FreeNAS - Feature #36303: officially supported kubernetes provisioner (Closed)

Associated revisions

Revision 9eb9a493 (diff)
Added by Brandon Schneider 11 months ago

fix(zfs/dataset): Umount before deletion

Ticket: #34056

Revision 55ea9925 (diff)
Added by Brandon Schneider 11 months ago

fix(zfs/dataset): Umount before deletion

Ticket: #34056

Revision 58b68be7 (diff)
Added by Brandon Schneider 11 months ago

fix(zfs/dataset): Umount before deletion

Ticket: #34056

Revision d2974262 (diff)
Added by Brandon Schneider 11 months ago

fix(zfs/dataset): Umount before deletion (#1293)

Ticket: #34056

History

#1 Updated by Brandon Schneider 11 months ago

  • Status changed from Unscreened to In Progress

#2 Updated by Brandon Schneider 11 months ago

  • Status changed from In Progress to Ready for Testing

#3 Updated by Dru Lavigne 11 months ago

  • Subject changed from middlewared zfs.dataset.delete does not umount datasets before deletion to Unmount datasets before deletion
  • Needs Doc changed from Yes to No
  • Needs Merging changed from Yes to No

#4 Updated by Rishabh Chauhan 11 months ago


FreeNAS-11.2-MASTER-201806050447: I tried to delete a dataset, which worked without the busy exception, and when I tried to delete a dataset that was busy, it was unmounted before deletion. I think this passes the test.
Refer to the attached images.

#5 Updated by Dru Lavigne 11 months ago

  • Status changed from Passed Testing to Done

#6 Updated by Travis Hansen 6 months ago

Does this logic also apply to the v1 API?

#7 Updated by Brandon Schneider 6 months ago

No, v2 only.

#8 Updated by Travis Hansen 6 months ago

Is there any way we can get this landed in the v1 code path as well? It would be very helpful for the work mentioned here: https://redmine.ixsystems.com/issues/36303

#9 Updated by Travis Hansen 6 months ago

I did a bit of digging and this seems to do the trick for me (forgive me, I'm not a Python guy and not at all familiar with the FreeNAS code base):


--- /usr/local/www/freenasUI/middleware/notifier.py-backup    2018-08-04 16:21:00.178637316 -0600
+++ /usr/local/www/freenasUI/middleware/notifier.py    2018-10-21 21:30:18.719109927 -0600
@@ -925,6 +925,9 @@
                 except:
                     log.warn('Failed to delete plugins', exc_info=True)

+            zfsproc = self._pipeopen("zfs unmount -f '%s'" % (mp))
+            retval = zfsproc.communicate()[1]
+
             if recursive:
                 zfsproc = self._pipeopen("zfs destroy -r '%s'" % (path))
             else:
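
The hunk above discards whatever the unmount returns; a slightly more defensive variant (still only a sketch, reusing self._pipeopen, mp, and log from the surrounding notifier.py context, and assuming _pipeopen returns a Popen-like object) would log a failed unmount before attempting the destroy:

zfsproc = self._pipeopen("zfs unmount -f '%s'" % (mp,))
retval = zfsproc.communicate()[1]  # capture stderr from the unmount
if zfsproc.returncode != 0:
    # don't abort; the zfs destroy below may still succeed, but leave a trace
    log.warn('Failed to unmount %s before destroy: %s', mp, retval)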

#10 Updated by William Grzybowski 6 months ago

Forcing unmount is not a practice we are going to use.

If you are facing issues with the v1 API, please create a ticket with a reproduction case so we can investigate and find a proper fix.

#11 Updated by Travis Hansen 6 months ago

Are you referring to the -f option? I'm unclear what the purpose of this ticket is, if not that. Datasets already delete fine when mounted but not busy... so why unmount a non-busy dataset if it will already delete?

In any case, my use case is relatively simple. In the context of the kubernetes provisioner, its purpose is to automate the provisioning and deprovisioning of storage with FreeNAS. In essence you say (cluster = kubernetes in this example), 'hey cluster, give me space' and it does. When you're done you say 'hey cluster, I'm done with this' and it gets removed. Again, all automated.

With that context, my specific example is this:

1. Hey cluster, give me space
2. Space is provisioned, nfs share created, details returned to kubernetes
3. kubernetes node(s) attach to the nfs share and use the space
4. at least 1 kubernetes node using the share dies uncleanly
5. Hey cluster, I'm done with this
6. nfs share is removed; dataset removal is attempted but fails with 'device busy' because a stale nfs handle is lying around

If #4 doesn't happen then all is well, but I'm trying to make the thing as bullet-proof as possible and have hit this exact scenario a handful of times already within a relatively short period of semi-production use.

#12 Updated by William Grzybowski 6 months ago

Travis Hansen wrote:

> Are you referring to the -f option? I'm unclear what the purpose of this ticket is, if not that. Datasets already delete fine when mounted but not busy... so why unmount a non-busy dataset if it will already delete?

Look at the PR; it's for libzfs usage in the v2 internal API. It mimics zfs destroy behavior.

> In any case, my use case is relatively simple. In the context of the kubernetes provisioner, its purpose is to automate the provisioning and deprovisioning of storage with FreeNAS. In essence you say (cluster = kubernetes in this example), 'hey cluster, give me space' and it does. When you're done you say 'hey cluster, I'm done with this' and it gets removed. Again, all automated.
>
> With that context, my specific example is this:
>
> 1. Hey cluster, give me space
> 2. Space is provisioned, nfs share created, details returned to kubernetes
> 3. kubernetes node(s) attach to the nfs share and use the space
> 4. at least 1 kubernetes node using the share dies uncleanly
> 5. Hey cluster, I'm done with this
> 6. nfs share is removed; dataset removal is attempted but fails with 'device busy' because a stale nfs handle is lying around
>
> If #4 doesn't happen then all is well, but I'm trying to make the thing as bullet-proof as possible and have hit this exact scenario a handful of times already within a relatively short period of semi-production use.

The way I see it, it's doing what it's supposed to do: it's preventing the deletion of a dataset that is in use. Blindly forcing the deletion and ignoring that a process is accessing it, without giving the user proper warning, is wrong IMO.

You could file a feature request to allow a force option to be passed through.

#13 Updated by Travis Hansen 6 months ago

OK, I've created the feature request here: https://redmine.ixsystems.com/issues/53039

I agree pushing it down from the API level is ideal. I may look at the code a bit deeper and see if it's possible in v1. IMHO that's what this ticket should be about, since unmounting an unused dataset before deletion is meaningless in my experience. Maybe there are situations where it's applicable though.

#14 Updated by William Grzybowski 6 months ago

Travis Hansen wrote:

> OK, I've created the feature request here: https://redmine.ixsystems.com/issues/53039
>
> I agree pushing it down from the API level is ideal. I may look at the code a bit deeper and see if it's possible in v1. IMHO that's what this ticket should be about, since unmounting an unused dataset before deletion is meaningless in my experience. Maybe there are situations where it's applicable though.

You missed the point: libzfs is different from the zfs cli tool.

libzfs destroy = destroy dataset
zfs cli destroy = libzfs umount + libzfs destroy

We are handling libzfs here to behave like zfs destroy.
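
In py-libzfs terms (illustrative fragments only; ds stands for a ZFSDataset handle and the method names are assumed from the bindings), the two lines above map to:

# libzfs destroy = destroy dataset:
ds.delete()

# zfs cli destroy = libzfs umount + libzfs destroy:
ds.umount()
ds.delete()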

#15 Updated by Travis Hansen 6 months ago

Ah! Understood on the distinction. That makes more sense, particularly if the legacy UI was switched to use the v2 API (i.e. the attached screenshots). Thanks for the clarification.

#16 Updated by Dru Lavigne 6 months ago

  • Related to Feature #36303: officially supported kubernetes provisioner added
