Bug #15039

UNMAP error on VMWare ESXi 6.0

Added by Xuridisa Support about 5 years ago. Updated about 4 years ago.

Status: Resolved
Priority: Nice to have
Assignee: Alexander Motin
Category: OS
Target version: 11.0-RC
Seen in: 9.10.2-U3
Severity: New
Reason for Closing:
Reason for Blocked:
Needs QA: Yes
Needs Doc: Yes
Needs Merging: Yes
Needs Automation: No
Support Suite Ticket: n/a
Hardware Configuration:
ChangeLog Required: No

Description

FreeNAS 9.10 (with the latest updates applied) running as a VM on VMware ESXi 6.0 Update 2. Storage is provided by VMDK virtual disks; I tried both the LSI Parallel and LSI SAS virtual HBA adapters. I also tried varying vDisk sizes for the disk used to create a volume, with the same results.

When attempting to create a simple volume (single disk), clicking the button to complete the action produces errors on the VM console and the volume is not created.

Error message screenshot is attached.

I don't recall the exact version, but this has worked OK previously, though that would also have been on an earlier ESXi version.

Screen Shot 2016-04-29 at 9.04.20 AM.PNG (140 KB), Console Error Message, Xuridisa Support, 04/28/2016 02:19 PM
Screen Shot 2016-05-03 at 8.43.22 AM.PNG (108 KB), Xuridisa Support, 05/02/2016 02:05 PM

History

#1 Updated by Xuridisa Support about 5 years ago

I can add a bit more to this after some further testing. What I have done just now is step up the size of the vDisk used to create a volume:

32GB OK
256GB OK
512GB OK
1TB OK
1.75TB OK
2.0TB FAIL
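
For anyone reproducing this size-stepping test, a minimal sketch from the ESXi shell follows; the datastore path and file names are hypothetical (and the thin format is an assumption, not stated by the reporter), so adjust for your environment:

# Create test vDisks just under and over the 2TB boundary (thin format assumed)
vmkfstools -c 1990G -d thin /vmfs/volumes/datastore1/freenas-test/under2tb.vmdk
vmkfstools -c 2048G -d thin /vmfs/volumes/datastore1/freenas-test/over2tb.vmdk
# Attach each disk to the FreeNAS VM in turn and attempt volume creation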

#2 Updated by Xuridisa Support about 5 years ago

Out of interest I created a vDisk specified as 1.99TB and that also worked, so it's definitely something to do with the 2TB boundary.

Anyway, hope this helps and you can get it resolved. I want to create at least an 8TB volume and don't want to specify multiple vDisks, as they are all on the same array and would just multiply the IOPS.

#3 Updated by Alexander Motin about 5 years ago

  • Project changed from 9 to FreeNAS
  • Category changed from 184 to 129
  • Status changed from Unscreened to 15
  • Seen in changed from to 9.10-STABLE-201604261518

The attached screenshot reports a non-fatal error -- ZFS should just step back and not try to use UNMAP any more. I don't think it should cause a problem for volume creation. Do you receive any other errors there? I don't have VMware 6.0 access right now to check, so it would be good if you could attach a debug information archive from your FreeNAS instance after you reproduce the problem.

#4 Updated by Xuridisa Support about 5 years ago


There are no other errors reported to the console at all after those in the screenshot. What seems to happen, though, is: 1) the CPU jumps up to approx. 50% (only 2 vCPUs allocated at the moment), 2) the reported disk operations are high on the disk (see attachment), and 3) the system seems to basically become unresponsive. For instance, a Save Debug doesn't complete after this error.

In order to Save Debug after the event I had to reboot the host. Let me know if there is anything else I can provide. Very keen to help get this solved.

#5 Updated by Xuridisa Support about 5 years ago

  • File debug-freenas-20160502140037.tgz added

Forgot to attach the debug.

#6 Updated by Xuridisa Support about 5 years ago

Just wondering if I can provide anything else here? I thought this might have been a high-priority case, so I'm keen to help if I can.

#7 Updated by Jordan Hubbard about 5 years ago

Not sure how much headway we'll be able to make on this considering that it's running under virtualization. Yeah, it's possible to do this for testing purposes, but not production.

#8 Updated by Xuridisa Support about 5 years ago

Seriously? There must be hundreds of production deployments of this in fully virtualised environments, without passing through physical hardware adapters etc.

#9 Updated by Jordan Hubbard about 5 years ago

Seriously. The entire point of FreeNAS is ZFS data integrity, using physical drives and the SMART tests which indicate when physical media failure is imminent (as well as features like drive hot-swap). Running FreeNAS on top of what is essentially a Linux ext2fs filesystem (or whatever VMWare is using these days) negates all of those advantages and, as you have seen, adds an additional failure path which requires debugging VMWare's behavior as well as FreeNAS'. We do NOT recommend running FreeNAS virtualized for production scenarios where data integrity is important at all.

#10 Updated by Josh Paetzel about 5 years ago

Try setting a loader tunable in the GUI with:

vfs.zfs.trim.enabled=0

Then reboot the system and try creating the pool again.

It looks like it's trying to TRIM the disks and VMWare is having none of that.

(ZFS does have a TRIM-on-init feature where it zeros out any block devices you give it that claim to support TRIM.)
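
For reference, a minimal shell sketch of the same change, assuming direct loader.conf access; on FreeNAS the GUI tunable screen is the supported route, since it persists the setting in the configuration database:

# Disable ZFS TRIM/UNMAP at boot, then reboot
echo 'vfs.zfs.trim.enabled=0' >> /boot/loader.conf.local
reboot
# After reboot, confirm the tunable took effect (0 = disabled)
sysctl vfs.zfs.trim.enabled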

#11 Updated by Xuridisa Support about 5 years ago

@josh

I can confirm that setting that tunable did result in the volume being created. I tested with both a 2TB and an 8TB disk, both working perfectly. Thanks very much, and pleased it was such a simple config change.

@jordan

I do appreciate what you're saying here. I have historically worked with ZFS on Sun enterprise hardware. Consider the use case, however, where you already have an enterprise array at the back end of a virtualisation platform. In that case the fault tolerance provided by ZFS is not required at all. Rather, the use case may be to have a very functional and capable virtual appliance that can provide file services etc. to other consumers. For me specifically, in this case I am looking to leverage the snapshot and replication capability.

#12 Updated by Josh Paetzel about 5 years ago

  • Status changed from 15 to Closed: Third party to resolve

ZFS can be safely virtualized, but it does require some steps. It's not just a matter of not needing the ZFS features: if the back-end disks change underneath ZFS, that can actually cause ZFS to blow up the volume.

I've written a blog post on this, at http://www.freenas.org/blog/yes-you-can-virtualize-freenas/, that helps you avoid inadvertently doing things that could be hazardous to your pool's health.

#13 Updated by Jordan Hubbard about 5 years ago

Please note that the cited blog post is currently marked "broken" and is queued for deletion due to a web migration issue which damaged a number of our blog posts, sorry.

#14 Updated by Andy Stenger over 4 years ago

I had the same issue with unmap errors on the two FreeNAS OS disks.

I fixed it by going from "thin" to "thick (lazy zeroed)" provisioning.

Here are the gory details:

Starting configuration that generated the unmap errors:
FreeNAS 9.10.2-U2, OS disks on thin-provisioned VMDK disks (16 GB each), on ESXi 6.5 with a local direct-attached SSD formatted VMFS6. All data (non-OS) disks are on PCI-passthrough LSI controllers. There were no unmap errors on the data disks, only on the FreeNAS OS VMDK disks.

I figured UNMAP on a thin-provisioned disk would logically work and keep the disk slim, but obviously I was wrong.

Steps to resolve:
0) Safety first: evacuate all datastores that use the FreeNAS VM as the underlying storage.
1) Shut down the FreeNAS VM; vSphere does not allow storage vMotion on a live VM that uses PCI passthrough.
2) Migrate the VM to a different datastore, away from the local SSD; change storage only, keeping "same format as source" virtual disk format since this move is temporary.
3) Migrate the VM back to the original SSD datastore; change storage only, this time selecting "Thick Provision Lazy Zeroed" virtual disk format.
4) Start the FreeNAS VM.

I am assuming that "Thick Provision Eager Zeroed" would work as well, but I have not tried it.
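
As an alternative to the two storage migrations, the conversion can also be sketched with vmkfstools from the ESXi shell while the VM is powered off. The paths below are hypothetical, and this clones the disk rather than converting it in place, so treat it as an untested sketch:

# Clone the thin OS disk to a lazy-zeroed thick copy (paths are examples)
vmkfstools -i /vmfs/volumes/ssd-ds/freenas/freenas.vmdk -d zeroedthick /vmfs/volumes/ssd-ds/freenas/freenas-thick.vmdk
# Re-point the VM at the new disk, verify it boots, then delete the thin original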

Hope this helps!

#15 Updated by Alexander Motin over 4 years ago

  • Subject changed from FreeNAS 9.10 on VMWare ESXi 6.0 unable to create Volume to UNMAP error on VMWare ESXi 6.0
  • Status changed from Closed: Third party to resolve to Resolved
  • Priority changed from No priority to Nice to have
  • Target version set to 11.0
  • Seen in changed from 9.10-STABLE-201604261518 to 9.10.2-U3

Current nightly builds and the upcoming FreeNAS 11.0 include a workaround for this, allowing UNMAP to function for VMs backed by thin-provisioned disks, actually freeing thin-provisioned disk space on deletion.
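
For anyone wanting to verify that UNMAP is working after updating, FreeBSD's ZFS exposes TRIM counters via sysctl; a quick check, assuming the FreeBSD 10/11-era counter names:

# Non-zero 'success' alongside zero 'failed' suggests UNMAP is being issued and accepted
sysctl kstat.zfs.misc.zio_trim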

#16 Updated by Vaibhav Chauhan about 4 years ago

  • Target version changed from 11.0 to 11.0-RC

#17 Updated by Dru Lavigne over 3 years ago

  • File deleted (debug-freenas-20160502140037.tgz)
