Bug #81498

Cannot view installed plugins; cannot install plugins. 'mountpoint' error

Added by Brandon Roberts over 1 year ago. Updated over 1 year ago.

Status:
Done
Priority:
No priority
Assignee:
Waqar Ahmed
Category:
Middleware
Seen in:
Severity:
New
Reason for Closing:
Reason for Blocked:
Needs QA:
No
Needs Doc:
No
Needs Merging:
No
Needs Automation:
No
Support Suite Ticket:
n/a
Hardware Configuration:
ChangeLog Required:
No

Description

Noticed the Plex UI had been checking for updates for a day, so I decided to restart FreeNAS. Plugins did not start back up. In the FreeNAS UI I get a 'mountpoint' error and no plugins show up. If I try to install a plugin, I get a similar error message.

2019-03-19 18_12_37-FreeNAS - freenas.png (32.1 KB), error message, Brandon Roberts, 03/19/2019 03:13 PM

Related issues

Copied to FreeNAS - Bug #83283: Warn when a custom dataset or zvol exists under templates/jails Ready for Testing
Copied to FreeNAS - Bug #83291: Cannot view installed plugins; cannot install plugins. 'mountpoint' error Closed

History

#1 Updated by Brandon Roberts over 1 year ago

  • File debug-freenas-20190317161236.txz added
  • Private changed from No to Yes

#2 Updated by Brandon Roberts over 1 year ago

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.6/concurrent/futures/process.py", line 175, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/local/lib/python3.6/site-packages/middlewared/worker.py", line 128, in main_worker
res = loop.run_until_complete(coro)
File "/usr/local/lib/python3.6/asyncio/base_events.py", line 468, in run_until_complete
return future.result()
File "/usr/local/lib/python3.6/site-packages/middlewared/worker.py", line 88, in _run
return await self._call(f'{service_name}.{method}', serviceobj, methodobj, params=args, job=job)
File "/usr/local/lib/python3.6/site-packages/middlewared/worker.py", line 81, in _call
return methodobj(*params)
File "/usr/local/lib/python3.6/site-packages/middlewared/worker.py", line 81, in _call
return methodobj(*params)
File "/usr/local/lib/python3.6/site-packages/middlewared/schema.py", line 668, in nf
return f(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/middlewared/plugins/jail.py", line 504, in list_resource
resource_list = iocage.list("all", plugin=True)
File "/usr/local/lib/python3.6/site-packages/iocage_lib/iocage.py", line 1179, in list
quick=quick
File "/usr/local/lib/python3.6/site-packages/iocage_lib/ioc_list.py", line 75, in list_datasets
_all = self.list_all(ds)
File "/usr/local/lib/python3.6/site-packages/iocage_lib/ioc_list.py", line 160, in list_all
mountpoint = jail.properties["mountpoint"].value
File "libzfs.pyx", line 2063, in libzfs.ZFSPropertyDict.__getitem__
KeyError: 'mountpoint'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 165, in call_method
result = await self.middleware.call_method(self, message)
File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 1141, in call_method
return await self._call(message['method'], serviceobj, methodobj, params, app=app, io_thread=False)
File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 1078, in _call
return await self._call_worker(serviceobj, name, *args)
File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 1105, in _call_worker
job,
File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 1036, in run_in_proc
return await self.run_in_executor(self.
_procpool, method, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 1021, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
KeyError: 'mountpoint'
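The traceback shows `list_all` indexing the ZFS properties mapping unconditionally (`jail.properties["mountpoint"].value`); a zvol sitting under the jails dataset has no 'mountpoint' property, so the lookup raises KeyError. The sketch below is a hypothetical illustration of that failure mode and the kind of defensive lookup later added as a fix; `FakeProp` and `safe_mountpoint` are stand-ins, not iocage's actual code.

```python
# Hypothetical sketch: a plain dict-style lookup of 'mountpoint' crashes on a
# zvol (which has no mountpoint property); a .get()-based lookup skips it.

def safe_mountpoint(properties):
    """Return the mountpoint value, or None if the property is absent
    (e.g. for a zvol such as solid/iocage/jails/telly-mis5lr)."""
    prop = properties.get("mountpoint")
    return prop.value if prop is not None else None

class FakeProp:
    """Minimal stand-in for a libzfs property object exposing .value."""
    def __init__(self, value):
        self.value = value

# A filesystem dataset exposes a mountpoint; a zvol does not.
fs_props = {"mountpoint": FakeProp("/mnt/solid/iocage/jails/plex")}
zvol_props = {}  # no 'mountpoint' key, as with the stray zvol in this ticket

print(safe_mountpoint(fs_props))    # /mnt/solid/iocage/jails/plex
print(safe_mountpoint(zvol_props))  # None
```

With the guard in place, a stray zvol under the jails dataset is skipped instead of taking down the whole plugin/jail listing.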

#3 Updated by Dru Lavigne over 1 year ago

  • Category changed from GUI (new) to Middleware
  • Assignee changed from Release Council to William Grzybowski

#5 Updated by William Grzybowski over 1 year ago

  • Assignee changed from William Grzybowski to Waqar Ahmed
  • Target version changed from Backlog to 11.2-U4

#6 Updated by Brandon Roberts over 1 year ago


A few notes, probably red herrings, but better to give you too much info than none at all:

  • FreeNAS was originally running on an HP DL380 G7: dual X5650, 48 GB RAM
  • water damage took out the server; the drives were unharmed
  • the temporary replacement is a desktop: i5-6600K, 16 GB RAM
  • I did not move one of my storage arrays over to the temp replacement; this pool was called 'superfast' (RAID 10 with cheap 15k drives) and housed my VMs and legacy plugins
  • new plugins and VMs were created on my SSD pool (called 'solid') and were running fine for two weeks
  • 'superfast' was left in an unknown state for two weeks until I recently removed it, two days before this bug occurred
  • I tried creating a new pool named 'superfast' and restarted; no change
  • I downgraded to 11.2; no change. Upgraded back to U2.1

Screenshot of error attached

#7 Updated by Bug Clerk over 1 year ago

  • Status changed from Unscreened to In Progress

#8 Updated by Waqar Ahmed over 1 year ago

Hello Brandon, I have added some safety checks for the mountpoint property where you get the exception, but the real question is how you ran into it.
Looking at the debug, I can see that you have a volume solid/iocage/jails/telly-mis5lr which I am quite sure iocage did not create. It is the culprit here: because it lies under the jails dataset, iocage tries to read its mountpoint property and of course fails. Please remove that volume and your jails should work just fine.
Thank you!

#9 Updated by Brandon Roberts over 1 year ago

Ah, so it was me playing around that got me into this mess. I created that jail through the UI.

However, since the error started happening, my jails page contains no jails. I also get the mountpoint error if I try to manually create another jail.

If I go into the shell and run 'jls -v', I see nothing.

#10 Updated by Waqar Ahmed over 1 year ago

Just to be sure I follow you correctly: you created that jail via the UI and then did not do any manipulation of its type, mount point, etc.? I was thinking you manually created that volume; please correct me if I got this wrong.

#11 Updated by Brandon Roberts over 1 year ago

I did create it through the UI. I don't recall changing anything.

All I did inside the jail was grab this file and unzip/install and then forgot about it: https://github.com/tellytv/telly/releases/download/v1.1.0-Beta4/telly-1.1.0.3.linux-amd64.tar.gz

Forgive me, as it's been over a decade since I've poked around in Linux, so maybe I'm looking in the wrong spots, but if I go to /mnt/solid/iocage/jails, I don't see telly. I see plex, sonarr, radarr, and sabnzbd-customer. If I run 'mount' I don't see anything with telly in it.

#12 Updated by Bug Clerk over 1 year ago

  • Status changed from In Progress to Ready for Testing

#13 Updated by Bug Clerk over 1 year ago

  • Copied to Bug #83283: Warn when a custom dataset or zvol exists under templates/jails added

#14 Updated by Bug Clerk over 1 year ago

  • Copied to Bug #83291: Cannot view installed plugins; cannot install plugins. 'mountpoint' error added

#15 Updated by Waqar Ahmed over 1 year ago

Hello Brandon, I have been going through our iocage code base looking for any place where we create a volume instead of a regular dataset for a jail, and we don't.
So I searched the debug to see what has been going on, and I found that you created a telly VM and specified an address for its volume under the jails dataset:

`./log/middlewared.log:413:[2019/03/15 22:54:19] (DEBUG) VMService.do_create():977 - ===> Creating ZVOL solid/iocage/jails/telly-mis5lr with volsize 10240000000`

That is the source of your issue. If you would like to use your jails again for now, please destroy that volume with 'zfs destroy <path>'.

Also, I would highly recommend that you read up on what jails and VMs are (the differences, etc.) and on how ZFS interacts with them. Otherwise you might end up losing precious data :P

#16 Updated by Waqar Ahmed over 1 year ago

Just a gentle reminder: if you destroy that volume, your VM would probably fail to boot (I haven't checked the logs again to see which volumes it is using apart from this one; it just came to mind). So a better idea is to rename it: 'zfs rename <current path> <new path>'. Please make sure the new path does not lie under the iocage dataset, as that would interfere with iocage operations.
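The remediation above can be sketched as a small helper that builds the 'zfs rename' command and rejects destinations still inside the iocage tree. This is a hypothetical illustration: `build_zfs_rename` and its validation rule are made up for this sketch (only the dataset names come from this ticket), and it only constructs the command rather than running it.

```python
# Hypothetical helper: build a 'zfs rename' argv that moves the stray zvol
# out of the iocage tree, refusing destinations that would still interfere
# with iocage (i.e. any path containing an 'iocage' component).

def build_zfs_rename(src, dst):
    """Return the zfs rename argv, rejecting a destination that still
    lies under an iocage dataset."""
    if "iocage" in dst.split("/"):
        raise ValueError(f"destination {dst!r} lies under an iocage dataset")
    return ["zfs", "rename", src, dst]

cmd = build_zfs_rename("solid/iocage/jails/telly-mis5lr", "solid/telly-mis5lr")
print(" ".join(cmd))  # zfs rename solid/iocage/jails/telly-mis5lr solid/telly-mis5lr
# To actually perform the rename on a FreeNAS box:
#   subprocess.run(cmd, check=True)
```

Renaming rather than destroying preserves the zvol's data in case the VM still references it, which is exactly the caution raised in the comment above.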

#17 Updated by Brandon Roberts over 1 year ago

You are my hero! Up and running now.

One last question: if the VM created the volume, shouldn't deleting the VM cascade and destroy the volume as well? I had deleted the VM as part of troubleshooting, and since then I could find no reference to telly-mis5lr anywhere in the UI.

Thanks again for the help!

PS, it was fun to be the greater fool to trump your foolproof design :)

#18 Updated by Waqar Ahmed over 1 year ago

Nope, it doesn't. You should be able to see that volume under Storage -> Pools by navigating the path, and it can be deleted from there. And you're welcome!

#19 Updated by Dru Lavigne over 1 year ago

  • File deleted (debug-freenas-20190317161236.txz)

#20 Updated by Dru Lavigne over 1 year ago

  • Status changed from Ready for Testing to Done
  • Target version changed from 11.2-U4 to Master - FreeNAS Nightlies
  • Private changed from Yes to No
  • Needs QA changed from Yes to No
  • Needs Doc changed from Yes to No
  • Needs Merging changed from Yes to No

#22 Updated by Dru Lavigne over 1 year ago
