Bug #1198

Make alert dialog show multiple lines for messages

Added by rawtaz - over 7 years ago. Updated over 7 years ago.

Status:
Closed
Priority:
Nice to have
Assignee:
-
Category:
Middleware
Target version:
-
Seen in:
Severity:
New
Reason for Closing:
Reason for Blocked:
Needs QA:
Yes
Needs Doc:
Yes
Needs Merging:
Yes
Needs Automation:
No
Support Suite Ticket:
n/a
Hardware Configuration:
ChangeLog Required:
No

Description

The following is a text representation of an alert dialog I have in my [[FreeNAS]] 8.0.3 right now:

o WARNING: The volume z64 (ZFS) status is
o OK: The volume u64 (UFS) status is HEALTHY
o OK: The volume u128 (UFS) status is HEALTHY

The first message is cut off and semi-useless (or the problem is that the status isn't evaluated correctly?). The dialog should perhaps allow for longer messages.

History

#1 Updated by Anonymous over 7 years ago

The bug shown above is that the middleware fails to parse the zpool status properly. It's a legacy issue [from 8.0.1] that's still present in trunk IIRC.
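
For context, a minimal sketch of the kind of parsing involved, assuming the alert text is derived from the "state:" line of zpool status output (the helper name and regex here are hypothetical, not the actual FreeNAS parser code):

    import re
    import subprocess

    def get_pool_health(pool):
        """Return the health string for a pool by parsing `zpool status`.

        Hypothetical helper for illustration only. The point is that the
        alert text comes from a line such as " state: ONLINE"; if the
        output layout is unexpected, nothing is extracted and the dialog
        ends up with the truncated WARNING shown in the description.
        """
        out = subprocess.run(
            ["zpool", "status", pool],
            capture_output=True, text=True, check=True,
        ).stdout
        match = re.search(r"^\s*state:\s*(\S+)", out, re.MULTILINE)
        return match.group(1) if match else None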

#2 Updated by Josh Paetzel over 7 years ago

Can you paste the output of zpool status from the command line to the ticket please?

#3 Updated by rawtaz - over 7 years ago

Sorry, should have clarified.

I cannot do that, because the pools are long gone now :) However, the status message I mentioned came from the following situation:

I had created a zpool on an mfid0 HW RAID device, and when doing so I checked the 4096 block size checkbox (didn't really know whether it applied to RAID VDs or not). Apparently this is not something that [[FreeNAS]] likes, because later on when I went to remove this volume in the GUI, the dialog box processing the removal just kept spinning. After some minutes I reloaded the interface, and I also rebooted the machine.

The GUI now showed the errors from the original post here, but no pool was listed when running zpool status, so it seems the zpool had been removed. I then simply deleted it from the database, and everything is fine now.

I did see some errors on the console when I initially created the zpool, see http://grab.by/bzH3 . However, it seemed to work fine as far as I could tell from shuffling a few files to and from it, but I can't say for sure.

Probably the 4096 block size forcing shouldn't be done on an mfid device.
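
For reference, the 4096 checkbox presumably forces 4K sectors by layering a gnop(8) provider with a 4096-byte sector size over the device before creating the pool, the usual FreeBSD way to force ashift=12 (a hedged sketch; the actual middleware code and device handling will differ):

    import subprocess

    def create_pool_force_4k(pool, device):
        """Sketch of forcing 4096-byte sectors at pool creation time.

        Assumption: the 4K option creates a .nop provider with gnop and
        builds the pool on top of it. Whether doing this on an mfid
        hardware-RAID volume is a good idea is exactly the question above.
        """
        subprocess.run(["gnop", "create", "-S", "4096", f"/dev/{device}"], check=True)
        subprocess.run(["zpool", "create", pool, f"/dev/{device}.nop"], check=True)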

#4 Updated by Anonymous over 7 years ago

One scenario that can cause this is blowing away a zpool on the backend, leaving the config db totally out of sync with reality. I think this is fixed in 8.2, but I forget...
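
A minimal sketch of the consistency check this implies, assuming the middleware can compare the volume names stored in the config db against the pools that actually exist (the input here is a plain list of names, a stand-in for whatever the real models provide):

    import subprocess

    def stale_config_volumes(db_volume_names):
        """Return config-db volume names that no longer exist as zpools.

        `db_volume_names` is a hypothetical stand-in for the names the
        config database reports. Pools present in the db but missing from
        `zpool list` are the ones that end up producing broken alerts.
        """
        out = subprocess.run(
            ["zpool", "list", "-H", "-o", "name"],
            capture_output=True, text=True, check=True,
        ).stdout
        live = set(out.split())
        return [name for name in db_volume_names if name not in live]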

#5 Updated by rawtaz - over 7 years ago

If you guys want me to (i.e. if you cannot see or reproduce this locally), I can always make another run like it in a few days. Just let me know; it's really easy to test once you have a HW RAID VD available.

#6 Updated by William Grzybowski over 7 years ago

  • Status changed from Unscreened to Closed

This was a problem in the parser; it was partially fixed in 8423 and will be available in 8.2.
