Bug #50291

Get correct disks list for locked pool

Added by Christian FitzGerald Forberg almost 2 years ago. Updated over 1 year ago.

Status:
Done
Priority:
No priority
Assignee:
William Grzybowski
Category:
Middleware
Target version:
Seen in:
Severity:
New
Reason for Closing:
Reason for Blocked:
Needs QA:
No
Needs Doc:
No
Needs Merging:
No
Needs Automation:
No
Support Suite Ticket:
n/a
Hardware Configuration:
ChangeLog Required:
No

Description

Hi,
I plugged a new backup disk into my system, but it does not appear under 'Available Disks'.
camcontrol shows the new disk as da7:
<ATA ST8000DM004-2CX1 0001> at scbus0 target 12 lun 0 (da7,pass7)

The new disk is completely zeroed and has no geometry.
I tried executing 'gpart create -s GPT da7', which creates a partition table, but it does not help make the disk appear in the GUI.

My use case for a single volume with a single disk is rotating backups. I have two big backup disks which I swap once a week, so one disk is always offsite for the remaining days.
Each disk belongs to its own volume, and I have snapshots synced periodically to 'backupdisc1' or 'backupdisc2', depending on which disk is plugged in.

Now I needed to replace one backup disk because it is too small. I deleted the volume, removed the disk, and wanted to add a bigger one, but that seems impossible now.

Same problem on 11.0, by the way; I updated to 11.2-BETA but the behaviour is still the same.

Thanks for helping.


Related issues

Copied to FreeNAS - Bug #50871: Get correct disks list for locked pool (Done)

Associated revisions

Revision 9015e907 (diff)
Added by William Grzybowski almost 2 years ago

fix(middlewared/pool): get correct disks for locked pool
Ticket: #50291

Revision 790d3c78 (diff)
Added by William Grzybowski almost 2 years ago

fix(middlewared/pool): get correct disks for locked pool
Ticket: #50291

Revision 2047b1d9 (diff)
Added by William Grzybowski almost 2 years ago

fix(middlewared/pool): get correct disks for locked pool
Ticket: #50291
(cherry picked from commit 790d3c78132c27b08e3d722113e7efdbbae39497)

Revision e81348fe (diff)
Added by William Grzybowski almost 2 years ago

fix(middlewared/pool): get correct disks for locked pool
Ticket: #50291
(cherry picked from commit 790d3c78132c27b08e3d722113e7efdbbae39497)

Revision dd3f8687 (diff)
Added by William Grzybowski almost 2 years ago

fix(middlewared/pool): get correct disks for locked pool
Ticket: #50291

History

#1 Updated by Christian FitzGerald Forberg almost 2 years ago

  • File debug-atlas-20181008164622.txz added
  • Private changed from No to Yes

#2 Updated by Dru Lavigne almost 2 years ago

  • Reason for Blocked set to Need additional information from Author

Christian: to clarify, how is the disk physically attached to the system? As in, what type of controller?

#3 Updated by Christian FitzGerald Forberg almost 2 years ago

Sorry for not providing this info in the first place:

Mainboard: Supermicro X9SCM-F
CPU: Intel Xeon E3-1220v2
HBA: IBM ServeRAID M1015 (flashed to IT mode)
HDDs: RAIDZ2: 6x WD Red 2TB WD20EFRX
1x Seagate Barracuda 8TB for backups
RAM: 32GB Kingston ECC KVR1333D3E9S

The backup HDD is connected to the M1015 HBA like all the other data disks.
Only the boot disk is connected to the mainboard.

#4 Updated by Dru Lavigne almost 2 years ago

  • Category changed from Middleware to Hardware
  • Assignee changed from Release Council to Alexander Motin

#6 Updated by Dru Lavigne almost 2 years ago

  • Reason for Blocked deleted (Need additional information from Author)

#7 Updated by Alexander Motin almost 2 years ago

  • Assignee changed from Alexander Motin to William Grzybowski

I don't see any reason why the disk would not appear in the WebUI. William, do you?

Probably unrelated, but several parts of the network configuration look suspicious there:
- You have two aliases on the lagg0 interface belonging to the same subnet. To avoid confusing the routing table, only one of them should have mask /24, while the other(s) should be /32.
- The log is full of messages like:

arp: 7c:ff:4d:aa:bc:80 attempts to modify permanent entry for 192.168.168.1 on epair1b

Not sure what that is, but it does not look good.

#8 Updated by William Grzybowski almost 2 years ago

  • Status changed from Unscreened to Blocked
  • Reason for Blocked set to Need additional information from Author

Christian, can you get me the output of the following commands:

midclt call disk.query | jq .
midclt call disk.get_reserved
midclt call disk.get_unused

Thanks

#9 Updated by Christian FitzGerald Forberg almost 2 years ago

Alexander, maybe the two aliases on lagg0 have something to do with us having changed our DHCP server just recently?
How can I get rid of the unneeded alias?

However, just to correct my previous info: the boot disk is also connected to the M1015, so no disk is connected to the motherboard directly.
(Maybe that has something to do with the problem?)

Also, please compare with https://redmine.ixsystems.com/issues/21158 from some time ago.

William, here are the outputs you asked for:

root@atlas:~ # camcontrol devlist
<ATA FUJITSU MHV2040B 002A>        at scbus0 target 0 lun 0 (pass0,da0)
<ATA WDC WD20EFRX-68A 0A80>        at scbus0 target 1 lun 0 (pass1,da1)
<ATA WDC WD20EFRX-68A 0A80>        at scbus0 target 2 lun 0 (pass2,da2)
<ATA WDC WD20EFRX-68A 0A80>        at scbus0 target 3 lun 0 (pass3,da3)
<ATA WDC WD20EFRX-68A 0A80>        at scbus0 target 4 lun 0 (pass4,da4)
<ATA WDC WD20EFRX-68A 0A80>        at scbus0 target 5 lun 0 (pass5,da5)
<ATA WDC WD20EFRX-68A 0A80>        at scbus0 target 6 lun 0 (pass6,da6)
<ATA ST8000DM004-2CX1 0001>        at scbus0 target 12 lun 0 (pass7,da7)
root@atlas:~ # midclt call disk.query | jq .
[
  {
    "identifier": "{serial_lunid}NW9WT73263V2_ATA     FUJITSU MHV2040BH PL                            NW9WT73263V2",
    "name": "da0",
    "subsystem": "da",
    "number": 0,
    "serial": "NW9WT73263V2",
    "size": "40007761920",
    "multipath_name": "",
    "multipath_member": "",
    "description": "",
    "transfermode": "Auto",
    "hddstandby": "ALWAYS ON",
    "advpowermgmt": "DISABLED",
    "acousticlevel": "DISABLED",
    "togglesmart": true,
    "smartoptions": "",
    "expiretime": null,
    "enclosure_slot": null,
    "passwd": "" 
  },
  {
    "identifier": "{serial}WD-WMC301841262",
    "name": "da1",
    "subsystem": "da",
    "number": 1,
    "serial": "WD-WMC301841262",
    "size": "2000398934016",
    "multipath_name": "",
    "multipath_member": "",
    "description": "",
    "transfermode": "Auto",
    "hddstandby": "ALWAYS ON",
    "advpowermgmt": "DISABLED",
    "acousticlevel": "DISABLED",
    "togglesmart": true,
    "smartoptions": "",
    "expiretime": null,
    "enclosure_slot": null,
    "passwd": "" 
  },
  {
    "identifier": "{serial}WD-WMC301938991",
    "name": "da2",
    "subsystem": "da",
    "number": 2,
    "serial": "WD-WMC301938991",
    "size": "2000398934016",
    "multipath_name": "",
    "multipath_member": "",
    "description": "",
    "transfermode": "Auto",
    "hddstandby": "ALWAYS ON",
    "advpowermgmt": "DISABLED",
    "acousticlevel": "DISABLED",
    "togglesmart": true,
    "smartoptions": "",
    "expiretime": null,
    "enclosure_slot": null,
    "passwd": "" 
  },
  {
    "identifier": "{serial}WD-WMC301841949",
    "name": "da3",
    "subsystem": "da",
    "number": 3,
    "serial": "WD-WMC301841949",
    "size": "2000398934016",
    "multipath_name": "",
    "multipath_member": "",
    "description": "",
    "transfermode": "Auto",
    "hddstandby": "ALWAYS ON",
    "advpowermgmt": "DISABLED",
    "acousticlevel": "DISABLED",
    "togglesmart": true,
    "smartoptions": "",
    "expiretime": null,
    "enclosure_slot": null,
    "passwd": "" 
  },
  {
    "identifier": "{serial}WD-WMC301855194",
    "name": "da4",
    "subsystem": "da",
    "number": 4,
    "serial": "WD-WMC301855194",
    "size": "2000398934016",
    "multipath_name": "",
    "multipath_member": "",
    "description": "",
    "transfermode": "Auto",
    "hddstandby": "ALWAYS ON",
    "advpowermgmt": "DISABLED",
    "acousticlevel": "DISABLED",
    "togglesmart": true,
    "smartoptions": "",
    "expiretime": null,
    "enclosure_slot": null,
    "passwd": "" 
  },
  {
    "identifier": "{serial}WD-WMC301841239",
    "name": "da5",
    "subsystem": "da",
    "number": 5,
    "serial": "WD-WMC301841239",
    "size": "2000398934016",
    "multipath_name": "",
    "multipath_member": "",
    "description": "",
    "transfermode": "Auto",
    "hddstandby": "ALWAYS ON",
    "advpowermgmt": "DISABLED",
    "acousticlevel": "DISABLED",
    "togglesmart": true,
    "smartoptions": "",
    "expiretime": null,
    "enclosure_slot": null,
    "passwd": "" 
  },
  {
    "identifier": "{serial}WD-WCC300083661",
    "name": "da6",
    "subsystem": "da",
    "number": 6,
    "serial": "WD-WCC300083661",
    "size": "2000398934016",
    "multipath_name": "",
    "multipath_member": "",
    "description": "",
    "transfermode": "Auto",
    "hddstandby": "ALWAYS ON",
    "advpowermgmt": "DISABLED",
    "acousticlevel": "DISABLED",
    "togglesmart": true,
    "smartoptions": "",
    "expiretime": null,
    "enclosure_slot": null,
    "passwd": "" 
  },
  {
    "identifier": "{serial_lunid}WCT0XN9L_5000c500bd81bc32",
    "name": "da7",
    "subsystem": "da",
    "number": 7,
    "serial": "WCT0XN9L",
    "size": "8001563222016",
    "multipath_name": "",
    "multipath_member": "",
    "description": "Backup1",
    "transfermode": "Auto",
    "hddstandby": "ALWAYS ON",
    "advpowermgmt": "DISABLED",
    "acousticlevel": "DISABLED",
    "togglesmart": true,
    "smartoptions": "",
    "expiretime": null,
    "enclosure_slot": null,
    "passwd": "" 
  }
]
root@atlas:~ # midclt call disk.get_reserved
["da0", "da1", "da2", "da3", "da4", "da5", "da6", "da7"]
root@atlas:~ # midclt call disk.get_unused
[]

#10 Updated by William Grzybowski almost 2 years ago

That's strange; it is behaving as if that disk were already in use.

A few more outputs, please:

midclt call boot.get_disks
midclt call pool.get_disks

Also, do you have any iSCSI extent configured to use a device?

#11 Updated by Christian FitzGerald Forberg almost 2 years ago

Hey William,

here are the outputs:

root@atlas:~ # midclt call boot.get_disks
["da0"]
root@atlas:~ # midclt call pool.get_disks
["da1", "da2", "da3", "da4", "da5", "da6", "da7"]

The volume dialog uses some layer to fill the 'Available disks' area.
If we follow those calls and I invoke them manually from the command line, we should be able to find out what this problem is all about.
'midclt call <methodname>' does exactly that, right?

We do not use iSCSI at the moment.

#12 Updated by William Grzybowski almost 2 years ago

It's more complicated than that. Can you attach a new debug file, please?

Also a few more:

midclt call zfs.pool.get_disks freenas-boot
midclt call zfs.pool.get_disks tristore
midclt call pool.query | jq .

Thanks again

#13 Updated by Christian FitzGerald Forberg almost 2 years ago

I see ...

Here we go:

root@atlas:~ # midclt call zfs.pool.get_disks freenas-boot
["da0"]
root@atlas:~ # midclt call zfs.pool.get_disks tristore
["da1", "da2", "da3", "da4", "da5", "da6"]
root@atlas:~ # midclt call pool.query | jq .
[
  {
    "id": 1,
    "name": "tristore",
    "guid": "1318164832571228153",
    "encrypt": 2,
    "encryptkey": "00b0b506-80a6-4dd4-814f-b8bdf6127264",
    "status": "ONLINE",
    "scan": {
      "function": "SCRUB",
      "state": "FINISHED",
      "start_time": {
        "$date": 1536962404000
      },
      "end_time": {
        "$date": 1537324336000
      },
      "percentage": 100.05443096160889,
      "bytes_to_process": 10930935816192,
      "bytes_processed": 10924989702144,
      "errors": 0,
      "bytes_issued": 10930935816192,
      "pause": null
    },
    "topology": {
      "data": [
        {
          "type": "RAIDZ2",
          "path": null,
          "guid": "12260286393096901969",
          "status": "ONLINE",
          "stats": {
            "timestamp": 114271322124089,
            "read_errors": 0,
            "write_errors": 0,
            "checksum_errors": 0,
            "ops": [
              0,
              228918,
              7854297,
              4381267,
              0
            ],
            "bytes": [
              0,
              1733971968,
              119820161024,
              85794390016,
              0
            ],
            "size": 11957188952064,
            "allocated": 10869512085504,
            "configured_ashift": 12,
            "logical_ashift": 12,
            "physical_ashift": 0,
            "fragmentation": 44
          },
          "children": [
            {
              "type": "DISK",
              "path": "/dev/gptid/7dc085ff-0fcf-11e3-97a2-00261868091e.eli",
              "guid": "17791333297502264303",
              "status": "ONLINE",
              "stats": {
                "timestamp": 114271322124089,
                "read_errors": 0,
                "write_errors": 0,
                "checksum_errors": 0,
                "ops": [
                  1,
                  43010,
                  1906078,
                  3057833,
                  0
                ],
                "bytes": [
                  0,
                  288591872,
                  36031201280,
                  23948238848,
                  0
                ],
                "size": 0,
                "allocated": 0,
                "configured_ashift": 12,
                "logical_ashift": 12,
                "physical_ashift": 0,
                "fragmentation": 0
              },
              "children": [],
              "device": "da1p2" 
            },
            {
              "type": "DISK",
              "path": "/dev/gptid/7e4b723d-0fcf-11e3-97a2-00261868091e.eli",
              "guid": "16983476508831713209",
              "status": "ONLINE",
              "stats": {
                "timestamp": 114271322124089,
                "read_errors": 0,
                "write_errors": 0,
                "checksum_errors": 0,
                "ops": [
                  1,
                  62212,
                  1920456,
                  3083277,
                  0
                ],
                "bytes": [
                  0,
                  399577088,
                  36193853440,
                  24037507072,
                  0
                ],
                "size": 0,
                "allocated": 0,
                "configured_ashift": 12,
                "logical_ashift": 12,
                "physical_ashift": 0,
                "fragmentation": 0
              },
              "children": [],
              "device": "da2p2" 
            },
            {
              "type": "DISK",
              "path": "/dev/gptid/7edd4625-0fcf-11e3-97a2-00261868091e.eli",
              "guid": "3947486859667547633",
              "status": "ONLINE",
              "stats": {
                "timestamp": 114271322124089,
                "read_errors": 0,
                "write_errors": 0,
                "checksum_errors": 0,
                "ops": [
                  1,
                  21428,
                  2012806,
                  3171056,
                  0
                ],
                "bytes": [
                  0,
                  197898240,
                  36922167296,
                  24409563136,
                  0
                ],
                "size": 0,
                "allocated": 0,
                "configured_ashift": 12,
                "logical_ashift": 12,
                "physical_ashift": 0,
                "fragmentation": 0
              },
              "children": [],
              "device": "da3p2" 
            },
            {
              "type": "DISK",
              "path": "/dev/gptid/7f677960-0fcf-11e3-97a2-00261868091e.eli",
              "guid": "6165176045433977927",
              "status": "ONLINE",
              "stats": {
                "timestamp": 114271322124089,
                "read_errors": 0,
                "write_errors": 0,
                "checksum_errors": 0,
                "ops": [
                  1,
                  52685,
                  1907610,
                  3052031,
                  0
                ],
                "bytes": [
                  0,
                  335622144,
                  35996020736,
                  23926730752,
                  0
                ],
                "size": 0,
                "allocated": 0,
                "configured_ashift": 12,
                "logical_ashift": 12,
                "physical_ashift": 0,
                "fragmentation": 0
              },
              "children": [],
              "device": "da4p2" 
            },
            {
              "type": "DISK",
              "path": "/dev/gptid/7ff79cd8-0fcf-11e3-97a2-00261868091e.eli",
              "guid": "2331169440101015653",
              "status": "ONLINE",
              "stats": {
                "timestamp": 114271322124089,
                "read_errors": 0,
                "write_errors": 0,
                "checksum_errors": 0,
                "ops": [
                  1,
                  62564,
                  1924726,
                  3079849,
                  0
                ],
                "bytes": [
                  0,
                  411549696,
                  36200644608,
                  24029335552,
                  0
                ],
                "size": 0,
                "allocated": 0,
                "configured_ashift": 12,
                "logical_ashift": 12,
                "physical_ashift": 0,
                "fragmentation": 0
              },
              "children": [],
              "device": "da5p2" 
            },
            {
              "type": "DISK",
              "path": "/dev/gptid/808941a7-0fcf-11e3-97a2-00261868091e.eli",
              "guid": "16864669978800470780",
              "status": "ONLINE",
              "stats": {
                "timestamp": 114271322124089,
                "read_errors": 0,
                "write_errors": 0,
                "checksum_errors": 0,
                "ops": [
                  1,
                  22139,
                  2004564,
                  3167855,
                  0
                ],
                "bytes": [
                  0,
                  201515008,
                  36920418304,
                  24404582400,
                  0
                ],
                "size": 0,
                "allocated": 0,
                "configured_ashift": 12,
                "logical_ashift": 12,
                "physical_ashift": 0,
                "fragmentation": 0
              },
              "children": [],
              "device": "da6p2" 
            }
          ]
        }
      ],
      "log": [],
      "cache": [],
      "spare": []
    },
    "is_decrypted": true
  },
  {
    "id": 5,
    "name": "tribackup2",
    "guid": "2803188164465822101",
    "encrypt": 2,
    "encryptkey": "093acd74-83ec-4da0-86ec-c7d783bac2af",
    "status": "OFFLINE",
    "scan": null,
    "topology": null,
    "is_decrypted": false
  }
]

By the way, pool 'tribackup2' refers to the second backup disk, which is offsite at the moment.
'tribackup1' existed too, but I already deleted it. My intention now is to create pool 'tribackup1' again with the new disk.

Thank you for your fast support.

#14 Updated by William Grzybowski almost 2 years ago

That's strange.

Does `midclt call zfs.pool.get_disks tribackup2` return anything?

#15 Updated by Christian FitzGerald Forberg almost 2 years ago

Not really:

root@atlas:~ # midclt call zfs.pool.get_disks tribackup2
[ENOENT] Pool tribackup2 not found
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/middlewared/plugins/zfs.py", line 68, in get_disks
    disks = list(zfs.get(name).disks)
  File "libzfs.pyx", line 370, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.6/site-packages/middlewared/plugins/zfs.py", line 68, in get_disks
    disks = list(zfs.get(name).disks)
  File "libzfs.pyx", line 479, in libzfs.ZFS.get
libzfs.ZFSException: Pool tribackup2 not found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/middlewared/main.py", line 159, in call_method
    result = list(result)
  File "/usr/local/lib/python3.6/site-packages/middlewared/plugins/zfs.py", line 70, in get_disks
    raise CallError(str(e), errno.ENOENT)
middlewared.service_exception.CallError: [ENOENT] Pool tribackup2 not found
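
That ENOENT is expected as long as the pool stays locked: libzfs only sees imported pools, and a locked (GELI-encrypted, not attached) volume is never imported. For illustration only (the output here is invented for this sketch), listing the imported pools on such a system would show just:

root@atlas:~ # zpool list -H -o name
freenas-boot
tristore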

#16 Updated by William Grzybowski almost 2 years ago

  • Category changed from Hardware to Middleware
  • Status changed from Blocked to Not Started
  • Target version changed from Backlog to 11.2-RC2
  • Reason for Blocked deleted (Need additional information from Author)

OK, I see what's going on here: tribackup2 is a locked volume, and the middleware is confusing da7 with the disk from that volume.

We should have a fix for that in 11.2-RELEASE.

A workaround is detaching tribackup2 from the UI.
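
For illustration, a minimal sketch of the kind of logic this implies (hypothetical names and simplified data structures, not the actual middleware code): the disks of a locked pool cannot be enumerated through libzfs, so they have to come from the encrypted-disk records stored in the configuration rather than be guessed from device names, otherwise an unrelated disk such as da7 can wrongly be reported as reserved.

# Hypothetical sketch only -- not the real pool.get_disks implementation.
def get_pool_disks(pools, zfs_pool_disks, encrypted_disk_records):
    """Collect the disks used by every pool, including locked ones."""
    disks = set()
    for pool in pools:
        if pool['is_decrypted']:
            # Imported pool: libzfs can enumerate the member disks.
            disks.update(zfs_pool_disks(pool['name']))
        else:
            # Locked pool: libzfs raises ENOENT ('Pool ... not found'),
            # so fall back to the disk records saved when the encrypted
            # pool was created, instead of guessing by device position.
            disks.update(encrypted_disk_records.get(pool['id'], []))
    return disks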

#17 Updated by Christian FitzGerald Forberg almost 2 years ago

You are right! I removed tribackup2 and it works now!

When I swap disks next time, I should be able to re-import volume 'tribackup2' from the other offsite disk, right?
And after that everything should be back to normal, since I don't create new volumes anymore.
I just hot-swap them in and out and unlock them, depending on the disk I put in (like I did before for some time).

#18 Updated by Bug Clerk almost 2 years ago

  • Status changed from Not Started to In Progress

#19 Updated by Bug Clerk almost 2 years ago

  • Copied to Bug #50871: Get correct disks list for locked pool added

#20 Updated by Bug Clerk almost 2 years ago

  • Status changed from In Progress to Ready for Testing

#21 Updated by Dru Lavigne almost 2 years ago

  • File deleted (debug-atlas-20181008164622.txz)

#23 Updated by Dru Lavigne almost 2 years ago

  • Private changed from Yes to No

#24 Updated by Dru Lavigne almost 2 years ago

  • Subject changed from New disc does not appear in volume manager to Get correct disks list for locked pool
  • Needs Doc changed from Yes to No
  • Needs Merging changed from Yes to No

#25 Updated by Bonnie Follweiler almost 2 years ago

  • Needs QA changed from Yes to No

Test Passed in FreeNAS-11.2-INTERNAL31

#26 Updated by Dru Lavigne almost 2 years ago

  • Status changed from Ready for Testing to Done

#27 Updated by Christian FitzGerald Forberg almost 2 years ago

Bonnie Follweiler wrote:

Test Passed in FreeNAS-11.2-INTERNAL31

The problem is still there in FreeNAS-11.2-RC2.
I just locked my first backup disk and replaced it with the second backup disk. This second disk is blank, and I wanted to initialize it with a new volume via the volume manager.
However, the second disk does not appear in the volume manager.

William, I checked your changes to pool.py on GitHub. Would I be able to test the behaviour of the changed get_disks() method locally on my system?

Thanks!

#28 Updated by Christian FitzGerald Forberg almost 2 years ago

Alexander Motin wrote:

I don't see any reason why the disk would not appear in the WebUI. William, do you?

Probably unrelated, but several parts of the network configuration look suspicious there:
- You have two aliases on the lagg0 interface belonging to the same subnet. To avoid confusing the routing table, only one of them should have mask /24, while the other(s) should be /32.
- The log is full of messages like:
[...]
Not sure what that is, but it does not look good.

Hey Alex, thanks a lot for the info. I have no idea how to solve that. I did not make any network-related changes outside the FreeNAS GUI.
At the moment I only have a lagg consisting of two NICs, with a couple of VMs and two jails using this lagg.
Could you be so kind as to give me a hint how to adjust these aliases? I can't find them in the FreeNAS GUI.
Thanks!

#29 Updated by Alexander Motin almost 2 years ago

As I said, if you have several aliases from the same network, only one of them, used by default for outgoing connections, should have the proper mask like /24, while all the rest should have /32.
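
For example, a minimal sketch from the FreeBSD shell (the interface name and addresses here are placeholders for illustration, not taken from this system):

root@atlas:~ # ifconfig lagg0                              # inspect current aliases and masks
root@atlas:~ # ifconfig lagg0 inet 192.168.168.10/24 alias # primary address keeps the real mask
root@atlas:~ # ifconfig lagg0 inet 192.168.168.11/32 alias # further aliases in the same subnet get /32
root@atlas:~ # ifconfig lagg0 inet 192.168.168.11 -alias   # or remove an unwanted alias entirely

Changes made this way do not survive a reboot; the persistent place for aliases is the interface configuration in the FreeNAS network settings.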

#30 Updated by Christian FitzGerald Forberg almost 2 years ago

Alexander Motin wrote:

As I said, if you have several aliases from the same network, only one of them, used by default for outgoing connections, should have the proper mask like /24, while all the rest should have /32.

I understand that. I never added any network alias manually; the only thing I created was a lagg interface. So it's unclear to me where and how to list and remove network aliases.
Like I said, I did not make any network-related changes outside the FreeNAS GUI, and when creating the lagg I did not add extra aliases. So I don't know where they are coming from or how to get rid of them from within the FreeNAS GUI.

(I know this is getting a little off-topic; maybe you can direct me to some resources about this matter?)
Thanks!

#31 Updated by Christian FitzGerald Forberg almost 2 years ago

William?
Any news about this problem? Do you need more info?

#32 Updated by Christian FitzGerald Forberg almost 2 years ago

I just checked again with the latest FreeNAS-11.2-RELEASE and the problem is fixed!
The only thing I noticed was:
When trying to import an encrypted volume after the disk has been wiped, an exception comes up about a missing disk label.
But I consider this a minor problem; the exception should probably be caught somewhere and presented to the user as 'This disk is not a ZFS data disk' (or similar).
Thanks!
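
Something along these lines would do it (a hypothetical sketch of the suggested handling; the names and the exact error text are illustrative, not the real middleware API):

import errno

class CallError(Exception):
    """Simplified stand-in for middlewared's user-facing error type."""
    def __init__(self, message, errnum):
        super().__init__(message)
        self.errno = errnum

def import_encrypted_volume(disk, attach_provider):
    """Attach the encrypted provider, translating a missing-label
    failure into a friendly message instead of a raw traceback."""
    try:
        return attach_provider(disk)
    except RuntimeError as e:
        if 'label' in str(e).lower():
            raise CallError(
                f"{disk} is not a ZFS data disk (no disk label found)",
                errno.ENOENT,
            ) from e
        raise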

#33 Updated by Christian FitzGerald Forberg over 1 year ago

William, since you closed #67609 as a duplicate, will you please reopen this one?
Thanks!

#34 Updated by William Grzybowski over 1 year ago

Christian FitzGerald Forberg wrote:

William, since you closed #67609 as a duplicate, will you please reopen this one?
Thanks!

What is there to reopen here? As far as I understand, the original problem of this ticket was resolved. What is missing?

#35 Updated by Christian FitzGerald Forberg over 1 year ago

#50291 is not fixed, as #67609 demonstrates; I was wrong (sorry).
I can reproduce the problem consistently with the current version of FreeNAS.

#36 Updated by William Grzybowski over 1 year ago

Christian FitzGerald Forberg wrote:

#50291 is not fixed, as #67609 demonstrates; I was wrong (sorry).
I can reproduce the problem consistently with the current version of FreeNAS.

#67609 is a different issue (not this one); it was closed as a duplicate of a different ticket. What did I miss?

#37 Updated by Christian FitzGerald Forberg over 1 year ago

William Grzybowski wrote:

Christian FitzGerald Forberg wrote:

#50291 is not fixed, as #67609 demonstrates; I was wrong (sorry).
I can reproduce the problem consistently with the current version of FreeNAS.

#67609 is a different issue (not this one); it was closed as a duplicate of a different ticket. What did I miss?

I don't know what you mean. I reported both of them, #50291 and #67609. I thought that #50291 was fixed, and I was wrong. Since I did not receive any further reply from you, I opened a new ticket, which is now being closed as a duplicate. Which is correct, of course. But the problem still persists, as the screenshot and the call stack from #67609 prove, so #50291 needs to be reopened, right?

#38 Updated by William Grzybowski over 1 year ago

Christian FitzGerald Forberg wrote:

William Grzybowski wrote:

Christian FitzGerald Forberg wrote:

#50291 is not fixed, as #67609 demonstrates; I was wrong (sorry).
I can reproduce the problem consistently with the current version of FreeNAS.

#67609 is a different issue (not this one); it was closed as a duplicate of a different ticket. What did I miss?

I don't know what you mean. I reported both of them, #50291 and #67609. I thought that #50291 was fixed, and I was wrong. Since I did not receive any further reply from you, I opened a new ticket, which is now being closed as a duplicate. Which is correct, of course. But the problem still persists, as the screenshot and the call stack from #67609 prove, so #50291 needs to be reopened, right?

Did you miss the part where I said #67609 is a different technical issue? It was already reported and closed as a duplicate of #62001.

#39 Updated by Christian FitzGerald Forberg over 1 year ago

William Grzybowski wrote:

Christian FitzGerald Forberg wrote:

William Grzybowski wrote:

Christian FitzGerald Forberg wrote:

#50291 is not fixed, as #67609 demonstrates; I was wrong (sorry).
I can reproduce the problem consistently with the current version of FreeNAS.

#67609 is a different issue (not this one); it was closed as a duplicate of a different ticket. What did I miss?

I don't know what you mean. I reported both of them, #50291 and #67609. I thought that #50291 was fixed, and I was wrong. Since I did not receive any further reply from you, I opened a new ticket, which is now being closed as a duplicate. Which is correct, of course. But the problem still persists, as the screenshot and the call stack from #67609 prove, so #50291 needs to be reopened, right?

Did you miss the part where I said #67609 is a different technical issue? It was already reported and closed as a duplicate of #62001.

I did! Thank you for the clarification. I thought you had closed #67609 as a duplicate of #50291.
Thanks!
