Bug #7746

FreeNAS Jail cannot ping local network when VIMAGE checked and NAT unchecked.

Added by Joe Schmuck over 5 years ago. Updated about 3 years ago.

Status:
Closed: Not To Be Fixed
Priority:
Nice to have
Assignee:
John Hixson
Category:
Middleware
Target version:
Seen in:
Severity:
New
Reason for Closing:
Reason for Blocked:
Needs QA:
Yes
Needs Doc:
Yes
Needs Merging:
Yes
Needs Automation:
No
Support Suite Ticket:
n/a
Hardware Configuration:
ChangeLog Required:
No

Description

This is a very recent change in behavior so the problem likely crept in within the last 2 upgrades.

Problem: When you create a standard FreeNAS jail, you cannot ping on the local network from within the jail unless you uncheck VIMAGE. The problem was brought to my attention by one user and I confirmed it. I also referenced the User Manual just in case there was some change I was unaware of, but this does not work properly in accordance with the User Manual or past behavior.

This apparently doesn't impact previously created jails, only newly created ones.

History

#1 Updated by Joe Schmuck over 5 years ago

I have confirmation that this worked fine before the 24 Jan release of the software.

#2 Updated by Bidule0hm _ over 5 years ago

Note that when I tested further I was able to ping the other PCs on my LAN from the jail, but I just can't ping the internet.

#3 Updated by Jordan Hubbard over 5 years ago

  • Category set to 38
  • Assignee set to John Hixson
  • Target version set to Unspecified

#4 Updated by John Hixson over 5 years ago

  • Status changed from Unscreened to 15

From within the jail, can you run some basic network troubleshooting commands and include the output?

cat /etc/resolv.conf
cat /etc/nsswitch.conf

ifconfig -a
netstat -nr

ping 8.8.8.8
traceroute 8.8.8.8

Also, is this a VM by chance? or a physical server?

#5 Updated by Bidule0hm _ over 5 years ago

There is more info on this thread https://forums.freenas.org/index.php?threads/how-to-install-minidlna-on-freenas-9-3-prior-to-plugin.25395/ (start at the end of the first page).

And some more info from our private conversation:

In the jail shell:

root@minidlna:/ # ifconfig
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
  options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
  inet6 ::1 prefixlen 128
  inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
  inet 127.0.0.1 netmask 0xff000000
  nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
epair0b: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
  options=8<VLAN_MTU>
  ether 02:78:29:00:08:0b
  inet 192.168.0.61 netmask 0xffffff00 broadcast 192.168.0.255
  nd6 options=9<PERFORMNUD,IFDISABLED>
  media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
  status: active

In the general FreeNAS shell:

[root@freenas] ~# ifconfig
igb0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
  options=400b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWTSO>
  ether 0c:c4:7a:30:01:b6
  inet 192.168.0.4 netmask 0xffffff00 broadcast 192.168.0.255
  nd6 options=9<PERFORMNUD,IFDISABLED>
  media: Ethernet autoselect (1000baseT <full-duplex>)
  status: active
ipfw0: flags=8801<UP,SIMPLEX,MULTICAST> metric 0 mtu 65536
  nd6 options=9<PERFORMNUD,IFDISABLED>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
  options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
  inet6 ::1 prefixlen 128
  inet6 fe80::1%lo0 prefixlen 64 scopeid 0x5
  inet 127.0.0.1 netmask 0xff000000
  nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
  ether 02:4d:fe:6f:05:00
  nd6 options=1<PERFORMNUD>
  id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
  maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
  root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
  member: epair0a flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
  ifmaxaddr 0 port 7 priority 128 path cost 2000
  member: igb0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
  ifmaxaddr 0 port 2 priority 128 path cost 20000
epair0a: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
  options=8<VLAN_MTU>
  ether 02:78:29:00:07:0a
  nd6 options=1<PERFORMNUD>
  media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
  status: active

I don't have the faulty jail anymore, maybe Joe Schmuck still has his test jail.

It's a physical server: FreeNAS-9.3-STABLE-201501241715 ### Supermicro X10SL7-F (flashed IT v16) ### Intel Core I3 4360 3.7GHz ### Crucial 2x8GB DDR3 ECC 1600MHz 1.35V [CT2KIT102472BF160B] ### RAID-Z3 8x3TB ### 4x Seagate 3TB NAS [ST3000VN000] + 4x Western Digital 3TB RED [WD30EFRX] ### SeaSonic X-Series 650W ### UPS MGE Ellipse 600

#6 Updated by Joe Schmuck over 5 years ago

I am using VMWare Workstation 10, and I just now updated my hardware NAS to the current version of FreeNAS; the problem occurs on both. It is not an issue if I run a FreeNAS release prior to the current one, or if I uncheck VIMAGE.

The trick to recreate this is to edit the jail, ensure VIMAGE is checked, save the configuration (you must click Save), then stop and start your jail. Now try to ping 8.8.8.8; it will not work.

cat /etc/resolv.conf

root@Test2:/ # cat /etc/resolv.conf
search local
nameserver 192.168.1.1
root@Test2:/ #

cat /etc/nsswitch.conf

root@Test2:/ # cat /etc/nsswitch.conf
#
# nsswitch.conf(5) - name service switch configuration file
# $FreeBSD: releng/9.3/etc/nsswitch.conf 224765 2011-08-10 20:52:02Z dougb $
#
group: compat
group_compat: nis
hosts: files dns
networks: files
passwd: compat
passwd_compat: nis
shells: files
services: compat
services_compat: nis
protocols: files
rpc: files
root@Test2:/ #

ifconfig -a

root@Test2:/ # ifconfig -a
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
        inet 127.0.0.1 netmask 0xff000000
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
epair0b: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=8<VLAN_MTU>
        ether 02:4b:ea:00:06:0b
        inet 192.168.1.61 netmask 0xffffff00 broadcast 192.168.1.255
        nd6 options=9<PERFORMNUD,IFDISABLED>
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
root@Test2:/ #

netstat -nr

root@Test2:/ # netstat -nr
Routing tables

Internet:
Destination        Gateway            Flags    Refs      Use  Netif Expire
default            192.168.1.1        UGS         0        4 epair0
127.0.0.1          link#1             UH          0        0    lo0
192.168.1.0/24     link#2             U           0        2 epair0
192.168.1.61       link#2             UHS         0        0    lo0

Internet6:
Destination                       Gateway                       Flags      Netif Expire
::/96                             ::1                           UGRS        lo0
::1                               link#1                        UH          lo0
::ffff:0.0.0.0/96                 ::1                           UGRS        lo0
fe80::/10                         ::1                           UGRS        lo0
fe80::%lo0/64                     link#1                        U           lo0
fe80::1%lo0                       link#1                        UHS         lo0
ff01::%lo0/32                     ::1                           U           lo0
ff02::/16                         ::1                           UGRS        lo0
ff02::%lo0/32                     ::1                           U           lo0
root@Test2:/ #

ping 8.8.8.8

root@Test2:/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
^C
--- 8.8.8.8 ping statistics ---
74 packets transmitted, 0 packets received, 100.0% packet loss
root@Test2:/ #

traceroute 8.8.8.8

root@Test2:/ # traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 64 hops max, 52 byte packets
 1  * * *
 2  * * *
 3  * * *
 4  * * *
 5  * *^C
root@Test2:/ #

#7 Updated by John Hixson over 5 years ago

Ok guys, can you please post the output of this command as well?

warden list -v

#8 Updated by John Hixson over 5 years ago

Bidule0hm _ wrote:

There is more info on this thread https://forums.freenas.org/index.php?threads/how-to-install-minidlna-on-freenas-9-3-prior-to-plugin.25395/ (start at the end of the first page).

And some more info from our private conversation:

In the jail shell:
[...]

In the general FreeNAS shell:
[...]

I don't have the faulty jail anymore, maybe Joe Schmuck still has his test jail.

It's a physical server: FreeNAS-9.3-STABLE-201501241715 ### Supermicro X10SL7-F (flashed IT v16) ### Intel Core I3 4360 3.7GHz ### Crucial 2x8GB DDR3 ECC 1600MHz 1.35V [CT2KIT102472BF160B] ### RAID-Z3 8x3TB ### 4x Seagate 3TB NAS [ST3000VN000] + 4x Western Digital 3TB RED [WD30EFRX] ### SeaSonic X-Series 650W ### UPS MGE Ellipse 600

Well, if your jail is now working, this doesn't do much good as far as finding out what happened. What did you do to get it to work?

#9 Updated by John Hixson over 5 years ago

So you are using VMWare? VIMAGE will behave exactly as you're describing if you don't put your interface into promiscuous mode. Now that it's clear you are using a VM, I'm fairly certain that is the issue. Can you configure your virtual NIC to be in promiscuous mode and verify you still have an issue here?
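As a quick way to see whether an interface is already in promiscuous mode, you can look for the PROMISC flag in its ifconfig flags line. A minimal sketch; the sample flags string below is copied from the igb0 output posted earlier in this ticket, so on a live host you would feed in `ifconfig igb0` output instead:

```shell
# Flags line as printed by ifconfig for the bridged host NIC in this
# ticket; PROMISC indicates the interface accepts frames for other
# MAC addresses, which VNET/bridged jail traffic requires.
FLAGS="flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST>"

case "$FLAGS" in
  *PROMISC*) STATE="on" ;;
  *)         STATE="off" ;;
esac
echo "promiscuous mode: ${STATE}"
```

On the FreeNAS host itself, `ifconfig igb0 promisc` would set the flag manually; when FreeNAS runs inside a VM, the hypervisor's virtual switch must additionally be configured to allow promiscuous mode, or the guest-side flag has no effect.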

#10 Updated by John Hixson over 5 years ago

John Hixson wrote:

[...]

And by the way, FreeNAS is not actually supported in VMs =-)

#11 Updated by Bidule0hm _ over 5 years ago

Output of warden list -v

id: 1
host: minidlna
iface:
ipv4: 192.168.0.61/24
alias-ipv4:
bridge-ipv4:
alias-bridge-ipv4:
defaultrouter-ipv4:
ipv6:
alias-ipv6:
bridge-ipv6:
alias-bridge-ipv6:
defaultrouter-ipv6:
autostart: Enabled
vnet: Disabled
nat: Disabled
mac:
status: Running
type: standard
flags: allow.raw_sockets=true

My jail is working because I unchecked VIMAGE. That's only a workaround; if the jail required VIMAGE to be checked, there would be no solution.

The problem is:

- VIMAGE checked --> can't ping www.google.com from within the jail
- VIMAGE unchecked --> can ping www.google.com from within the jail

John Hixson wrote:

So you are using VMWare?

Joe Schmuck does, but I don't; mine is a physical server. So it's the same bug on a VM and on a real server.

#12 Updated by John Hixson over 5 years ago

Can you run the same command for the jail when VIMAGE is enabled?

#13 Updated by Bidule0hm _ over 5 years ago

I've created a new jail and I see exactly the same problem with VIMAGE checked.

From within the jail I can ping other PCs on my LAN, but interestingly I can't ping my router or www.google.com.

VIMAGE checked:

[root@freenas] ~# warden list -v

id: 1
host: minidlna
iface:
ipv4: 192.168.0.61/24
alias-ipv4:
bridge-ipv4:
alias-bridge-ipv4:
defaultrouter-ipv4:
ipv6:
alias-ipv6:
bridge-ipv6:
alias-bridge-ipv6:
defaultrouter-ipv6:
autostart: Enabled
vnet: Disabled
nat: Disabled
mac:
status: Running
type: standard
flags: allow.raw_sockets=true

id: 2
host: test
iface:
ipv4: 192.168.0.62/24
alias-ipv4:
bridge-ipv4:
alias-bridge-ipv4:
defaultrouter-ipv4: 192.168.0.254
ipv6:
alias-ipv6:
bridge-ipv6:
alias-bridge-ipv6:
defaultrouter-ipv6:
autostart: Enabled
vnet: Enabled
nat: Disabled
mac: 02:e8:8d:00:08:0b
status: Running
type: standard
flags: allow.raw_sockets=true

[root@freenas] ~#

VIMAGE unchecked:

[root@freenas] ~# warden list -v

id: 1
host: minidlna
iface:
ipv4: 192.168.0.61/24
alias-ipv4:
bridge-ipv4:
alias-bridge-ipv4:
defaultrouter-ipv4:
ipv6:
alias-ipv6:
bridge-ipv6:
alias-bridge-ipv6:
defaultrouter-ipv6:
autostart: Enabled
vnet: Disabled
nat: Disabled
mac:
status: Running
type: standard
flags: allow.raw_sockets=true

id: 2
host: test
iface:
ipv4: 192.168.0.62/24
alias-ipv4:
bridge-ipv4:
alias-bridge-ipv4:
defaultrouter-ipv4:
ipv6:
alias-ipv6:
bridge-ipv6:
alias-bridge-ipv6:
defaultrouter-ipv6:
autostart: Enabled
vnet: Disabled
nat: Disabled
mac:
status: Running
type: standard
flags: allow.raw_sockets=true

[root@freenas] ~#

#14 Updated by Joe Schmuck over 5 years ago

This is from my real metal machine, not a VM, and let's put the VM behind us for now...
Setup:
1) If you have a previous copy of the freenas template downloaded, remove it. This will force a new download.

2) Create a normal Jail with the following settings:
a) Jail Name
b) Template "-----"
c) IPv4 address: 192.168.1.57 (choose your static address)
d) IPv4 default gateway: 192.168.1.1
This gets everyone on the same page.

3) The jail should be running at this point in time and the current freenas-standard-9.3 template will download.

4) Enter the jail, either by using the jail shell or (as I do) by SSHing in and running jexec jail_name /bin/csh.

Now the first time I try to ping something it fails, but the second time I try, it works. To repeat the test, just stop and start the jail. I thought maybe there was a timing thing, so I waited 30 seconds and it would still fail the first ping; after about 45 seconds it failed sometimes and didn't fail other times.

What bugs me is I cannot make it remain in a failing state like Bidule0hm can, well ever since I deleted all my jails (I had an Ubuntu jail running) and started from scratch. I wanted to eliminate any outside influences to the problem.

So my results right now on a clean system running FreeNAS-9.3-STABLE-201501241715 shows that either the first network request fails and subsequent ones work or if you wait long enough (a minute or so) that the first request works (on my system). Last night when I tested this out on my system, I was able to create the failure on my bare metal machine but as I mentioned above, I had an Ubuntu jail running and it doesn't play well with others.

So I'm not sure what to make of Bidule0hm's issues. If he can reproduce this issue faithfully, and it is FreeNAS running on bare metal, I can't explain it.

I will play with it some more but I think it's up to Bidule0hm to provide data until I can figure out how to make it fail permanently.

Getting back to the VM side of things... I don't recall this issue existing before for any of my VMs, and I run them all with similar configurations when testing, but VMWare Workstation 10 doesn't have promiscuous mode, so I have selected Bridged mode. I do have ESXi that I am just starting to dabble in and loaded FreeNAS on it today; I figured out the promiscuous mode thing and it works as you explained. I can accept that this may be behavior from the way a VM operates, but right now I'm not 100% sold that this wasn't recently introduced, maybe intentionally, which might be fine if this is the expected outcome.

As for FreeNAS not being made to run on a VM platform, maybe I don't follow, but FreeNAS and TrueNAS are very close to the same thing and FreeNAS does have built-in support code for VMWare. I fully understand that iXsystems does not formally support FreeNAS running in a virtual environment, but it doesn't sound reasonable to imply (I am generalizing here) that when FreeNAS is running on a VM, the VM is the likely cause. I don't want to argue or debate it; this is just my opinion, and if I'm wrong and iXsystems doesn't include VMWare support code in FreeNAS, then just say so and I'll be wiser for it.

Hope some of what I typed is helpful but maybe not.

#15 Updated by Bidule0hm _ over 5 years ago

Note that before I created my jail I completely deleted the jails dataset and then recreated a new one, so it can't be a residual thing from my old jail. I also have only one jail (minus the test jail now), so it can't be a multi-jail interaction.

#16 Updated by John Hixson over 5 years ago

Joe Schmuck wrote:

[...]

Now the first time I try to ping something it fails, but the second time I try, it works. To repeat the test, just stop and start the jail. I thought maybe there was a timing thing, so I waited 30 seconds and it would still fail the first ping; after about 45 seconds it failed sometimes and didn't fail other times.

This sounds ARP-ish. Can you try to insert the jail's MAC address into the system ARP cache before doing this again?

What bugs me is I cannot make it remain in a failing state like Bidule0hm can, well ever since I deleted all my jails (I had an Ubuntu jail running) and started from scratch. I wanted to eliminate any outside influences to the problem.

So my results right now on a clean system running FreeNAS-9.3-STABLE-201501241715 shows that either the first network request fails and subsequent ones work or if you wait long enough (a minute or so) that the first request works (on my system). Last night when I tested this out on my system, I was able to create the failure on my bare metal machine but as I mentioned above, I had an Ubuntu jail running and it doesn't play well with others.

This convinces me even more that this is an ARP issue. I just don't know why you (and the others here) are having that issue, since it's not common. The warden inserts the jail's MAC address into the system ARP cache on jail startup, so perhaps there is an issue there. Either way, I'm interested in the results of manually inserting it first, then trying to ping from within your jail afterwards.
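The manual ARP insertion being suggested can be sketched as follows. This is a sketch only: the address and MAC are the ones reported for the VNET test jail by `warden list -v` earlier in this ticket, so substitute the ipv4/mac fields of your own jail. The script just builds and prints the arp(8) command rather than running it, since it needs root on the FreeNAS host:

```shell
# Build the arp(8) command that would pre-seed the host's ARP cache
# with the jail's IP/MAC pair before the jail's first outbound packet.
# Example values taken from the "warden list -v" output in this ticket.
JAIL_IP="192.168.0.62"
JAIL_MAC="02:e8:8d:00:08:0b"

CMD="arp -s ${JAIL_IP} ${JAIL_MAC}"
echo "${CMD}"
```

Run the printed command as root on the FreeNAS host, then retry the first ping from inside the jail; `arp -an | grep 192.168.0.62` confirms the static entry is present, and `arp -d 192.168.0.62` removes it again afterwards.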

[...]

Getting back to the VM side of things... I don't recall this issue existing before for any of my VMs, and I run them all with similar configurations when testing, but VMWare Workstation 10 doesn't have promiscuous mode, so I have selected Bridged mode. I do have ESXi that I am just starting to dabble in and loaded FreeNAS on it today; I figured out the promiscuous mode thing and it works as you explained. I can accept that this may be behavior from the way a VM operates, but right now I'm not 100% sold that this wasn't recently introduced, maybe intentionally, which might be fine if this is the expected outcome.

According to Google, VMWare Workstation 10 does indeed have promiscuous mode. It would be silly for it not to have it.

[...]

#17 Updated by Joe Schmuck over 5 years ago

John Hixson wrote:

According to Google, VMWare Workstation 10 does indeed have promiscuous mode. It would be silly for it not to have it.

Not sure what you were reading, but VMWare Workstation 10 does not have promiscuous mode. There is a workaround to make it work, but it's not a built-in setting, although I wish it were. ESXi does have the setting built in.

My host is Windows 7 Pro for VMWare Workstation; it's a different bird than, say, a Linux host, where there is a simple workaround. The work required to change my system (firewall rules, antivirus changes, etc.) is too much for me to rearrange my entire system and then have to put it back and pray it works again. The best I am willing to do is place it in Bridge mode, which isn't the same, of course.

Bidule0hm should be able to provide consistent data since the failure is persistent on his system, and he will hopefully test out the ARP theory. If I get some time I will try some of this out on my metal machine; no more VM for this issue.

It's Super Bowl Sunday so lots to do around the house before the big game. My team didn't make it, but my wife and I picked opposite teams in the Super Bowl. Funny thing is, when I lived in the Seattle area I was a Seahawks fan, and when I lived in Massachusetts I was a Patriots fan (the military moved me a lot). Where I live now, I am not a Redskins fan.

#18 Updated by Jordan Hubbard over 5 years ago

  • Status changed from 15 to Investigation

BRB: Joe, is it possible that John could have a TeamViewer session with you or something, to observe the problem happening on your (live, metal) box?

#19 Updated by Joe Schmuck over 5 years ago

Jordan, I'm fine with using TeamViewer; however, as I posted above, the problem doesn't hang around long, like 30 seconds or so on my machine, but it was repeatable a good 80% of the time. Bidule0hm supposedly has the issue 100% of the time and it doesn't disappear after 30 seconds like on my machine, so I would honestly think his would be a better machine to test on. Again, I'm open to the idea. I haven't updated my NAS software since this occurred, and this is my operational NAS, not the test rig; I'm sure John will be gentle with me :) if he wants to go this route.

Also this piece of info, looks like if you wait long enough at times it will start working (for me):

root@Dumpy:/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=8 ttl=57 time=16.759 ms
64 bytes from 8.8.8.8: icmp_seq=9 ttl=57 time=14.421 ms
64 bytes from 8.8.8.8: icmp_seq=10 ttl=57 time=13.735 ms
64 bytes from 8.8.8.8: icmp_seq=11 ttl=57 time=12.689 ms
64 bytes from 8.8.8.8: icmp_seq=12 ttl=57 time=18.797 ms
64 bytes from 8.8.8.8: icmp_seq=13 ttl=57 time=18.626 ms
64 bytes from 8.8.8.8: icmp_seq=14 ttl=57 time=10.116 ms
64 bytes from 8.8.8.8: icmp_seq=15 ttl=57 time=13.629 ms
^C
--- 8.8.8.8 ping statistics ---
16 packets transmitted, 8 packets received, 50.0% packet loss
round-trip min/avg/max/stddev = 10.116/14.847/18.797/2.817 ms

To recreate it I'm having to stop the jail, edit the jail configuration to remove the VIMAGE check, save, edit the jail again, this time selecting VIMAGE, start the jail, and in the jail run "ping 8.8.8.8". This method works more consistently.

If Bidule0hm isn't a good candidate, please let me know. The only thing I didn't like is that some people may need to uncheck VIMAGE on a metal machine, when I've never heard of an instance where this was a problem before, but that could be my own ignorance too. Just PM me on the forums and I'll eventually see the email message, and we can arrange a time if you desire.

#20 Updated by John Hixson over 5 years ago

  • Status changed from Investigation to 15

Joe Schmuck wrote:

[...]

Well, I'd like to look at both of your machines. Joe Schmuck, when would be a good time to do a TeamViewer session with you? Send me your info and availability and we can work something out. My email address is .

#21 Updated by Bidule0hm _ over 5 years ago

The thing is, it's not a test server; it's used by a few people pretty much all the time, so I can't do any risky/destructive tests outside the test jail, but I can provide the results of any "read-only" command, of course.

#22 Updated by Joe Schmuck over 5 years ago

I don't think John will do anything destructive, he's just looking for some data to chew on so he can figure the system out.

As for my availability, let me check to see if the problem still exists, I upgraded a few days ago. I'll post again in the next 2 days.

#23 Updated by Bidule0hm _ over 5 years ago

Yeah, I think the same, it's just that I prefer to tell exactly what I can and can't do before in order to avoid any false assumption ;)

#24 Updated by John Hixson over 5 years ago

Joe Schmuck wrote:

I don't think John will do anything destructive, he's just looking for some data to chew on so he can figure the system out.

As for my availability, let me check to see if the problem still exists, I upgraded a few days ago. I'll post again in the next 2 days.

I will not harm your system ;-) Does this problem still exist for you?

#25 Updated by John Hixson over 5 years ago

Bidule0hm _ wrote:

Yeah, I think the same, it's just that I prefer to tell exactly what I can and can't do before in order to avoid any false assumption ;)

I really need to see this in real time on a system where it's occurring. I'm unable to reproduce this so I can only guess at this point. I still think there is some ARP funniness going on though.

#26 Updated by Bidule0hm _ over 5 years ago

"Does this problem still exist for you?" Yes, I'm currently on FreeNAS-9.3-STABLE-201502142001

#27 Updated by Joe Schmuck over 5 years ago

I'm out of this game, sorry. I have updated to FreeNAS-9.3-STABLE-201502142001 and I can still create the issue; however, on my system it appears that if I wait about 30-45 seconds after starting the jail, I cannot reproduce the problem. If I am quick, with VIMAGE checked, I can get ping to fail, but after a few seconds it will start working. With VIMAGE unchecked, ping works each and every time without delay, no matter how fast I am. For me it just looks like, with VIMAGE checked, it takes a little more time for the jail to become fully operational, and I don't see this as a problem on my system.

I am not sure what is going on with Bidule0hm's system, as I cannot reproduce it where it stays for any length of time that I would be concerned about.

#28 Updated by Jordan Hubbard over 5 years ago

  • Status changed from 15 to Closed: Not To Be Fixed

Executive decision time: this bug is weird enough, and of too little mainstream impact, for us not to spend a lot of time investigating it when we have so many more obvious bugs to work on. NTBF.

#29 Updated by Kris Moore about 3 years ago

  • Target version changed from Unspecified to N/A
