Firefly Perimeter Cluster (vSRX) Setup on VMware ESX

As you might know, Firefly Perimeter (aka vSRX) is a virtual firewall that runs on VMware ESX and KVM, and an evaluation copy can be downloaded here. I believe it is a great way to test most functionality, for example an SRX cluster (High Availability), which is the topic of this post. Here I will show:

  • How to install two firefly instances and operate them in a cluster.
  • How to set up redundancy groups.

Installation of Firefly instances
First, download your evaluation copy. You must have a Juniper Networks account to download one. At the time of this writing, the current version is 12.1X46-D10.2. Once you have downloaded the OVA file to your disk, deploy it into your ESX server via File->Deploy OVF Template.

[Screenshot: Deploy OVF Template wizard]

Give a name, e.g. firefly00, for the first instance. Continue, and you can accept whatever the wizard suggests. Deploy the second instance (firefly01) the same way. Now you should have two firefly instances ready to be configured, as below:

[Screenshot: firefly_list]

Configuring Firefly instances
After deploying the instances, we must configure them for clustering. A default firefly instance requires 2GB RAM and two CPUs, and it comes with two ethernet interfaces, but we will need more for clustering, because:

  1. ge-0/0/0 is used for management interface (can’t be changed)
  2. ge-0/0/1 is used for control link (can’t be changed)
  3. ge-0/0/2 is going to be used for fabric link (this is configurable)

Note: Although the minimum memory requirement is 2GB, I use 1GB for my testing purposes. It also works, but it isn’t the recommended amount.

As three interfaces are reserved for these roles, we will add six more interfaces to the two already configured (a maximum of 10 interfaces can be added; check the release notes for this limitation).

[Screenshot: firefly_max_int]

Add Internal ESX vSwitch
We will need an internal vSwitch on the ESX platform for our HA (control and fabric) links. It doesn’t need any physical adapter. You can follow Configuration->Add Networking->Virtual Machine->Create a vSphere standard switch to add a new internal switch. You should have something like below after the addition:

[Screenshot: internal_vswitch]

In my case, virtual switch’s name is vSwitch5. Now add two normal interfaces with no VLAN assigned.

ha_control_fab_links

Both interfaces should be identical in their Port Group properties. Then we need to increase the MTU. To do this, in the vSwitch5 Properties window, click “vSwitch” under the Ports list and then click “Edit”. Set the MTU to 9000 as below and apply. (Ignore the warning about no physical adapter being assigned; we don’t need one for the HA interfaces.)

[Screenshot: mtu_setup]

Assign Cluster Interfaces
We have configured the internal vSwitch and the HA interfaces; now it is time to assign them to the instances.

[Screenshot: edit_settings_firefly]
Note: Adapter 3 will be fab0 for node0 and fab1 for node1

Below is a simple table showing how interfaces are assigned in ESX and Firefly:

  Network Adapter 1  ->  ge-0/0/0  (fxp0, management)
  Network Adapter 2  ->  ge-0/0/1  (fxp1, control link)
  Network Adapter 3  ->  ge-0/0/2  (fab0 on node0, fab1 on node1)
  Network Adapter 4  ->  ge-0/0/3
  Network Adapter 5  ->  ge-0/0/4
  ... and so on, up to Network Adapter 10 -> ge-0/0/9

Pretty intuitive:)

My management interface VLAN is vlan4000_MGT. This is the VLAN through which I will connect to my VMs. We assigned adapters 2 and 3 to the Control and FAB port groups on vSwitch5. Exactly the same port assignment must be done on the firefly01 VM too, as they will be in a cluster. Now it is time to boot the two instances.

[Screenshot: firefly_after_boot]

After booting both firefly VMs, you will see the Amnesiac screen. There isn’t any password yet, and you can log in with the root username. From here on, cluster configuration is the same as on any branch SRX. To configure the cluster smoothly, follow the steps below on both nodes.

firefly00 (node0)

Note: As I already have another Firefly cluster, I have chosen 2 as the cluster id.

firefly01 (node1)
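The CLI screenshots for this step are not included here, so, as a sketch, enabling clustering with cluster id 2 looks like this (run from operational mode; each node reboots after the command):

```
firefly00> set chassis cluster cluster-id 2 node 0 reboot
firefly01> set chassis cluster cluster-id 2 node 1 reboot
```

After both nodes come back up, they should form the cluster over the control link.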

Firefly Interface Configuration
At this point, you should have two firefly instances running, one showing {primary:node0} and the other {secondary:node1} on the prompt, but we still don’t have management connectivity. We will do the cluster groups configuration and then access the VMs via IP instead of the console:

firefly00 node0
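The configuration listing is missing from this copy; below is a minimal sketch of the node-specific groups configuration. The host names match the VM names, while the fxp0 addresses are placeholders for your own management subnet:

```
set groups node0 system host-name firefly00
set groups node0 interfaces fxp0 unit 0 family inet address 10.11.1.10/24
set groups node1 system host-name firefly01
set groups node1 interfaces fxp0 unit 0 family inet address 10.11.1.11/24
set apply-groups "${node}"
commit
```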

After commit, your config will also be synced to node1 automatically, which you will see on the CLI as well.

After this configuration, you should be able to reach your cluster nodes via their fxp0 interfaces. You don’t need any security policy for these interfaces to connect. As you can see, I could SSH from my management network to the firefly00 node.

Redundancy Group Configuration
The topology that I am trying to achieve is below. The host debian1 will reach the Internet via the Firefly cluster. Its gateway is 10.12.1.20 (reth1), which belongs to redundancy group 1. There is only one traffic redundancy group, and once it fails, the cluster should fail over to node1. As you can see in the topology, the second node’s interfaces start with ge-7/0/x once it is part of the cluster.

[Diagram: firefly_topology]

Chassis Cluster Config
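The configuration listing is missing here; a hedged sketch of a typical chassis cluster section, with RG0 for the routing engines and the single traffic redundancy group RG1 (the priority values are illustrative):

```
set chassis cluster reth-count 2
set chassis cluster redundancy-group 0 node 0 priority 100
set chassis cluster redundancy-group 0 node 1 priority 1
set chassis cluster redundancy-group 1 node 0 priority 100
set chassis cluster redundancy-group 1 node 1 priority 1
```

With these priorities, node0 is preferred for both redundancy groups.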

Redundant Interface Config
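The listing itself is not included; based on the topology above (reth1 is 10.12.1.20, facing debian1), a sketch would be along these lines. The node1 child interfaces and the reth0 address are assumptions for illustration:

```
set interfaces ge-0/0/3 gigether-options redundant-parent reth0
set interfaces ge-7/0/3 gigether-options redundant-parent reth0
set interfaces ge-0/0/4 gigether-options redundant-parent reth1
set interfaces ge-7/0/4 gigether-options redundant-parent reth1
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 family inet address 10.11.2.20/24
set interfaces reth1 redundant-ether-options redundancy-group 1
set interfaces reth1 unit 0 family inet address 10.12.1.20/24
```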

Note: Cluster’s default gateway is SRX100 device.

Security zone and Policy Config
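The configuration screenshot is missing; a minimal sketch that puts reth1 (the debian1 side) into the trust zone, reth0 into untrust, and permits outbound traffic (the policy name is made up):

```
set security zones security-zone trust interfaces reth1.0
set security zones security-zone untrust interfaces reth0.0
set security policies from-zone trust to-zone untrust policy allow-out match source-address any
set security policies from-zone trust to-zone untrust policy allow-out match destination-address any
set security policies from-zone trust to-zone untrust policy allow-out match application any
set security policies from-zone trust to-zone untrust policy allow-out then permit
```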

Check the Interfaces and cluster status
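The outputs that originally appeared here are screenshots; the checks themselves are run from operational mode:

```
show chassis cluster status
show chassis cluster interfaces
```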

Hmmm… there is something wrong. I don’t see the priorities for RG1. Why? Let’s check the cluster interfaces.

Aha… I forgot to configure the fabric links. I always forget to do something :) As you might remember from the beginning of the post, you can choose any interface you want for the fabric link, and I have chosen the ge-0/0/2 interfaces on both nodes.

Configure Fabric Link
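As explained above, ge-0/0/2 was chosen on both nodes (node1’s copy shows up as ge-7/0/2 in the cluster), so the fabric configuration would be:

```
set interfaces fab0 fabric-options member-interfaces ge-0/0/2
set interfaces fab1 fabric-options member-interfaces ge-7/0/2
```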

Check the cluster status again

Yes, now we have fabric links configured and UP.

Now let’s do a ping test from the debian1 host to the cluster’s reth1 IP.

Hmm, something isn’t working. What have I forgotten? I must tell you that the number one mistake I make when working with ESX is port assignment. Since I haven’t assigned the interfaces to their respective VLAN port groups, ping doesn’t work. Let’s assign them.

[Screenshot: reth_interfaces]

As you can see I assigned Adapter 4 (ge-0/0/3) to vlan2001 and Adapter 5 (ge-0/0/4) to vlan2002. These are child links of reth0 and reth1 respectively.

Let’s try ping once again:

Heyyyy, it works!

Now ping 8.8.8.8 and see what the session table looks like.

Hmm, sessions are flowing through node1, which isn’t what I wanted. Let’s fail over RG1 to node0.
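The manual failover is done with the following operational command:

```
request chassis cluster failover redundancy-group 1 node 0
```

Note that a manual failover pins the target node’s priority at 255 until you clear it with “request chassis cluster failover reset redundancy-group 1”.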

After the failover, we can see that packets are flowing through node0.

In this post, I wanted to show how the virtual Firefly SRX can be used on an ESX host. I haven’t configured interface monitoring, as there doesn’t seem to be any point in monitoring a virtual port which should be UP at all times (or maybe there is a point, but I don’t know it). You can also use IP monitoring to leverage the cluster functionality. I hope this post helps you get up to speed on firefly quickly. If you have any questions or anything to contribute to this post, don’t hesitate!

Have a nice fireflying!

Genco.

73 thoughts on “Firefly Perimeter Cluster (vSRX) Setup on VMware ESX”

  1. red1

    thanks a lot, nice post! Just one correction: the VM can support up to 10 interfaces, so in the firefly VM we can have from ge-0/0/0 to ge-0/0/9

    Reply
    1. rtoodtoo Post author

      Thanks for the feedback Red. I recalled that it was 8, but the release notes say it is 10 vNICs. I have updated the post with the release notes link too.

      Reply
  2. dennis

    Hi, I can’t seem to see any ge-0/0/x after enabling cluster mode. Is there something I am missing in ESXi 5.1?
    I only have fxp0 and fxp1. :(
    Cheers,
    Dennis

    Reply
    1. siteadmin

      Hi Dennis,
      – Do you have 12.1X46-D10.2 release?
      – You have ESX 5.1 without any update? It should be default 5.1 as far as I recall.
      – Have you followed the guide exactly? Because I don’t recall having seen such an issue.

      thanks.

      Reply
      1. Dennis

        I re-downloaded the image and tested again. Same version, and the same thing again: I can’t see any of the ge-x/x/x interfaces. Are you able to see any ge-x/x/x interfaces when you are in cluster mode?

        Reply
        1. rtoodtoo Post author

          The only time I had missing ge-x interfaces was when I played with the HW configuration, especially the CPU config of the VM. You shouldn’t change any settings if you have done so. Apart from that, I don’t recall anything. I am interested to look into this, actually. I will try to check on my ESX server as well and send you my details to compare.

          Reply
          1. Dennis

            Nope, I didn’t change any other settings apart from adding more VM network adapters to it. I am wondering if it has anything to do with the 60-day trial, since I got the file during January. I am now downloading the file again with another account to see if that makes a difference. It will be very interesting to find out what/why it happened. Let me bring up the new VM and I will copy the screen output and paste it here. :)

          2. ganganath illesinghe

            I think it’s due to a memory limitation. Try increasing the memory by at least 512MB.

  3. Malik

    I tried in VMWare workstation 10 and it is working fine…. all are working.

    root@srx0# run show chassis cluster interfaces
    Control link status: Up

    Control interfaces:
        Index   Interface   Status   Internal-SA
        0       fxp1        Up       Disabled

    Fabric link status: Up

    Fabric interfaces:
        Name   Child-interface   Status
                                 (Physical/Monitored)
        fab0   ge-0/0/2          Up / Up
        fab0
        fab1   ge-7/0/2          Up / Up
        fab1

    Redundant-ethernet Information:
        Name    Status   Redundancy-group
        reth0   Down     1
        reth1   Down     1

    Redundant-pseudo-interface Information:
        Name    Status   Redundancy-group
        lo0     Up       0

    Reply
      1. rtoodtoo Post author

        Wirat, VMware Workstation isn’t officially supported, and I haven’t really used Workstation. I have experience only with ESX.

        Reply
  4. Dennis

    I downloaded the latest image, 12.1X46-D20.5, which works perfectly. I guess the earlier version may have bugs and not be supported yet.

    Reply
    1. Tchek14

      Hello Malik,

      Can you explain the VMware Workstation configuration for a cluster please? When I create the cluster, all of my ge interfaces disappear and the nodes are in hold status.

      Reply
  5. Tchek14

    Hello,

    Is it possible to create cluster on vmware workstation 10 ? If yes How can I do it please ?

    Reply
      1. Dennis

        It works on vm player for clustering and vmware workstation would work as well but didnt try clustering. Fusion works without clustering though.

        Reply
  6. Dennis

    Bad news… It didn’t work on my Windows 8 machine… must be some drivers causing it not to work properly… :(

    Reply
    1. Dennis

      Finally, I found the problem!!!!! Duh… it was my own mistake that caused my clustering not to work.
      Now, I can confirm that it will work with any VMware software, including VMware Fusion.

      The solution is…. ” DELETE ” the default configuration when you first boot up the SRX even if you ran ” set chassis cluster cluster-id 1 node X reboot “.

      You have to remove the default config file by

      1) edit
      2) delete
      3) set system root-authentication plain-text-password
      4) enter ya password
      5) re-enter ya password
      6) commit and-quit
      7) request system reboot

      Repeat that on the other box.

      To confirm it works,
      1) request chassis cluster status
      2) request chassis cluster interface

      it should tell you if there’s anything wrong with the clustering.

      Lastly, make sure your network adaptor is set exactly the same ie. bridging or customised adaptor.

      Cheers,
      Dennis

      Reply
        1. Dennis

          Hi HelpSeeker,

          Depends on how many network adapters you assigned to your firefly from VMWare workstation. It’d be ge-0/0/X.

          Reply
  7. JD

    Hi,

    Trying to create a cluster of Firefly on ESX 5.1 u1; I saw in the release notes that u2 was not working, but there is no mention of u1 …

    I removed all the configration related to interfaces , do the set chassis cluster .

    Reboot, and I can see that the traffic on the control link is OK, but I don’t see the interfaces of node1, and when I try to commit the configuration I get a “connection refused” on node 1 …

    I’m getting mad …

    Reply
    1. rtoodtoo Post author

      Hi JD,
      I don’t see that u1 is mentioned anywhere. I must say that I haven’t tested with u1.
      Don’t you see anything in the logs related to chassis clustering? You may also wait a bit. I believe the new release will have the fix for u2,
      and if there is any issue with u1, it will probably have been fixed too.

      Genco.

      Reply
    1. rtoodtoo Post author

      That KB is about vGW, under its new name Firefly Host, which doesn’t actually run JUNOS. 5.5 isn’t in the supported list yet.

      Reply
  8. Bustami

    hi all, any news if there is any new version of Firefly that supports ESXi 5.5??

    I already have firefly with ESXi 5.0; however, it is only in evaluation mode and it is expiring in the coming 2 days. I am going to lose everything I’ve done so far :(

    any ideas?

    Reply
    1. Dennis

      Make sure you delete all default configurations before configuring it in a cluster mode.
      Otherwise, your control link will always be down. Hope that helps.

      Reply
  9. Claudio Magagnotti

    Hello,

    I have configured the cluster, but suddenly all the interfaces of node 1 disappeared.

    I mean, the interfaces on node 0 were ge-0/0/0, ge-0/0/1, ge-0/0/2, etc., and node 1 had ge-7/0/1, ge-7/0/2, etc., but right now I can only see the node 0 interfaces (ge-0/0/0, ge-0/0/1, ge-0/0/2, etc.).

    Can you help me?

    root@fw-01# run show interfaces terse
    Interface Admin Link Proto Local Remote
    gr-0/0/0 up up
    ip-0/0/0 up up
    ge-0/0/2 up up
    ge-0/0/2.0 up up aenet –> reth1.0
    ge-0/0/3 up up
    ge-0/0/3.0 up up aenet –> fab0.0
    ge-0/0/4 up up
    ge-0/0/5 up up
    ge-0/0/6 up up
    ge-0/0/7 up up
    ge-0/0/7.0 up up aenet –> reth0.0
    dsc up up
    fab0 up up
    fab0.0 up up inet 30.33.0.200/24
    fxp0 up up
    fxp0.0 up up inet 100.100.100.203/24
    fxp1 up up
    fxp1.0 up up inet 129.32.0.1/2
    tnp 0x1200001
    gre up up
    ipip up up
    irb up up
    lo0 up up
    lo0.16384 up up inet 127.0.0.1 –> 0/0
    lo0.16385 up up inet 10.0.0.1 –> 0/0
    10.0.0.16 –> 0/0
    128.0.0.1 –> 0/0
    128.0.0.4 –> 0/0
    128.0.1.16 –> 0/0
    lo0.32768 up up
    lsi up up
    mtun up up
    pimd up up
    pime up up
    pp0 up up
    ppd0 up up
    ppe0 up up
    reth0 up up
    reth0.0 up up inet 192.168.88.26/24
    reth1 up up
    reth1.0 up up inet 172.19.1.1/24
    st0 up up
    tap up up

    {primary:node0}[edit]

    Here is the configuration:

    root@fw-01# show
    ## Last changed: 2014-11-17 20:04:32 UTC
    version 12.1X47-D10.4;
    groups {
        node0 {
            system {
                host-name fw-01;
            }
            interfaces {
                fxp0 {
                    unit 0 {
                        family inet {
                            address 100.100.100.203/24;
                        }
                    }
                }
            }
        }
        node1 {
            system {
                host-name fw-02;
            }
            interfaces {
                fxp0 {
                    unit 0 {
                        family inet {
                            address 100.100.100.204/24;
                        }
                    }
                }
            }
        }
    }
    apply-groups "${node}";
    system {
        root-authentication {
            encrypted-password "$1$GF8gmp1s$iGUBPDdBiDt0VO7hVyqSJ0"; ## SECRET-DATA
        }
        services {
            ssh;
            web-management {
                http {
                    interface ge-0/0/0.0;
                }
            }
        }
        syslog {
            user * {
                any emergency;
            }
            file messages {
                any any;
                authorization info;
            }
            file interactive-commands {
                interactive-commands any;
            }
        }
        license {
            autoupdate {
                url https://ae1.juniper.net/junos/key_retrieval;
            }
        }
    }
    chassis {
        cluster {
            reth-count 2;
            redundancy-group 0 {
                node 0 priority 100;
                node 1 priority 99;
            }
            redundancy-group 1 {
                node 0 priority 100;
                node 1 priority 99;
            }
        }
    }
    interfaces {
        ge-0/0/2 {
            gigether-options {
                redundant-parent reth1;
            }
        }
        ge-0/0/7 {
            gigether-options {
                redundant-parent reth0;
            }
        }
        ge-7/0/2 {
            gigether-options {
                redundant-parent reth1;
            }
        }
        ge-7/0/7 {
            gigether-options {
                redundant-parent reth0;
            }
        }
        fab0 {
            fabric-options {
                member-interfaces {
                    ge-0/0/3;
                }
            }
        }
        fab1 {
            fabric-options {
                member-interfaces {
                    ge-7/0/3;
                }
            }
        }
        reth0 {
            redundant-ether-options {
                redundancy-group 1;
            }
            unit 0 {
                family inet {
                    address 192.168.88.26/24;
                }
            }
        }
        reth1 {
            redundant-ether-options {
                redundancy-group 1;
            }
            unit 0 {
                family inet {
                    address 172.19.1.1/24;
                }
            }
        }
    }
    routing-options {
        static {
            route 0.0.0.0/0 next-hop 192.168.88.1;
        }
    }
    security {
        policies {
            from-zone trust to-zone untrust {
                policy allow-all-internal {
                    match {
                        source-address any;
                        destination-address any;
                        application any;
                    }
                    then {
                        permit;
                    }
                }
            }
        }
        zones {
            security-zone untrust {
                host-inbound-traffic {
                    system-services {
                        all;
                    }
                }
                interfaces {
                    reth0.0;
                }
            }
            security-zone trust {
                host-inbound-traffic {
                    system-services {
                        all;
                    }
                }
                interfaces {
                    reth1.0;
                }
            }
        }
    }

    Reply
  10. Jaime

    Hello,

    I am trying to deploy vSRX clustering in KVM but am facing some issues. I was wondering if you have given it a try.
    According to Juniper, the new version 12.1X46-D20.5 can do clustering when using the virtio driver.

    I have translated the steps of your tutorial to KVM, but the instances don’t see each other:

    Created 3 virtual networks through virt-manager: one for management, one for HA (control and fabric) and a third for reths.

    Set the MTU to 9000 on the corresponding virbr-nic and virbr.
    Set the brctl ageing to 1.

    However, when I set up the cluster on both nodes and reboot, neither of them sees the other (i.e. both appear as primary):

    “show chassis cluster status” lists the other node as ‘lost’
    “show chassis cluster statistics” shows ‘failed to connect to the server after 1 retries…’

    Any help will be really appreciated as I am working 100% in KVM.

    Thanks,

    Reply
    1. rtoodtoo Post author

      Hi Jaime,
      I must tell you that I haven’t tested clustering on KVM yet. I would like to but haven’t done it yet. X47 is also available. Have you tried with that?

      Genco.

      Reply
  11. Ramakrishnan N

    Hello,

    Can we use one single Cluster and fabric network for multiple Junos Clusters???

    ie: HA Control, HA Fabic for Cluster 1 , 2 , 3 ……

    or do i need to have a separate Control and fabric network for Each Cluster.

    Regards,
    Ramakrishnan N

    Reply
  12. rtoodtoo Post author

    Rama, check this KB: http://kb.juniper.net/KB15911. You need separate connections for data & control links; you aren’t supposed to mix different clusters on a single data/control link. In my lab environment I sometimes do it, but I reckon it shouldn’t be done in a real network.

    Reply
  13. Amandeep Singh

    Excellent article..

    Facing 2 issues:

    1. Getting frequent syslog errors like:

    Message from syslogd@SRX-VM at Jan 22 21:46:14 …
    SRX-VM SRX-VM Frame 4: sp = 0x5835bf10, pc = 0x806e715

    Message from syslogd@SRX-VM at Jan 22 21:46:15 …
    SRX-VM SRX-VM Frame 5: sp = 0x5835bf50, pc = 0x805a690

    2. Devices hang at reboot/shutdown with these line & nothing happens:

    syncing disks… All buffers synced.
    Uptime: 2h9m4s
    Normal Shutdown (no dump device defined)

    Reply
      1. rtoodtoo Post author

        Amandeep,
        I don’t recall having seen those messages on my VMs, except maybe after boot-ups, but you are using a laptop, on which you may expect unusual behavior, as the product has been tested on ESX and CentOS KVM only.

        Genco.

        Reply
  14. Russ

    Thanks for a good article. One thing i noticed -The interface monitoring you mentioned, is required for redundancy groups if you want them to fail over once an issue is discovered eg a dead interface/nexthop.

    Reply
    1. rtoodtoo Post author

      Interface monitoring makes sense if you have a hardware appliance, but in ESX, where the port is actually in software, it isn’t supposed to go down, I believe, as long as you don’t disconnect it manually or hit some software issue. IP monitoring is probably a better alternative.

      Reply
  15. kunto

    Great post and thanks for sharing.

    Sadly that Firefly Perimeter does not support LAG and ethernet switching..

    Reply
  16. Chris

    Thank you for your information. I also built an ESXi lab with vSRX. It’s a great guide.
    But when I try “vrrp”, it seems not to work. Can you try this function to see whether it works or not?

    vSRX supports vrrp, but I can’t use this protocol successfully.

    The version I use is 12.1X47-D20.7, and I disabled the security functions, just using packet mode to try vrrp.

    If you can reply this question, I will appreciate that, thank you.

    Reply
    1. rtoodtoo Post author

      I would recommend you check the release notes of 12.1X46. There is one issue mentioned there for VRRP. Not sure whether this is what you are looking for or not.

      Reply
  17. Gregory

    I have everything setup but have a couple questions.
    1. From my internal machine I cannot ping 8.8.8.8, but from node0 I can ping 8.8.8.8 and I can ping the internal machine. Is this a policy or a NAT problem, do you think?
    2. Since this is virtual, is there a way to install the VMware tools for better performance?

    Reply
    1. Leke Oluwatosin (Laykay)

      1. Can we have a config snippet? Make sure you have your source NAT configured, as well as a security policy to permit the traffic. I used this to prepare for and pass my JNCIE-SEC.
      2. VMware tools really don’t matter to me, because you can console to the devices through the serial port of the virtual machine.

      Reply
      1. Gregory

        Would it be best to post the whole config, or just the output of “show | display set”? The latter seems to be shorter.

        Reply
          1. Gregory

            Doing that command showed me I had not set my internal interface to the internal zone. Once I did that, all is working. Now the better test: delete and restart to make sure I can get things working again. Great article. NO WAY I would ever have got this working without it. Thanks.
            If I run into further problems I will be back. Thanks Leke and rtoodtoo.

    2. rtoodtoo Post author

      Gregory, as Leke said, it can be either NAT or the security policy, or, if you chose an IP not assigned to an interface, proxy-arp might be missing.
      On the second point, I also agree with Leke.

      Reply
  18. Simon

    I notice your CLI outputs are nicely coloured for clarity. Did your SSH client do this, or did you do it by hand? If the SSH client does it, what are you using? It looks very neat and easy to read.
    I got my vSRX cluster working nicely; this has enabled me to be sure of the procedure for swapping out a real live cluster member. Thanks for your time, Simon

    Reply
    1. rtoodtoo Post author

      Simon,
      I think it is done by the WordPress plugin “Crayon Syntax Highlighter”. I am not doing anything special other than tagging the outputs with the plugin’s tag.

      Reply
  19. ausafali88

    Hello guys..
    Does anyone know why ping isn’t working from a Windows client directly connected to a reth sub-interface? It just writes “Destination host unreachable”…
    But when it is connected to a physical interface, it pings.
    All system services are allowed, and promiscuous mode is set to accept on the ESXi vSwitch, but still no success…

    Moreover, arp does not show any entry for those hosts.

    These clients cannot ping reth1 as their default gateway.

    For the lab topology…please see the link below:

    https://mega.nz/#!jlVWAIaD!u6xYqF0OSeDLMHoa0mLRHyezLT5rbamr7ns62jiZB6w

    Waiting eagerly for your kind consideration.

    Reply
  20. ausafali88

    Dear members,

    Instead of having an EX switch between the cluster and the Debian host, I am just using a vSwitch from ESXi. Please help me urgently, because I cannot ping the reth interface from a Windows host.

    Waiting eagerly for your kind consideration.

    Regards.

    Reply
    1. Leke Oluwatosin (Laykay)

      Since you’re using a vSwitch, which port groups does your reth belong to? What interface type are you using on the vSRX VM? Is it e1000 or vmxnet?
      Can you post your configs from the vSRX and some screenshots of ESXi?

      Reply
    2. simon bingham

      When VMs like this don’t work, try this: in your vSwitch properties, enable promiscuous mode and also enable forged transmits for good measure.
      Also, is your reth interface in a security zone? Does that security zone permit ICMP/ping in?
      To check basic connectivity, check the ARP cache of the host. If you have an IP address and a MAC for the SRX, then the vSwitch is not the issue and it’s something on the SRX. Most of the time, people have not put an interface into a zone at all.

      Reply
  21. Vivek

    In order to configure this on VMware Workstation, simply add the interfaces in Bridged mode and otherwise follow this post; the cluster will come up. Kudos rtoodtoo for writing this tutorial!!

    Reply
  22. Simon Bingham

    Although not relating exactly to clustering: I’ve downloaded the latest vSRX, 15.1X49-D15.4, and no interfaces other than ge-0/0/0 and ge-0/0/1 come up.
    I wonder if this is a new restriction, consistent with this VM’s intended use of providing security inside a virtual environment, or am I missing something? I’ve added 7 interfaces in VMware and rebooted. Does anyone have any thoughts?

    root> show configuration interfaces
    ge-0/0/0 {
        unit 0 {
            family inet;
        }
    }
    ge-0/0/1 {
        unit 0 {
            family inet;
        }
    }
    ge-0/0/2 {
        unit 0 {
            family inet {
                address 1.2.3.4/32;
            }
        }
    }
    ge-0/0/3 {
        unit 0 {
            family inet {
                address 65.4.2.2/32;
            }
        }
    }
    ge-0/0/4 {
        unit 0 {
            family inet;
        }
    }

    ge-0/0/0         up    up
    ge-0/0/0.0       up    up    inet
    gr-0/0/0         up    up
    ip-0/0/0         up    up
    lsq-0/0/0        up    up
    lt-0/0/0         up    up
    mt-0/0/0         up    up
    sp-0/0/0         up    up
    sp-0/0/0.0       up    up    inet
                                 inet6
    sp-0/0/0.16383   up    up    inet
    ge-0/0/1         up    up
    ge-0/0/1.0       up    up    inet
    dsc              up    up

    Reply
    1. Simon Bingham

      I was able to answer my own question: when adding extra interfaces, they must be of the VMXNET3 type; then the vSRX can see them.

      Reply
  23. Simon Bingham

    Has anyone got the latest vSRX working with family ethernet-switching? It’s in the CLI, but when you try to commit, it fails with a message as below.

    [edit interfaces ge-0/0/1]
    root# show
    unit 0 {
        family ethernet-switching;
    }

    [edit interfaces ge-0/0/1]
    root# commit check
    [edit interfaces ge-0/0/1 unit 0]
      'family'
        family 'ethernet-switching' only valid in enhanced-switching mode
    warning: Interfaces are changed from route mode to mix mode. Please request system reboot hypervisor or all nodes in the HA cluster!
    error: configuration check-out failed

    [edit interfaces ge-0/0/1]

    Reply
