Defender for IoT installation

This article describes how to install the following elements of Azure Defender for IoT:

  • Sensor: Defender for IoT sensors collect ICS network traffic by using passive (agentless) monitoring. Passive and nonintrusive, the sensors have zero impact on OT and IoT networks and devices. The sensor connects to a SPAN port or network TAP and immediately begins monitoring your network. Detections appear in the sensor console, where you can view, investigate, and analyze them in a network map, device inventory, and an extensive range of reports. Examples include risk assessment reports, data mining queries, and attack vectors. Read more about sensor capabilities in the Defender for IoT Sensor User Guide (direct download).

  • On-premises management console: The on-premises management console lets you carry out device management, risk management, and vulnerability management. You can also use it to carry out threat monitoring and incident response across your enterprise. It provides a unified view of all network devices, key IoT and OT risk indicators, and alerts detected in facilities where sensors are deployed. Use the on-premises management console to view and manage sensors in air-gapped networks.

This article covers the following installation information:

  • Hardware: Dell and HPE physical appliance details.

  • Software: Sensor and on-premises management console software installation.

  • Virtual appliances: Virtual machine details and software installation.

After installation, connect your sensor to your network.

About Defender for IoT appliances

The following sections provide information about Defender for IoT sensor appliances and the appliance for the Defender for IoT on-premises management console.

Physical appliances

The Defender for IoT appliance sensor connects to a SPAN port or network TAP and immediately begins collecting ICS network traffic by using passive (agentless) monitoring. This process has zero impact on OT networks and devices because it isn't placed in the data path and doesn't actively scan OT devices.

The following rack mount appliances are available:

  • Corporate: HPE ProLiant DL360. Monitoring ports: up to 15 RJ45 or 8 OPT. Maximum bandwidth*: 3 Gb/sec. Maximum protected devices: 30,000.

  • Enterprise: Dell PowerEdge R340 XL. Monitoring ports: up to 9 RJ45 or 6 OPT. Maximum bandwidth*: 1 Gb/sec. Maximum protected devices: 10,000.

  • SMB: HPE ProLiant DL20. Monitoring ports: up to 8 RJ45 or 6 OPT. Maximum bandwidth*: 1 Gb/sec. Maximum protected devices: 15,000.

  • Line: HPE ProLiant DL20. Monitoring ports: 4 RJ45. Maximum bandwidth*: 100 Mb/sec. Maximum protected devices: 1,000.

*Maximum bandwidth capacity might vary depending on protocol distribution.

Virtual appliances

The following virtual appliances are available:

  • Corporate: Virtual appliance for corporate deployments. Maximum bandwidth*: 2.5 Gb/sec. Maximum protected devices: 30,000.

  • Enterprise: Virtual appliance for enterprise deployments. Maximum bandwidth*: 800 Mb/sec. Maximum protected devices: 10,000.

  • SMB: Virtual appliance for SMB deployments. Maximum bandwidth*: 160 Mb/sec. Maximum protected devices: 2,500.

  • Line: Virtual appliance for line deployments. Maximum bandwidth*: 3 Mb/sec. Maximum protected devices: 100.

*Maximum bandwidth capacity might vary depending on protocol distribution.

Hardware specifications for the on-premises management console

  • Description: In a multi-tier architecture, the on-premises management console delivers visibility and control across geographically distributed sites. It integrates with SOC security stacks, including SIEMs, ticketing systems, next-generation firewalls, secure remote access platforms, and the Defender for IoT ICS malware sandbox.

  • Deployment type: Enterprise

  • Appliance type: Dell R340, VM

  • Number of managed sensors: Unlimited

Prepare for the installation

Access the ISO installation image

The installation image is accessible from the Defender for IoT portal.

To access the file:

  1. Sign in to your Defender for IoT account.

  2. Go to the Network sensor or On-premises management console page and select a version to download.

Install from DVD

Before the installation, ensure you have:

  • A portable DVD drive with a USB connector.

  • An ISO installer image.

To install:

  1. Burn the image to a DVD or prepare a disk on a key. Connect a portable DVD drive to your computer, right-click the ISO image, and select Burn to disk.

  2. Connect the DVD or disk on a key and configure the appliance to boot from DVD or disk on a key.

Install from disk on a key

Before the installation, ensure you have:

  • Rufus installed.

  • A disk on a key with USB version 3.0 or later. The minimum size is 4 GB.

  • An ISO installer image file.

The disk on a key will be erased in this process.

To prepare a disk on a key:

  1. Run Rufus and select SENSOR ISO.

  2. Connect the disk on a key to the front panel.

  3. Set the BIOS of the server to boot from the USB.

Dell PowerEdge R340 XL installation

Before installing the software on the Dell appliance, you need to adjust the appliance's BIOS configuration.

Dell PowerEdge R340 XL requirements

To install the Dell PowerEdge R340 XL appliance, you need:

  • An Enterprise license for Dell Remote Access Controller (iDRAC)

  • BIOS configuration XML

  • Server firmware versions:

    • BIOS version 2.1.6

    • iDrac version 3.23.23.23

Dell PowerEdge R340 front panel


  1. Left control panel
  2. Optical drive (optional)
  3. Right control panel
  4. Information tag
  5. Drives

Dell PowerEdge R340 back panel


  1. Serial port
  2. NIC port (Gb 1)
  3. NIC port (Gb 1)
  4. Half-height PCIe
  5. Full-height PCIe expansion card slot
  6. Power supply unit 1
  7. Power supply unit 2
  8. System identification button
  9. System status indicator cable port (CMA)
  10. USB 3.0 port (2)
  11. iDRAC9 dedicated network port
  12. VGA port

Dell BIOS configuration

Dell BIOS configuration is required to adjust the Dell appliance to work with the software.

The BIOS configuration is performed through a predefined configuration. The file is accessible from the Help Center.

Import the configuration file to the Dell appliance. Before using the configuration file, establish communication between the Dell appliance and the management computer.

The Dell appliance is managed by an integrated iDRAC with Lifecycle Controller (LC). The LC is embedded in every Dell PowerEdge server and provides functionality that helps you deploy, update, monitor, and maintain your Dell PowerEdge appliances.

To establish communication between the Dell appliance and the management computer, define the iDRAC IP address and the management computer's IP address on the same subnet.

After the connection is established, you can configure the BIOS.
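As a quick sanity check, you can verify that the iDRAC address and the management computer's address share a /24 subnet, matching the 255.255.255.0 mask used throughout this article. The following POSIX shell sketch uses an illustrative helper name, `same_subnet_24`:

```shell
# Check that two IPv4 addresses fall in the same /24 subnet
# (255.255.255.0), as required for the iDRAC and the management computer.
same_subnet_24() {
  a=$(echo "$1" | cut -d. -f1-3)   # first three octets of address 1
  b=$(echo "$2" | cut -d. -f1-3)   # first three octets of address 2
  [ "$a" = "$b" ]
}

# The default addresses used later in this article:
same_subnet_24 10.100.100.250 10.100.100.200 && echo "same /24 subnet"
```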

To configure Dell BIOS:

  1. Configure the iDRAC IP address

  2. Import the BIOS configuration file

Configure iDRAC IP address

  1. Power up the sensor.

  2. If the OS is already installed, select the F2 key to enter the BIOS configuration.

  3. Select iDRAC Settings.

  4. Select Network.

    Note

    During the installation, you must configure the default iDRAC IP address and password mentioned in the following steps. After the installation, you can change these settings.

  5. Change the static IPv4 address to 10.100.100.250.

  6. Change the static subnet mask to 255.255.255.0.


  7. Select Back > Finish.

Import the BIOS configuration file

This section describes how to configure the BIOS by using the configuration file.

  1. Plug in a PC with a static preconfigured IP address 10.100.100.200 to the iDRAC port.


  2. Open a browser and enter 10.100.100.250 to connect to the iDRAC web interface.

  3. Sign in with Dell default administrator privileges:

    • Username: root

    • Password: calvin

  4. The appliance's credentials are:

    • Username: XXX

    • Password: XXX

      The import server profile operation is initiated.

      Note

      Before you import the file, make sure:

      • You're the only user who is currently connected to iDRAC.
      • The system is not in the BIOS menu.
  5. Go to Configuration > Server Configuration Profile. Set the following parameters:


    • Location Type: select Local.
    • File Path: select Choose File and add the configuration XML file.
    • Import Components: select BIOS, NIC, RAID.
    • Maximum wait time: select 20 minutes.
  6. Select Import.

  7. To monitor the process, go to Maintenance > Job Queue.


Manually configuring BIOS

You need to manually configure the appliance BIOS if:

  • You did not purchase your appliance from Arrow.

  • You have an appliance, but do not have access to the XML configuration file.

After you access the BIOS, go to Device Settings.

To manually configure:

  1. Access the appliance BIOS directly by using a keyboard and screen, or use iDRAC.

    • If the appliance is not a Defender for IoT appliance, open a browser and go to the IP address that was configured before. Sign in with the Dell default administrator privileges. Use root for the username and calvin for the password.

    • If the appliance is a Defender for IoT appliance, sign in by using XXX for the username and XXX for the password.

  2. After you access the BIOS, go to Device Settings.

  3. Choose the RAID-controlled configuration by selecting Integrated RAID controller 1: Dell PERC<PERC H330 Adapter> Configuration Utility.

  4. Select Configuration Management.

  5. Select Create Virtual Disk.

  6. In the Select RAID Level field, select RAID5. In the Virtual Disk Name field, enter ROOT and select Physical Disks.

  7. Select Check All and then select Apply Changes.

  8. Select Ok.

  9. Scroll down and select Create Virtual Disk.

  10. Select the Confirm check box and select Yes.

  11. Select OK.

  12. Return to the main screen and select System BIOS.

  13. Select Boot Settings.

  14. For the Boot Mode option, select BIOS.

  15. Select Back, and then select Finish to exit the BIOS settings.

Software installation (Dell R340)

The installation process takes about 20 minutes. After the installation, the system is restarted several times.

To install:

  1. Verify that the version media is mounted to the appliance in one of the following ways:

    • Connect the external CD or disk on a key with the release.

    • Mount the ISO image by using iDRAC. After signing in to iDRAC, select the virtual console, and then select Virtual Media.

  2. In the Map CD/DVD section, select Choose File.

  3. In the dialog box that opens, choose the ISO image file for this version.

  4. Select the Map Device button.


  5. The media is mounted. Select Close.

  6. Start the appliance. When you're using iDRAC, you can restart the server by selecting the Console Control button. Then, on the Keyboard Macros, select the Apply button to start the Ctrl+Alt+Delete sequence.

  7. Select English.

  8. Select SENSOR-RELEASE-<version> Enterprise.


  9. Define the appliance profile and network properties:

    • Hardware profile: enterprise
    • Management interface: eno1
    • Network parameters (provided by the customer): management network IP address, subnet mask, appliance hostname, DNS, default gateway IP address
    • Input interfaces: the system generates the list of input interfaces for you. To mirror the input interfaces, copy all the items presented in the list with a comma separator. You don't have to configure the bridge interface; this option is used for special use cases only.
  10. After about 10 minutes, two sets of credentials appear: one for a CyberX user and one for a Support user.

  11. Save the appliance ID and passwords. You'll need these credentials to access the platform the first time you use it.

  12. Select Enter to continue.

HPE ProLiant DL20 installation

This section describes the HPE ProLiant DL20 installation process, which includes the following steps:

  • Enable remote access and update the default administrator password.
  • Configure BIOS and RAID settings.
  • Install the software.

About the installation

  • Enterprise and SMB appliances can be installed. The installation process is identical for both appliance types, except for the array configuration.
  • A default administrative user is provided. We recommend that you change the password during the network configuration process.
  • During the network configuration process, you'll configure the iLO port on network port 1.
  • The installation process takes about 20 minutes. After the installation, the system is restarted several times.

HPE ProLiant DL20 front panel


HPE ProLiant DL20 back panel


Enable remote access and update the password

Use the following procedure to set up network options and update the default password.

To enable and update the password:

  1. Connect a screen and a keyboard to the HP appliance, turn on the appliance, and press F9.


  2. Go to System Utilities > System Configuration > iLO 5 Configuration Utility > Network Options.


    1. Select Shared Network Port-LOM from the Network Interface Adapter field.

    2. Disable DHCP.

    3. Enter the IP address, subnet mask, and gateway IP address.

  3. Select F10: Save.

  4. Select Esc to get back to the iLO 5 Configuration Utility, and then select User Management.

  5. Select Edit/Remove User. The administrator is the only default user defined.

  6. Change the default password and select F10: Save.

Configure the HPE BIOS

The following procedure describes how to configure the HPE BIOS for the enterprise and SMB appliances.

To configure the HPE BIOS:

  1. Select System Utilities > System Configuration > BIOS/Platform Configuration (RBSU).

  2. In the BIOS/Platform Configuration (RBSU) form, select Boot Options.

  3. Change Boot Mode to Legacy BIOS Mode, and then select F10: Save.

  4. Select Esc twice to close the System Configuration form.

For the enterprise appliance

  1. Select Embedded RAID 1: HPE Smart Array P408i-a SR Gen 10 > Array Configuration > Create Array.

  2. In the Create Array form, select all the options. Three options are available for the Enterprise appliance.

For the SMB appliance

  1. Select Embedded RAID 1: HPE Smart Array P208i-a SR Gen 10 > Array Configuration > Create Array.

  2. Select Proceed to Next Form.

  3. In the Set RAID Level form, set the level to RAID 5 for enterprise deployments and RAID 1 for SMB deployments.

  4. Select Proceed to Next Form.

  5. In the Logical Drive Label form, enter Logical Drive 1.

  6. Select Submit Changes.

  7. In the Submit form, select Back to Main Menu.

  8. Select F10: Save and then press Esc twice.

  9. In the System Utilities window, select One-Time Boot Menu.

  10. In the One-Time Boot Menu form, select Legacy BIOS One-Time Boot Menu.

  11. The Booting in Legacy and Boot Override windows appear. Choose a boot override option; for example, CD-ROM, USB, HDD, or UEFI shell.


Software installation (HPE ProLiant DL20 appliance)

The installation process takes about 20 minutes. After the installation, the system is restarted several times.

To install the software:

  1. Connect the screen and keyboard to the appliance, and then connect to the CLI.

  2. Connect an external CD or disk on a key with the ISO image that you downloaded from the Updates page in the Defender for IoT portal.

  3. Start the appliance.

  4. Select English.


  5. Select SENSOR-RELEASE- Enterprise.


  6. In the Installation Wizard, define the hardware profile and network properties:


    • Hardware profile: select Enterprise or SMB, depending on your deployment.
    • Management interface: eno2
    • Default network parameters (usually provided by the customer): management network IP address, appliance hostname, DNS, default gateway IP address
    • Input interfaces: the system generates the list of input interfaces for you. To mirror the input interfaces, copy all the items presented in the list with a comma separator: eno5, eno3, eno1, eno6, eno4. For the HPE DL20, don't list eno1 or enp1s0f4u4 (the iLO interfaces).
    • Bridge: there's no need to configure the bridge interface; this option is used for special use cases only. Press Enter to continue.
  7. After about 10 minutes, two sets of credentials appear: one for a CyberX user and one for a Support user.

  8. Save the appliance's ID and passwords. You'll need the credentials to access the platform for the first time.

  9. Select Enter to continue.
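The wizard in step 6 expects the input interfaces as one comma-separated list, with the iLO-reserved interfaces left out. This small POSIX shell helper (the function name is illustrative; the interface names come from the example above) builds that list for you:

```shell
# Join interface names with commas, as the installation wizard expects.
# Skip the iLO-reserved interfaces (eno1, enp1s0f4u4 on the HPE DL20).
join_interfaces() {
  out=""
  for name in "$@"; do
    case "$name" in
      eno1|enp1s0f4u4) continue ;;   # iLO interfaces: don't mirror these
    esac
    out="${out:+$out,}$name"
  done
  echo "$out"
}

join_interfaces eno5 eno3 eno1 eno6 eno4   # prints: eno5,eno3,eno6,eno4
```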

HPE ProLiant DL360 installation

  • A default administrative user is provided. We recommend that you change the password during the network configuration.

  • During the network configuration, you'll configure the iLO port.

  • The installation process takes about 20 minutes. After the installation, the system is restarted several times.

HPE ProLiant DL360 front panel


HPE ProLiant DL360 back panel


Enable remote access and update the password

Refer to the preceding sections for HPE ProLiant DL20 installation:

  • "Enable remote access and update the password"

  • "Configure the HPE BIOS"

The enterprise configuration is identical.

Note

In the array form, verify that you select all the options.

iLO remote installation (from a virtual drive)

This procedure describes the iLO installation from a virtual drive.

To install:

  1. Sign in to the iLO console, and then right-click the server's screen.

  2. Select HTML5 Console.

  3. In the console, select the CD icon, and choose the CD/DVD option.

  4. Select Local ISO file.

  5. In the dialog box, choose the relevant ISO file.

  6. Go to the left icon, select Power, and then select Reset.

  7. The appliance will restart and run the sensor installation process.

Software installation (HPE DL360)

The installation process takes about 20 minutes. After the installation, the system is restarted several times.

To install:

  1. Connect the screen and keyboard to the appliance, and then connect to the CLI.

  2. Connect an external CD or disk on a key with the ISO image that you downloaded from the Updates page in the Defender for IoT portal.

  3. Start the appliance.

  4. Select English.

  5. Select SENSOR-RELEASE- Enterprise.


  6. In the Installation Wizard, define the appliance profile and network properties:

    • Hardware profile: select corporate.
    • Management interface: eno2
    • Default network parameters (provided by the customer): management network IP address, subnet mask, appliance hostname, DNS, default gateway IP address
    • Input interfaces: the system generates a list of input interfaces for you. To mirror the input interfaces, copy all the items presented in the list with a comma separator. You don't need to configure the bridge interface; this option is used for special use cases only.
  7. After about 10 minutes, two sets of credentials appear: one for a CyberX user and one for a Support user.

  8. Save the appliance's ID and passwords. You'll need these credentials to access the platform for the first time.

  9. Select Enter to continue.

HP EdgeLine 300 installation

  • A default administrative user is provided. We recommend that you change the password during the network configuration.

  • The installation process takes about 20 minutes. After the installation, the system is restarted several times.

HP EdgeLine 300 back panel


Enable remote access

  1. Enter the iSM IP address in your web browser.

  2. Sign in using the default username and password found on your appliance.

  3. Navigate to Wired and Wireless Network > IPV4.

  4. Disable the DHCP toggle.

  5. Configure the IPv4 addresses as follows:

    • IPV4 Address: 192.168.1.125
    • IPV4 Subnet Mask: 255.255.255.0
    • IPV4 Gateway: 192.168.1.1
  6. Select Apply.

  7. Sign out and reboot the appliance.

Configure the BIOS

The following procedure describes how to configure the BIOS for the HP EL300 appliance.

To configure the BIOS:

  1. Turn on the appliance and press F9 to enter the BIOS.

  2. Select Advanced, and scroll down to CSM Support.

  3. Press Enter to enable CSM Support.

  4. Navigate to Storage and press +/- to change it to Legacy.

  5. Navigate to Video and press +/- to change it to Legacy.

  6. Navigate to Boot > Boot mode select.

  7. Press +/- to change it to Legacy.

  8. Navigate to Save & Exit.

  9. Select Save Changes and Exit.


  10. Select Yes, and the appliance will reboot.

  11. Press F11 to enter the Boot Menu.

  12. Select the device with the sensor image: either DVD or USB.

  13. Select your language.

  14. Select sensor-10.0.3.12-62a2a3f724 Office: 4 CPUS, 8 GB RAM, 100 GB STORAGE.


  15. In the Installation Wizard, define the appliance profile and network properties:

    • Hardware profile: office
    • Management network interface: enp3s0, or another possible value
    • Management network IP address, subnet mask, DNS, and default gateway IP address: provided by the customer
    • Input interface(s): enp4s0, or another possible value
    • Bridge interface(s): N/A
  16. Accept the settings and continue by entering Y.

Sensor installation for the virtual appliance

You can deploy the virtual machine for the Defender for IoT sensor in the following architectures:

  • Enterprise: 8 CPUs, 32-GB RAM, 1,800-GB HDD. For production environments; the default and most common choice.

  • Small Business: 4 CPUs, 8-GB RAM, 500-GB HDD. For test or small production environments.

  • Office: 4 CPUs, 8-GB RAM, 100-GB HDD. For small test environments.

Prerequisites

The Defender for IoT sensor supports both VMware and Hyper-V deployment options. Before you begin the installation, make sure you have the following items:

  • VMware (ESXi 5.5 or later) or Hyper-V hypervisor (Windows 10 Pro or Enterprise) installed and operational

  • Available hardware resources for the virtual machine

  • ISO installation file for the Azure Defender for IoT sensor

Make sure the hypervisor is running.

Create the virtual machine (ESXi)

  1. Sign in to the ESXi, choose the relevant datastore, and select Datastore Browser.

  2. Upload the image and select Close.

  3. Go to Virtual Machines, and then select Create/Register VM.

  4. Select Create new virtual machine, and then select Next.

  5. Add a sensor name and choose:

    • Compatibility: <latest ESXi version>

    • Guest OS family: Linux

    • Guest OS version: Ubuntu Linux (64-bit)

  6. Select Next.

  7. Choose the relevant datastore and select Next.

  8. Change the virtual hardware parameters according to the required architecture.

  9. For CD/DVD Drive 1, select Datastore ISO file and choose the ISO file that you uploaded earlier.

  10. Select Next > Finish.
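If you prefer a scripted setup, the ESXi steps above can also be driven by VMware's govc CLI. The sketch below is a dry run that only prints the commands; the datastore name, VM name, sizes, and flag values are assumptions based on the Enterprise architecture, so verify them against your govc version before removing the echoes:

```shell
# Dry-run sketch of the ESXi upload and VM-creation steps using govc.
# Names, sizes, and flags are assumptions -- adjust before running for real.
print_govc_cmds() {
  ds="datastore1"              # your datastore name (assumption)
  iso="sensor.iso"             # the sensor ISO you downloaded (assumption)
  vm="defender-iot-sensor"     # VM name (assumption)
  echo "govc datastore.upload -ds=$ds $iso iso/$iso"
  echo "govc vm.create -ds=$ds -g=ubuntu64Guest -c=8 -m=32768 -disk=1800GB -iso=iso/$iso $vm"
}
print_govc_cmds
```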

Create the virtual machine (Hyper-V)

This procedure describes how to create a virtual machine by using Hyper-V.

To create a virtual machine:

  1. Create a virtual disk in Hyper-V Manager.

  2. Select format = VHDX.

  3. Select type = Dynamic Expanding.

  4. Enter the name and location for the VHD.

  5. Enter the required size (according to the architecture).

  6. Review the summary and select Finish.

  7. On the Actions menu, create a new virtual machine.

  8. Enter a name for the virtual machine.

  9. Select Specify Generation > Generation 1.

  10. Specify the memory allocation (according to the architecture) and select the check box for dynamic memory.

  11. Configure the network adaptor according to your server network topology.

  12. Connect the VHDX created previously to the virtual machine.

  13. Review the summary and select Finish.

  14. Right-click the new virtual machine and select Settings.

  15. Select Add Hardware and add a new network adapter.

  16. Select the virtual switch that will connect to the sensor management network.

  17. Allocate CPU resources (according to the architecture).

  18. Connect the management console's ISO image to a virtual DVD drive.

  19. Start the virtual machine.

  20. On the Actions menu, select Connect to continue the software installation.

Software installation (ESXi and Hyper-V)

This section describes the ESXi and Hyper-V software installation.

To install:

  1. Open the virtual machine console.

  2. The VM will start from the ISO image, and the language selection screen will appear. Select English.

  3. Select the required architecture.

  4. Define the appliance profile and network properties:

    • Hardware profile: <required architecture>
    • Management interface: ens192
    • Network parameters (provided by the customer): management network IP address, subnet mask, appliance hostname, DNS, default gateway, input interfaces
    • Bridge interfaces: there's no need to configure the bridge interface; this option is for special use cases only.
  5. Enter Y to accept the settings.

  6. Sign-in credentials are automatically generated and presented. Copy the username and password to a safe place, because they're required for sign-in and administration.

    • Support: The administrative user for user management.

    • CyberX: The equivalent of root for accessing the appliance.

  7. The appliance restarts.

  8. Access the management console via the IP address previously configured: https://ip_address.


On-premises management console installation

Before installing the software on the appliance, you need to adjust the appliance's BIOS configuration.

BIOS configuration

To configure the BIOS for your appliance:

  1. Enable remote access and update the password.

  2. Configure the BIOS.

Software installation

The installation process takes about 20 minutes. After the installation, the system is restarted several times.

During the installation process, you can add a secondary NIC. If you choose not to install the secondary NIC during installation, you can add one later.

To install the software:

  1. Select your preferred language for the installation process.


  2. Select MANAGEMENT-RELEASE-<version><deployment type>.


  3. In the Installation Wizard, define the network properties:


    • Management network interface: for Dell, eth0 and eth1; for HP, enu1 and enu2; or another possible value
    • Management network IP address, subnet mask, DNS, and default gateway IP address: provided by the customer
  4. (Optional) If you would like to install a secondary network interface card (NIC), define the following network properties:

    • Sensor monitoring interface (optional): eth1, or another possible value
    • IP address for the sensor monitoring interface: provided by the customer
    • Subnet mask for the sensor monitoring interface: provided by the customer
  5. Accept the settings and continue by typing Y.

  6. After about 10 minutes, two sets of credentials appear: one for a CyberX user and one for a Support user.

    Copy these credentials; they won't be presented again. Save the usernames and passwords. You'll need them to access the platform the first time you use it.

  7. Select Enter to continue.

For information on how to find the physical port on your appliance, see Find your port.

Add a secondary NIC

You can enhance the security of your on-premises management console by adding a secondary NIC, which can be used for high availability. With a secondary NIC, you can dedicate one NIC to your users while using the other to support the configuration of a gateway for routed networks. The second NIC is then dedicated to all attached sensors within an IP address range.


Both NICs will support the user interface (UI).

If you choose not to deploy a secondary NIC, all of the above features will be available through the primary NIC.

If you have already configured your on-premises management console, and would like to add a secondary NIC to your on-premises management console, use the following steps:

  1. Use the network reconfigure command:

    sudo cyberx-management-network-reconfigure
    
  2. Enter the following responses to the questions:

    • Management Network IP address: N
    • Subnet mask: N
    • DNS: N
    • Default gateway IP address: N
    • Sensor monitoring interface (optional; applicable when sensors are on a different network segment): Y, and select a possible value
    • IP address for the sensor monitoring interface (accessible by the sensors): Y, and enter the IP address provided by the customer
    • Subnet mask for the sensor monitoring interface (accessible by the sensors): Y, and enter the subnet mask provided by the customer
    • Hostname: provided by the customer
  3. Review all choices, and enter Y to accept the changes. The system reboots.

Find your port

If you're having trouble locating the physical port on your device, use the following command:

sudo ethtool -p <port value> <time-in-seconds>

This command causes the light on the port to flash for the specified time period. For example, entering sudo ethtool -p eno1 120 makes port eno1 flash for 2 minutes, which helps you find the port on the back of your appliance.
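Because ethtool -p takes its duration in seconds, a small wrapper can build the command from a duration in minutes. The blink_cmd name is illustrative; the sketch only prints the command so you can review it before running:

```shell
# Build the ethtool identify command from a port name and a duration
# in minutes (ethtool -p expects seconds).
blink_cmd() {
  port=$1
  minutes=$2
  echo "sudo ethtool -p $port $((minutes * 60))"
}

blink_cmd eno1 2   # prints: sudo ethtool -p eno1 120
```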

Virtual appliance: On-premises management console installation

The on-premises management console VM supports the following architectures:

  • Enterprise (default and most common): 8 CPUs, 32-GB RAM, 1.8-TB HDD. For large production environments.

  • Small: 4 CPUs, 8-GB RAM, 500-GB HDD. For smaller production environments.

  • Office: 4 CPUs, 8-GB RAM, 100-GB HDD. For small test environments.

Prerequisites

The on-premises management console supports both VMware and Hyper-V deployment options. Before you begin the installation, verify the following:

  • VMware (ESXi 5.5 or later) or Hyper-V hypervisor (Windows 10 Pro or Enterprise) is installed and operational.

  • The hardware resources are available for the virtual machine.

  • You have the ISO installation file for the on-premises management console.

  • The hypervisor is running.

Create the virtual machine (ESXi)

To create a virtual machine (ESXi):

  1. Sign in to the ESXi, choose the relevant datastore, and select Datastore Browser.

  2. Upload the image and select Close.

  3. Go to Virtual Machines.

  4. Select Create/Register VM.

  5. Select Create new virtual machine and select Next.

  6. Add a sensor name and choose:

    • Compatibility: <latest ESXi version>

    • Guest OS family: Linux

    • Guest OS version: Ubuntu Linux (64-bit)

  7. Select Next.

  8. Choose relevant datastore and select Next.

  9. Change the virtual hardware parameters according to the required architecture.

  10. For CD/DVD Drive 1, select Datastore ISO file and choose the ISO file that you uploaded earlier.

  11. Select Next > Finish.

Create the virtual machine (Hyper-V)

To create a virtual machine by using Hyper-V:

  1. Create a virtual disk in Hyper-V Manager.

  2. Select the format VHDX.

  3. Select Next.

  4. Select the type Dynamic expanding.

  5. Select Next.

  6. Enter the name and location for the VHD.

  7. Select Next.

  8. Enter the required size (according to the architecture).

  9. Select Next.

  10. Review the summary and select Finish.

  11. On the Actions menu, create a new virtual machine.

  12. Select Next.

  13. Enter a name for the virtual machine.

  14. Select Next.

  15. Select Generation and set it to Generation 1.

  16. Select Next.

  17. Specify the memory allocation (according to the architecture) and select the check box for dynamic memory.

  18. Select Next.

  19. Configure the network adaptor according to your server network topology.

  20. Select Next.

  21. Connect the VHDX created previously to the virtual machine.

  22. Select Next.

  23. Review the summary and select Finish.

  24. Right-click the new virtual machine, and then select Settings.

  25. Select Add Hardware and add a new adapter for Network Adapter.

  26. For Virtual Switch, select the switch that will connect to the sensor management network.

  27. Allocate CPU resources (according to the architecture).

  28. Connect the management console's ISO image to a virtual DVD drive.

  29. Start the virtual machine.

  30. On the Actions menu, select Connect to continue the software installation.

Software installation (ESXi and Hyper-V)

Starting the virtual machine will start the installation process from the ISO image.

To install the software:

  1. Select English.

  2. Select the required architecture for your deployment.

  3. Define the network interface for the sensor management network: interface, IP, subnet, DNS server, and default gateway.

  4. Sign-in credentials are automatically generated. Save the username and password; you'll need these credentials to access the platform the first time you use it.

    The appliance will then reboot.

  5. Access the management console via the IP address previously configured: <https://ip_address>.

    Screenshot that shows the management console's sign-in screen.

Legacy devices

This section describes devices that are no longer available for purchase, but are still supported by Azure Defender for IoT.

Nuvo 5006LP installation

This section provides the Nuvo 5006LP installation procedure. Before installing the software on the Nuvo 5006LP appliance, you need to adjust the appliance BIOS configuration.

Nuvo 5006LP front panel

A view of the front panel of the Nuvo 5006LP device.

  1. Power button, Power indicator
  2. DVI video connectors
  3. HDMI video connectors
  4. VGA video connectors
  5. Remote on/off control and status LED output
  6. Reset button
  7. Management network adapter
  8. Ports to receive mirrored data

Nuvo back panel

A view of the back panel of the Nuvo 5006lp.

  1. SIM card slot
  2. Microphone and speakers
  3. COM ports
  4. USB connectors
  5. DC power port (DC IN)

Configure the Nuvo 5006LP BIOS

The following procedure describes how to configure the Nuvo 5006LP BIOS. Make sure the operating system was previously installed on the appliance.

To configure the BIOS:

  1. Power on the appliance.

  2. Press F2 to enter the BIOS configuration.

  3. Navigate to Power and change Power On after Power Failure to S0-Power On.

    Change your Nuvo 5006LP to power on after a power failure.

  4. Navigate to Boot and ensure that PXE Boot to LAN is set to Disabled.

  5. Press F10 to save, and then select Exit.

Software installation (Nuvo 5006LP)

The installation process takes approximately 20 minutes. After installation, the system is restarted several times.

  1. Connect the external CD or disk on key that contains the ISO image.

  2. Boot the appliance.

  3. Select English.

  4. Select XSENSE-RELEASE- Office....

    Select the version of the sensor to install.

  5. Define the appliance architecture and network properties:

    Define the Nuvo's architecture and network properties.

    | Parameter | Configuration |
    | --- | --- |
    | Hardware profile | Select office. |
    | Management interface | eth0 |
    | Management network IP address | IP address provided by the customer |
    | Management subnet mask | Subnet mask provided by the customer |
    | DNS | IP address provided by the customer |
    | Default gateway IP address | 0.0.0.0 |
    | Input interface | The list of input interfaces is generated for you by the system. To mirror the input interfaces, copy all the items presented in the list with a comma separator. |
    | Bridge interface | - |
  6. Accept the settings and continue by entering Y.

After approximately 10 minutes, sign-in credentials are automatically generated. Save the username and password; you'll need these credentials to access the platform the first time you use it.
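
The installer expects the input interfaces as a single comma-separated string. If you want to build that string from the space-separated list the system shows, a one-liner like the following does the joining (the interface names are examples; use the names the installer lists for your appliance):

```shell
# Join space-separated interface names with commas for the installer prompt.
interfaces="eth1 eth2 eth3 eth4"          # example names; substitute yours
joined=$(echo "$interfaces" | tr ' ' ',')
echo "$joined"
```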

Fitlet2 mini sensor installation

This section provides the Fitlet2 installation procedure. Before installing the software on the Fitlet appliance, you need to adjust the appliance's BIOS configuration.

Fitlet2 front panel

A view of the front panel of the Fitlet 2.

Fitlet2 back panel

A view of the back panel of the Fitlet 2.

Configure the Fitlet2 BIOS

  1. Power on the appliance.

  2. Navigate to Main > OS Selection.

  3. Press +/- to select Linux.

    Set the OS to Linux on your Fitlet2.

  4. Verify that the system date and time are updated with the installation date and time.

  5. Navigate to Advanced and select ACPI Settings.

  6. Select Enable Hibernation and press +/- to select Disabled.

    Disable hibernation mode on your Fitlet2.

  7. Press Esc.

  8. Navigate to Advanced > TPM Configuration.

  9. Select fTPM and press +/- to select Disabled.

  10. Press Esc.

  11. Navigate to CPU Configuration > VT-d.

  12. Press +/- to select Enabled.

  13. Navigate to CSM Configuration > CSM Support.

  14. Press +/- to select Enabled.

  15. Navigate to Advanced > Boot option filter [Legacy only] and change the setting in the following fields to Legacy:

    • Network
    • Storage
    • Video
    • Other PCI

    Set all fields to Legacy.

  16. Press Esc.

  17. Navigate to Security > Secure Boot Customization.

  18. Press +/- to select Disabled.

  19. Press Esc.

  20. Navigate to Boot > Boot mode select, and select Legacy.

  21. Select Boot Option #1 – [USB CD/DVD].

  22. Select Save & Exit.

Software installation (Fitlet2)

The installation process takes approximately 20 minutes. After installation, the system is restarted several times.

  1. Connect the external CD or disk on key that contains the ISO image.

  2. Boot the appliance.

  3. Select English.

  4. Select XSENSE-RELEASE- Office....

    Select the version of the sensor to install.

    Note

    Do not select Ruggedized.

  5. Define the appliance architecture and network properties:

    Define the Fitlet2's architecture and network properties.

    | Parameter | Configuration |
    | --- | --- |
    | Hardware profile | Select office. |
    | Management interface | em1 |
    | Management network IP address | IP address provided by the customer |
    | Management subnet mask | Subnet mask provided by the customer |
    | DNS | IP address provided by the customer |
    | Default gateway IP address | 0.0.0.0 |
    | Input interface | The list of input interfaces is generated for you by the system. To mirror the input interfaces, copy all the items presented in the list with a comma separator. |
    | Bridge interface | - |
  6. Accept the settings and continue by entering Y.

After approximately 10 minutes, sign-in credentials are automatically generated. Save the username and password; you'll need these credentials to access the platform the first time you use it.

Post-installation validation

To validate the installation of a physical appliance, perform the following tests. The same validation process applies to all appliance types.

Perform the validation by using the GUI or the CLI. The validation is available to both the Support user and the CyberX user.

Post-installation validation must include the following tests:

  • Sanity test: Verify that the system is running.

  • Version: Verify that the version is correct.

  • ifconfig: Verify that all the input interfaces configured during the installation process are running.

Check system health by using the GUI

Screenshot that shows the system health check.

Sanity

  • Appliance: Runs the appliance sanity check. You can perform the same check by using the CLI command system-sanity.

  • Version: Displays the appliance version.

  • Network Properties: Displays the sensor network parameters.

Redis

  • Memory: Provides the overall picture of memory usage, such as how much memory was used and how much remained.

  • Longest Key: Displays the longest keys that might cause extensive memory usage.

System

  • Core Log: Provides the last 500 rows of the core log, enabling you to view the recent log rows without exporting the entire system log.

  • Task Manager: Translates the tasks that appear in the table of processes to the following layers:

    • Persistent layer (Redis)
    • Cache layer (SQL)
  • Network Statistics: Displays your network statistics.

  • TOP: Shows the table of processes. It's a Linux command that provides a dynamic real-time view of the running system.

  • Backup Memory Check: Provides the status of the backup memory, checking the following:

    • The location of the backup folder
    • The size of the backup folder
    • The limitations of the backup folder
    • When the last backup happened
    • How much space there is for the extra backup files
  • ifconfig: Displays the parameters for the appliance's physical interfaces.

  • CyberX nload: Displays network traffic and bandwidth by using six-second tests.

  • Errors from Core log: Displays errors from the core log file.

To access the tool:

  1. Sign in to the sensor with the Support user credentials.

  2. Select System Statistics from the System Settings window.

Check system health by using the CLI

Test 1: Sanity

Verify that the system is up and running:

  1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user Support.

  2. Enter system sanity.

  3. Check that all the services are green (running).

    Screenshot that shows running services.

  4. Verify that System is UP! (prod) appears at the bottom.

Test 2: Version check

Verify that the correct version is used:

  1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user Support.

  2. Enter system version.

  3. Check that the correct version appears.

Test 3: Network validation

Verify that all the input interfaces configured during the installation process are running:

  1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user Support.

  2. Enter network list (the equivalent of the Linux command ifconfig).

  3. Validate that the required input interfaces appear. For example, if two quad Copper NICs are installed, there should be 10 interfaces in the list.

    Screenshot that shows the list of interfaces.

Test 4: Management access to the UI

Verify that you can access the console web GUI:

  1. Connect a laptop with an Ethernet cable to the management port (Gb1).

  2. Define the laptop NIC address to be in the same range as the appliance.

    Screenshot that shows management access to the UI.

  3. Ping the appliance's IP address from the laptop to verify connectivity (default: 10.100.10.1).

  4. Open the Chrome browser in the laptop and enter the appliance's IP address.

  5. In the Your connection is not private window, select Advanced and proceed.

  6. The test is successful when the Defender for IoT sign-in screen appears.

    Screenshot that shows access to management console.
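
As a sanity check for step 2, the laptop address must share the appliance's subnet. Assuming the default /24 network, a quick comparison of the first three octets is enough; the addresses below are examples (10.100.10.1 is the appliance default):

```shell
# Returns success when two IPv4 addresses share the same /24 network.
same_subnet_24() {
  [ "${1%.*}" = "${2%.*}" ]   # compare everything before the last octet
}

appliance=10.100.10.1   # default appliance address
laptop=10.100.10.2      # example laptop address
if same_subnet_24 "$appliance" "$laptop"; then
  echo "same /24 subnet"
else
  echo "different subnets - adjust the laptop NIC address"
fi
```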

Troubleshooting

You can't connect by using a web interface

  1. Verify that the computer that you're trying to connect from is on the same network as the appliance.

  2. Verify that the GUI network is connected to the management port.

  3. Ping the appliance's IP address. If there is no ping:

    1. Connect a monitor and a keyboard to the appliance.

    2. Use the Support user and password to sign in.

    3. Use the command network list to see the current IP address.

      Screenshot that shows the network list.

  4. If the network parameters are misconfigured, use the following procedure to change them:

    1. Use the command network edit-settings.

    2. To change the management network IP address, select Y.

    3. To change the subnet mask, select Y.

    4. To change the DNS, select Y.

    5. To change the default gateway IP address, select Y.

    6. For the input interface change (sensor only), select N.

    7. To apply the settings, select Y.

  5. After the restart, connect with the Support user credentials and use the network list command to verify that the parameters were changed.

  6. Try to ping and connect from the GUI again.
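
The ping portion of this procedure can be wrapped in a small helper so you can rerun it quickly after each settings change. This is a sketch shown against localhost; substitute the appliance's IP address (default 10.100.10.1) when you use it:

```shell
# Print "reachable" or "no ping" for a given host.
check_reachable() {
  if ping -c 1 -W 2 "$1" > /dev/null 2>&1; then
    echo "reachable"
  else
    echo "no ping"
  fi
}

# Example invocation against localhost; replace with the appliance IP.
check_reachable 127.0.0.1
```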

The appliance isn't responding

  1. Connect a monitor and keyboard to the appliance, or use PuTTY to connect remotely to the CLI.

  2. Use the Support user's credentials to sign in.

  3. Use the system sanity command and check that all processes are running.

    Screenshot that shows the system sanity command.

For any other issues, contact Microsoft Support.

Appendix A: Mirroring port on vSwitch (ESXi)

Configure a SPAN port on an existing vSwitch

A vSwitch does not have mirroring capabilities, but you can use a workaround to implement a SPAN port.

To configure a SPAN port:

  1. Open vSwitch properties.

  2. Select Add.

  3. Select Virtual Machine > Next.

  4. Insert a network label SPAN Network, select VLAN ID > All, and then select Next.

  5. Select Finish.

  6. Select SPAN Network > Edit.

  7. Select Security, and verify that the Promiscuous Mode policy is set to Accept mode.

  8. Select OK, and then select Close to close the vSwitch properties.

  9. Open the XSense VM properties.

  10. For Network Adapter 2, select the SPAN network.

  11. Select OK.

  12. Connect to the sensor and verify that mirroring works.

Appendix B: Access sensors from the on-premises management console

You can enhance system security by preventing direct user access to the sensor. Instead, use proxy tunneling to let users access the sensor from the on-premises management console with a single firewall rule. This technique narrows the possibility of unauthorized access to the network environment beyond the sensor. The user's experience when signing in to the sensor remains the same.

Screenshot that shows access to the sensor.

To enable tunneling:

  1. Sign in to the on-premises management console's CLI with the CyberX or Support user credentials.

  2. Enter sudo cyberx-management-tunnel-enable.

  3. Select Enter.

  4. Enter --port 10000.

Next steps

Set up your network