Install NVIDIA GPU drivers on N-series VMs running Linux




To take advantage of the GPU capabilities of Azure N-series VMs backed by NVIDIA GPUs, you must install NVIDIA GPU drivers. The NVIDIA GPU Driver Extension installs appropriate NVIDIA CUDA or GRID drivers on an N-series VM. Install or manage the extension using the Azure portal or tools such as the Azure CLI or Azure Resource Manager templates. See the NVIDIA GPU Driver Extension documentation for supported distributions and deployment steps.

If you choose to install NVIDIA GPU drivers manually, this article provides supported distributions, drivers, and installation and verification steps. Manual driver setup information is also available for Windows VMs.

For N-series VM specs, storage capacities, and disk details, see GPU Linux VM sizes.

Supported distributions and drivers

NVIDIA CUDA drivers

NVIDIA CUDA drivers for NC, NCv2, NCv3, ND, and NDv2-series VMs (optional for NV-series) are supported only on the Linux distributions listed in the following table. CUDA driver information is current at time of publication. For the latest CUDA drivers and supported operating systems, visit the NVIDIA website. Ensure that you install or upgrade to the latest CUDA drivers for your distribution.

Tip

As an alternative to manual CUDA driver installation on a Linux VM, you can deploy an Azure Data Science Virtual Machine image. The DSVM editions for Ubuntu 16.04 LTS or CentOS 7.4 pre-install NVIDIA CUDA drivers, the CUDA Deep Neural Network Library, and other tools.

NVIDIA GRID drivers

Microsoft redistributes NVIDIA GRID driver installers for NV and NVv3-series VMs used as virtual workstations or for virtual applications. Install only these GRID drivers on Azure NV VMs, and only on the operating systems listed below. These drivers include licensing for GRID Virtual GPU Software in Azure. You do not need to set up an NVIDIA vGPU software license server.

The GRID drivers redistributed by Azure do not work on non-NV series VMs like NC, NCv2, NCv3, ND, and NDv2-series VMs.

The following distributions are supported:

  • Ubuntu 18.04 LTS
  • Ubuntu 16.04 LTS
  • Red Hat Enterprise Linux 7.7 to 7.9, 8.0, 8.1
  • SUSE Linux Enterprise Server 12 SP2
  • SUSE Linux Enterprise Server 15 SP2

Driver: NVIDIA GRID 12.0 (driver branch R460)

Visit GitHub for the complete list of all previous Nvidia GRID driver links.

Warning

Installation of third-party software on Red Hat products can affect the Red Hat support terms. See the Red Hat Knowledgebase article.

Install CUDA drivers on N-series VMs

Here are steps to install CUDA drivers from the NVIDIA CUDA Toolkit on N-series VMs.

C and C++ developers can optionally install the full Toolkit to build GPU-accelerated applications. For more information, see the CUDA Installation Guide.

To install CUDA drivers, make an SSH connection to each VM. To verify that the system has a CUDA-capable GPU, run lspci and look for an NVIDIA device.
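For example (the filter string is illustrative; any lspci invocation that lists NVIDIA devices will do):

    lspci | grep -i NVIDIA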

You will see output similar to the following example (showing an NVIDIA Tesla K80 card):
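An illustrative line for a Tesla K80 (device addresses and revision will differ on your VM):

    0001:00:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1)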

lspci lists the PCIe devices on the VM, including the InfiniBand NIC and GPUs, if any. If lspci doesn't return successfully, you may need to install LIS on CentOS/RHEL (instructions below). Then run the installation commands specific to your distribution.

Ubuntu

  1. Download and install the CUDA drivers from the NVIDIA website.

    Note

    The sketch after this list shows the CUDA package path for Ubuntu 16.04 as an example. Replace the path with one specific to the version you plan to use.

    Visit the Nvidia Download Center (https://developer.download.nvidia.com/compute/cuda/repos/) for the full path specific to each version.

    The installation can take several minutes.

  2. Optionally, install the complete CUDA toolkit (see the sketch after this list).

  3. Reboot the VM and proceed to verify the installation.
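A minimal sketch of steps 1 and 2 for Ubuntu 16.04, assuming a CUDA 10.x repository package; substitute the repository path and package name for your Ubuntu release and the CUDA version you actually plan to use:

    CUDA_REPO_PKG=cuda-repo-ubuntu1604_10.0.130-1_amd64.deb    # assumed package name; check the repo for the current one
    wget -O /tmp/${CUDA_REPO_PKG} https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/${CUDA_REPO_PKG}
    sudo dpkg -i /tmp/${CUDA_REPO_PKG}
    sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
    rm -f /tmp/${CUDA_REPO_PKG}
    sudo apt-get update
    sudo apt-get install -y cuda-drivers    # step 1: drivers only
    sudo apt-get install -y cuda            # step 2 (optional): complete CUDA toolkit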

CUDA driver updates

We recommend that you periodically update CUDA drivers after deployment.
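On Ubuntu, a periodic update might look like the following (a standard package upgrade plus the cuda-drivers metapackage; adjust for your distribution's package manager):

    sudo apt-get update
    sudo apt-get upgrade -y
    sudo apt-get dist-upgrade -y
    sudo apt-get install cuda-drivers
    sudo reboot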

CentOS or Red Hat Enterprise Linux

  1. Update the kernel (recommended). If you choose not to update the kernel, ensure that the versions of kernel-devel and dkms are appropriate for your kernel.

  2. Install the latest Linux Integration Services for Hyper-V and Azure. Check if LIS is required by verifying the results of lspci. If all GPU devices are listed as expected (and documented above), installing LIS is not required.

    Please note that LIS is applicable to Red Hat Enterprise Linux, CentOS, and the Oracle Linux Red Hat Compatible Kernel 5.2-5.11, 6.0-6.10, and 7.0-7.7. Refer to the Linux Integration Services documentation (https://www.microsoft.com/en-us/download/details.aspx?id=55106) for more details. Skip this step if you plan to use CentOS/RHEL 7.8 (or higher versions), as LIS is no longer required for these versions.

  3. Reconnect to the VM and continue the installation with the commands sketched after this list.

    The installation can take several minutes.

    Note

    Visit Fedora and Nvidia CUDA repo to pick the correct package for the CentOS or RHEL version you want to use.
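A sketch of steps 1 through 3 for CentOS/RHEL 7.x. The EPEL release, LIS archive, and CUDA repository package shown are assumptions; check the current versions before running anything:

    # Step 1: update the kernel and matching headers, then reboot
    sudo yum install -y kernel kernel-tools kernel-headers kernel-devel
    sudo reboot

    # Step 2 (CentOS/RHEL 7.7 and earlier only): install Linux Integration Services
    wget -O lis.tar.gz https://aka.ms/lis
    tar xvzf lis.tar.gz
    cd LISISO                   # directory name used by the LIS package
    sudo ./install.sh
    sudo reboot

    # Step 3: install DKMS from EPEL, register the NVIDIA CUDA repo, and install the drivers
    sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    sudo yum install -y dkms
    CUDA_REPO_PKG=cuda-repo-rhel7-10.0.130-1.x86_64.rpm    # assumed package name; check the repo for the current one
    wget -O /tmp/${CUDA_REPO_PKG} https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/${CUDA_REPO_PKG}
    sudo rpm -ivh /tmp/${CUDA_REPO_PKG}
    rm -f /tmp/${CUDA_REPO_PKG}
    sudo yum install -y cuda-drivers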

For example, CentOS 8 and RHEL 8 need repository setup along the lines of the following sketch.
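A sketch, assuming the rhel8 repository layout on the NVIDIA download site (verify the repo file name for your release):

    sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
    sudo yum install -y dkms
    sudo wget -O /etc/yum.repos.d/cuda-rhel8.repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo
    sudo yum install -y cuda-drivers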

  1. Optionally, install the complete CUDA toolkit (see the sketch after this list).

    Note

    If you see an error message about missing packages such as vulkan-filesystem, you may need to edit /etc/yum.repos.d/rh-cloud, look for optional-rpms, and set enabled to 1.

  2. Reboot the VM and proceed to verify the installation.
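For step 1 above, the complete toolkit is the cuda metapackage from the same repository (sketch):

    sudo yum install -y cuda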

Verify driver installation

To query the GPU device state, SSH to the VM and run the nvidia-smi command-line utility installed with the driver.

If the driver is installed, you will see output similar to the following. Note that GPU-Util shows 0% unless you are currently running a GPU workload on the VM. Your driver version and GPU details may be different from the ones shown.
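Illustrative output for a single Tesla K80; every value here is a placeholder, and the exact table layout varies between driver versions:

    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  Tesla K80           Off  | 00000001:00:00.0 Off |                    0 |
    | N/A   42C    P0    70W / 149W |      0MiB / 11441MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+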

RDMA network connectivity

RDMA network connectivity can be enabled on RDMA-capable N-series VMs such as NC24r deployed in the same availability set or in a single placement group in a virtual machine (VM) scale set. The RDMA network supports Message Passing Interface (MPI) traffic for applications running with Intel MPI 5.x or a later version. Additional requirements follow:

Distributions


Deploy RDMA-capable N-series VMs from one of the images in the Azure Marketplace that supports RDMA connectivity on N-series VMs:

  • Ubuntu 16.04 LTS - Configure RDMA drivers on the VM and register with Intel to download Intel MPI (a configuration sketch follows this list):

    1. Install dapl, rdmacm, ibverbs, and mlx4

    2. In /etc/waagent.conf, enable RDMA by uncommenting the RDMA configuration lines shown in the sketch after this list. You need root access to edit this file.

    3. Add or change the memory settings, in KB, in the /etc/security/limits.conf file (see the sketch after this list). You need root access to edit this file. For testing purposes, you can set memlock to unlimited, for example: <User or group name> hard memlock unlimited.

    4. Install Intel MPI Library. Either purchase and download the library from Intel or download the free evaluation version.

      Only Intel MPI 5.x runtimes are supported.

      For installation steps, see the Intel MPI Library Installation Guide.

    5. Enable ptrace for non-root non-debugger processes (needed for the most recent versions of Intel MPI).

  • CentOS-based 7.4 HPC - RDMA drivers and Intel MPI 5.1 are installed on the VM.

  • CentOS-based HPC - CentOS-HPC 7.6 and later (for SKUs where InfiniBand is supported over SR-IOV). These images have Mellanox OFED and MPI libraries pre-installed.
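A configuration sketch for the Ubuntu 16.04 bullet above. Package names follow the Ubuntu 16.04 archive, and the limits.conf entries are comments showing the expected format rather than literal values:

    # Step 1: install the user-space RDMA packages (dapl, rdmacm, ibverbs, mlx4)
    sudo apt-get update
    sudo apt-get install -y libdapl2 libmlx4-1

    # Step 2: enable RDMA in /etc/waagent.conf by uncommenting (as root):
    #   OS.EnableRDMA=y
    #   OS.UpdateRdmaDriver=y

    # Step 3: raise locked-memory limits in /etc/security/limits.conf (values in KB), e.g.:
    #   <User or group name> hard memlock <memory required for your application in KB>
    #   <User or group name> soft memlock <memory required for your application in KB>

    # Step 5: enable ptrace for non-root, non-debugger processes (recent Intel MPI)
    echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope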

Note

CX3-Pro cards are supported only through LTS versions of Mellanox OFED. Use LTS Mellanox OFED version (4.9-0.1.7.0) on the N-series VMs with ConnectX3-Pro cards. For more information, see Linux Drivers.

Also, some of the latest Azure Marketplace HPC images have Mellanox OFED 5.1 and later, which don't support ConnectX3-Pro cards. Check the Mellanox OFED version in the HPC image before using it on VMs with ConnectX3-Pro cards.

The following images are the latest CentOS-HPC images that support ConnectX3-Pro cards:

  • OpenLogic:CentOS-HPC:7.6:7.6.2020062900
  • OpenLogic:CentOS-HPC:7_6gen2:7.6.2020062901
  • OpenLogic:CentOS-HPC:7.7:7.7.2020062600
  • OpenLogic:CentOS-HPC:7_7-gen2:7.7.2020062601
  • OpenLogic:CentOS-HPC:8_1:8.1.2020062400
  • OpenLogic:CentOS-HPC:8_1-gen2:8.1.2020062401

Install GRID drivers on NV or NVv3-series VMs

To install NVIDIA GRID drivers on NV or NVv3-series VMs, make an SSH connection to each VM and follow the steps for your Linux distribution.

Ubuntu

  1. Run the lspci command. Verify that the NVIDIA M60 card or cards are visible as PCI devices.

  2. Install updates.

  3. Disable the Nouveau kernel driver, which is incompatible with the NVIDIA driver. (Only use the NVIDIA driver on NV or NVv2 VMs.) To do this, create a file in /etc/modprobe.d named nouveau.conf with the contents shown in the sketch after this list.

  4. Reboot the VM and reconnect. Exit the X server (see the sketch after this list).

  5. Download and install the GRID driver (see the sketch after this list).

  6. When you're asked whether you want to run the nvidia-xconfig utility to update your X configuration file, select Yes.

  7. After installation completes, copy /etc/nvidia/gridd.conf.template to a new file, /etc/nvidia/gridd.conf.

  8. Add the licensing settings shown in the sketch after this list to /etc/nvidia/gridd.conf.

  9. Remove the setting shown in the sketch after this list from /etc/nvidia/gridd.conf if it is present.

  10. Reboot the VM and proceed to verify the installation.
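A sketch of steps 3 through 9 for Ubuntu. The download link and the gridd.conf settings reflect the Azure-redistributed GRID packaging as documented at the time of writing; verify them against the current Azure documentation before use:

    # Step 3: contents of /etc/modprobe.d/nouveau.conf
    #   blacklist nouveau
    #   blacklist lbm-nouveau

    # Step 4: after rebooting and reconnecting, stop the X server
    sudo systemctl stop lightdm.service

    # Step 5: download and run the Azure-redistributed GRID installer
    wget -O NVIDIA-Linux-x86_64-grid.run https://go.microsoft.com/fwlink/?linkid=874272
    chmod +x NVIDIA-Linux-x86_64-grid.run
    sudo ./NVIDIA-Linux-x86_64-grid.run

    # Step 7: create gridd.conf from the template
    sudo cp /etc/nvidia/gridd.conf.template /etc/nvidia/gridd.conf

    # Steps 8 and 9: in /etc/nvidia/gridd.conf, add the two settings below
    # and delete the FeatureType=0 line if it is present
    #   IgnoreSP=FALSE
    #   EnableUI=FALSE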

CentOS or Red Hat Enterprise Linux

  1. Update the kernel and DKMS (recommended). If you choose not to update the kernel, ensure that the versions of kernel-devel and dkms are appropriate for your kernel.

  2. Disable the Nouveau kernel driver, which is incompatible with the NVIDIA driver. (Only use the NVIDIA driver on NV or NVv3 VMs.) To do this, create a file in /etc/modprobe.d named nouveau.conf with the contents shown in the sketch after this list.

  3. Reboot the VM, reconnect, and install the latest Linux Integration Services for Hyper-V and Azure. Check if LIS is required by verifying the results of lspci. If all GPU devices are listed as expected (and documented above), installing LIS is not required.

    Skip this step if you plan to use CentOS/RHEL 7.8 (or higher versions) as LIS is no longer required for these versions.

  4. Reconnect to the VM and run the lspci command. Verify that the NVIDIA M60 card or cards are visible as PCI devices.

  5. Download and install the GRID driver (see the sketch after this list).

  6. When you're asked whether you want to run the nvidia-xconfig utility to update your X configuration file, select Yes.

  7. After installation completes, copy /etc/nvidia/gridd.conf.template to a new file, /etc/nvidia/gridd.conf.

  8. Add the licensing settings shown in the sketch after this list to /etc/nvidia/gridd.conf.

  9. Remove the setting shown in the sketch after this list from /etc/nvidia/gridd.conf if it is present.

  10. Reboot the VM and proceed to verify the installation.
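The CentOS/RHEL flow mirrors the Ubuntu sketch above; only the distribution-specific pieces differ. A sketch for CentOS/RHEL 7.x (package names are assumptions for that release):

    # Step 1: update the kernel and install DKMS (from EPEL) plus the Hyper-V daemons
    sudo yum update -y
    sudo yum install -y kernel-devel
    sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    sudo yum install -y dkms hyperv-daemons

    # Step 2: blacklist Nouveau with the same /etc/modprobe.d/nouveau.conf contents
    # shown in the Ubuntu sketch

    # Step 5: install build tools, then download and run the GRID installer
    sudo yum install -y gcc make
    wget -O NVIDIA-Linux-x86_64-grid.run https://go.microsoft.com/fwlink/?linkid=874272
    chmod +x NVIDIA-Linux-x86_64-grid.run
    sudo ./NVIDIA-Linux-x86_64-grid.run

    # Steps 7 through 9: same gridd.conf edits as in the Ubuntu sketch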

Verify driver installation

To query the GPU device state, SSH to the VM and run the nvidia-smi command-line utility installed with the driver.

If the driver is installed, you will see output similar to the following. Note that GPU-Util shows 0% unless you are currently running a GPU workload on the VM. Your driver version and GPU details may be different from the ones shown.

X11 server

If you need an X11 server for remote connections to an NV or NVv2 VM, x11vnc is recommended because it allows hardware acceleration of graphics. The BusID of the M60 device must be manually added to the X11 configuration file (usually /etc/X11/xorg.conf). Add a 'Device' section similar to the sketch below.
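A sketch of such a section; the BusID value is a placeholder for the decimal ID found as described further down:

    Section "Device"
        Identifier "Device0"
        Driver     "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName  "Tesla M60"
        BusID      "PCI:0@your-BusID:0:0"
    EndSection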

Additionally, update your 'Screen' section to use this device.
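For example, referencing the device identifier used above (names are illustrative):

    Section "Screen"
        Identifier "Screen0"
        Device     "Device0"
    EndSection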

The decimal BusID can be found with the nvidia-xconfig utility, which is installed with the driver.
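A sketch of the query; the awk filter pulls the 'PCI BusID' field from the utility's output:

    nvidia-xconfig --query-gpu-info | awk '/PCI BusID/{print $4}'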

The BusID can change when a VM gets reallocated or rebooted. Therefore, you may want to create a script to update the BusID in the X11 configuration when a VM is rebooted. For example, create a script named busidupdate.sh (or another name you choose) with contents similar to the following:
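A sketch of such a script, assuming the xorg.conf path and the query command from the previous step:

    #!/bin/bash
    # Refresh the NVIDIA BusID in xorg.conf if it changed after a reallocation or reboot
    XCONFIG="/etc/X11/xorg.conf"
    OLDBUSID=$(awk '/BusID/{gsub(/"/, "", $2); print $2}' ${XCONFIG})
    NEWBUSID=$(nvidia-xconfig --query-gpu-info | awk '/PCI BusID/{print $4}')

    if [[ "${OLDBUSID}" == "${NEWBUSID}" ]]; then
        echo "NVIDIA BusID unchanged; nothing to do"
    else
        echo "NVIDIA BusID changed from \"${OLDBUSID}\" to \"${NEWBUSID}\"; updating ${XCONFIG}"
        sed -i -e "s|BusID.*|BusID          \"${NEWBUSID}\"|" ${XCONFIG}
    fi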

Then, create an entry for your update script in /etc/rc.d/rc3.d so the script is invoked as root on boot.
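One way to wire that up; the script name, init.d placement, and S99 priority are all illustrative, and a systemd unit would work just as well:

    sudo cp busidupdate.sh /etc/init.d/busidupdate
    sudo chmod +x /etc/init.d/busidupdate
    sudo ln -s /etc/init.d/busidupdate /etc/rc.d/rc3.d/S99busidupdate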

Troubleshooting

  • You can set persistence mode using nvidia-smi so the output of the command is faster when you need to query cards. To set persistence mode, execute nvidia-smi -pm 1. Note that if the VM is restarted, the mode setting goes away. You can always script the mode setting to execute upon startup.
  • If you updated the NVIDIA CUDA drivers to the latest version and find RDMA connectivity is no longer working, reinstall the RDMA drivers to reestablish that connectivity.
  • During installation of LIS, if a certain CentOS/RHEL OS version (or kernel) is not supported for LIS, an error “Unsupported kernel version” is thrown. Please report this error along with the OS and kernel versions.

Next steps


  • To capture a Linux VM image with your installed NVIDIA drivers, see How to generalize and capture a Linux virtual machine.