Developing Nitro: Testing

In this chapter we take a look at how Nitro is tested and what is required for running the tests. Testing is essential for ensuring the quality of Nitro and for protecting us from accidental regressions.

Unit Tests

As a project, Nitro is highly dependent on multiple external components, such as the customized version of the Linux kernel and the extended QEMU virtual machine platform. While all of this is necessary, it makes testing the project more challenging than testing the average Python module.

Unit tests try to break down this complexity by concentrating on individual components and their features. We replace the real interfaces and dependencies with mock objects to remove the need for complex outside dependencies and to make the tests more deterministic. This limits the kinds of tests we can create but is ideal for verifying the correctness of the core logic.
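For example, a component that would normally query the guest through libvmi can be handed a mock in its place. The following sketch illustrates the pattern with a made-up Backend class; the class and method names are hypothetical and do not mirror Nitro’s actual code.

import unittest
from unittest.mock import MagicMock

# Hypothetical stand-in for a backend that would normally resolve
# guest kernel symbols through libvmi.
class Backend:
    def __init__(self, libvmi):
        self.libvmi = libvmi

    def syscall_name(self, address):
        return self.libvmi.translate_v2ksym(address)

class TestBackend(unittest.TestCase):
    def test_syscall_name(self):
        # The mock replaces the real libvmi interface, so the test
        # needs no hypervisor and always sees the same answer.
        libvmi = MagicMock()
        libvmi.translate_v2ksym.return_value = 'sys_open'
        backend = Backend(libvmi)
        self.assertEqual(backend.syscall_name(0xdeadbeef), 'sys_open')
        libvmi.translate_v2ksym.assert_called_once_with(0xdeadbeef)

if __name__ == '__main__':
    unittest.main()

Because everything the component touches is replaced with a deterministic double, the test exercises only the logic under scrutiny.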

Because of the self-contained nature of unit tests, running the test suite is simple. The unit test suite is located in the tests/unittests directory, and the tests can be run by simply invoking the nose2 test runner there.

tester@kvm-vmi ~/projects/nitro/tests/unittests $ nose2 --verbose
test_associate_process (test_linux.TestLinux)
Test process association. ... ok
test_backend_creation (test_linux.TestLinux)
Check that LinuxBackend can be created. ... ok
test_check_caches_flushed (test_linux.TestLinux)
Check that libvmi caches are flushed. ... ok
test_clean_name (test_linux.TestLinux)
Test that system call handler names are properly cleaned. ... ok
test_process_event (test_linux.TestLinux)
Test that the event handler returns a syscall object with somewhat sensible content ... ok
test_syscall_name (test_linux.TestLinux)
Check that syscall names can be extracted from system call table. ... ok

----------------------------------------------------------------------
Ran 6 tests in 0.007s

OK

Because the unit tests place so few requirements on the testing environment, they are ideal for running in an automated fashion as part of a continuous integration pipeline.

Integration Tests

While unit tests are useful, it is often difficult to test how the system operates as a whole and how it interacts with a real guest operating system. For this reason, Nitro includes a suite of integration tests that exercise its different features in a test environment with virtual machines. The environment enables us to automatically run test binaries inside real virtual machines and check that Nitro can correctly analyze their actions.

Creating a Testing VM

Before actual testing can take place, a virtual machine needs to be created. For the tests to be deterministic, the VM must be constructed in a way that lets us know exactly what gets included and what the result will be; this ensures we can reliably replicate problems that arise during testing. Additionally, the virtual machine images we use are tailored specifically for testing, with unnecessary services disabled.

Nitro includes Packer virtual machine templates for building the test environment. The tests/vm_templates directory includes the packer binary itself and templates for Linux and Windows testing environments. With the templates in place, we can simply ask Packer to create the VM for us:

tester@kvm-vmi ~/projects/nitro/tests/vm_templates $ ./packer build --var-file ubuntu_1604_x64.json ubuntu.json 
qemu output will be in this color.

==> qemu: Downloading or copying ISO
    qemu: Downloading or copying: http://releases.ubuntu.com/16.04/ubuntu-16.04.2-server-amd64.iso
==> qemu: Creating floppy disk...
    qemu: Copying files flatly from floppy_files
    qemu: Copying file: http/preseed.cfg
    qemu: Done copying files from floppy_files
    qemu: Collecting paths from floppy_dirs
    qemu: Resulting paths from floppy_dirs : []
    qemu: Done copying paths from floppy_dirs
==> qemu: Creating hard drive...
==> qemu: Starting HTTP server on port 8288
==> qemu: Found port for communicator (SSH, WinRM, etc): 3679.
==> qemu: Looking for available port between 5900 and 6000 on 127.0.0.1
==> qemu: Starting VM, booting from CD-ROM
    qemu: The VM will be run headless, without a GUI. If you want to
    qemu: view the screen of the VM, connect via VNC without a password to
    qemu: vnc://127.0.0.1:5933
==> qemu: Overriding defaults Qemu arguments with QemuArgs...
==> qemu: Waiting 10s for boot...
==> qemu: Connecting to VM via VNC
==> qemu: Typing the boot command over VNC...
==> qemu: Waiting for SSH to become available...
==> qemu: Connected to SSH!
==> qemu: Uploading linux/ => /tmp
==> qemu: Provisioning with shell script: /tmp/packer-shell277821116
    qemu: [sudo] password for vagrant: Generating grub configuration file ...
    qemu: Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
    qemu: Found linux image: /boot/vmlinuz-4.4.0-62-generic
    qemu: Found initrd image: /boot/initrd.img-4.4.0-62-generic
    qemu: done
    qemu: Removed symlink /etc/systemd/system/sysinit.target.wants/systemd-timesyncd.service.
==> qemu: Gracefully halting virtual machine...
    qemu: [sudo] password for vagrant:
==> qemu: Converting hard drive...
Build 'qemu' finished.

==> Builds finished. The artifacts of successful builds are:
--> qemu: VM files in directory: output-qemu

After the process finishes, we have to import the created VM into libvirt. This can be done automatically with the included import_libvirt.py script. Depending on how your libvirt installation is configured, the script might require superuser privileges. To import the newly constructed VM, run:

# ./import_libvirt.py output-qemu/ubuntu1604

The import script will create a new storage pool named nitro in the tests/images directory and move the generated VM image there from the output-qemu directory where Packer left it. Subsequently, the script will define a new libvirt domain for the machine and associate the image with it. The domain is created on the system libvirt instance. Finally, the script will remove the now-unnecessary output-qemu directory.
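In essence, the import boils down to a handful of libvirt operations. The condensed sketch below shows roughly equivalent steps using the libvirt Python bindings; the XML snippets and paths are illustrative assumptions, not the script’s actual contents.

import os
import shutil
import libvirt

# Illustrative XML for a directory-backed storage pool named "nitro".
POOL_XML = """
<pool type='dir'>
  <name>nitro</name>
  <target><path>{path}</path></target>
</pool>
"""

# Minimal domain definition referencing the imported disk image.
DOMAIN_XML = """
<domain type='kvm'>
  <name>{name}</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='{disk}'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

def import_vm(image, name, pool_dir='tests/images'):
    # The domain is defined on the system libvirt instance.
    conn = libvirt.open('qemu:///system')
    pool_path = os.path.abspath(pool_dir)
    os.makedirs(pool_path, exist_ok=True)
    # Define and start the "nitro" storage pool.
    pool = conn.storagePoolDefineXML(POOL_XML.format(path=pool_path), 0)
    pool.create(0)
    # Move the Packer-built image into the pool...
    disk = os.path.join(pool_path, name + '.qcow2')
    shutil.move(image, disk)
    # ...and define a domain that boots from it.
    conn.defineXML(DOMAIN_XML.format(name=name, disk=disk))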

Running the Tests

Once the virtual machine is in place, we can proceed to actual testing. Nitro’s integration tests work by first restoring the testing virtual machine to a clean state from a snapshot. After this, the test runner packages the selected test binary into an ISO image that can be attached to the virtual machine. To run the tests, the test runner boots up the VM, waits for it to settle, and attaches the disc image to it. Each testing virtual machine contains special configuration for automatically executing the attached test images. Finally, the test runner attaches Nitro to the virtual machine and monitors the execution. At the end, each test case can check the produced execution trace for the features it is interested in.
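To make the ISO-packaging step concrete, the sketch below shows one way it could be done, assuming the genisoimage tool is available on the host; Nitro’s actual runner may build the image differently.

import shutil
import subprocess
import tempfile

def package_binary(binary_path):
    # Stage the test binary in an empty directory...
    staging = tempfile.mkdtemp()
    shutil.copy(binary_path, staging)
    # ...and wrap the directory in an ISO image that the runner can
    # attach to the virtual machine as a CD-ROM drive.
    iso_path = binary_path + '.iso'
    subprocess.run(['genisoimage', '-o', iso_path, '-r', '-J', staging],
                   check=True)
    return iso_path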

While all this might seem complicated, all the hard work is done automatically by the testing framework. To run the test suite, simply invoke nose2 within the tests directory. Running all the tests can be time-consuming, and it is therefore often desirable to run only some of the tests. This can be achieved by specifying the test case manually:

$ nose2 --verbose test_linux.TestLinux.test_open
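
As a rough illustration of what such a test verifies, the sketch below checks a captured trace for an open of an expected file. The run_test_binary helper and the event format are assumptions made for illustration, not Nitro’s actual test API.

import unittest

def run_test_binary(name):
    # Stub standing in for the framework: in reality it restores the
    # snapshot, runs the named binary in the VM under Nitro, and
    # returns the captured system call trace.
    return [{'syscall': 'open', 'args': ['/tmp/nitro_test', 'O_RDONLY']}]

class TestLinux(unittest.TestCase):
    def test_open(self):
        events = run_test_binary('test_open')
        # Search the trace for the open performed by the test binary.
        opens = [e for e in events if e['syscall'] == 'open']
        self.assertTrue(any('/tmp/nitro_test' in e['args'] for e in opens))

if __name__ == '__main__':
    unittest.main()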