Create A New Container Image

Most vertices in VTS topologies are implemented as containers on Unix-like systems (including hardware forwarding devices such as Pica8 and Znyx switches). This How-To describes the basic components of these images and how to get started creating new ones.

This How-To does not cover creating the synthetic images used on vendor hardware that does not expose a Unix-like environment.

High Level Pieces

Every VTS container image is composed of up to three distinct components:

  • A disk image used to launch each instance of the container
  • A JSON-based spec file describing the image usage to the VTS orchestrator
  • (optional) A Python-based handler allowing for fine-grained integration into the orchestrator

Disk Images

VTS disk images are typically built using Dockerfiles (although they are not run using Docker). This allows images to be created quickly from the existing base OS images available from Docker, adding only the software required for the function you are implementing. While there is no officially ‘blessed’ directory layout for image creation, most image builds use the following layout:

/my-image/
          build/
                Dockerfile
                ...
          runtime/
                  spec.json

There is a skeleton image build directory (in skel/) available in the UH-NetLab Images repository, which you can copy to get a basic image build environment. The skeleton image uses Alpine Linux as the base and includes supervisor, ssh, and rsyslog, which enable support for the standard orchestrator services. The install.py script in the same repository will install images built in the same format as the skeleton, using a process well understood by VTS administrators.

Once you have a skeleton layout, edit your Dockerfile and supervisord.conf as necessary (and add any additional required files) to describe the disk image your container requires.

Note

Since the Docker tools are only used to build images, not to run them, you can only use Dockerfile features that apply during the build process. Runtime directives such as EXPOSE and VOLUME will not be evaluated. As a result, it is generally better to refer to pre-existing image repositories for examples rather than to the Docker documentation.
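
For orientation, a minimal Dockerfile in the skeleton style might look roughly like the sketch below. The base image tag, package names, and supervisord.conf path are assumptions for illustration; the skeleton in skel/ remains the authoritative starting point.

    FROM alpine:3.18

    # Build-time steps only: the image is not run under Docker, so runtime
    # directives such as EXPOSE and VOLUME are omitted here.
    RUN apk add --no-cache supervisor openssh rsyslog

    # Process-supervision config copied from the build/ directory
    # (path and filename are assumptions for illustration)
    COPY supervisord.conf /etc/supervisord.conf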

Spec Files

JSON specification files are used to provide important metadata about your image to the orchestrator (how much memory to allocate, attributes that may be supplied by the user at reservation time, etc.). The spec files are documented at Container Image Specfiles. You can also review other examples in existing repositories.
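
Purely as a rough illustration of the idea (memory allocation, user-suppliable attributes, and so on), a spec file is a JSON document along these lines. Every field name below is hypothetical; the actual schema is defined in the Container Image Specfiles documentation and in the spec.json files of existing repositories.

    {
        "name": "my-image",
        "memory": 256,
        "attributes": {
            "hostname": {"required": false}
        }
    }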

Handlers

Image Handlers are Python classes subclassed from foam.vts.images.ImageHandler that are added to the orchestrator at runtime to provide programmatic handling for instances that use your image. This is generally limited to two entry points:

  • Callbacks when instances using your image are being built
  • PerformOperationalAction (POA) API extensions

Note

The UH-NetLab image repository contains a number of useful handler subclasses in vts_uh.bases that you may want to subclass instead of using foam.vts.images.ImageHandler directly.

Each handler can specify a number of callbacks that are invoked at various points during the image instantiation process. There is currently limited documentation for these callbacks, although there is a wealth of example handler code. The current callbacks are:

  • .prebuild (self, cobj) - Invoked before a container using your image is built (once for each requested instance)
  • .postlocaltopo (self, cobj) - Invoked after a container using your image has been instantiated, and has local networking (all interfaces have been attached).
  • .postnettopo (self, cobj) - Invoked after the entire topology graph creation is complete (although you cannot guarantee that postnettopo has been called for any other instances, as no order is specified)
  • .geniSetup (registry) (static method) - Invoked once (per process) for each image spec that references this handler. Used to set up new POA endpoints and any other one-time initialization items.

Although the per-instance callbacks take a cobj argument, it should not be used; use self.container instead.
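
Putting the pieces above together, a skeletal handler might look roughly like the following. The import path and callback names come from this page; the bodies are placeholders, and existing handlers in the UH-NetLab repository are the better reference for real logic.

    from foam.vts.images import ImageHandler

    class MyImageHandler(ImageHandler):
        def prebuild(self, cobj):
            # Runs before each requested instance of this image is built.
            # Per the note above, work with self.container rather than cobj.
            pass

        def postlocaltopo(self, cobj):
            # Runs once the instance exists and all of its interfaces have
            # been attached.
            pass

        def postnettopo(self, cobj):
            # Runs after the whole topology graph has been created; no
            # ordering relative to other instances' postnettopo is guaranteed.
            pass

        @staticmethod
        def geniSetup(registry):
            # Runs once per process for each image spec referencing this
            # handler; one-time setup such as POA endpoint registration
            # belongs here (see POA Extensions below).
            pass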

POA Extensions

POA extensions should generally use the vts_uh.bases._wrapPOA wrapper function, which provides a consistent interface to tools for new POA endpoints. The vts_uh.bases.AlpineHost handler class has clean examples of how to use this wrapper.
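
The exact calling convention of _wrapPOA is not reproduced here, so the sketch below only shows the general shape of a handler that adds a POA action; the wrapping and registration details are assumptions and should be copied from vts_uh.bases.AlpineHost rather than from this sketch.

    from vts_uh import bases

    class MyPOAHandler(bases.AlpineHost):

        def restartService(self, *args, **kwargs):
            # Hypothetical POA action; the name and arguments are made up.
            # In a real handler this callable would be exposed through the
            # vts_uh.bases._wrapPOA wrapper, following the pattern used in
            # vts_uh.bases.AlpineHost (not reproduced here).
            pass

        @staticmethod
        def geniSetup(registry):
            # One-time hook where new POA endpoints are set up; see
            # AlpineHost for how wrapped endpoints are actually registered.
            pass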