Build, Deploy and Validate Cassini Image

The recommended approach for image build setup and customization is to use the kas build tool. To support this, Cassini provides configuration files to set up and build different target images, enable different distribution image features, and configure the associated build parameters.

This page first briefly describes the kas configuration files provided with Cassini, before giving guidance on using those files to set up the Cassini distribution on a target platform.

Note

All command examples on this page can be copied by clicking the copy button. Any console prompts at the start of each line, comments, or empty lines will be automatically excluded from the copied text.

The kas directory contains kas configuration files to support building and customizing Cassini distribution images via kas. These kas configuration files contain default parameter settings for a Cassini distribution build. Here, the files are briefly introduced, classified into four ordered categories:

  • Base Configs: Configure common software components

    • cassini.yml to build an image for the Cassini distribution.

    • cassini-dev.yml to build a Cassini image suitable for development (e.g. allowing root login without a password).

    • cassini-sdk.yml to build a Cassini image with additional tools for software development.

  • Build Cloud Configs: Set and configure cloud orchestration features of the Cassini distribution

    • k3s.yml to include K3s orchestration.

  • Build Modifier Configs: Set and configure features of the Cassini distribution

    • tests.yml to include run-time validation tests into the image.

    • security.yml to build a security-hardened Cassini distribution image.

  • Target Platform Configs: Set the target platform

    For information on the targets supported by Cassini and the corresponding value of the MACHINE variable, refer to Target Platforms.

These kas configuration files can be used to build a custom Cassini distribution by passing one Base Config, zero or more Build Cloud Configs, zero or more Build Modifier Configs, and one Target Platform Config to the kas build tool, chained via a colon (:) character. Further examples are given later in this document. The general form is:

kas build <Base Config>:<Build Cloud Configs>:<Build Modifier Configs>:<Target Platform Config>
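
For instance, a Cassini image with K3s orchestration and run-time validation tests could be built with a command along the following lines, where the Target Platform Config file name (shown here as a placeholder) depends on the chosen target platform:

kas build kas/cassini.yml:kas/k3s.yml:kas/tests.yml:kas/<target platform config>.yml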

In the next section, guidance is provided for configuring, building and deploying Cassini distributions using these kas configuration files.

Build Host Environment Setup

This documentation assumes an Ubuntu-based build host, where the build steps have been validated on Ubuntu 20.04 LTS (Focal Fossa) and 22.04 LTS (Jammy Jellyfish).

Note

The following build steps can also be run on Ubuntu 18.04 LTS. However, since Ubuntu 18.04 does not provide the required versions of development tools (such as Python 3.8), the extra Yocto buildtools environment setup is needed.

Note

When using Ubuntu 22.04, installing Python 3.8 or 3.9 is recommended as kas 4.0 has dependencies which are incompatible with the version of setuptools that ships with Python 3.10.

A number of package dependencies must be installed on the Build Host to run build scenarios via the Yocto Project. The Yocto Project documentation provides the list of essential packages together with a command for their installation.
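
As an illustration, for recent Yocto Project releases the essential packages on an Ubuntu Build Host can typically be installed with a command similar to the following (treat this as a sketch; the authoritative, release-specific list and command are given in the Yocto Project documentation):

sudo apt install build-essential chrpath cpio debianutils diffstat file gawk gcc git \
     iputils-ping libacl1 liblz4-tool locales python3 python3-git python3-jinja2 \
     python3-pexpect python3-pip python3-subunit socat texinfo unzip wget xz-utils zstd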

The recommended approach for building Cassini is to use the kas build tool. To install kas:

pip3 install --upgrade kas==4.0

For more details on kas installation, see kas Dependencies & installation.
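
The installation can be verified by querying the installed version, which should report version 4.0:

kas --version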

To deploy a Cassini distribution image onto a supported target platform, bmap-tools is used. This can be installed via:

sudo apt install bmap-tools
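
As a minimal usage sketch (the image file name and device node below are placeholders; the actual values depend on the target platform and build output), an image is typically written to a storage device with:

sudo bmaptool copy <image file> /dev/<device>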

Note

The Build Host should have at least 65 GBytes of free disk space to build a Cassini distribution image.
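
The free space available to the build directory can be checked with, for example:

df -h .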

Download

The meta-cassini repository can be downloaded using Git, via:

# Change the tag or branch to be fetched by replacing the value supplied to
# the --branch parameter option

git clone https://gitlab.com/Linaro/cassini/meta-cassini.git --branch nanbield-dev
cd meta-cassini

Build and Deploy

Refer to the platform guides for instructions on how to build and deploy Cassini images on supported platforms.

Run

To run the deployed Cassini distribution image, simply boot the target platform.

The Cassini distribution image can be logged into as the cassini user.

The distribution can then be used for deployment and orchestration of application workloads in order to achieve the desired use-cases.

Validate

As an initial validation step, check that the appropriate systemd services are running successfully:

  • docker.service

  • k3s.service

These services can be checked by running the command:

systemctl status --no-pager --lines=0 docker.service k3s.service

The command output should list both services as active and running.
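
Alternatively, a minimal scripted check (a sketch using systemctl is-active) can report each service's state:

for service in docker.service k3s.service; do
    systemctl is-active --quiet "${service}" && echo "${service}: active" || echo "${service}: inactive"
done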

More thorough run-time validation of Cassini components is provided as a series of integration tests, available if the kas/tests.yml kas configuration file was included in the image build.

Note

Due to performance limitations, K3s is not currently supported on the Arm Corstone-1000.

Reproducing the Cassini Use-Cases

This section briefly demonstrates simplified use-case examples, where detailed instructions for developing, deploying, and orchestrating application workloads are left to the external documentation of the relevant technology.

Deploying Application Workloads via Docker and K3s

This example deploys the Nginx web server as an application workload, using the nginx container image available from Docker’s default image repository. The deployment can be achieved either via Docker or via K3s, as follows:

  1. Boot the image and log in as the cassini user.

  2. Ensure the target device can access the internet:

    wget www.linaro.org
    

    The output should be similar to:

    --2023-12-02 12:42:10--  http://www.linaro.org/
    Resolving www.linaro.org... 18.165.227.69, 18.165.227.126, 18.165.227.43, ...
    Connecting to www.linaro.org|18.165.227.69|:80... connected.
    HTTP request sent, awaiting response... 301 Moved Permanently
    Location: https://www.linaro.org/ [following]
    --2023-12-02 12:42:10--  https://www.linaro.org/
    Connecting to www.linaro.org|18.165.227.69|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 54811 (54K) [text/html]
    Saving to: 'index.html.1'
    
    index.html    100%[===============>]  53.53K   323KB/s    in 0.2s
    
    2023-12-02 12:42:26 (323 KB/s) - 'index.html' saved [54811/54811]
    
  3. Deploy the example application workload:

    • Deploy via Docker

      3.1. Run the following example command to deploy via Docker:

      sudo docker run -p 8082:80 -d nginx
      

      3.2. Confirm the Docker container is running by checking its STATUS in the container list:

      sudo docker container list
      
    • Deploy via K3s

      3.1. Run the following example command to deploy via K3s:

      cat << EOT > nginx-example.yml && sudo kubectl apply -f nginx-example.yml
      apiVersion: v1
      kind: Pod
      metadata:
        name: k3s-nginx-example
      spec:
        containers:
        - name: nginx
          image: nginx
          ports:
          - containerPort: 80
            hostPort: 8082
      EOT
      

      3.2. Confirm that the K3s Pod hosting the container is running by checking that its STATUS is Running, using:

      sudo kubectl get pods -o wide
      
  4. After the Nginx application workload has been successfully deployed, it can be interacted with on the network, for example via:

    wget localhost:8082
    

Note

As both methods deploy a web server listening on port 8082, the two methods cannot be run simultaneously and one deployment must be stopped before the other can start.
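
As a sketch, each deployment can be stopped as follows, where the Docker container ID is taken from the container list shown earlier and the Pod name comes from the example manifest above:

# Stop the Docker deployment
sudo docker stop <CONTAINER ID>

# Remove the K3s Pod
sudo kubectl delete pod k3s-nginx-example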

Note

Due to performance limitations, K3s is not currently supported on the Arm Corstone-1000.