fix up BUILD.bazel, move lcov_parser rules into subdir

Some of the BUILD.bazel rules for lcov_parser were broken
* get_changed_lines.py was renamed to changed_lines.py during review
* incrememtal -> incremental
* the `deps` field was using file names instead of targets.
* mark incremental_coverage as a binary since it has a main
* move the lcov_parser rules into their subdir
* ran formatter over /BUILD.bazel, which alphabetized a few things

TESTED: `bazel test ...` now works as expected.

Change-Id: I698857a5a4d1603e3c7f485a0167cb7b6448fa93
Signed-off-by: Daniel Latypov <dlatypov@google.com>
README.md

Prow Presubmit for KUnit

KUnit uses Prow for presubmit and CI.

This repository contains all the code and instructions needed to generate a docker container that builds and runs KUnit tests on Prow, and then to use that presubmit job container with a Prow cluster. Note that, as of now, Prow requires a Google Cloud Storage bucket for storing job artifacts and Gubernator as a frontend for viewing them.

The official repo for Prow is test-infra, Kubernetes' test infrastructure. It enables us to run presubmit jobs on changes to our gerrit repo against an arbitrary docker container image. We have pushed the docker image generated from here to gcr.io/kunit-presubmit/kunit. The image is pulled and deployed by our prow cluster, the kunit source is pulled into the container's working directory, and the entrypoint script (kunit.sh) is run. The script copies kunitconfig into the working directory, runs kunit.py, and sends the output to the job artifacts, which can be viewed later from the link in the report. Prow will comment on the Gerrit change with the status of the job (successful, a test failed, a test crashed, etc.) and link to the Prow URL for more details. The container is currently configured to require the inclusion of kunitconfig in the repo. A convenience of this model is that the job image can be updated without any change to the ProwJob configuration.
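As a rough sketch (this is not the actual contents of kunit.sh; the kunitconfig location, the kunit.py path, and the use of bash are assumptions), the flow described above amounts to:

# copy the kunitconfig shipped in the image next to the checked-out source
cp /kunitconfig kunitconfig

# run the KUnit wrapper from the kernel tree, teeing output into the job artifacts
./tools/testing/kunit/kunit.py run | tee "$ARTIFACTS/kunit.log"

# report kunit.py's status (0 = pass, non-zero = fail) back to Prow
exit "${PIPESTATUS[0]}"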

ProwJob Docker image

The docker image is configured in the Dockerfile. It is based on Debian, with kernel build tools installed and the script run by the prowjob included.

Note: The script uses the latest kunitconfig from https://kunit.googlesource.com/kunitconfig on the kunit/alpha/master branch.
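For reference, fetching that kunitconfig by hand would look roughly like this (the destination directory is arbitrary):

git clone --depth 1 --branch kunit/alpha/master \
    https://kunit.googlesource.com/kunitconfig /tmp/kunitconfig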

A couple of peculiarities of building/running UML in a docker container are solved in the deployment configuration discussed below under “Prow Job Specification”.

To build and push, you can either use Bazel or Docker. You only need to use one of these methods. The Bazel method is usually simpler, but the Docker method has fewer dependencies, and so can be more robust.

Note that you only need to run the “Push” stage if you wish to update the global version of the presubmit running in production. You'll also need permission to push to the gcr repository to do so. If unsure, DON'T DO IT: you don't need to push in order to test locally.

Method 1: Bazel

Bazel allows building and pushing without depending on docker, and it automatically uses the latest kunitconfig in the build. As Bazel enforces deterministic builds, all debian packages depended on are listed in debs.bzl, based on the snapshot specified in the dpkg_src rule in WORKSPACE.

Update

You only need to worry about this if dependencies have changed. Otherwise, skip ahead to ‘Use’ below.

The direct dependencies needed are:

  • build-essential
  • bc
  • m4
  • flex
  • bison
  • python3
  • libelf-dev
  • lcov

If recursive dependencies change in future snapshots, debs.bzl must be manually updated with a list of all dependencies. A simple solution:

# run interactive shell in debian:stretch image
docker run -it --rm debian:stretch

# update list of available packages
apt update

# generate list of all required dependencies recursively
apt-cache depends \
    --recurse \
    --no-recommends \
    --no-suggests \
    --no-conflicts \
    --no-breaks \
    --no-replaces \
    --no-enhances \
    --no-pre-depends \
    build-essential \
    bc \
    m4 \
    flex \
    bison \
    python3 \
    libelf-dev \
    lcov | grep "^\w" | sort -u

The snapshot used can be updated as specified in the distroless package manager bazel rules.

Use

# build prowjob image
bazel build :kunit

# build test image with kunit source and add to local docker
bazel run :kunit_test

# need docker to run, results to stdout
docker run --privileged --tmpfs /dev/shm:exec kunit-presubmit/test:kunit_test

Push

If you wish to update the global version of the prow presubmit (and you have the requisite permissions to do so), you can use the following command:

# push to configured repo
bazel run :push_kunit

Method 2: Docker

You must have Docker installed. Make sure to enable sudoless docker (this fixes gcr push authentication problems).
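The usual sudoless setup (the standard Docker post-install steps) is:

# add your user to the docker group; the group may already exist
sudo groupadd docker
sudo usermod -aG docker $USER

# apply the new group membership (or log out and back in)
newgrp docker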

Use

To build the image:

# build image
docker build -t kunit-presubmit/kunit .

# confirm image is built
docker images

To test the container locally, you'll need to have a temporary directory containing a copy of the linux source (with KUnit) in the linux/ subdirectory.

# have tmp directory with source checked out to $TMP/linux
cp Dockerfile.test $TMP/Dockerfile
cd $TMP

# build test container which includes source
docker build . -t test

# run with args to handle issues with UML in Docker.
docker run --privileged --tmpfs /dev/shm:exec test

# extract log
RUN=$(docker container ls --last 1 -q)
docker cp $RUN:/artifacts/kunit.log .
cat kunit.log

# cleanup container
docker rm $RUN

Note: Testing with an interactive shell results in unexpected behaviour. Running the UML Kernel in an entrypoint script works as intended but fails in an interactive shell.

Push

Now to build and push the image (if you have permission):

# build image (note the 'gcr.io' prefix)
docker build . -t gcr.io/kunit-presubmit/kunit

# confirm image is built
docker images

# push to gcr
docker push gcr.io/kunit-presubmit/kunit

Registry

By default, we have configured our install to use Google Container Registry. This requires installing the Google Cloud SDK and configuring it with the project to push to. If using docker to push, you will first need to enable gcloud authentication for docker with gcloud auth configure-docker.
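A typical setup looks like the following; the project id is an assumption based on the image path above:

# authenticate and select the project to push to
gcloud auth login
gcloud config set project kunit-presubmit

# let docker push to gcr using gcloud credentials
gcloud auth configure-docker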

Prow Job Specification

Prow Jobs are detailed at test-infra/prow/jobs.md. The prow job specification is held in config.yaml. It specifies the gerrit repo for prow to poll and the specifications for the container. We found that running the container privileged and mounting an emptyDir at /dev/shm fixed KUnit build errors. There may be more secure methods, but as prow doesn't expose job containers to external resources, this solution results in the cleanest Dockerfile.

We are using the decorated prow job, which is recommended for all new prowjobs and detailed at test-infra/prow/pod-utilities.md. These utilities perform setup and capture output from the job container. The sidecar utility runs alongside the job container and uploads the artifacts to a GCS bucket. Therefore, you must specify a GCS bucket in the Plank configuration in order to upload logs/artifacts from the job container. Fill in the TODOs in config.yaml with your bucket details. See the Prow API documentation under DecorationConfig and test-infra's default config.yaml for details.
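As a sketch of one common approach from the upstream Prow docs (the ConfigMap name depends on how your components are configured), the edited config.yaml is loaded into the cluster as a ConfigMap:

kubectl create configmap config --from-file=config.yaml=config.yaml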

Job Script

The job script and kunitconfig stored inside the docker image are kunit.sh and kunitconfig.

Prow sets the environment variable ARTIFACTS to a directory whose contents will be exported to gcloud on job completion. An exit code of 0 signals success and 1 signals failure; this is used when reporting the job's state to gerrit.
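For illustration, a job script interacts with these as follows (the file name is arbitrary):

# anything written under $ARTIFACTS is uploaded to the GCS bucket
echo "some build output" > "$ARTIFACTS/build.log"

# the exit code becomes the job result reported to gerrit
exit 0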

Prow Cluster Deployment

To deploy in any Kubernetes environment, first read the Prow deployment documentation [here](https://github.com/kubernetes/test-infra/blob/master/prow/getting_started_deploy.md) and, for further clarification, the Kubernetes documentation here.

Prow comes with several components, several of which are only necessary for interacting with github webhooks. We have included here a deployment.yaml which includes just the components needed to deploy prow. Applying the configs under deployment/gerrit will deploy the gerrit adapter and the crier reporter for reporting back to gerrit. Applying the configs under deployment/lkml will deploy the lkml adapter and the custom crier reporter (named mail to allow concurrent deployment with regular crier) for handling mail.
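Applying those configs is plain kubectl; the deployment.yaml path below assumes it lives under deployment/ as in this repo's tree:

# core prow components
kubectl apply -f deployment/deployment.yaml

# gerrit adapter and crier reporter
kubectl apply -f deployment/gerrit/

# lkml adapter and mail reporter (optional)
kubectl apply -f deployment/lkml/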

For all Deployment objects, please check the source for the corresponding component under prow/cmd for arguments to provide the container being deployed.

For every new gerrit repo to run presubmits on, you will need to update the Gerrit component in the deployment accordingly.

Check out prow/cluster/starter.yaml for github presubmit instructions, which require oauth token authentication. Job configuration in config.yaml does not need to change. prow/cluster generally has default deployments for all prow components, and the documentation under prow/cmd/{component} specifies the arguments needed to configure each one.

Succinctly, a Kubernetes deployment file specifies all the API objects needed for deployment. Each prow component is a container pod described in an object of kind Deployment, which specifies the Docker image that the deployment will launch. Services provide extra-pod communication, and ServiceAccounts provide an identity to the processes that run in these pods. Roles and RoleBindings provide RBAC authorization for components. Read this article on RBAC in Kubernetes if the documentation is not sufficient. A PersistentVolumeClaim is a type of Volume used for maintaining state; gerrit uses one to keep track of the latest synced commit. You may also need to configure the Ingress for all external communication, depending on your network / deployment environment.

If you require gerrit authentication, you will also need a git https cookie file. For a token periodically authenticated with gcloud, see (and deploy) grandmatriarch.