
Container Instance Architecture

If you use nsc run, or any of the CI runners (e.g. GitHub Runners), you are using Container Instances.

Each Container Instance is a micro virtual machine that is fully isolated from other workloads. It gets its own CPU, RAM, and disk allocations.

Namespace relies on containerd to provide the container primitives users expect.

User workloads, and a small set of supporting services including containerd, run in "guest" space -- i.e. within the virtual machine. They're supported by a number of services that operate outside the guest, either for performance or for security isolation reasons. We call those services "host services".

Container Instances are optimized for interactivity, and thus have very low startup times. To achieve this, a Container Instance is devoid of most traditional Linux management software, being supported primarily by host services.

The software included in a Container Instance includes:

  • init: a custom init system developed by the Namespace team ("concierge").
  • containerd: for container management.
  • vector: for logs and metrics exfiltration.

After the kernel boots, Namespace's init performs minimal operating system initialization, including system mount management, network setup, etc. This guest-boot procedure does not take more than a few milliseconds.

It then starts containerd and defers any additional work to it.

Additional non-critical software is also included, notably dockerd.

Workload Identity

Each Container Instance obtains unique workload identity credentials which allow it to identify itself when calling other Namespace services. These can also be used for cross-service federation purposes. See Federation for more.

Namespaces and networking

Each container that is started with nsc run, nsc run --on or via one of the runners, is deployed as an individual container and task in containerd, in the default namespace.

Depending on configuration, containers can rely on internal networking, or host networking.

Each Container Instance benefits from Namespace's DNS caching, which improves both resolution time and reliability.

Docker API

Containers can optionally be granted access to the Docker API. When that capability is enabled, a /var/run/docker.sock is made available in their root filesystem, which can be used by any Docker client.

Note however that unlike in a traditional Docker environment, /var/run/docker.sock is not backed by dockerd directly; rather, it's served by concierge, which acts as a Docker-aware reverse proxy. This reverse proxy knows the calling container's identity and can perform translations, authorization checks, etc., based on that identity.
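Conceptually, the proxy's identity-based handling can be sketched as follows. This is a minimal illustration only, not Namespace's implementation; the `CallerIdentity` shape, the `authorize` function, and the example policy rules are all assumptions.

```python
from dataclasses import dataclass


@dataclass
class CallerIdentity:
    """Hypothetical identity attached to a Docker API caller (illustrative)."""
    container_id: str
    rootfs: str  # host-side path of the caller's root filesystem


def authorize(identity: CallerIdentity, method: str, path: str) -> bool:
    """Example policy check: permit ordinary container lifecycle calls,
    deny host-level Docker API endpoints. The denied prefixes here are
    purely illustrative, not Namespace's actual policy."""
    denied_prefixes = ("/plugins", "/swarm", "/system")
    return not any(path.startswith(p) for p in denied_prefixes)
```

A real proxy would also rewrite request bodies (for example, bind-mount sources) based on the caller's identity, as described below.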

One important aspect is that, to retain full compatibility and feature set, when a container creates additional containers using the Docker API, those new containers become siblings of the creating container, rather than children.

To enable bind mounts that refer to contents within the creating container, /var/run/docker.sock is backed by a Docker API proxy which understands the container anatomy and performs the necessary directory translations to ensure correct behavior.

Example

If you have a container started with nsc run and the Docker API enabled, and you create /mydata/foobar, then run docker run --rm -it -v /mydata:/parent ubuntu followed by ls /parent, you'll observe foobar.

This works by having /mydata/ translated to an overlayfs-aware local path, which is then passed to dockerd. E.g.

/mydata --> /run/containerd/io.containerd.runtime.v2.task/default/{containerid}/mydata

This translation mechanism also ensures that translated paths can't escape the caller container's root.
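Such a translation can be sketched as below, assuming the containerd task-root layout shown in the example above. The function name and the lexical escape check are our own illustration, not Namespace's actual code.

```python
import os

# Task-root layout from the example above (default containerd namespace).
TASK_ROOT = "/run/containerd/io.containerd.runtime.v2.task/default"


def translate_bind_source(container_id: str, source: str) -> str:
    """Map a bind-mount source inside the calling container to a
    host-side path under the caller's root filesystem (illustrative)."""
    root = os.path.join(TASK_ROOT, container_id)
    # Resolve ".." segments lexically, then ensure the result stays
    # inside the caller's root filesystem.
    candidate = os.path.normpath(root + "/" + source.lstrip("/"))
    if candidate != root and not candidate.startswith(root + "/"):
        raise ValueError(f"bind source {source!r} escapes the container root")
    return candidate
```

For instance, `translate_bind_source("abc123", "/mydata")` yields the `/run/containerd/.../abc123/mydata` form shown above, while a source such as `/../etc` is rejected.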

Cgroup management

In a Container Instance, containerd is the sole owner of the cgroups sub-system, which is exposed exclusively as cgroups v2.

Docker support is available in the environment, but rather than using its own containerd, it's configured to use the Namespace-managed one. Containers started via Docker are thus observed alongside other Namespace-managed containers, albeit in a different containerd namespace: Docker deploys containers to the moby containerd namespace.

Machine resource shapes

Namespace allows you to shape your compute resources flexibly. For brevity, we typically describe shapes as axb, where a represents the number of vCPUs and b represents the amount of RAM in GB. Acceptable numbers of vCPU are 2, 4, 8, 16, and 32. Acceptable values of RAM in GB are 2, 4, 8, 16, 32, 64, 80, 96, 112, 128, 256, 384, and 512.
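As an illustration of the axb notation, here is a small parser and validator for shape strings. The function name and the error handling are our own; this is not part of nsc or any Namespace tooling.

```python
# Allowed values as documented above.
VALID_VCPUS = {2, 4, 8, 16, 32}
VALID_RAM_GB = {2, 4, 8, 16, 32, 64, 80, 96, 112, 128, 256, 384, 512}


def parse_shape(shape: str) -> tuple[int, int]:
    """Parse a shape like "4x8" into (vCPU count, RAM in GB),
    validating both against the documented values."""
    vcpu_s, _, ram_s = shape.partition("x")
    vcpus, ram_gb = int(vcpu_s), int(ram_s)
    if vcpus not in VALID_VCPUS:
        raise ValueError(f"unsupported vCPU count: {vcpus}")
    if ram_gb not in VALID_RAM_GB:
        raise ValueError(f"unsupported RAM size: {ram_gb}GB")
    return vcpus, ram_gb
```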

Using instances with over 64GB of RAM is not allowed by default. Please contact support@namespace.so if you want to use high-memory instances.

Disk that is available to your instances automatically scales with the configured shape. The unit count of an instance is defined as:

unit count = max(vCPU count, GB RAM divided by 2)

Based on this unit count, the provisioned disk per instance is the result of the following formula:

disk size = 32GB + (unit count) x 8GB
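The two formulas combine as in this short sketch (the helper names are our own):

```python
def unit_count(vcpus: int, ram_gb: int) -> int:
    """Unit count = max(vCPU count, GB RAM / 2), per the formula above."""
    return max(vcpus, ram_gb // 2)


def disk_size_gb(vcpus: int, ram_gb: int) -> int:
    """Provisioned disk = 32GB + (unit count) x 8GB."""
    return 32 + unit_count(vcpus, ram_gb) * 8
```

For example, a 4x16 instance has max(4, 16/2) = 8 units, and thus 32 + 8 x 8 = 96GB of disk.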

Resource limits

You can schedule workloads concurrently with Namespace. To ensure that you can reliably predict how much capacity is available to you, Namespace enforces concurrency limits per workspace (how many vCPUs/RAM you can allocate concurrently).

When you hit your concurrency limits, additional workloads are queued until older workloads complete and free up capacity.

Concurrency limits protect against a large bill in case of a surprising burst of workload spawns. Larger subscriptions include higher concurrency limits.

You can observe your concurrent resource usage at cloud.namespace.so/workspace/usage/concurrency. This also allows you to track how close you are to your concurrency limit.

If you need larger concurrency limits than available to you please contact support@namespace.so.