Faster GitHub Actions
Accelerate your GitHub Actions with Namespace while saving cost.
Namespace offers performant GitHub Runners that boot in a few seconds and suspend themselves immediately after the CI job completes.
Why Namespace Runners?
Namespace Runners are designed as a drop-in replacement for GitHub Actions runners: they boot in seconds, suspend when the job completes, and integrate with Remote Builders and cross-invocation caching.
Getting started
To speed up your workflows with Namespace, connect Namespace to your GitHub organization and make a one-line change to your workflow definition: change the `runs-on` field in the workflow file to an `nscloud` label. For example:

```diff
jobs:
  build:
-   runs-on: ubuntu-20.04
+   runs-on: nscloud-ubuntu-22.04-amd64-4x16
    name: Build Docker image
```
Using Namespace GitHub Runners
To schedule a GitHub Actions job on Namespace GitHub Runners, change the `runs-on` option in your workflow job to `runs-on: nscloud`.

For example:

```diff
jobs:
  build:
-   runs-on: ubuntu-20.04
+   runs-on: nscloud
    name: Build Docker image
```
That's it! Your workflow runs on Namespace now.
High-performance builds
Namespace Runners integrate seamlessly with Remote Builders, letting you speed up your GitHub Actions runs even further. Offloading expensive Docker builds to Remote Builders takes load off the workflow run, so you can select a smaller runner shape.
Within Namespace Runners, the Docker context comes pre-configured to use our Remote Builders. No extra changes to your workflow file are required.
Any call to `docker build` or `docker/build-push-action` automatically offloads the build to a Namespace Remote Builder.
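For example, an ordinary build step needs no changes to benefit from Remote Builders (the image name and repository layout below are placeholders):

```yaml
jobs:
  build:
    runs-on: nscloud-ubuntu-22.04-amd64-4x16
    steps:
      - uses: actions/checkout@v4
      # This build is transparently offloaded to a Namespace Remote Builder;
      # no extra configuration is required on the runner.
      - name: Build image
        run: docker build -t my-app:latest .
```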
Remote Builders rely on the Docker `buildx` context. Make sure that your workflow does not overwrite the `buildx` context: for example, calling the `docker/setup-buildx-action` action would prevent Namespace Remote Builders from being used. Consider removing it, or use `namespacelabs/nscloud-setup-buildx-action` instead.
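If your workflow depends on a buildx setup step, swapping the action keeps the workflow structure intact. A minimal sketch (the `@v0` pin is an assumption; check the action's repository for the current version):

```yaml
jobs:
  build:
    runs-on: nscloud-ubuntu-22.04-amd64-4x16
    steps:
      - uses: actions/checkout@v4
      # Replaces docker/setup-buildx-action while preserving the
      # Namespace Remote Builder context.
      - uses: namespacelabs/nscloud-setup-buildx-action@v0
      - name: Build image
        run: docker build -t my-app:latest .
```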
Disabling Namespace Remote Builders
By default, Namespace-managed GitHub Runners have Remote Builders pre-configured: Docker image builds (e.g., the `docker build` command) use Remote Builders by default.
If you prefer to use the runner's local Docker builder instead of Namespace Remote Builders, you have two options:
- Disable Remote Builders: You can request that runners created for a particular workflow job do not use Remote Builders by passing the additional configuration label `nscloud-no-remote-builders`, e.g.

```yaml
jobs:
  myjob:
    runs-on: [nscloud-ubuntu-22.04-amd64-4x16, nscloud-no-remote-builders]
```
- Revert the Docker build context to the default: Namespace configures Remote Builders as a separate `buildx` context. You can switch back to the default by calling `docker buildx use default`, e.g.

```yaml
jobs:
  build:
    steps:
      - name: Use default builder
        run: docker buildx use default
```
Configuring a Runner
You can specify one of the following labels to use different runner CPU architectures or machine shapes.
| Label | OS | Architecture | vCPU | Memory |
|---|---|---|---|---|
| `nscloud` | Ubuntu 22.04 | AMD 64-bit | 4 | 16 GB |
| `nscloud-ubuntu-22.04` | Ubuntu 22.04 | AMD 64-bit | 4 | 16 GB |
| `nscloud-amd64` | Ubuntu 22.04 | AMD 64-bit | 4 | 16 GB |
| `nscloud-ubuntu-22.04-amd64` | Ubuntu 22.04 | AMD 64-bit | 4 | 16 GB |
| `nscloud-ubuntu-22.04-amd64-4x16` | Ubuntu 22.04 | AMD 64-bit | 4 | 16 GB |
| `nscloud-arm64` | Ubuntu 22.04 | ARM 64-bit | 4 | 16 GB |
| `nscloud-ubuntu-22.04-arm64` | Ubuntu 22.04 | ARM 64-bit | 4 | 16 GB |
| `nscloud-ubuntu-22.04-arm64-4x16` | Ubuntu 22.04 | ARM 64-bit | 4 | 16 GB |
Note that only one `nscloud` label is allowed in the `runs-on` field of your workflow file. Namespace will not schedule a workflow job if `runs-on` specifies more than one `nscloud` label or an invalid one.
Runner machine resources
Namespace supports custom machine shapes, so you can configure your runner with a bigger or smaller machine than those in the table of well-known labels above. Specify the machine shape with a label of the following format:

`runs-on: nscloud-{os}-{arch}-{vcpu}x{mem}`

Where:

- `{os}`: the operating system image to use. Currently, only "ubuntu-22.04" is supported.
- `{arch}`: either "amd64" or "arm64".
- `{vcpu}`: acceptable vCPU counts are 2, 4, 8, 16, and 32.
- `{mem}`: acceptable RAM values in GB are 2, 4, 8, 16, 32, 64, 80, 96, 112, and 128.
For example, to create a 2 vCPU 4 GB ARM runner, use the following label:
runs-on: nscloud-ubuntu-22.04-arm64-2x4
Runners will only be scheduled if the machine shape is valid and your workspace has not run out of available concurrent capacity.
Using GitHub Runners with more than 64 GB of RAM (80, 96, 112, or 128) is not enabled by default. Please contact support@namespace.so if you want to use high-memory GitHub Runners.
Localization
Namespace runners use the `C.UTF-8` locale by default, aligning with GitHub-managed runners.

If you require `POSIX` instead, add the `nscloud-config-locale-posix` label to your `runs-on` block, e.g.

```yaml
jobs:
  myjob:
    runs-on: [nscloud-ubuntu-22.04-amd64-4x16, nscloud-config-locale-posix]
```
Cross-invocation caching
You can use Namespace cache volumes to speed up your GitHub Actions. Workflows can store dependencies, tools, and any data that needs to be available across invocations. Cached data outlives the runner instance, so subsequent runners can read it and skip expensive downloads and installations.
By enabling this feature, you can attach cache volumes to runner instances. Cache volumes are tagged so that any runner sharing the same cache tag will share cache volume contents. Cache volumes are attached on a best-effort basis depending on their locality, expiration, and current usage (so they should not be relied upon as durable data storage).
After a cache volume is attached to a runner, you can store files using whatever tools you prefer: by configuring paths in your favorite tools, copying files in or out, or simply relying on bind mounts.
The cache volume content is only committed when the GitHub workflow completes successfully. So, any intermediate data written into the cache volume is automatically discarded if the workflow fails.
You can use our `nscloud-cache-action` to wire up your cache mounts for you.
Namespace uses fast local storage for cache volumes - and some magic. Cache volumes are forked on demand, ensuring that concurrent jobs can use the same caches with uncontested read-write access.
Using a cache volume
Cache volumes are created and attached to runners based on their `runs-on` labels. Use the following labels:

- `nscloud-{os}-{arch}-{vcpu}x{mem}-with-cache` (required): the meaning of `{os}`, `{arch}`, `{vcpu}`, and `{mem}` is the same as detailed in Runner machine resources.
- `nscloud-cache-tag-{tag-value}` (required): `{tag-value}` is the key used to select the volume to mount. Namespace infrastructure will attempt to attach the most recently used volume with this tag (or, failing that, an empty volume).
- `nscloud-cache-size-{size}gb` (optional): `{size}` is the requested cache volume size in gigabytes. It defaults to 20 GB, which is also the minimum. The maximum size depends on your subscription plan: the Team plan includes 50 GB, and the Business plan offers 100 GB.
Setting these labels makes the cache volume available to the runner at the path `/cache`. This location is static and not currently configurable.
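For instance, a quick step can confirm the volume is mounted and show how much space is available (the job name and cache tag below are placeholders):

```yaml
jobs:
  inspect:
    runs-on:
      - nscloud-ubuntu-22.04-amd64-4x16-with-cache
      - nscloud-cache-tag-example   # "example" is a placeholder tag
    steps:
      # Report the size and usage of the attached cache volume.
      - name: Inspect cache volume
        run: df -h /cache
```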
Example: Caching NPM packages
Use our `nscloud-cache-action` to mount the volume under the paths you want to cache.
```yaml
jobs:
  tests:
    runs-on:
      - nscloud-ubuntu-22.04-amd64-8x16-with-cache
      - nscloud-cache-size-20gb
      - nscloud-cache-tag-my-repository-e2e-tests
    steps:
      - name: Setup npm cache
        uses: namespacelabs/nscloud-cache-action@v0
        with:
          path: |
            ~/.npm
            ./node_modules
          use-cache-volume: true
      - name: Install dependencies
        run: npm install
      - name: NPM tests
        run: npm run test
```
Alternatively, you can mount, write, or copy data into the `/cache` directory yourself.

```yaml
jobs:
  tests:
    runs-on:
      - nscloud-ubuntu-22.04-amd64-8x16-with-cache
      - nscloud-cache-size-20gb
      - nscloud-cache-tag-my-repository-e2e-tests
    steps:
      - name: Setup cache
        run: |
          # Cache ~/.npm
          mkdir -p /cache/npm-cache/.npm ~/.npm
          sudo mount --bind /cache/npm-cache/.npm ~/.npm
          # Cache node_modules
          mkdir -p /cache/npm-cache/demo-remix/node_modules ./node_modules
          sudo mount --bind /cache/npm-cache/demo-remix/node_modules ./node_modules
      - name: Install dependencies
        run: npm install
      - name: NPM tests
        run: npm run test
```
Experimental: Caching Docker images across invocations
Namespace also makes it trivial to cache container image pulls (and unpacks, often the most expensive part) across invocations.

You can enable this feature by adding the experimental label `nscloud-exp-container-image-cache`. When this label is set, the runner's container engine stores and retrieves images from the cache volume.

We're still ironing out some kinks, so consider this support experimental.
```yaml
jobs:
  tests:
    runs-on:
      - nscloud-ubuntu-22.04-amd64-8x16-with-cache
      - nscloud-cache-tag-e2e-tests
      - nscloud-cache-size-50gb
      - nscloud-exp-container-image-cache
    steps:
      - name: Pull ubuntu image
        run: |
          time docker pull ubuntu
```
We recommend configuring a cache size of 50 GB or more for optimal performance with container image caching.
What's Next?
- Chat with the team on Discord.