Getting Started

San Francisco Compute is a market for buying time on H100 clusters.

You can buy a tradable contract for a multi-node H100 cluster with InfiniBand, then sell back whatever time you don't use. Other providers prevent you from reselling in their terms of service.

You can buy any shape of compute you'd like:

# buy 64 h100s         -n 64
# for 24 hours         -d "24h" 
# starting tomorrow    -s "tomorrow at 9am"
sf buy -n 64 -d "24h" -s "tomorrow at 9am"

You can also buy and get access to nodes right away:

# buy 8 h100s                -n 8
# for 1 hour                 -d "1hr"
# at $2.50 per GPU per hour  -p "2.50"
sf buy -d "1hr" -n 8 -p "2.50"

In this example, you're willing to pay up to $2.50 per GPU per hour. This means the maximum total cost would be 2.50 x 8 = $20 per hour, but you may end up paying less depending on market conditions and available supply.
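
If you want to double-check that ceiling before placing an order, the arithmetic is just price per GPU-hour times GPU count. A quick sketch on the command line, using the numbers from the example above:

# maximum hourly spend = price per GPU-hour x number of GPUs
echo "2.50 * 8" | bc   # prints 20.00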

If you’d like, you can sell back anything you buy. Every sell order is backed by a contract, specified with the -c flag. You can find all of your contracts with sf contracts list. The following is an example sell command:

# sell 128 H100s         -n 128
# for 12 hours           -d "12h"
# starting at 7pm        -s "tonight at 7pm"
# at $3.50/gpu/hour      -p '3.50'
# from contract cont_123 -c cont_123
sf sell -c cont_123 -p '3.50' -n 128 -d '12h' -s 'tonight at 7pm'

These are orders, not reservations. By placing an order, you're signaling to the market that you're willing to buy or sell a block of compute at a certain price. You haven't actually bought or sold anything until your order gets filled.

If you place a buy order, your order won't be filled unless there's a corresponding sell order for the same amount of compute at the same price or lower.

Similarly, if you place a sell order, your order won't be filled unless there's a corresponding buy order for the same amount of compute at the same price or higher.

Multiple buy orders can be combined to fill a single sell order, and similarly multiple sell orders can be combined to fill a buy order (but only so long as the market can pack them onto the same cluster). When a buy order gets filled, the resulting contract is guaranteed to land entirely on a single physical, interconnected cluster.

If you place an order and it can't immediately be filled, it will sit in the market until you cancel it, it expires, or it gets filled. You can see all of the outstanding orders with the CLI, or by visiting the blackbook.

sf orders list --public

To cancel an order, you can run

sf orders cancel ordr_FW6wCEr1JnFL2MSL # this is the order id, shown in sf orders ls

Installation

To place an order on SF Compute, you'll need to sign up on the website, sign a services agreement, and fund your account. If you'd like to talk to us before you do so, you can reach us at contact@sfcompute.com.

Then, you can download the command line tool.

curl -fsSL https://sfcompute.com/cli/install | bash
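
Assuming the install script puts the sf binary somewhere on your PATH (open a new terminal if your current shell doesn't see it yet), you can confirm it's visible:

which sf   # should print the path of the installed binary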

Next, log in via the CLI.

sf login # this will open a browser window

Finally, install kubectl for your platform by following the instructions here. SF Compute automatically sets up a kubernetes namespace on your cluster for you, and kubectl is how you will spin up resources and submit jobs to your cluster.
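
If you don't already have kubectl installed, the standard installs work fine here. For example, on macOS with Homebrew, or on Linux (x86_64) using the upstream release binary:

# macOS
brew install kubectl

# Linux (x86_64)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/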

Placing Buy Orders

Let's place an order for 1 day on 1024 H100s with InfiniBand (128 nodes), starting tomorrow at 10am, for $5/GPU/hour.

sf buy -t h100i -n 1024 -p '5' -s 'tomorrow at 10am' -d '1d'
# ... it will confirm the price before executing

Hmm, $5/GPU/hour isn't crazy for a last-minute, fully-interconnected 1024-GPU cluster for only a day, but it's higher than we'd like. Instead, let's set a max price. If someone offers our price or lower, we'll get the cluster; otherwise the order simply won't fill.

We'll set a low price, like $1.50/GPU/hour, and see if we can get lucky. Maybe someone's code isn't ready yet, and they need to fire-sale their cluster to recoup costs.

sf buy -t h100i -n 1024 -s 'tomorrow at 10am' -d '1d' -p '1.5'
# ... it will confirm before executing

You can check the status of orders with sf orders list.

sf orders list

BUY ord_1a2b3c4d5e 
  Type: h100i 
  Quantity: 128
  Start: 2024-06-15 10:00:00 UTC
  Duration: 1d
  Price: $36,864
  Status: Pending

# ...

Extending a Contract

If you have a contract that’s already running, but you’d like to buy more time on it, you can use a colocation constraint.

First, run sf contracts ls to get the contract id:

sf contracts ls
▶  cont_8WBXM6DaJwK
type  h100i
colo  -

  8 x h100i (gpus)  │ dec 1 11:18 am → dec 1 4:00 pm (5h)

Now you can place buy orders with the -colo cont_8WBXM6DaJwK flag:

sf buy -d 4h -n 16 -colo cont_8WBXM6DaJwK

That order will add two more nodes to your cluster (16 more GPUs), starting now and lasting until 4pm (the duration is rounded up to end on the nearest hour).

You can also extend in time, like this:

sf buy -s 4pm -d 4h -colo cont_8WBXM6DaJwK

That will make your 8-GPU contract last until 8pm instead of 4pm.

The next time you run sf contracts ls, your new contract will point to the old one in its colo field.

The downside of colocation constraints is that your order is less likely to be fillable, because it can only be filled on the same physical cluster that your original order was scheduled on. So in general the price may be higher, or the order may even be unfillable altogether.

Connecting to Your Cluster

First, you can list out the clusters you have access to.

sf clusters list

sunset
k8s api    https://sunset.clusters.sfcompute.com:6443
namespace  sf-sunset

# ...

Then, you can add a user (a Kubernetes service account) with:

sf clusters users add --cluster sunset --user myuser

You can use kubectl to check if your connection is successful. You should see something like this:

kubectl get pods

No resources found in sf-sunset namespace.
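
If kubectl reports a different cluster or namespace than you expect, you can check which context it's pointed at with the standard kubectl config commands:

kubectl config current-context   # which cluster kubectl is talking to
kubectl config view --minify     # full details for that context, including the namespace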

Creating Resources

Now that you’re set up, you have access to your Kubernetes namespace. Here’s an example config for training nanogpt.

# nanogpt.yaml
apiVersion: v1
kind: Service
metadata:
  name: nanogpt-svc
spec:
  clusterIP: None  # Headless service
  selector:
    job-name: nanogpt
  ports:
  - port: 29500
    name: dist-port
---
apiVersion: batch/v1
kind: Job
metadata:
  name: nanogpt
spec:
  completions: 2  # Total number of pods
  parallelism: 2  # Run all pods in parallel
  completionMode: Indexed
  template:
    metadata:
      labels:
        job-name: nanogpt  # This matches service selector
    spec:
      containers:
      - name: trainer
        image: alexsfcompute/nanogpt-k8s:latest
        command: ["torchrun", "--nnodes", "2", "--nproc_per_node", "8", "--rdzv-backend", "c10d", "--rdzv-endpoint", "nanogpt-0.nanogpt-svc:29500", "train.py"]
        ports:
        - containerPort: 29500
        resources:
          requests:
            nvidia.com/gpu: 8
            nvidia.com/hostdev: 8
            memory: "512Gi"
            cpu: "32"
          limits:
            nvidia.com/gpu: 8
            nvidia.com/hostdev: 8
            memory: "512Gi"
            cpu: "32"
        volumeMounts:
        - name: data-volume
          mountPath: /data
      volumes:
      - name: data-volume
        emptyDir: {}
      restartPolicy: Never
      subdomain: nanogpt-svc  # needed for networking between pods in the job

This config uses torchrun to spawn a distributed training job on 16 GPUs across two nodes, and uses a Kubernetes Service to expose port 29500 on each node so that the PyTorch processes can discover each other. Once the service has been created, each node is accessible as nanogpt-<i>.nanogpt-svc, where i is the rank of the node, starting at 0.
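
If you want to confirm that the rendezvous hostname resolves from inside the job, you can exec into one of the pods and look up the rank-0 address (a quick check; <pod-name> is whatever kubectl get pods shows, and this assumes getent is available in the image):

kubectl exec -it <pod-name> -- getent hosts nanogpt-0.nanogpt-svc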

This example also mounts a local data volume at /data, which is larger and more performant than the default Docker filesystem. The nanogpt example doesn’t use it, but you could cache model checkpoints or batches of training data there.

You can apply the config with kubectl.

kubectl apply -f nanogpt.yaml

You can watch the pods spinning up with kubectl get pods, and follow the logs with kubectl logs -f <pod-name>.

It’s possible that the first time you run it, different pods will start at very different times because they take different amounts of time to download the Docker image, which can cause some of the first ones to time out while waiting for the stragglers. It should work fine the second time, once the image has been cached, and the pods should even restart automatically if they time out.
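
If a pod seems stuck, the image pull is usually the culprit. The pod's events show pull progress and any errors:

kubectl describe pod <pod-name>   # see the Events section at the bottom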

Note that at the moment, all you have access to is a single namespace, so you may not be able to install general Helm charts. We’re working on removing this limitation.

Building Docker Images

If you’re building Docker images from a Mac with an Arm processor, we recommend using docker buildx to build your Dockerfile:

docker buildx build --platform linux/amd64 -t <your image tag> .

Once it’s built, you can tag it and push it to your container repository:

docker tag <local tag> <remote tag>
docker push <remote tag>

Here we are using Docker Hub, but you can use any container repository you like (AWS ECR, Google GCR, etc.).
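
For example, with Docker Hub the remote tag is just your username followed by the image name (the names below are placeholders, not a real repository):

docker buildx build --platform linux/amd64 -t yourname/nanogpt-k8s:latest .
docker push yourname/nanogpt-k8s:latest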

Adding Local Volumes

If you need more than about 200GB of storage in your pod, or faster reads and writes than the default Docker filesystem, you can add an Ephemeral Volume to your Kubernetes manifest.

Here is an example config for a pod that mounts an Ephemeral Volume at /data:

apiVersion: v1
kind: Pod
metadata:
  name: cuda-pod
spec:
  containers:
  - name: cuda
    image: nvidia/cuda:12.3.1-base-ubuntu22.04
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data-volume
      mountPath: /data
  volumes:
  - name: data-volume
    emptyDir: {}
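
Once the pod is running, you can check how much space the volume actually gives you from inside the container:

kubectl exec -it cuda-pod -- df -h /data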

Connecting to a Jupyter Notebook

It’s straightforward to start a Jupyter notebook in your cluster, and you can even connect your local VS Code to it and use it as a backend.

Here is a Kubernetes manifest that you can apply:

# jupyter.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter
  labels:
    app: jupyter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter
  template:
    metadata:
      labels:
        app: jupyter
    spec:
      containers:
      - name: jupyter
        image: quay.io/jupyter/pytorch-notebook:cuda12-python-3.11.8
        ports:
        - containerPort: 8888
        command: ["start-notebook.sh"]
        args: ["--NotebookApp.token=''", "--NotebookApp.password=''"]
        resources:
          requests:
            nvidia.com/gpu: 8
            nvidia.com/hostdev: 8
            memory: "512Gi"
            cpu: "32"
          limits:
            nvidia.com/gpu: 8
            nvidia.com/hostdev: 8
            memory: "512Gi"
            cpu: "32"
        volumeMounts:
        - mountPath: /dev/shm
          name: dshm
      volumes:
      - name: dshm
        emptyDir:
          medium: Memory
          sizeLimit: "64Gi" 
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-service
spec:
  selector:
    app: jupyter
  ports:
    - protocol: TCP
      port: 8888
      targetPort: 8888
  type: ClusterIP

Apply it with kubectl apply -f jupyter.yaml, and you can watch it spinning up with kubectl get pods.

Once it’s running, you can forward the Jupyter server’s port to your laptop like this:

kubectl port-forward service/jupyter-service 8888:8888
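
With the port-forward running, the server is reachable at http://localhost:8888. You can open that in a browser, or poke it from the terminal; since this config disables the token and password, the notebook server's /api endpoint should answer without auth:

curl -s http://localhost:8888/api   # returns a small JSON blob with the server version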

To connect your VS Code to the server:

  1. Open VS Code

  2. Create or open a Jupyter notebook (.ipynb file)

  3. Click on the kernel selector in the top right

  4. Select "Select Another Kernel"

  5. Choose "Existing Jupyter Server"

  6. Enter the URL: http://localhost:8888

You should now be able to run cells! As a test, you can run !nvidia-smi, or

import torch
torch.cuda.is_available()

> True

Seeing Your Schedule

When your order is filled, you're given a "contract" for some amount of compute for some period of time. You can see the contracts you own with sf contracts list.

sf contracts list

cont_aGVsbG93b3Js   h100i   64  2024-06-15 10:00:00 UTC  2024-06-16 10:00:00 UTC
cont_bXlwYXNzd29y   h100i   8   2024-06-16 12:00:00 UTC  2024-06-17 12:00:00 UTC

Placing Sell Orders

If you have nodes, you can sell them back at any point. Let's sell back next week’s nodes from one of the contracts we own.

sf sell -c cont_aGVsbG93b3Js -p '2.85' -n 64 -d '1w' -s 'tomorrow at 10am'