Nodes
We offer two kinds of nodes: Reserved and Auto Reserved Nodes. Once provisioned, you can run VMs on them.
Reserved vs Auto Reserved
Use Reserved nodes if you need guaranteed access to GPUs for a specific duration.
Auto Reserved nodes are a good fit if you have an interruptible workload and are flexible about the timing and duration assigned to your VMs.
Quickstart
Use the Nodes API to create and manage nodes with code.
To create a node, you'll need to install the sf CLI.
Install and log in to the CLI
curl -fsSL https://sfcompute.com/cli/install | bash
Source your shell profile to add the sf command to your PATH
source ~/.bashrc # For Bash
source ~/.zshrc # For Zsh
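Before logging in, you can confirm the binary landed on your PATH:
# Should print the path where the installer placed the sf binary
command -v sf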
Log in to the CLI
sf login
Run the following to generate a startup.sh that configures SSH access to your nodes. If you like, this can also be a cloud-init user-data file.
cat >startup.sh <<SCRIPT
#!/bin/bash
mkdir -p /root/.ssh
cat >>/root/.ssh/authorized_keys <<"EOF"
$(cat ~/.ssh/id_rsa.pub 2>/dev/null)
$(cat ~/.ssh/id_ecdsa.pub 2>/dev/null)
$(cat ~/.ssh/id_ecdsa_sk.pub 2>/dev/null)
$(cat ~/.ssh/id_ed25519.pub 2>/dev/null)
$(cat ~/.ssh/id_ed25519_sk.pub 2>/dev/null)
$(cat ~/.ssh/id_xmss.pub 2>/dev/null)
$(cat ~/.ssh/id_dsa.pub 2>/dev/null)
EOF
SCRIPT
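Because the heredoc delimiter is unquoted, the $(cat ...) substitutions expand on your local machine, so the generated startup.sh already contains your public keys. A quick sanity check before handing it to a node:
# Print the key lines that were embedded into startup.sh (ed25519/RSA keys start with "ssh-", ECDSA keys with "ecdsa-")
grep -E "^(ssh|ecdsa)-" startup.sh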
Now we can create a Reserved Node:
# Create a reserved node for 1 hour with a max price of $20.00 per node hour, running startup.sh when the VM starts
sf nodes create cuda-crunch --zone landsend --duration 1h --max-price 20.00 --user-data-file ./startup.sh
To check the status of your node, run:
sf nodes list
NAME         TYPE      STATUS   CURRENT VM                GPU   ZONE      START/END
cuda-crunch  Reserved  Running  vm_mxCExUDERw8zvxrTf0e5W  H100  landsend  Oct 2, 3pm → 5pm
You can use the current node name or ID to get logs or SSH into the VM running on your node.
sf nodes logs cuda-crunch
sf nodes ssh root@cuda-crunch
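Once you're on the VM, a quick way to confirm the GPU is visible is nvidia-smi (this assumes the image ships NVIDIA drivers; with a custom image you may need to install them yourself).
# Run on the VM after SSHing in; lists the H100(s) if the driver can see them
nvidia-smi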
Create Reserved Nodes
# Create a reserved node for 2 hours starting now
sf nodes create cuda-crunch --zone landsend --duration 2h --max-price 10.00
# Create a reserved node starting in 1 hour for 6 hours
sf nodes create cuda-crunch --zone landsend --start "+1h" --duration 6h --max-price 10.00
# Create a reserved node with specific start/end times
sf nodes create cuda-crunch --zone landsend --start "2024-01-15T10:00:00Z" --end "2024-01-15T12:00:00Z" --max-price 10.00
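Because --max-price is a cap per node hour, the most a reservation can cost is duration × max price × node count (the actual market price may be lower). For the 6-hour example above at $10.00 per node hour, that cap works out to $60.00:
# Worst-case spend for the 6-hour example: hours * max price per node hour * node count
echo "6 * 10.00 * 1" | bc
# => 60.00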
If you'd like to hold onto a reserved node for a longer period, you can extend it.
See Configure Nodes for details on using custom OS images or configuring VMs on startup.
Create Auto Reserved Nodes
# Create an auto reserved node with a max price of $10 per node hour
sf nodes create cuda-crunch --auto --zone landsend --max-price 10
# Create 3 auto reserved nodes with a max price of $10 per node hour
sf nodes create -n 3 --auto --zone landsend --max-price 10 -U ./cloud_init.yaml
Once created, we will continually bid to rent Nodes at or below your max price. If successful, a VM will be provisioned on the node to run your workload.
If the market price exceeds your max price, any running VM is terminated at the end of your allotted time, but the node stays active, waiting for the market price to drop so it can provision a new VM.
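Because an auto reserved node can cycle through several VMs, it helps to make the workload self-starting: cloud-init runs on each freshly provisioned VM, so a runcmd entry can resume work without manual intervention. A minimal sketch of such a file, passed via -U as in the example above (/opt/job/resume.sh is a hypothetical script provided by your image or earlier setup steps):
#cloud-config
disable_root: false
ssh_pwauth: false
users:
  - name: root
    ssh_authorized_keys:
      - ssh-ed25519 [ssh_public_key] alice@example.com
runcmd:
  - bash /opt/job/resume.sh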
If you'd like to stop renewing an auto reserved node, you can release it.
See Configure Nodes for details on using custom OS images or configuring VMs on startup.
List Nodes
You can view all your nodes with:
sf nodes list --verbose
Node: cuda-crunch
ID: n_0d2f72f0b631b674
Type: Auto Reserved Node
Status: Released
GPU: H100
Zone: landsend
Owner: sfcompute-com
Schedule:
Start: 2025-08-04 19:03:09 UTC
End: 2025-08-07 15:50:24 UTC
Duration: 68 hours
Pricing:
Max Price: $10.00/hour
# One auto reserved node can have multiple VMs
# VMs are created when market price <= max price
Virtual Machines          Status     Start/End
──────────────────────────────────────────────────────────
vm_47xHVvU6dewFvMXNJZJ9M  Destroyed  2025-08-05 15:41 → 18:00
vm_9imnBqbLvpJWToxmnYCJs  Destroyed  2025-08-04 19:04 → 14:00
Actions:
Logs: sf nodes logs cuda-crunch
SSH: sf nodes ssh root@cuda-crunch
Configure Nodes
You can configure nodes to use a custom OS image or a cloud-init file.
Custom Images
Contact support@sfcompute.com to enable custom images for your account.
We support any custom UEFI bootable x86_64 raw image.
# Upload a custom os.img to your account
sf nodes images upload ./path/to/os.img
# Create a node with the custom image
sf nodes create -n 1 --image <image-id> --zone landsend --max-price 30
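Before uploading, it's worth double-checking that the file really is a raw image rather than qcow2 or vmdk. One way to do that locally, assuming qemu-utils is installed, is qemu-img:
# A raw image should report "file format: raw"
qemu-img info ./path/to/os.img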
Cloud-Init
With a cloud-init file, you can set up users, SSH keys, install packages, and run setup scripts.
Sample cloud-init file to set up SSH access
Generate an SSH public/private key pair (if you don't already have one)
# Create a new SSH key pair
ssh-keygen -t ed25519 -C "youremail@example.com"
# For macOS
pbcopy < ~/.ssh/id_ed25519.pub
# For Linux
xclip -sel clip < ~/.ssh/id_ed25519.pub
Create a file named startup.yaml and paste in the following, replacing [ssh_public_key] with your actual SSH public key.
#cloud-config
disable_root: false
ssh_pwauth: false
users:
  - name: root
    ssh_authorized_keys:
      - ssh-ed25519 [ssh_public_key] alice@example.com
      - ssh-ed25519 [ssh_public_key] bob@example.com # optionally add multiple keys
# Create a node with a custom cloud-init file
sf nodes create -n 1 --user-data-file ./startup.yaml --zone landsend --max-price 30
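SSH keys are only one part of what cloud-init can do; the standard packages and runcmd modules cover the "install packages and run setup scripts" part mentioned above. A sketch extending the same startup.yaml (the package list and commands are only illustrative):
#cloud-config
disable_root: false
ssh_pwauth: false
users:
  - name: root
    ssh_authorized_keys:
      - ssh-ed25519 [ssh_public_key] alice@example.com
packages:
  - tmux
  - htop
runcmd:
  - mkdir -p /root/workspace
  - echo "setup complete" > /root/workspace/ready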
Update Node Configuration
There is no persistent storage attached to a node. Redeploying a node will result in loss of all data stored on the node.
You can redeploy a node to update its cloud-init file or OS image.
# Redeploy a node with a new cloud-init file
sf nodes redeploy cuda-crunch --user-data-file ./new_startup.yaml
# Redeploy a node with a new OS image
sf nodes redeploy cuda-crunch --image <new-image-id>
# Redeploy a node with both a new cloud-init file and OS image
sf nodes redeploy cuda-crunch --image <new-image-id> --user-data-file ./new_startup.yaml
# Redeploy a node with the same configuration (re-runs cloud-init)
sf nodes redeploy cuda-crunch
# Redeploy a node with a new OS image and remove the existing cloud-init file
sf nodes redeploy cuda-crunch --image <new-image-id> --override-empty
Extend Nodes
Extend only works with Reserved Nodes.
To hold onto a reserved node for a longer period, you can extend it by buying more time.
# Extend a reserved node by 1 hour for $10.00 per node hour
sf nodes extend cuda-crunch --duration 1h --max-price 10.00
Release Nodes
Release only works with Auto Reserved Nodes.
To stop renewing an auto reserved node, you can release it. Any running VMs will be terminated at the end of your allotted time.
# Release an auto reserved Node
sf nodes release cuda-crunch
Node Observability
You can view logs or SSH into any VM running on your node.
Run sf nodes list to get the vm_id of the VM running on your node
sf nodes list
NAME         TYPE      STATUS   CURRENT VM                GPU   ZONE      START/END
cuda-crunch  Reserved  Running  vm_mxCExUDERw8zvxrTf0e5W  H100  landsend  Oct 2, 3pm → 5pm
Get logs for the VM running on your node
sf nodes logs cuda-crunch
# You can also get logs from a specific VM
sf nodes logs --instance vm_mxCExUDERw8zvxrTf0e5W
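The logs are written to standard output (as the commands above suggest), so they compose with ordinary shell filters:
# Filter node logs for errors
sf nodes logs cuda-crunch | grep -i error
# Keep a local copy
sf nodes logs cuda-crunch > cuda-crunch.log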
SSH into the VM running on your node
sf nodes ssh root@cuda-crunch
Delete Nodes
Deleting a node is permanent and cannot be undone.
sf nodes delete cuda-crunch
You cannot delete a node that is currently running a VM. You must first release it (for auto reserved nodes) or wait for it to finish (for reserved nodes).
Deleted nodes do not appear in the output of sf nodes list
Limitations
- Nodes only support Virtual Machines.
- You cannot extend or redeploy nodes less than five minutes before their scheduled end time.
- There are no public IPs.
  - To host an inference server, you'll need to set up a proxy, like nginx on another cloud, connect that cloud to your VPN, and then point your proxy at your nodes.
- InfiniBand is not supported.
- Nodes do not currently share a VPC or virtual LAN. If you want this to be the case, configure a VPN.
- Nodes take about 5 minutes to spin up. If you reboot the machines, you will lose access for a while.
- There is no persistent storage. If the underlying machine dies due to a hardware issue, you'll be given another one, but that one will not share the same disk as your previous one.
- There is no GPU monitoring, such as active or passive checks on VMs. Some GPUs may occasionally "fall off the bus" and not appear, and you can either reboot the node or tell us.
- If all nodes are taken and yours dies, you may not get another node. We'll refund you for the time, but it's not automatic and you'll need to tell us.
- VMs created using the nodes command do not show up in the sf vm list command.