How to host a web server

To expose your cluster to the public internet, you can use sfcompute's proxy. To do this, create a Kubernetes Ingress resource with the ingress class sfcompute, alongside a ClusterIP Service.

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: your-ingress
spec:
  ingressClassName: sfcompute # this is the key bit
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: your-service
                port:
                  number: 2242
---
apiVersion: v1
kind: Service
metadata:
  name: your-service # must match the backend service name in the Ingress
spec:
  type: ClusterIP
  selector:
    app: your-app
  ports:
    - port: 2242
      targetPort: 2242
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
# ... the rest of your deployment
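For completeness, the elided Deployment above could look like the following sketch; the image is a placeholder, and the pod labels must match the Service's selector:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-app # must match the Service's selector
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
        - name: your-app
          image: your-image:latest # placeholder; use your own image
          ports:
            - containerPort: 2242 # must match the Service's targetPort
```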

You can deploy this with:

kubectl apply -f ingress.yaml

When you deploy the server, it will be assigned an address like fsohstwstzxdhasz.sfcompute.dev. You can find it by inspecting the Ingress resource.

kubectl describe ingress

NAME                      CLASS       HOSTS   ADDRESS                          PORTS   AGE
qwq-32b-ingress           sfcompute   *       fsohstwstzxdhasz.sfcompute.dev   80      4h
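To grab the address programmatically, for example in a deploy script, you can read it from the Ingress object's status (this assumes the proxy publishes the hostname under status.loadBalancer, which is where the ADDRESS column above comes from; the resource name is a placeholder):

```python
import json
import subprocess


def ingress_hostname(ingress):
    """Extract the assigned hostname from an Ingress object (as a dict).

    Returns None if no address has been assigned yet.
    """
    entries = ingress.get("status", {}).get("loadBalancer", {}).get("ingress", [])
    return entries[0].get("hostname") if entries else None


def get_ingress_hostname(name):
    """Fetch an Ingress by name with kubectl and return its assigned hostname."""
    out = subprocess.run(
        ["kubectl", "get", "ingress", name, "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return ingress_hostname(json.loads(out))
```

Note that the address may be empty immediately after `kubectl apply`; a script should poll until `ingress_hostname` returns a non-None value.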

SFC Proxy

We run a load balancer called “SFC-proxy” that routes traffic to all of the clusters you have on San Francisco Compute. If you have two clusters on SFC, you can route traffic to both of them by using the same ingress name in the same namespace.

For example, if you have my-ingress on one cluster and my-ingress on another cluster, both in the same namespace, they will both share the same address, <examplehash>.sfcompute.dev.

In other words, you can recreate your infrastructure deterministically by just reapplying your Kubernetes YAML.

Limitations

  • Ingress does not yet support custom domains, though it will soon.

  • Requests have a soft timeout of 5 minutes, and a hard timeout of 20 minutes for open requests. These are relatively long timeouts, because many folks wish to do HTTP streaming, which sometimes keeps requests open for quite a while.