Rise

Rise Web Dashboard Screenshot

A Rust-based platform for deploying containerized applications with minimal configuration.


 

Warning

Early Work in Progress

This project is in a very early experimental stage. It is approximately 99% coded by Claude AI (under technical guidance), which means:

  • The codebase is actively evolving and may contain bugs or incomplete features
  • APIs and interfaces may change frequently without notice
  • Documentation may be out of sync with the current implementation
  • Production use is not recommended at this stage

If you choose to use or experiment with Rise, please be aware that you’re working with experimental software. Contributions, bug reports, and feedback are welcome, but please set expectations accordingly.

What is Rise?

Rise simplifies container deployment by providing:

  • Simple CLI for building and deploying apps
  • Multi-tenant projects with team collaboration
  • OAuth2 authentication via Dex
  • Multiple registry backends (AWS ECR, Docker)
  • Service accounts for CI/CD integration
  • Web dashboard for monitoring deployments

Features

  • Project & Team Management: Organize apps and collaborate with teams
  • OAuth2/OIDC Authentication: Secure authentication via Dex
  • Multi-Registry Support: AWS ECR, Docker Registry (Harbor, Quay, etc.)
  • Service Accounts: Workload identity for GitHub Actions, GitLab CI
  • Multi-Process Architecture: Separate controllers for deployments, projects, ECR
  • Embedded Web Frontend: Single-binary deployment with built-in UI

Quick Start

Prerequisites

  • Docker and Docker Compose
  • Rust 1.91+
  • mise (recommended for development)

Start Services

# Install development tools
mise install

# Start all services (postgres, dex, registry, backend)
mise backend:run

Services will be available at:

  • Backend API: http://localhost:3000
  • Web UI: http://localhost:3000
  • PostgreSQL: localhost:5432

Default credentials:

  • Email: admin@example.com or test@example.com
  • Password: password

Build and Use CLI

# Build the CLI from source
cargo build --bin rise

# The CLI is now available as 'rise' (if using direnv)
# Or use the full path: ./target/debug/rise

rise login
rise project create my-app
rise deployment create my-app --image nginx:latest

# Local development
rise run --project my-app  # Build and run locally with project env vars

Install from crates.io

# Install the CLI and backend from crates.io
cargo install rise-deploy

# Verify installation
rise --version

Documentation

Full documentation is available in /docs.

Architecture

Rise uses a multi-process architecture:

| Component | Purpose |
| --- | --- |
| rise-backend (server) | HTTP API with embedded web frontend |
| rise-backend (controllers) | Deployment, project, and ECR reconciliation |
| rise (CLI) | Command-line interface |
| PostgreSQL | Database for projects, teams, deployments |
| Dex | OAuth2/OIDC provider for authentication |

Project Status

Production Ready:

  • ✅ OAuth2 PKCE authentication
  • ✅ Project & team management
  • ✅ Service accounts (workload identity for CI/CD)
  • ✅ AWS ECR integration with Terraform module
  • ✅ Kubernetes controller with Ingress authentication
  • ✅ Build integrations (Docker, Buildpacks, Railpack)
  • ✅ Embedded web frontend
  • ✅ Deployment rollback and expiration

In Development:

  • 🚧 Additional registry providers (GCR, ACR, GHCR)

Contributing

Contributions are welcome! See Local Development for development setup, code style, testing, and commit conventions.

License

[Add your license here]

Quick Setup

Prerequisites

  • Docker and Docker Compose
  • Rust 1.91+
  • mise (recommended)

Launch Services

mise install
mise backend:run

Services available at http://localhost:3000 (API and Web UI).

Default credentials: admin@example.com / password or test@example.com / password

Build CLI

cargo build --bin rise

First Steps

# Login (opens browser for OAuth)
rise login

# Create a project
rise project create my-app --visibility public

# Deploy
rise deployment create my-app --image nginx:latest

See Authentication for authentication details and CLI Guide for all commands.

Web UI

Navigate to http://localhost:3000 for the web dashboard (OAuth2 PKCE authentication, projects/teams management, deployment tracking).

Reset Environment

docker-compose down -v
cargo clean
mise backend:run

Next Steps

Local Development

Prerequisites

  • Docker and Docker Compose
  • Rust 1.91+
  • mise - Task runner and tool version manager
  • direnv (optional) - Auto-loads .envrc

Development Stack

Docker Compose Services

| Service | Port | Purpose |
| --- | --- | --- |
| postgres | 5432 | PostgreSQL database |
| dex | 5556 | OAuth2/OIDC provider |
| registry | 5000 | Docker registry |
| registry-ui | 5001 | Registry web UI |

Rise Backend

Single process running HTTP API server + controllers (deployment, project, ECR) as concurrent tokio tasks. Controllers enabled automatically based on config.

Mise Tasks

  • mise docs:serve - Serve docs (port 3001)
  • mise db:migrate - Run migrations
  • mise backend:deps - Start docker-compose services
  • mise backend:run (alias: mise br) - Run backend
  • mise minikube:launch - Start Minikube with local registry

Quick Start

mise install
mise backend:run  # Starts services + backend

Services: http://localhost:3000 (API, Web UI), localhost:5432 (PostgreSQL), http://localhost:5000 (Registry)

Registry Configuration for Local Development

When using Docker Compose with Minikube, you may need different registry URLs for:

  • Deployment controllers (running in Minikube): rise-registry:5000 (Docker internal network)
  • CLI (running on host): localhost:5000 (host network)

Configure this in config/default.yaml:

registry:
  type: "oci-client-auth"
  registry_url: "rise-registry:5000"      # Internal URL for deployment controllers
  namespace: "rise-apps/"
  client_registry_url: "localhost:5000"   # Client-facing URL for CLI push operations

The client_registry_url is optional and defaults to registry_url if not specified. The API returns client_registry_url to CLI clients for push operations, while deployment controllers use registry_url for image references.
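
The behavior described above amounts to a simple fallback on the client side. A minimal sketch (illustrative only, not Rise's actual code; the type and function names are made up):

interface RegistryConfig {
  registry_url: string;           // used by deployment controllers for image references
  client_registry_url?: string;   // optional client-facing URL returned to the CLI
}

// The CLI pushes to the client-facing URL when present, otherwise to registry_url.
function pushTarget(cfg: RegistryConfig): string {
  return cfg.client_registry_url ?? cfg.registry_url;
}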

Build CLI

cargo build --bin rise

Environment Variables

.envrc (loaded by direnv): DATABASE_URL, RISE_CONFIG_RUN_MODE, PATH

Server config in config/default.toml.

Development Workflow

Making Changes

Backend:

# Edit code
mise backend:reload  # or: mise br

CLI:

cargo build --bin rise
rise <command>

Schema:

sqlx migrate add <migration_name>
# Edit migration in migrations/
sqlx migrate run
cargo sqlx prepare  # Update query cache

Local Kubernetes Development

For testing the Kubernetes controller locally, use Minikube:

mise minikube:launch

This starts a local Kubernetes cluster with an embedded container registry. The backend will automatically deploy applications to Minikube when configured for Kubernetes.

For installation and advanced usage, see the Minikube documentation.

Accessing Database

# Using psql
docker-compose exec postgres psql -U rise -d rise

# Or connection string
psql postgres://rise:rise123@localhost:5432/rise

Default Credentials

PostgreSQL: postgres://rise:rise123@localhost:5432/rise

Dex: admin@example.com / password or test@example.com / password

Code Style

  • Avoid over-engineering; add abstractions only when needed
  • Use anyhow::Result for application code, typed errors only when callers need specific handling
  • Document non-obvious behavior and rationale
  • Update docs when adding features

Testing

cargo test

Commit Messages

Use conventional commits: feat:, fix:, docs:, refactor:

Troubleshooting

See Troubleshooting for common issues.

Reset everything:

docker-compose down -v
cargo clean
mise install
mise backend:run

CLI Basics

The Rise CLI (rise) provides commands for managing projects, teams, deployments, and service accounts. This guide covers common workflows and usage patterns.

Installation

cargo build --bin rise

Binary location: ./target/debug/rise (or use direnv to add to PATH automatically).

Configuration

CLI stores configuration in ~/.config/rise/config.json (created automatically on rise login).

Command Structure

| Command | Aliases | Subcommands |
| --- | --- | --- |
| rise login | - | - |
| rise project | p | create (c), list (ls), show (s), update (u), delete (del, rm) |
| rise team | t | create (c), list (ls), show (s), update (u), delete (del, rm) |
| rise deployment | d | create (c), list (ls), show (s), rollback, stop |
| rise build | - | - |
| rise run | - | - |
| rise backend | - | server, check-config, dev-oidc-issuer |

Use rise --help or rise <command> --help for details.

Backend Commands

Backend commands are used for running and managing the Rise backend server:

# Start the backend server
rise backend server

# Check backend configuration for errors
rise backend check-config

# Run a local OIDC issuer for testing
rise backend dev-oidc-issuer --port 5678

The check-config command is particularly useful for:

  • Validating configuration before deployment
  • Checking for typos in configuration files
  • Identifying unused/deprecated configuration options
  • CI/CD pipeline validation steps

Common Workflows

Authentication

rise login  # Opens browser for OAuth2 via Dex

# Authenticate with a different backend
rise login --url https://rise.example.com

# Use device flow (limited compatibility)
rise login --device

Environment variables:

  • RISE_URL: Set default backend URL
  • RISE_TOKEN: Set authentication token

Project Management

# Create project on backend only (remote mode - auto-selected if rise.toml exists)
rise project create my-app --access-class public
rise project create internal-api --access-class private --owner team:backend

# Create project on backend and rise.toml (remote+local mode - auto-selected if no rise.toml)
rise project create my-new-app

# Explicit mode selection
rise project create my-app --mode remote              # Backend only
rise project create my-app --mode local               # rise.toml only  
rise project create my-app --mode remote+local        # Both backend and rise.toml

# Create from existing rise.toml (auto-detects remote mode, reads name from rise.toml)
rise project create

# Or explicitly with mode flag
rise project create --mode remote

# List
rise p ls

# Update
rise p update my-app --owner team:devops

Project creation modes:

  • --mode remote (default if rise.toml exists): Creates/updates project on backend only
  • --mode local: Creates/updates rise.toml only, does not touch backend
  • --mode remote+local (default if no rise.toml): Creates project on backend AND creates rise.toml
  • Auto-detection: If --mode is not specified, automatically uses remote if rise.toml exists, otherwise remote+local

Deployments

# Deploy from current directory
rise deployment create my-app

# Deploy from specific directory (positional arg)
rise deployment create my-app ./path/to/app

# Deploy pre-built image
rise d c my-app --image nginx:latest --http-port 80

# Deploy to custom group with expiration
rise d c my-app --group mr/123 --expire 7d

# Monitor
rise d show my-app:20241205-1234 --follow --timeout 10m

# Rollback
rise deployment rollback my-app:20241205-1234

# Stop
rise deployment stop my-app --group mr/123

Key deployment options:

  • path (positional): Application directory (defaults to current directory)
  • --group <name>: Deploy to custom group (e.g., mr/123, staging)
  • --expire <duration>: Auto-delete after duration (e.g., 7d, 24h)
  • --image <image>: Use pre-built image (requires --http-port)
  • --http-port <port>: HTTP port application listens on (required with --image, defaults to 8080 for builds)

Local Development

# Build and run locally (defaults to port 8080)
rise run

# Specify directory
rise run ./path/to/app

# Custom port (sets PORT env var and exposes on host)
rise run --http-port 3000

# Expose on different host port
rise run --http-port 8080 --expose 3000

# Load environment variables from a project
rise run --project my-app

# Set runtime environment variables
rise run --run-env DATABASE_URL=postgres://localhost/mydb --run-env DEBUG=true

# With custom build backend
rise run --backend pack

Key options:

  • path (positional): Application directory (defaults to current directory)
  • --project <name>: Project name to load non-secret environment variables from
  • --http-port <port>: HTTP port the application listens on (also sets PORT env var) [default: 8080]
  • --expose <port>: Port to expose on the host (defaults to same as http-port)
  • --run-env <KEY=VALUE>: Runtime environment variables (can be specified multiple times)
  • Build flags: --backend, --builder, --buildpack, --container-cli, etc.

Notes:

  • Sets PORT environment variable automatically
  • Loads non-secret environment variables from the project if --project is specified
  • Secret environment variables are not loaded (their actual values are not retrievable)
  • Runs with docker run --rm -it (automatically removes container on exit)

Team Management

# Create
rise team create backend-team --owners alice@example.com --members bob@example.com

# List
rise t ls

# Add members
rise t update backend-team --add-members charlie@example.com

Advanced Features

Deployment Groups

  • default: Primary deployment
  • Custom groups: Additional deployments (e.g., mr/123, staging)

rise d c my-app --group mr/123 --expire 7d

Auto-Expiration

rise d c my-app --group staging --expire 7d  # Days
rise d c my-app --group preview --expire 24h  # Hours

Supported units: h, d, w.
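
As an illustration of how these duration strings map to a length of time, here is a hypothetical parser (not Rise's actual implementation) that converts a value like 7d or 24h into seconds:

// Hypothetical sketch: convert "24h" / "7d" / "2w" into seconds.
function parseExpire(s: string): number {
  const m = /^(\d+)([hdw])$/.exec(s.trim());
  if (!m) throw new Error(`invalid duration: ${s}`);
  const unitSeconds = { h: 3600, d: 86400, w: 604800 } as const;
  return Number(m[1]) * unitSeconds[m[2] as "h" | "d" | "w"];
}

// parseExpire("7d") === 604800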

Next Steps

Authentication

Rise uses JWT tokens issued by Dex OAuth2/OIDC provider for user authentication and service accounts for CI/CD workload identity.

User Authentication

OAuth2 authorization code flow with PKCE:

rise login

The CLI starts a local callback server (ports 8765-8767), opens the browser to Dex, and exchanges the authorization code for a JWT token.
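
For orientation, the general shape of that exchange looks roughly like the sketch below. This is a hedged illustration of a standard PKCE flow, not Rise's actual implementation; the callback path and query parameters are assumptions.

// Hedged sketch of a generic OAuth2 PKCE flow (not Rise's actual code).
import { createHash, randomBytes } from "node:crypto";
import http from "node:http";

const verifier = randomBytes(32).toString("base64url");
const challenge = createHash("sha256").update(verifier).digest("base64url");

// 1. Listen on a local callback port (Rise tries 8765-8767).
const server = http.createServer((req, res) => {
  const code = new URL(req.url ?? "/", "http://localhost:8765").searchParams.get("code");
  res.end("You can close this tab.");
  server.close();
  // 2. The CLI would now exchange `code` + `verifier` for a JWT at the backend.
  console.log("received auth code:", code);
});
server.listen(8765);

// 3. Open the browser to the IdP's authorize URL with
//    code_challenge=<challenge>&code_challenge_method=S256.
console.log("code_challenge:", challenge);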

Device Flow (Not Compatible with Dex)

⚠️ Dex’s device flow doesn’t follow RFC 8628. Use browser flow instead.

Token Storage

Tokens stored in ~/.config/rise/config.json (plain JSON; OS-native secure storage planned).

Backend URL

rise login --url https://rise.example.com

API Usage

Protected endpoints require an Authorization: Bearer <token> header and return 401 if it is missing or invalid.
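
For example, a minimal sketch of calling a protected endpoint (assuming you already have a token, e.g. via RISE_TOKEN):

// Minimal sketch: calling a protected Rise endpoint with a bearer token.
const riseUrl = process.env.RISE_URL ?? "http://localhost:3000";
const token = process.env.RISE_TOKEN!;

const res = await fetch(`${riseUrl}/api/v1/users/me`, {
  headers: { Authorization: `Bearer ${token}` },
});
if (res.status === 401) throw new Error("missing or invalid token");
console.log(await res.json());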

Authentication Endpoints

Public: POST /api/v1/auth/code/exchange - Exchange auth code for JWT

Protected: GET /api/v1/users/me, POST /users/lookup

Service Accounts (Workload Identity)

CI/CD systems (GitLab CI, GitHub Actions) authenticate using OIDC JWT tokens.

Process: CI generates JWT → Rise validates signature against OIDC issuer → Matches claims → Deploys if matched

Security: Short-lived tokens, claim-based authorization, project-scoped access

Quick Start

GitLab CI:

rise sa create my-project \
  --issuer https://gitlab.com \
  --claim aud=rise-project-my-project \
  --claim project_path=myorg/myrepo \
  --claim ref_protected=true

Add to .gitlab-ci.yml:

deploy:
  stage: deploy
  id_tokens:
    RISE_TOKEN:
      aud: rise-project-my-project
  script:
    - rise deployment create my-project --image $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  only:
    - tags

GitHub Actions:

rise sa create my-app \
  --issuer https://token.actions.githubusercontent.com \
  --claim aud=rise-project-my-app \
  --claim repository=myorg/my-app

Add to .github/workflows/deploy.yml:

name: Deploy
on:
  push:
    branches: [main]

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Get OIDC token
        run: |
          TOKEN=$(curl -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
                       "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=rise-project-my-app" | jq -r .value)
          echo "RISE_TOKEN=$TOKEN" >> $GITHUB_ENV
      - name: Deploy
        run: rise deployment create my-app --image ghcr.io/myorg/my-app:$GITHUB_SHA

Creating Service Accounts

rise sa create <project> \
  --issuer <issuer-url> \
  --claim aud=<value> \
  --claim <key>=<value>

Requirements: Must specify aud claim + at least one additional claim for authorization.

Common Use Cases

Protected branches only (production):

rise sa create prod \
  --issuer https://gitlab.com \
  --claim aud=rise-project-prod \
  --claim project_path=myorg/app \
  --claim ref_protected=true

Specific branch (staging):

rise sa create staging \
  --issuer https://gitlab.com \
  --claim aud=rise-project-staging \
  --claim project_path=myorg/app \
  --claim ref=refs/heads/staging

Deploy from tags (releases):

rise sa create releases \
  --issuer https://gitlab.com \
  --claim aud=rise-project-releases \
  --claim project_path=myorg/app \
  --claim ref_type=tag

Available Claims

GitLab CI: project_path, ref, ref_type, ref_protected, environment, pipeline_source - Docs

GitHub Actions: repository, ref, workflow, environment, actor - Docs

Managing Service Accounts

rise sa list <project>
rise sa show <project> <service-account-id>
rise sa delete <project> <service-account-id>

Permissions: Can create/view/list/stop/rollback deployments. Cannot manage projects/teams/service accounts.

Troubleshooting

User Authentication

  • “Failed to start local callback server”: Ports 8765-8767 in use
  • “Code exchange failed”: Check backend/Dex logs
  • Token expired: Run rise login

Service Accounts

  • “The ‘aud’ claim is required”: Add --claim aud=<value>
  • “At least one additional claim required”: Add authorization claims (e.g., project_path)
  • “Multiple service accounts matched”: Make claims more specific to avoid ambiguity
  • “No service account matched”: Check token claims (case-sensitive), verify issuer URL (no trailing slash), ensure ALL claims present
  • “403 Forbidden”: Service accounts can only deploy, not manage projects

Authentication for Rise-Deployed Applications

Rise provides built-in authentication for your deployed applications using JWT tokens. When users authenticate to access your application, Rise issues a signed JWT token that your application can validate to identify the user.

Overview

When a user logs into a Rise-deployed application:

  1. Rise authenticates the user via OAuth2/OIDC (e.g., through Dex)
  2. Rise issues an RS256-signed JWT token with user information
  3. The JWT is stored in the rise_jwt cookie
  4. Your application can access this cookie to identify the user

The rise_jwt cookie contains a JWT token with the following structure:

JWT Header

{
  "alg": "RS256",
  "typ": "JWT",
  "kid": "<key-id>"
}

JWT Claims (Example)

{
  "sub": "CiQwOGE4Njg0Yi1kYjg4LTRiNzMtOTBhOS0zY2QxNjYxZjU0NjYSBWxvY2Fs",
  "email": "admin@example.com",
  "name": "admin",
  "groups": [],
  "iat": 1768858875,
  "exp": 1768945275,
  "iss": "http://rise.local:3000",
  "aud": "http://test.rise.local:8080"
}

Claim Descriptions

  • sub: Unique user identifier from the identity provider (typically a base64-encoded UUID)
  • email: User’s email address
  • name: User’s display name (optional, included if available from IdP)
  • groups: Array of Rise team names the user belongs to (empty array if user has no team memberships)
  • iat: Issued at timestamp (Unix epoch seconds)
  • exp: Expiration timestamp (Unix epoch seconds, default: 24 hours from issue time)
  • iss: Issuer (Rise backend URL, e.g., http://rise.local:3000)
  • aud: Audience (your deployed application’s URL, e.g., http://test.rise.local:8080)

Note: The JWT expiration time is configurable via the jwt_expiry_seconds server setting (default: 86400 seconds = 24 hours).

Validating the JWT

Rise provides the public keys needed to validate JWTs through the standard OpenID Connect Discovery endpoint.

OpenID Connect Discovery

Applications should use the OpenID Connect Discovery 1.0 specification to discover the JWKS endpoint:

  1. Fetch OpenID configuration from ${RISE_ISSUER}/.well-known/openid-configuration
  2. Extract jwks_uri from the configuration response
  3. Fetch JWKS from the jwks_uri endpoint
  4. Cache the JWKS (recommended: 1 hour) to avoid excessive requests
  5. Use the JWKS to validate JWT signatures

Example discovery response:

{
  "issuer": "https://rise.example.com",
  "jwks_uri": "https://rise.example.com/api/v1/auth/jwks",
  "id_token_signing_alg_values_supported": ["RS256", "HS256"],
  "subject_types_supported": ["public"],
  "claims_supported": ["sub", "email", "name", "groups", "iat", "exp", "iss", "aud"]
}
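
A brief sketch of steps 1-5 using the jose library (the jwks_uri comes from the discovery document above; RISE_ISSUER is injected into deployed applications):

// Sketch: resolve the JWKS endpoint via OIDC discovery and build a cached key set.
import { createRemoteJWKSet } from "jose";

const issuer = process.env.RISE_ISSUER!;
const discovery = await (
  await fetch(`${issuer}/.well-known/openid-configuration`)
).json();

// createRemoteJWKSet caches fetched keys, covering the recommended caching step.
const JWKS = createRemoteJWKSet(new URL(discovery.jwks_uri));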

Environment Variables

Your deployed application automatically receives:

  • RISE_ISSUER: Rise server URL (base URL for all Rise endpoints) and JWT issuer for validation (e.g., http://rise.local:3000)
  • RISE_APP_URL: Canonical URL where your app is accessible (primary custom domain or default project URL)
  • RISE_APP_URLS: JSON array of all URLs your app is accessible at (primary ingress + custom domains), e.g., ["http://myapp.rise.local:8080", "https://myapp.example.com"]
  • PORT: The HTTP port your container should listen on (default: 8080)

Example: TypeScript/Node.js

Using the jose library, which handles JWKS fetching and caching automatically:

import { jwtVerify, createRemoteJWKSet } from 'jose';
import type { Request, Response, NextFunction } from 'express';

const RISE_ISSUER = process.env.RISE_ISSUER || 'http://rise.local:3000';
const RISE_APP_URL = process.env.RISE_APP_URL;

// Create a remote JWKS fetcher (caches keys automatically)
const JWKS = createRemoteJWKSet(
  new URL(`${RISE_ISSUER}/api/v1/auth/jwks`)
);

interface RiseClaims {
  sub: string;
  email: string;
  name?: string;
  groups?: string[];
}

// Express middleware to verify Rise JWT
async function verifyRiseJwt(req: Request, res: Response, next: NextFunction) {
  const token = req.cookies.rise_jwt;

  if (!token) {
    return res.status(401).send('No authentication token');
  }

  try {
    const { payload } = await jwtVerify<RiseClaims>(token, JWKS, {
      issuer: RISE_ISSUER,
      audience: RISE_APP_URL, // Validates the aud claim
    });

    req.user = {
      id: payload.sub,
      email: payload.email,
      name: payload.name,
      groups: payload.groups || [],
    };

    next();
  } catch (err) {
    return res.status(401).send('Invalid token');
  }
}

Install: npm install jose cookie-parser

Example: Python/Flask

Using joserfc which handles JWKS fetching and JWT validation:

from typing import TypedDict, Optional
from joserfc import jwt
from joserfc.jwk import JWKRegistry
import requests
from flask import Flask, request, jsonify, g
import os

class RiseClaims(TypedDict):
    sub: str
    email: str
    name: Optional[str]
    groups: list[str]
    iat: int
    exp: int
    iss: str
    aud: str

class UserInfo(TypedDict):
    id: str
    email: str
    name: Optional[str]
    groups: list[str]

RISE_ISSUER = os.environ.get('RISE_ISSUER', 'http://rise.local:3000')
RISE_APP_URL = os.environ.get('RISE_APP_URL')

app = Flask(__name__)

# Fetch JWKS once at startup (or cache with TTL)
def get_jwks_registry():
    config_url = f'{RISE_ISSUER}/.well-known/openid-configuration'
    config = requests.get(config_url).json()
    jwks = requests.get(config['jwks_uri']).json()
    return JWKRegistry.import_key_set(jwks)

jwks_registry = get_jwks_registry()

@app.before_request
def authenticate():
    """Middleware to authenticate requests using Rise JWT"""
    token = request.cookies.get('rise_jwt')

    if not token:
        return jsonify({'error': 'No authentication token'}), 401

    try:
        # Verify and decode JWT (including issuer and audience validation)
        claims: RiseClaims = jwt.decode(
            token,
            jwks_registry,
            claims_options={
                "iss": {"value": RISE_ISSUER},
                "aud": {"value": RISE_APP_URL},
            },
        )

        user_info: UserInfo = {
            'id': claims['sub'],
            'email': claims['email'],
            'name': claims.get('name'),
            'groups': claims.get('groups', [])
        }
        g.user = user_info
    except Exception as e:
        return jsonify({'error': 'Invalid token'}), 401

Install: pip install joserfc requests

Authorization Based on Groups

You can use the groups claim to implement team-based authorization:

import type { Request, Response, NextFunction } from 'express';

function requireTeam(teamName: string) {
  return (req: Request, res: Response, next: NextFunction) => {
    if (!req.user) {
      return res.status(401).send('Not authenticated');
    }

    if (!req.user.groups.includes(teamName)) {
      return res.status(403).send('Access denied - not a member of required team');
    }

    next();
  };
}

// Protect routes by team membership
app.get('/admin', requireTeam('admin'), (req: Request, res: Response) => {
  res.send('Admin panel');
});

Best Practices

  1. Always Validate the JWT: Don’t trust the cookie contents without verification
  2. Verify Audience: Always validate the aud claim matches RISE_APP_URL
  3. Use Modern Libraries: Use jose (Node.js) or joserfc (Python) as in the examples above - they handle JWKS parsing and signature verification for you
  4. Use HTTPS: The rise_jwt cookie is marked as Secure in production
  5. Handle Missing Tokens: Users may not be authenticated - handle gracefully
  6. Let Libraries Cache: Modern JWT libraries automatically cache JWKS with appropriate TTLs

Troubleshooting

Token Validation Fails

  • Check Algorithm: Ensure you’re using RS256, not HS256
  • Verify JWKS: Ensure your library can reach ${RISE_ISSUER}/.well-known/openid-configuration
  • Check Audience: The aud claim must match RISE_APP_URL
  • Check Expiration: Tokens expire after 24 hours by default (configurable)
  • Check Authentication: User may not be logged in
  • Check Access Class: Ensure your project has authentication enabled
  • Check Cookie Domain: For custom domains, cookies may not be shared

Groups Missing

  • Check IdP Configuration: Groups come from your identity provider
  • Check Team Sync: Ensure IdP group sync is enabled in Rise
  • Check Team Membership: User must be a member of Rise teams

Security Considerations

  • The rise_jwt cookie is HttpOnly - JavaScript cannot access it (XSS protection)
  • The JWT is signed with RS256 - public keys fetched via OIDC discovery verify authenticity
  • Tokens expire after 24 hours by default - users must re-authenticate periodically
  • The aud claim ties tokens to specific applications - always validate this claim

Additional Resources

Deployments

Deployments in Rise represent immutable instances of your application running in the container runtime.

What is a Deployment?

A deployment is a specific version of your project that has been built, pushed to a container registry, and deployed to the Kubernetes runtime.

Key characteristics:

  • Immutable: Once created, a deployment’s configuration cannot be changed
  • Timestamped: Each deployment has a unique timestamp ID (e.g., my-app:20241205-1234)
  • Tracked: Deployments have status, health checks, and logs
  • Rollback-able: You can rollback to any previous deployment

Deployment Lifecycle

1. Creation

When you run rise deployment create my-app, the following happens:

  1. Build (optional): If no --image is provided, Rise builds a container image from your application
  2. Push: The image is pushed to the configured container registry
  3. Store: Deployment metadata is saved to the database with a digest-pinned image reference
  4. Deploy: The deployment controller creates/updates the container in the runtime

2. Running

Once deployed, the deployment enters the running state. The deployment controller:

  • Monitors health: Periodically checks container health
  • Updates status: Reflects actual runtime state in the database
  • Handles failures: Marks deployments as failed if containers crash

3. Stopping

Deployments can be stopped manually or automatically:

# Stop all deployments in a group
rise deployment stop my-app --group default

Stopped deployments:

  • Remain in the database
  • Can be rolled back to
  • Don’t consume runtime resources

4. Expiration

Deployments can auto-delete after a specified duration:

# Delete automatically after 7 days
rise d c my-app --group mr/123 --expire 7d

This is useful for:

  • Preview deployments for merge requests
  • Staging environments
  • Temporary testing environments

Deployment Groups

Projects can have multiple active deployments using deployment groups:

Default Group

The default group represents the primary deployment:

rise deployment create my-app
# Accessible at: https://my-app.rise.dev

Custom Groups

Create additional deployments with custom group names:

# Merge request preview
rise d c my-app --group mr/123 --expire 7d

# Staging environment
rise d c my-app --group staging

# Feature branch
rise d c my-app --group feature/new-auth

Custom groups allow:

  • Multiple concurrent deployments of the same project
  • Isolated testing without affecting production
  • Preview environments for code review

Each custom group deployment gets its own URL in the format: https://{project}-{group}.rise.dev

Pre-built Images

Skip the build step by deploying pre-built images:

# Deploy from Docker Hub
rise d c my-app --image nginx:latest

# Deploy from private registry
rise d c my-app --image myregistry.io/my-app:v1.2.3

# Deploy from AWS ECR
rise d c my-app --image 123456789.dkr.ecr.us-east-1.amazonaws.com/my-app:sha256-abc123

When using --image:

  • No build occurs
  • The image is pulled directly from the specified registry
  • The deployment is pinned to the exact digest of the image

Following Deployments

Monitor deployment progress in real-time:

# Follow until deployment reaches terminal state
rise d s my-app:latest --follow

# Follow with timeout
rise d s my-app:latest --follow --timeout 10m

The --follow flag auto-refreshes the deployment status and shows:

  • Current state (pending, running, failed, stopped)
  • Health status
  • Deployment events (future)

Rollback

Rollback creates a new deployment with the same configuration as a previous one:

# Rollback to specific deployment
rise deployment rollback my-app:20241205-1234

How it works:

  1. Fetches the configuration of the target deployment (image digest, env vars, etc.)
  2. Creates a new deployment with the same configuration
  3. Deploys to the runtime

Important: Rollback creates a new deployment; it doesn’t modify the original.

Deployment Status

Deployments can be in one of these states:

| Status | Description |
| --- | --- |
| pending | Deployment created, waiting to start |
| running | Container is running and healthy |
| unhealthy | Container is running but health check fails |
| failed | Container failed to start or crashed |
| stopped | Deployment was manually stopped |
| expired | Deployment was auto-deleted due to expiration |
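
If you script against deployment status output, the states above can be captured as a simple union type (an illustrative sketch; it only restates the table):

// Illustrative only: the deployment states listed above as a TypeScript union.
type DeploymentStatus =
  | "pending"
  | "running"
  | "unhealthy"
  | "failed"
  | "stopped"
  | "expired";

function isDeploymentStatus(s: string): s is DeploymentStatus {
  return ["pending", "running", "unhealthy", "failed", "stopped", "expired"].includes(s);
}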

Best Practices

Use Expiration for Preview Environments

# Auto-cleanup after 7 days
rise d c my-app --group mr/123 --expire 7d

Pin to Specific Image Tags

# Good: Specific version
rise d c my-app --image myapp:v1.2.3

# Avoid: Mutable tags in production
rise d c my-app --image myapp:latest

Rise automatically pins deployments to image digests for reproducibility.

Use Deployment Groups for Staging

# Staging deployment
rise d c my-app --group staging

# Production deployment
rise d c my-app --group default

Follow Deployments in CI/CD

# Wait for deployment to succeed
rise d c my-app --follow --timeout 5m || exit 1

Auto-Injected Environment Variables

Rise automatically injects the following environment variables into every deployment:

| Variable | Description | Example Value |
| --- | --- | --- |
| PORT | HTTP port the container should listen on | 8080 |
| RISE_ISSUER | Rise server URL (base URL for all Rise endpoints) and JWT issuer | https://rise.example.com |
| RISE_APP_URL | Canonical URL where your app is accessible (primary custom domain or default project URL) | https://myapp.example.com |
| RISE_APP_URLS | JSON array of all URLs where your app can be accessed | ["https://myapp.rise.dev", "https://myapp.example.com"] |
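
A small sketch of reading these variables at startup; note that RISE_APP_URLS is a JSON-encoded array and needs to be parsed:

// Reading the auto-injected variables described above.
const port = Number(process.env.PORT ?? 8080);
const issuer = process.env.RISE_ISSUER;             // Rise base URL / JWT issuer
const appUrl = process.env.RISE_APP_URL;            // canonical URL of this app
const appUrls: string[] = JSON.parse(process.env.RISE_APP_URLS ?? "[]");

console.log(`listening on ${port}, reachable at ${appUrls.join(", ")}`);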

Using RISE_ISSUER for JWT Validation

To validate Rise-issued JWTs, applications should:

  1. Fetch OpenID configuration from ${RISE_ISSUER}/.well-known/openid-configuration
  2. Extract the jwks_uri from the configuration
  3. Fetch the JWKS from jwks_uri
  4. Use the JWKS to validate JWT signatures

See Authentication for Apps for detailed examples.

Next Steps

Building Container Images

Rise CLI supports multiple build backends for creating container images from your application code.

Build Backends

Docker (Dockerfile)

Multiple Docker-based backends are available for building from a Dockerfile:

# Standard docker/podman build (default)
rise build myapp:latest --backend docker
rise build myapp:latest --backend docker:build  # alias for docker

# Docker buildx (with BuildKit features like secrets)
rise build myapp:latest --backend docker:buildx

# Plain buildctl (BuildKit directly, requires buildctl CLI)
rise build myapp:latest --backend buildctl

Backend comparison:

| Backend | Build Tool | SSL Secrets | Push During Build |
| --- | --- | --- | --- |
| docker / docker:build | docker build | No | No (separate push) |
| docker:buildx | docker buildx build | Yes | Yes (--push) |
| buildctl | buildctl | Yes | Yes |

When to use each:

  • docker:build - Simple builds, maximum compatibility
  • docker:buildx - Need BuildKit features (secrets, caching, multi-platform)
  • buildctl - Direct BuildKit access, CI environments without Docker

Pack (Cloud Native Buildpacks)

Uses pack build with Cloud Native Buildpacks:

rise build myapp:latest --backend pack
rise build myapp:latest --backend pack --builder paketobuildpacks/builder-jammy-base
rise deployment create myproject --backend pack

Railpack (Railway Railpacks)

Uses Railway’s Railpacks with BuildKit (buildx or buildctl):

# Railpack with buildx (default)
rise build myapp:latest --backend railpack
rise deployment create myproject --backend railpack

# Railpack with buildctl
rise build myapp:latest --backend railpack:buildctl
rise deployment create myproject --backend railpack:buildctl

Troubleshooting: If railpack builds fail with the error:

ERROR: failed to build: failed to solve: requested experimental feature mergeop has been disabled on the build server: only enabled with containerd image store backend

This occurs when using Docker Desktop’s default builder. Create a custom buildx builder to work around this:

docker buildx create --use

Auto-detection

When --backend is omitted, the CLI automatically detects the build method:

  • If Dockerfile exists → uses docker backend
  • If Containerfile exists (and no Dockerfile) → uses docker backend
  • Otherwise → uses pack backend

# Auto-detect (has Dockerfile → uses docker)
rise build myapp:latest

# Auto-detect (no Dockerfile → uses pack)
rise build myapp:latest

# Explicit backend selection
rise build myapp:latest --backend railpack

Custom Dockerfile Path

By default, Rise looks for Dockerfile or Containerfile in the project directory. Use --dockerfile to specify a different file:

# Use a custom Dockerfile
rise build myapp:latest --dockerfile Dockerfile.prod

# Use Dockerfile from subdirectory
rise build myapp:latest --dockerfile docker/Dockerfile.build

# Works with all Docker-based backends
rise build myapp:latest --backend docker:buildx --dockerfile Dockerfile.dev

In rise.toml:

[build]
backend = "docker"
dockerfile = "Dockerfile.prod"

Build Contexts (Docker/Podman Multi-Stage Builds)

Build contexts allow you to use additional directories or files in your multi-stage Docker builds. This is useful when you need to access files outside the main build context or reference other directories.

CLI Usage:

# Add a single build context
rise build myapp:latest --build-context mylib=../my-library

# Add multiple build contexts
rise build myapp:latest \
  --build-context mylib=../my-library \
  --build-context tools=../build-tools

# Specify custom default build context (the main context directory)
rise build myapp:latest --context ./app

# Combine with other options
rise build myapp:latest \
  --backend docker:buildx \
  --build-context mylib=../my-library \
  --dockerfile Dockerfile.prod

In rise.toml:

[build]
backend = "docker"
dockerfile = "Dockerfile"
build_context = "./app"  # Optional: custom default build context

[build.build_contexts]
mylib = "../my-library"
tools = "../build-tools"
shared = "../shared-components"

Using Build Contexts in Dockerfile:

Once defined, you can reference build contexts in your Dockerfile:

# Copy files from a named build context
FROM alpine AS base
COPY --from=mylib /src /app/lib

# Or use the context as a build stage
FROM scratch AS mylib
# This stage can access files from ../my-library

FROM node:20 AS build
# Copy from the mylib context
COPY --from=mylib /package.json /app/lib/package.json

Configuration Precedence:

  • CLI --build-context flags override config file contexts with the same name
  • CLI --context flag overrides config file build_context
  • Default build context is the app path (project directory) if not specified

Notes:

  • Build contexts are only supported by Docker and Podman backends
  • Paths are relative to the rise.toml file location (typically the project root directory)
  • Available with all Docker-based backends: docker, docker:buildx, buildctl

Build-Time Environment Variables

You can pass environment variables to your build process using the -e or --env flag. This works consistently across all build backends:

# Pass environment variable with explicit value
rise build myapp:latest -e NODE_ENV=production

# Pass environment variable from current environment
export DATABASE_URL=postgres://localhost/mydb
rise build myapp:latest -e DATABASE_URL

# Multiple environment variables
rise build myapp:latest -e NODE_ENV=production -e API_KEY=secret123

# Works with all backends
rise build myapp:latest --backend docker -e BUILD_VERSION=1.2.3
rise build myapp:latest --backend pack -e BP_NODE_VERSION=20
rise build myapp:latest --backend railpack -e CUSTOM_VAR=value

Backend-Specific Behavior

Docker Backend:

  • Environment variables are passed as --build-arg arguments to Docker build
  • Available in Dockerfile ARG declarations and RUN commands
  • Example Dockerfile usage:
    ARG NODE_ENV
    ARG BUILD_VERSION
    RUN echo "Building version $BUILD_VERSION in $NODE_ENV mode"
    

Pack Backend:

  • Environment variables are passed as --env arguments to pack CLI
  • Buildpacks can read these during detection and build phases
  • Common uses: configuring buildpack versions, build flags

Railpack Backend:

  • Environment variables are passed as BuildKit secrets
  • Available in all build steps defined in the Railpack plan
  • Railpack frontend exposes them as environment variables during build

Project Configuration (rise.toml)

You can create a rise.toml or .rise.toml file in your project directory to define default build options. This allows you to avoid repeating CLI flags for every build.

Example rise.toml:

[build]
backend = "pack"
builder = "heroku/builder:24"
buildpacks = ["heroku/nodejs", "heroku/procfile"]
env = ["BP_NODE_VERSION=20"]

Configuration Precedence

Build options are resolved in the following order (highest to lowest):

  1. CLI flags (e.g., --backend pack)
  2. Project config file (rise.toml or .rise.toml)
  3. Environment variables (e.g., RISE_CONTAINER_CLI, RISE_MANAGED_BUILDKIT)
  4. Global config (~/.config/rise/config.json)
  5. Auto-detection/defaults

Vector field behavior:

  • All vector fields (buildpacks, env): CLI values are appended to config values (merged)

This allows you to set common buildpacks and environment variables in the config file and add additional ones via CLI as needed.

Available Options

All CLI build flags can be specified in the [build] section:

| Field | Type | Description |
| --- | --- | --- |
| backend | String | Build backend: docker, docker:build, docker:buildx, buildctl, pack, railpack, railpack:buildctl |
| dockerfile | String | Path to Dockerfile (relative to rise.toml location). Defaults to Dockerfile or Containerfile |
| build_context | String | Default build context (docker/podman only). The path argument to docker build <path>. Defaults to rise.toml location. Path is relative to rise.toml location. |
| build_contexts | Object | Named build contexts for multi-stage builds (docker/podman only). Format: { "name" = "path" }. Paths are relative to rise.toml location. |
| builder | String | Buildpack builder image (pack only) |
| buildpacks | Array | List of buildpacks to use (pack only) |
| env | Array | Environment variables for build (format: KEY=VALUE or KEY) |
| container_cli | String | Container CLI: docker or podman |
| managed_buildkit | Boolean | Enable managed BuildKit daemon |
| railpack_embed_ssl_cert | Boolean | Embed SSL certificate in Railpack builds |

Examples

Heroku buildpacks:

[build]
backend = "pack"
builder = "heroku/builder:24"
buildpacks = ["heroku/nodejs", "heroku/procfile"]

Railpack with SSL:

[build]
backend = "railpack"
managed_buildkit = true
railpack_embed_ssl_cert = true

Docker with build args:

[build]
backend = "docker"
env = ["VERSION=1.0.0", "NODE_ENV=production"]

Pack with custom environment:

[build]
backend = "pack"
builder = "paketobuildpacks/builder-jammy-base"
env = ["BP_NODE_VERSION=20.*"]

CLI Override

CLI flags always take precedence over project config:

# Uses docker backend despite project config specifying pack
rise build myapp:latest --backend docker

# Adds to env variables from config
# If config has env = ["NODE_ENV=production"]
# This results in: env = ["NODE_ENV=production", "API_KEY=secret"]
rise build myapp:latest -e API_KEY=secret

# Enable managed BuildKit (shorthand defaults to true)
rise build myapp:latest --managed-buildkit

# Disable managed BuildKit despite config enabling it
rise build myapp:latest --managed-buildkit=false

# Enable SSL certificate embedding (shorthand defaults to true)
rise build myapp:latest --railpack-embed-ssl-cert

File Naming

Both rise.toml and .rise.toml are supported. If both exist in the same directory, rise.toml takes precedence (with a warning).

SSL Certificate Handling (Managed BuildKit Daemon)

When building with BuildKit-based backends (docker, railpack) on macOS behind corporate proxies (Cloudflare, Zscaler, etc.) or environments with custom CA certificates, builds may fail with SSL certificate verification errors.

The Problem

BuildKit runs as a separate daemon and requires CA certificates to be available at daemon startup. This affects two scenarios:

  1. BuildKit daemon operations: Pulling base images, accessing registries
  2. Build-time operations: Application builds (RUN instructions) downloading packages, cloning repos

Solution: Managed BuildKit Daemon

Rise CLI provides an opt-in managed BuildKit daemon feature that automatically creates and manages a BuildKit daemon with SSL certificate support.

Enable via CLI flag:

# Shorthand (defaults to true)
rise build myapp:latest --backend railpack --managed-buildkit
rise deployment create myproject --backend railpack --managed-buildkit

# Explicit values
rise build myapp:latest --backend railpack --managed-buildkit=true
rise build myapp:latest --backend railpack --managed-buildkit=false

Or set environment variable:

export RISE_MANAGED_BUILDKIT=true
rise build myapp:latest --backend railpack

Or configure permanently:

# Set in config file
rise config set managed_buildkit true

How It Works

When --managed-buildkit is enabled, Rise CLI follows this priority order:

  1. Existing BUILDKIT_HOST: If the BUILDKIT_HOST environment variable is already set, Rise uses your existing buildkit daemon
  2. Managed daemon: Otherwise, Rise creates a rise-buildkit daemon container:
    • With SSL certificate mounted at /etc/ssl/certs/ca-certificates.crt if SSL_CERT_FILE is set
    • Without SSL certificate if SSL_CERT_FILE is not set
    • Configured with --platform linux/amd64 for Mac compatibility
  3. Automatic updates: If SSL_CERT_FILE is added, removed, or changed, the daemon is automatically recreated

Warning When Not Enabled

If SSL_CERT_FILE is set but --managed-buildkit is not enabled, you’ll see a warning during builds that require BuildKit (docker, railpack):

Warning: SSL_CERT_FILE is set but managed BuildKit daemon is disabled.

Railpack builds may fail with SSL certificate errors in corporate environments.

To enable automatic BuildKit daemon management:
  rise build --managed-buildkit ...

Or set environment variable:
  export RISE_MANAGED_BUILDKIT=true

For manual setup, see: https://github.com/NiklasRosenstein/rise/issues/18

Note: The managed BuildKit feature works with or without SSL_CERT_FILE - it simply mounts the certificate when available.

Affected Build Backends

  • pack - Already supports SSL_CERT_FILE natively (no managed daemon needed)
  • ⚠️ docker / docker:build - Does not support BuildKit secrets (use docker:buildx instead)
  • docker:buildx - Full SSL support via BuildKit secrets (auto-injected into Dockerfile)
  • buildctl - Full SSL support via BuildKit secrets (auto-injected into Dockerfile)
  • ⚠️ railpack / railpack:buildx - Benefits from managed daemon
  • ⚠️ railpack:buildctl - Benefits from managed daemon

Manual Setup (Advanced)

For users who prefer manual control, you can create your own BuildKit daemon:

# Start BuildKit daemon with certificate
docker run --platform linux/amd64 --privileged --name my-buildkit --rm -d \
  --volume $SSL_CERT_FILE:/etc/ssl/certs/ca-certificates.crt:ro \
  moby/buildkit

# Point Rise CLI to your daemon
export BUILDKIT_HOST=docker-container://my-buildkit
rise build myapp:latest --backend railpack

For more details, see Issue #18.

Build-Time SSL Certificate Embedding (Railpack)

The --railpack-embed-ssl-cert flag embeds SSL certificates directly into the Railpack build plan for use during RUN commands. This complements --managed-buildkit by handling build-time SSL requirements.

Important differences:

  • --managed-buildkit: Injects SSL certs into BuildKit daemon (for pulling images, registry access). Does NOT embed cert into final image.
  • --railpack-embed-ssl-cert: Embeds SSL certs into railpack plan.json as build assets (for RUN commands during build). DOES embed cert into final image.

Both flags can be used together for comprehensive SSL support.

When to use:

  • Application builds need SSL certificates (pip install, npm install, git clone, curl requests)
  • Running behind corporate proxies with certificate inspection
  • Custom or self-signed certificates

Default behavior:

  • Automatically enabled when SSL_CERT_FILE environment variable is set
  • This ensures builds work by default in most SSL certificate scenarios
  • Can be explicitly disabled with --railpack-embed-ssl-cert=false

Usage:

export SSL_CERT_FILE=/path/to/ca-certificates.crt

# Embedding is automatically enabled when SSL_CERT_FILE is set
rise build myapp:latest --backend railpack

# Explicitly disable even when SSL_CERT_FILE is set
rise build myapp:latest --backend railpack --railpack-embed-ssl-cert=false

# Explicitly enable (useful when SSL_CERT_FILE is not set)
rise build myapp:latest --backend railpack --railpack-embed-ssl-cert=true

# Combine with managed BuildKit for comprehensive SSL support
rise build myapp:latest --backend railpack --managed-buildkit

Environment variable support:

export RISE_RAILPACK_EMBED_SSL_CERT=true
rise build myapp:latest --backend railpack
# Embedding is enabled via env var

Config file support:

rise config set railpack_embed_ssl_cert true
rise build myapp:latest --backend railpack
# Embedding is enabled via config

Precedence order: CLI flag > Environment variable > Config file > Default (enabled if SSL_CERT_FILE is set)

Build-Time SSL Certificate Injection (Docker/Buildctl)

When using docker:buildx or buildctl backends with SSL_CERT_FILE set, Rise automatically injects SSL certificates into your Dockerfile’s RUN commands using BuildKit secrets.

How it works:

  1. Rise preprocesses your Dockerfile to add --mount=type=secret,id=SSL_CERT_FILE,target=<path> to each RUN command
  2. The secret mount makes the certificate available at multiple standard system paths during RUN commands
  3. The certificate is passed to BuildKit via --secret id=SSL_CERT_FILE,src=<path>
  4. Certificates are NOT embedded in the final image (only available during build)

Supported certificate paths:

  • /etc/ssl/certs/ca-certificates.crt (Debian, Ubuntu, Arch)
  • /etc/pki/tls/certs/ca-bundle.crt (RedHat, CentOS, Fedora)
  • /etc/ssl/ca-bundle.pem (OpenSUSE, SLES)
  • /etc/ssl/cert.pem (Alpine Linux)
  • /usr/lib/ssl/cert.pem (OpenSSL default)

Example:

export SSL_CERT_FILE=/path/to/ca-certificates.crt

# SSL certificates automatically available during RUN commands
rise build myapp:latest --backend docker:buildx
rise build myapp:latest --backend buildctl

# Debug logging shows the preprocessed Dockerfile
RUST_LOG=debug rise build myapp:latest --backend docker:buildx

What your Dockerfile sees:

Original:

RUN apt-get update && apt-get install -y curl
RUN pip install -r requirements.txt

Processed (internal):

RUN --mount=type=secret,id=SSL_CERT_FILE,target=/etc/ssl/certs/ca-certificates.crt --mount=type=secret,id=SSL_CERT_FILE,target=/etc/pki/tls/certs/ca-bundle.crt ... apt-get update && apt-get install -y curl
RUN --mount=type=secret,id=SSL_CERT_FILE,target=/etc/ssl/certs/ca-certificates.crt --mount=type=secret,id=SSL_CERT_FILE,target=/etc/pki/tls/certs/ca-bundle.crt ... pip install -r requirements.txt

Note: The docker:build backend does not support BuildKit secrets. If SSL_CERT_FILE is set, you’ll see a warning recommending docker:buildx instead.

Proxy Support

Rise CLI automatically detects and injects HTTP/HTTPS proxy environment variables into all build backends. This is useful when your build environment requires going through a corporate proxy to access external resources.

Supported Proxy Variables

Rise automatically detects these standard proxy environment variables:

  • HTTP_PROXY / http_proxy
  • HTTPS_PROXY / https_proxy
  • NO_PROXY / no_proxy

All variants (uppercase and lowercase) are automatically detected from your environment and passed to the appropriate build backend.

Localhost to host.docker.internal Transformation

Since builds execute in containers, localhost and 127.0.0.1 addresses are automatically transformed to host.docker.internal to allow container builds to reach a proxy server running on your host machine.

Example transformations:

  • http://localhost:3128http://host.docker.internal:3128
  • https://127.0.0.1:8080/pathhttps://host.docker.internal:8080/path
  • http://user:pass@localhost:3128http://user:pass@host.docker.internal:3128
  • http://proxy.example.com:8080 → unchanged (not localhost)

Note: NO_PROXY and no_proxy values are passed through unchanged since they contain comma-separated lists, not URLs.
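
A rough sketch of that rewrite (illustrative only, not Rise's actual implementation):

// Illustrative sketch of the localhost → host.docker.internal rewrite described above.
function rewriteProxyForContainer(value: string): string {
  try {
    const url = new URL(value);
    if (url.hostname === "localhost" || url.hostname === "127.0.0.1") {
      url.hostname = "host.docker.internal";
    }
    return url.toString();
  } catch {
    return value; // not a URL (e.g. NO_PROXY lists) - passed through unchanged
  }
}

// rewriteProxyForContainer("http://user:pass@localhost:3128")
//   -> "http://user:pass@host.docker.internal:3128/"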

Usage Examples

Set proxy variables in your environment before running rise:

export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
export NO_PROXY=localhost,127.0.0.1,.example.com

# Proxy settings automatically applied to all builds
rise build myapp:latest ./path
rise deployment create myproject --path ./app

With localhost proxy:

# Proxy running on your host machine
export HTTP_PROXY=http://localhost:3128
export HTTPS_PROXY=http://localhost:3128

# Automatically transformed to host.docker.internal for container builds
rise build myapp:latest --backend pack
rise build myapp:latest --backend railpack
rise build myapp:latest --backend docker

Backend-Specific Behavior

Pack Backend:

  • Proxy variables are passed via --env arguments to the pack CLI
  • Pack forwards these to the buildpack lifecycle containers
  • Works with pack’s --network host networking mode

Railpack Backend:

  • Proxy variables are passed via --secret flags to buildx/buildctl
  • Secret references are added to build steps in plan.json
  • BuildKit provides the secret values from environment variables
  • Railpack frontend makes these available as environment variables in build steps

Docker Backend:

  • Proxy variables are passed via --build-arg arguments
  • Docker automatically respects HTTP_PROXY, HTTPS_PROXY, and NO_PROXY as build args
  • Available during Dockerfile RUN commands

No Configuration Required

Proxy support is completely automatic - no CLI flags or configuration needed. Rise CLI respects the standard proxy environment variables already set in your shell or CI/CD environment.

Local Development with rise run

The rise run command builds and immediately runs your application locally for development purposes. This is useful for testing your application before deploying it to the Rise platform.

Basic Usage

# Build and run from current directory (defaults to port 8080)
rise run

# Specify directory
rise run ./path/to/app

# Custom port
rise run --http-port 3000

# Expose on different host port
rise run --http-port 8080 --expose 3000

With Project Environment Variables

When authenticated, you can load non-secret environment variables from a project:

# Load environment variables from project
rise run --project my-app

Note: Only non-secret environment variables are loaded. Secret values cannot be retrieved from the backend for security reasons.

Setting Runtime Environment Variables

You can set custom runtime environment variables using the --run-env flag:

# Set a single environment variable
rise run --run-env DATABASE_URL=postgres://localhost/mydb

# Set multiple environment variables
rise run --run-env DATABASE_URL=postgres://localhost/mydb --run-env DEBUG=true --run-env API_KEY=test123

# Combine with project environment variables
rise run --project my-app --run-env OVERRIDE_VAR=custom_value

Runtime environment variables set via --run-env take precedence and can override project environment variables if they have the same key.
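
That precedence is effectively a merge where the --run-env values win on key conflicts, as in this small sketch (not the CLI's actual code):

// Sketch: --run-env values override project-provided variables with the same key.
const projectEnv = { DATABASE_URL: "postgres://localhost/mydb", LOG_LEVEL: "info" };
const runEnv = { LOG_LEVEL: "verbose", DEBUG: "true" };  // from --run-env flags

const effectiveEnv = { ...projectEnv, ...runEnv };
// => { DATABASE_URL: "postgres://localhost/mydb", LOG_LEVEL: "verbose", DEBUG: "true" }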

Build Backend Selection

Use any build backend with rise run:

# Use pack backend
rise run --backend pack

# Use docker backend
rise run --backend docker

# With custom builder
rise run --backend pack --builder paketobuildpacks/builder-jammy-base

How It Works

  1. Build: Builds the container image locally using the selected backend
  2. Tag: Tags the image as rise-local-{project-name} (or rise-local-app if no project specified)
  3. Run: Executes docker run --rm -it -p {expose}:{http-port} -e PORT={http-port} {image}
  4. Environment: Automatically sets PORT environment variable
  5. Project Variables: Loads non-secret environment variables from the project if --project is specified
  6. Cleanup: Container is automatically removed when stopped (--rm flag)

Port Configuration

  • --http-port: The port your application listens on inside the container (sets PORT env var)
  • --expose: The port exposed on your host machine (defaults to same as --http-port)

Example:

# Application listens on port 8080, accessible at http://localhost:3000
rise run --http-port 8080 --expose 3000

Interactive Mode

rise run uses interactive mode (-it) so you can:

  • See real-time logs from your application
  • Press Ctrl+C to stop the container
  • Interact with your application if it accepts input

Complete Example

# Create a project
rise project create my-app

# Set some environment variables
rise env set my-app DATABASE_URL postgres://localhost/mydb
rise env set my-app API_KEY secret123 --secret

# Run locally with project environment variables
rise run --project my-app --http-port 3000

# Application accessible at http://localhost:3000
# PORT=3000 and DATABASE_URL=postgres://localhost/mydb are set
# API_KEY is not loaded (secret values not retrievable)

# Run with additional runtime environment variables
rise run --project my-app --http-port 3000 --run-env DEBUG=true --run-env LOG_LEVEL=verbose

# Application now has PORT, DATABASE_URL, DEBUG, and LOG_LEVEL environment variables set

Configuration Guide

Rise backend uses YAML configuration files with environment variable substitution support. TOML is also supported for backward compatibility.

Configuration Files

Configuration files are located in config/ and loaded in this order:

  1. default.{toml,yaml,yml} - Base configuration with sensible defaults
  2. {RISE_CONFIG_RUN_MODE}.{toml,yaml,yml} - Environment-specific config (optional)
    • development.toml or development.yaml when RISE_CONFIG_RUN_MODE=development
    • production.toml or production.yaml when RISE_CONFIG_RUN_MODE=production
  3. local.{toml,yaml,yml} - Local overrides (not checked into git)

Later files override earlier ones.

File Format: The backend supports both YAML and TOML formats. When multiple formats exist for the same config file (e.g., both default.yaml and default.toml), TOML takes precedence. YAML is the recommended format as it integrates seamlessly with Kubernetes/Helm deployments.

Environment Variable Substitution

Configuration values can reference environment variables using the syntax:

# TOML example
client_secret = "${RISE_AUTH_CLIENT_SECRET:-rise-backend-secret}"
account_id = "${AWS_ACCOUNT_ID}"
public_url = "https://${DOMAIN_NAME}:${PORT}"
# YAML example
auth:
  client_secret: "${RISE_AUTH_CLIENT_SECRET:-rise-backend-secret}"
registry:
  account_id: "${AWS_ACCOUNT_ID}"
server:
  public_url: "https://${DOMAIN_NAME}:${PORT}"

Syntax

  • ${VAR_NAME} - Use environment variable VAR_NAME, error if not set
  • ${VAR_NAME:-default} - Use VAR_NAME if set, otherwise use default

How It Works

  1. Configuration files are parsed as TOML or YAML
  2. String values are scanned for ${...} patterns
  3. Patterns are replaced with environment variable values
  4. Resulting configuration is deserialized into Settings struct

This happens after TOML/YAML parsing but before deserialization, so:

  • ✅ Works in all string values (including nested tables/maps and arrays)
  • ✅ Preserves structure and types
  • ✅ Clear error messages if required variables are missing

Configuration Precedence

Configuration is loaded in this order (later values override earlier ones):

  1. default.{toml,yaml,yml} - Base configuration with defaults
  2. {RISE_CONFIG_RUN_MODE}.{toml,yaml,yml} - Environment-specific (e.g., production.yaml)
  3. local.{toml,yaml,yml} - Local overrides (not in git)
  4. Environment variable substitution - ${VAR} patterns are replaced
  5. DATABASE_URL special case - Overrides [database] url if set

Note: When multiple file formats exist for the same config file, TOML takes precedence over YAML.

Example (TOML):

# In default.toml
client_secret = "${AUTH_SECRET:-default-secret}"

# In production.toml
client_secret = "${AUTH_SECRET}"  # Override: no default, required

# In local.toml
client_secret = "my-local-secret"  # Override: hardcoded value

Example (YAML):

# In default.yaml
auth:
  client_secret: "${AUTH_SECRET:-default-secret}"

# In production.yaml (overrides default.yaml)
auth:
  client_secret: "${AUTH_SECRET}"  # No default, required

Special Cases

DATABASE_URL: For convenience, the DATABASE_URL environment variable is checked after config loading and will override any [database] url setting. This is optional - you can use ${DATABASE_URL} in TOML instead:

# Option 1: Direct environment variable (checked after config loads)
[database]
url = ""  # Empty, DATABASE_URL env var will be used

# Option 2: Explicit substitution (recommended for consistency)
[database]
url = "${DATABASE_URL}"

Note: DATABASE_URL is only required at compile time for SQLX query verification. At runtime, you can set it via either method above.

Examples

Development (default.toml)

[server]
host = "0.0.0.0"
port = 3000
public_url = "http://localhost:3000"

[auth]
issuer = "http://localhost:5556/dex"
client_id = "rise-backend"
client_secret = "${RISE_AUTH_CLIENT_SECRET:-rise-backend-secret}"

Production with Environment Variables (TOML)

# production.toml
[server]
host = "0.0.0.0"
port = "${PORT:-3000}"
public_url = "${PUBLIC_URL}"  # Required, no default
cookie_secure = true

[auth]
issuer = "${DEX_ISSUER}"
client_id = "${OIDC_CLIENT_ID}"
client_secret = "${OIDC_CLIENT_SECRET}"  # Required
admin_users = ["${ADMIN_EMAIL}"]

[registry]
type = "ecr"
region = "${AWS_REGION:-us-east-1}"
account_id = "${AWS_ACCOUNT_ID}"
role_arn = "${ECR_CONTROLLER_ROLE_ARN}"
push_role_arn = "${ECR_PUSH_ROLE_ARN}"

Production with Environment Variables (YAML)

# production.yaml - ideal for Kubernetes/Helm deployments
server:
  host: "0.0.0.0"
  port: "${PORT:-3000}"
  public_url: "${PUBLIC_URL}"  # Required, no default
  cookie_secure: true

auth:
  issuer: "${DEX_ISSUER}"
  client_id: "${OIDC_CLIENT_ID}"
  client_secret: "${OIDC_CLIENT_SECRET}"
  admin_users:
    - "${ADMIN_EMAIL}"

database:
  url: "${DATABASE_URL}"

registry:
  type: "ecr"
  region: "${AWS_REGION:-us-east-1}"
  account_id: "${AWS_ACCOUNT_ID}"
  role_arn: "${ECR_CONTROLLER_ROLE_ARN}"
  push_role_arn: "${ECR_PUSH_ROLE_ARN}"

Environment file:

# .env
PUBLIC_URL=https://rise.example.com
DEX_ISSUER=https://dex.example.com
OIDC_CLIENT_ID=rise-production
OIDC_CLIENT_SECRET=very-secret-value
ADMIN_EMAIL=admin@example.com
AWS_ACCOUNT_ID=123456789012
ECR_CONTROLLER_ROLE_ARN=arn:aws:iam::123456789012:role/rise-backend
ECR_PUSH_ROLE_ARN=arn:aws:iam::123456789012:role/rise-backend-ecr-push
DATABASE_URL=postgres://rise:${DB_PASSWORD}@db.example.com/rise

Local Overrides (local.toml)

For local development, create local.toml (not checked into git):

# Override just what you need
[auth]
client_secret = "my-local-secret"

[registry]
type = "oci-client-auth"
registry_url = "localhost:5000"

Configuration Reference

Server Settings

[server]
host = "0.0.0.0"              # Bind address
port = 3000                    # HTTP port
public_url = "http://..."      # Public URL (for OAuth redirects)
cookie_domain = ""             # Cookie domain ("" = current host only)
cookie_secure = false          # Set true for HTTPS
jwt_signing_secret = "..."     # JWT signing secret (base64-encoded, min 32 bytes)
jwt_expiry_seconds = 86400     # JWT expiry duration in seconds (default: 24 hours)
jwt_claims = ["sub", "email", "name"]  # Claims to include from IdP
rs256_private_key_pem = "..."  # Optional: RS256 private key (persists JWTs across restarts)
rs256_public_key_pem = "..."   # Optional: RS256 public key (derived if not provided)

JWT Configuration:

  • jwt_signing_secret: Base64-encoded secret for HS256 JWT signing (generate with openssl rand -base64 32)
  • jwt_expiry_seconds: Duration in seconds before JWTs expire (default: 86400 = 24 hours)
  • jwt_claims: Claims to include from IdP token in Rise JWTs
  • rs256_private_key_pem: Optional pre-configured RS256 private key (prevents JWT invalidation on restart)
  • rs256_public_key_pem: Optional RS256 public key (automatically derived from private key if omitted)
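
For example, the signing secret and the optional RS256 keypair could be generated like this (the 2048-bit RSA key size is an assumption; use whatever your PEM tooling supports):

# Base64-encoded HS256 signing secret (32 random bytes)
openssl rand -base64 32

# Optional RS256 keypair in PEM format (the public key can be omitted; it is derived from the private key)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out rs256-private.pem
openssl pkey -in rs256-private.pem -pubout -out rs256-public.pem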

Auth Settings

[auth]
issuer = "http://..."          # OIDC issuer URL
client_id = "rise-backend"     # OAuth2 client ID
client_secret = "..."          # OAuth2 client secret
admin_users = ["email@..."]    # Admin user emails (array)

Database Settings

[database]
url = "postgres://..."         # PostgreSQL connection string
                              # Or use DATABASE_URL env var

Registry Settings

AWS ECR

[registry]
type = "ecr"
region = "us-east-1"
account_id = "123456789012"
repo_prefix = "rise/"
role_arn = "arn:aws:iam::..."
push_role_arn = "arn:aws:iam::..."
auto_remove = true

OCI Registry (Docker, Harbor, Quay)

[registry]
type = "oci-client-auth"
registry_url = "registry.example.com"
namespace = "rise-apps"

Controller Settings (Optional)

[controller]
reconcile_interval_secs = 5
health_check_interval_secs = 5
termination_interval_secs = 5
cancellation_interval_secs = 5
expiration_interval_secs = 60
secret_refresh_interval_secs = 3600

Validation

The backend validates configuration on startup:

  • Required fields must be set
  • Invalid values cause startup failure with clear error messages
  • Environment variable substitution errors are reported
  • Unknown configuration fields generate warnings (as of v0.9.0)

Checking Configuration

Use the rise backend check-config command to validate backend configuration:

rise backend check-config

This command:

  • Loads and validates backend configuration files
  • Reports any unknown/unused configuration fields as warnings
  • Exits with an error if configuration is invalid
  • Useful for CI/CD pipelines and deployment validation

Example output:

Checking backend configuration...
⚠️  WARN: Unknown configuration field in backend config: server.typo_field
⚠️  WARN: Unknown configuration field in backend config: unknown_section
✓ Configuration is valid

Unknown Field Warnings

Starting in v0.9.0, Rise warns about unrecognized configuration fields to help catch typos and outdated options:

Backend Configuration (YAML/TOML):

# Warnings appear in logs when starting server or using check-config
WARN rise::server::settings: Unknown configuration field in backend config: server.unknown_field

Project Configuration (rise.toml):

# Warnings appear when loading rise.toml (during build, deploy, etc.)
WARN rise::build::config: Unknown configuration field in ./rise.toml: build.?.typo_field

These are warnings, not errors - your configuration will still load and work. The warnings help you:

  • Catch typos in field names
  • Identify outdated configuration options after upgrades
  • Ensure your configuration is being used as intended

Run with RUST_LOG=debug to see configuration loading details:

RUST_LOG=debug cargo run --bin rise -- backend server

Custom Domains

Rise supports custom domains for projects, allowing you to serve your applications from your own domain names instead of (or in addition to) the default project URL.

Primary Custom Domains

Each project can designate one custom domain as primary. The primary domain is used as the canonical URL for the application and is exposed via the RISE_APP_URL environment variable.

RISE_APP_URL Environment Variable

Rise automatically creates a RISE_APP_URL deployment environment variable containing the canonical URL for the application. This variable is determined at deployment creation time and persisted in the database:

  • If a primary custom domain is set: RISE_APP_URL contains the primary custom domain URL (e.g., https://example.com)
  • If no primary domain is set: RISE_APP_URL contains the default project URL (e.g., https://my-app.rise.dev)

Since this is a deployment environment variable, you can view it via the API or CLI along with your other environment variables.

This environment variable is useful for:

  • Generating absolute URLs in your application (e.g., for email links, OAuth redirects)
  • Implementing canonical URL redirects (redirect all traffic to the primary domain)
  • Setting the correct domain for cookies and CORS headers

Example usage in your application:

// Node.js
const canonicalUrl = process.env.RISE_APP_URL;

// Redirect to canonical domain
app.use((req, res, next) => {
  const requestUrl = `${req.protocol}://${req.get('host')}`;
  if (requestUrl !== canonicalUrl) {
    return res.redirect(301, `${canonicalUrl}${req.url}`);
  }
  next();
});
# Python
import os

canonical_url = os.environ.get('RISE_APP_URL')

# Flask: Set SERVER_NAME
app.config['SERVER_NAME'] = canonical_url.replace('https://', '').replace('http://', '')

Managing Custom Domains

Via Frontend:

  1. Navigate to your project’s Domains tab
  2. Add custom domains using the “Add Domain” button
  3. Click the star icon next to a domain to set it as primary
  4. The primary domain will show a filled yellow star and a “Primary” badge

Via API:

# List custom domains
curl https://rise.dev/api/projects/my-app/domains

# Add a custom domain
curl -X POST https://rise.dev/api/projects/my-app/domains \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"domain": "example.com"}'

# Set domain as primary
curl -X PUT https://rise.dev/api/projects/my-app/domains/example.com/primary \
  -H "Authorization: Bearer $TOKEN"

# Unset primary status
curl -X DELETE https://rise.dev/api/projects/my-app/domains/example.com/primary \
  -H "Authorization: Bearer $TOKEN"

DNS Configuration

Before adding a custom domain, you must configure your DNS to point to your Rise deployment:

# A record for root domain
example.com.  IN  A  <rise-ingress-ip>

# CNAME for subdomain
www.example.com.  IN  CNAME  <rise-ingress-hostname>

Custom domains are added to the ingress for the default deployment group only.

TLS/SSL

Custom domains use the same TLS configuration as the default project URL:

  • If your Rise deployment uses a wildcard certificate, custom domains will use HTTP unless configured with per-domain TLS
  • Configure custom_domain_tls_mode in the Kubernetes controller settings for automatic HTTPS on custom domains

Behavior

  • Automatic reconciliation: Setting or unsetting a primary domain triggers reconciliation of the active deployment to update the RISE_APP_URL environment variable
  • Deletion: Primary domains are not protected from deletion; if you delete the primary domain, RISE_APP_URL falls back to the default project URL
  • Multiple domains: You can add multiple custom domains to a project, but only one can be primary
  • Environment variable list: All custom domains (primary and non-primary) are also available in the RISE_APP_URLS environment variable as a JSON array
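
For illustration, a project with primary domain example.com and default URL https://my-app.rise.dev might see values like these (the exact JSON formatting of RISE_APP_URLS is an assumption):

RISE_APP_URL=https://example.com
RISE_APP_URLS=["https://example.com","https://my-app.rise.dev"]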

Container Registries

Rise generates temporary credentials for pushing container images to registries. The backend acts as a credential broker, abstracting provider-specific authentication.

Supported Providers

AWS ECR

Amazon Elastic Container Registry with scoped credentials via STS AssumeRole.

Configuration:

[registry]
type = "ecr"
region = "us-east-1"
account_id = "123456789012"
repo_prefix = "rise/"
role_arn = "arn:aws:iam::123456789012:role/rise-backend"
push_role_arn = "arn:aws:iam::123456789012:role/rise-backend-ecr-push"
auto_remove = false  # Tag as orphaned instead of deleting

How it works:

  1. Backend assumes push_role_arn with inline session policy scoped to specific project
  2. Backend calls AWS GetAuthorizationToken API with scoped credentials
  3. Returns credentials valid for 12 hours, scoped to single project repository
  4. CLI uses credentials to push images

Image path: {account}.dkr.ecr.{region}.amazonaws.com/{repo_prefix}{project}:{tag}

Example: 123456789012.dkr.ecr.us-east-1.amazonaws.com/rise/my-app:latest

Docker Registry

Works with any Docker-compatible registry (Docker Hub, Harbor, Quay, local registries).

Configuration:

[registry]
type = "oci-client-auth"
registry_url = "localhost:5000"
namespace = "rise-apps"

How it works:

  1. Backend returns registry URL to CLI
  2. CLI uses existing Docker credentials from ~/.docker/config.json
  3. No credential generation - relies on pre-authentication via docker login

Common use cases:

  • Local development: docker-compose registry (port 5000)
  • Docker Hub: registry_url = "docker.io", namespace = "myorg"
  • Harbor: registry_url = "harbor.company.com", namespace = "project"
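
Because credentials are read from ~/.docker/config.json, authenticate once with your registry before deploying, for example:

# Pre-authenticate so the CLI can push (Harbor example)
docker login harbor.company.com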

Local Development Registry

For local development, Rise includes a Docker registry in docker-compose:

registry:
  image: registry:2
  ports:
    - "5000:5000"
  volumes:
    - registry_data:/var/lib/registry

Start:

mise backend:deps  # Starts all services including registry

Access:

  • Registry API: http://localhost:5000
  • Registry UI: http://localhost:5001 (browse images)

Usage:

# List repositories
curl http://localhost:5000/v2/_catalog

# List tags
curl http://localhost:5000/v2/my-app/tags/list

# Deploy (automatically uses local registry)
rise deployment create my-app

⚠️ Production Warning: Local registry uses HTTP, has no auth, and uses Docker volumes. For production, use AWS ECR, GCR, or similar.

AWS ECR Production Setup

Architecture: Two-Role Pattern

Controller Role (rise-backend):

  • Create/delete ECR repositories
  • Tag repositories (managed, orphaned)
  • Configure repository settings
  • Assume the push role

Push Role (rise-backend-ecr-push):

  • Push/pull images to ECR (under rise/* prefix)
  • Used by backend to generate scoped credentials for CLI

Why two roles?

  • Separation: Controller manages infrastructure, push handles images
  • Least privilege: Scoped credentials limited to single repository
  • Temporary: 12-hour max lifetime, can’t delete repositories

Terraform Module

Use the provided modules/rise-aws module:

module "rise_ecr" {
  source = "../modules/rise-aws"

  name        = "rise-backend"
  repo_prefix = "rise/"
  auto_remove = false

  tags = {
    Environment = "production"
    ManagedBy   = "terraform"
  }
}

output "rise_ecr_config" {
  value = module.rise_ecr.rise_config
}

Apply:

cd terraform
terraform init
terraform apply
terraform output rise_ecr_config

With EKS + IRSA

Configure module for IRSA:

module "rise_ecr" {
  source = "../modules/rise-aws"

  name                   = "rise-backend"
  repo_prefix            = "rise/"
  irsa_oidc_provider_arn = module.eks.oidc_provider_arn
  irsa_namespace         = "rise-system"
  irsa_service_account   = "rise-backend"
}

Helm values:

serviceAccount:
  create: true
  iamRoleArn: "arn:aws:iam::123456789012:role/rise-backend"

config:
  registry:
    type: "ecr"
    region: "us-east-1"
    account_id: "123456789012"
    repo_prefix: "rise/"
    role_arn: "arn:aws:iam::123456789012:role/rise-backend"
    push_role_arn: "arn:aws:iam::123456789012:role/rise-backend-ecr-push"
    # NO static credentials with IRSA

With IAM User (Non-AWS)

For running Rise outside AWS:

module "rise_ecr" {
  source = "../modules/rise-aws"

  name            = "rise-backend"
  repo_prefix     = "rise/"
  create_iam_role = false
  create_iam_user = true
}

# Store credentials securely
resource "aws_secretsmanager_secret_version" "rise_ecr_creds" {
  secret_id = aws_secretsmanager_secret.rise_ecr_creds.id
  secret_string = jsonencode({
    access_key_id     = module.rise_ecr.access_key_id
    secret_access_key = module.rise_ecr.secret_access_key
  })
}

Configuration

Registry configuration is in config/ directory. See the registry examples at the top of this document for TOML configuration format.

Configuration file precedence (highest to lowest):

  1. local.yaml (not checked into git, for local overrides)
  2. {RISE_CONFIG_RUN_MODE}.yaml (e.g., production.yaml, development.yaml)
  3. default.yaml

Environment variable substitution: You can reference environment variables in config files using ${VAR_NAME} or ${VAR_NAME:-default} syntax.

API Endpoint

Request credentials for a project:

GET /registry/credentials?project=my-app
Authorization: Bearer <jwt-token>

Response:

{
  "credentials": {
    "registry_url": "123456.dkr.ecr.us-east-1.amazonaws.com",
    "username": "AWS",
    "password": "eyJwYXlsb2FkIjoiS...",
    "expires_in": 43200
  },
  "repository": "my-app"
}
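
The CLI performs this exchange automatically, but as a sketch, the returned credentials could also be used manually (here $RISE_API is a placeholder for your backend URL, and jq is assumed to be installed):

# Fetch scoped credentials and log Docker in to the returned registry
curl -s -H "Authorization: Bearer $TOKEN" "$RISE_API/registry/credentials?project=my-app" \
  | jq -r '.credentials.password' \
  | docker login --username AWS --password-stdin 123456.dkr.ecr.us-east-1.amazonaws.com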

Security

ECR Credential Scope:

  • Scoped to specific project using STS AssumeRole with inline session policies
  • Credentials for my-app can only push to {repo_prefix}my-app*
  • 12-hour lifespan (AWS enforced)

Docker:

  • No credential scoping
  • Use registry-specific access controls

Best Practices:

  1. Use IAM roles (ECR): Avoid static credentials
  2. Enable HTTPS: Always use TLS in production
  3. Monitor access: Track credential requests and usage
  4. Rotate credentials: For Docker registries, rotate regularly
  5. Least privilege: Scope credentials to minimum permissions

Troubleshooting

“Access Denied” when pushing (ECR):

  1. Verify controller role can assume push role
  2. Check push role permissions
  3. Ensure repository exists with correct prefix
  4. Verify STS session policy scope

“Connection refused” to registry (Docker):

docker-compose ps registry
docker-compose logs registry
docker-compose restart registry

Images not persisting (Docker):

docker volume ls | grep registry
docker-compose down -v  # Removes volumes!

Extending Registry Support

To add a new registry provider:

  1. Implement RegistryProvider trait in rise-backend/src/registry/providers/
  2. Add provider to RegistryConfig enum
  3. Register provider in create_registry_provider()

Potential future providers: JFrog Artifactory, GCR, ACR, GHCR, Quay.io

Kubernetes Deployment Backend

The Kubernetes deployment backend deploys applications to Kubernetes clusters using Deployments, Services, and Ingresses.

Overview

The Kubernetes controller manages application deployments on Kubernetes by:

  • Creating namespace-scoped resources for each project
  • Deploying applications as Deployments with rolling updates
  • Managing traffic routing with Services and Ingresses
  • Implementing blue/green deployments via Service selector updates
  • Automatically refreshing image pull secrets for private registries

Configuration

TOML Configuration

[kubernetes]
# Optional: path to kubeconfig (defaults to in-cluster if not set)
kubeconfig = "/path/to/kubeconfig"

# Ingress class to use
ingress_class = "nginx"

# Ingress URL template for production (default) deployment group
# Supports both subdomain and sub-path routing (must contain {project_name})
production_ingress_url_template = "{project_name}.apps.rise.local"

# Optional: Ingress URL template for staging (non-default) deployment groups
# Must contain both {project_name} and {deployment_group} placeholders
staging_ingress_url_template = "{project_name}-{deployment_group}.preview.rise.local"

# Or for sub-path routing:
# production_ingress_url_template = "rise.local/{project_name}"
# staging_ingress_url_template = "rise.local/{project_name}/{deployment_group}"

# Namespace format (must contain {project_name})
namespace_format = "rise-{project_name}"

# Custom domain TLS mode
# - "per-domain": Each custom domain gets its own tls-{domain} secret (for cert-manager)
# - "shared": All custom domains share ingress_tls_secret_name
custom_domain_tls_mode = "per-domain"  # Default

# Annotations for custom domain ingresses (e.g., cert-manager integration)
[kubernetes.custom_domain_ingress_annotations]
"cert-manager.io/cluster-issuer" = "letsencrypt-prod"

Kubeconfig Options

The controller supports two authentication modes:

In-cluster mode (recommended for production):

  • Omit kubeconfig setting
  • Uses service account mounted at /var/run/secrets/kubernetes.io/serviceaccount/
  • Requires RBAC permissions for the controller’s service account

External kubeconfig:

  • Set kubeconfig path explicitly
  • Useful for development or external cluster access
  • Falls back to ~/.kube/config if path not specified

How It Works

Resources Managed

The Kubernetes controller creates and manages the following resources per project:

| Resource | Scope | Purpose |
|----------|-------|---------|
| Namespace | One per project | Isolates project resources |
| Deployment | One per deployment | Runs application pods |
| Service | One per deployment group | Routes traffic to active deployment |
| Ingress | One per deployment group | Exposes HTTP/HTTPS endpoints |
| Secret | One per project | Stores image pull credentials |

Naming Scheme

Resources follow consistent naming patterns:

| Resource | Pattern | Example |
|----------|---------|---------|
| Namespace | rise-{project} | rise-my-app |
| Deployment | {project}-{deployment_id} | my-app-20251207-143022 |
| Service | {escaped_group} | default, mr--26 |
| Ingress | {escaped_group} | default, mr--26 |
| Secret | rise-registry-creds | rise-registry-creds |

Character escaping: Deployment group names containing invalid Kubernetes characters (e.g., /, @) are escaped with --. For example, mr/26 becomes mr--26.

Deployment Groups and URLs

Each deployment group gets its own Service and Ingress with a unique URL:

| Group | URL Pattern | Example (Subdomain) | Example (Sub-path) |
|-------|-------------|---------------------|--------------------|
| default | production_ingress_url_template | my-app.apps.rise.local | rise.local/my-app |
| Custom groups | staging_ingress_url_template | my-app-mr--26.preview.rise.local | rise.local/my-app/mr--26 |

Sub-path vs Subdomain Routing

Rise supports two Ingress routing modes configured globally via URL templates:

Subdomain Routing (traditional approach):

  • Production: {project_name}.apps.rise.local
  • Staging: {project_name}-{deployment_group}.preview.rise.local
  • Each project gets a unique subdomain
  • Ingress path: / (Prefix type)
  • No path rewriting needed

Sub-path Routing (shared domain):

  • Production: rise.local/{project_name}
  • Staging: rise.local/{project_name}/{deployment_group}
  • All projects share the same domain with different paths
  • Ingress path: /{project}(/|$)(.*) (ImplementationSpecific type with regex)
  • Nginx automatically rewrites paths

Path Rewriting

For sub-path routing, Nginx automatically rewrites paths so your application receives requests at / while preserving the original path prefix:

  • Client request: GET https://rise.local/myapp/api/users
  • Application receives: GET /api/users
  • Headers added: X-Forwarded-Prefix: /myapp

The controller uses the built-in nginx.ingress.kubernetes.io/x-forwarded-prefix annotation to add this header. Configure your application to use the X-Forwarded-Prefix header when generating URLs to ensure links and assets work correctly.
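
For example, in the same Node.js style as the earlier RISE_APP_URL snippet (a sketch; adapt to your framework):

// Use the prefix added by the ingress when generating links and redirects
app.use((req, res, next) => {
  res.locals.basePath = req.get('X-Forwarded-Prefix') || '';
  next();
});
// e.g. res.redirect(`${res.locals.basePath}/login`);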

Example configuration:

[kubernetes]
production_ingress_url_template = "rise.local/{project_name}"
staging_ingress_url_template = "rise.local/{project_name}/{deployment_group}"
auth_backend_url = "http://rise-backend.default.svc.cluster.local:3000"
auth_signin_url = "https://rise.local"

Private Project Authentication

The Kubernetes controller implements ingress-level authentication for private projects using Nginx auth annotations and Rise-issued JWTs.

Overview

  • Public projects: Accessible without authentication
  • Private projects: Require user authentication AND project access authorization
  • Authentication method: OAuth2 via configured identity provider (Dex)
  • Token security: Rise-issued JWTs scoped to specific projects
  • Cookie isolation: Separate cookies prevent projects from accessing Rise APIs

Configuration

Private project authentication requires JWT signing configuration:

[server]
# JWT signing secret for ingress authentication (base64-encoded, min 32 bytes)
# Generate with: openssl rand -base64 32
# REQUIRED: The backend will fail to start without this
jwt_signing_secret = "YOUR_BASE64_SECRET_HERE"

# Optional: JWT claims to include from IdP token (default shown)
jwt_claims = ["sub", "email", "name"]

# Cookie settings for subdomain sharing
cookie_domain = ".rise.local"  # Allows cookies to work across *.rise.local
cookie_secure = false          # Set to false for local development (HTTP)

[kubernetes]
# Internal cluster URL for Nginx auth subrequests
auth_backend_url = "http://rise-backend.default.svc.cluster.local:3000"

# Public backend URL for browser redirects during authentication
auth_signin_url = "http://rise.local"  # Use http:// for local development

Generate JWT signing secret:

openssl rand -base64 32

Authentication Flow

When a user visits a private project, the following flow occurs:

User → myapp.apps.rise.local (private)
  ↓
Nginx calls GET /api/v1/auth/ingress?project=myapp
  - 🍪 NO COOKIE or invalid JWT
  ↓ Returns 401 Unauthorized
  ↓
Nginx redirects to /api/v1/auth/signin?project=myapp&redirect=http://myapp.apps.rise.local
  ↓
GET /api/v1/auth/signin (Pre-Auth Page):
  - Renders auth-signin.html.tera
  - Shows: "Project 'myapp' is private. Sign in to access."
  - Button: "Sign In" → /api/v1/auth/signin/start?project=myapp&redirect=...
  ↓
User clicks "Sign In" button
  ↓
GET /api/v1/auth/signin/start (OAuth Start):
  - Stores project_name='myapp' in OAuth2State (PKCE state)
  - Redirects to Dex IdP authorize endpoint
  ↓
User completes OAuth at Dex
  ↓
Dex redirects to /api/v1/auth/callback?code=xyz&state=abc
  ↓
GET /api/v1/auth/callback (Token Exchange):
  - Retrieve OAuth2State (includes project_name='myapp' for UI context only)
  - Exchange code for IdP tokens
  - Validate IdP JWT
  - Extract claims (sub, email, name) and expiry
  - Issue Rise JWT with user claims (NOT project-scoped!)
  - 🍪 SET COOKIE: _rise_ingress = <Rise JWT>
       (Domain: .rise.local, HttpOnly, Secure=false, SameSite=Lax)
  - Renders auth-success.html.tera
  - Shows: "Authentication successful! Redirecting in 3s..."
  - JavaScript auto-redirects to http://myapp.apps.rise.local
  ↓
After 3 seconds, browser redirects to http://myapp.apps.rise.local
  ↓
Nginx calls GET /api/v1/auth/ingress?project=myapp
  - 🍪 READS COOKIE: _rise_ingress
  - Verifies Rise JWT signature (HS256)
  - Validates expiry
  - Checks user has project access via database query (NOT JWT claim!)
  ↓ Returns 200 OK + headers (X-Auth-Request-Email, X-Auth-Request-User)
  ↓
Nginx serves app
  - 🍪 Rise JWT cookie is sent to app (but app cannot decode it - HttpOnly)
  - App does NOT have access to Rise APIs (different cookie name)

JWT Structure

Rise issues symmetric HS256 JWTs with the following claims:

{
  "sub": "user-id-from-idp",
  "email": "user@example.com",
  "name": "User Name",
  "iat": 1234567890,
  "exp": 1234571490,
  "iss": "http://rise.local",
  "aud": "rise-ingress"
}

Key features:

  • NOT project-scoped: JWTs do NOT contain a project claim because the cookie is set at rise.local domain and shared across all *.apps.rise.local subdomains. Project access is validated separately in the ingress auth handler by checking database permissions.
  • Configurable claims: Include only necessary user information
  • Expiry matching: Token expiration matches IdP token (typically 1 hour)
  • Symmetric signing: HS256 with shared secret for fast validation

Two separate cookies are used for different purposes:

| Cookie | Purpose | Contents | Access |
|--------|---------|----------|--------|
| _rise_session | Rise API authentication | IdP JWT | Frontend JavaScript |
| _rise_ingress | Project access authentication | Rise JWT | HttpOnly (no JS access) |

Security attributes:

  • HttpOnly: Prevents JavaScript access (XSS protection)
  • Secure: HTTPS-only transmission
  • SameSite=Lax: CSRF protection while allowing navigation
  • Domain: Shared across subdomains (e.g., .rise.local)
  • Max-Age: Matches JWT expiration

Access Control

For private projects, the ingress auth endpoint validates:

  1. JWT validity: Signature, expiration, issuer, audience
  2. User permissions: Database query to check if user is owner or team member

Access check logic:

#![allow(unused)]
fn main() {
// User can access if:
// - User is the project owner (owner_user_id), OR
// - User is a member of the team that owns the project (owner_team_id)
//
// NOTE: JWTs are NOT project-scoped - the same JWT can be used across all projects
// because the cookie is set at rise.local domain level and shared across *.apps.rise.local
}
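
A minimal SQL sketch of that check (table and column names taken from the database examples later in this guide; the real query may differ):

-- Grant access if the user owns the project or belongs to the owning team
SELECT EXISTS (
    SELECT 1
    FROM projects p
    LEFT JOIN team_members tm ON tm.team_id = p.owner_team_id
    WHERE p.name = $1
      AND (p.owner_user_id = $2 OR tm.user_id = $2)
);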

Ingress Annotations

For private projects, the controller adds these Nginx annotations:

annotations:
  nginx.ingress.kubernetes.io/auth-url: "http://rise-backend.default.svc.cluster.local:3000/api/v1/auth/ingress?project=myapp"
  nginx.ingress.kubernetes.io/auth-signin: "http://rise.local/api/v1/auth/signin?project=myapp&redirect=$escaped_request_uri"
  nginx.ingress.kubernetes.io/auth-response-headers: "X-Auth-Request-Email,X-Auth-Request-User"

How it works:

  • auth-url: Nginx calls this endpoint for every request to validate authentication
    • Returns 2xx (200): Access granted
    • Returns 401/403: Access denied, redirect to auth-signin
    • Returns 5xx or unreachable: Access denied (fail-closed) - ensures security even if auth service is misconfigured or down
  • auth-signin: Where to redirect unauthenticated users
  • auth-response-headers: Headers to pass from auth response to the application

The application receives authenticated requests with these additional headers:

  • X-Auth-Request-Email: User’s email address
  • X-Auth-Request-User: User’s ID

Troubleshooting Authentication

Infinite redirect loop:

  • Check cookie_domain matches your domain structure
  • Verify cookies are being set (check browser DevTools → Application → Cookies)
  • Ensure cookie_secure is false for HTTP development environments

Browser always redirects HTTP to HTTPS:

  • Some TLDs (e.g., .dev) are on the HSTS preload list and browsers will always force HTTPS
  • Use .local TLD for local development to avoid HSTS issues
  • The default configuration uses rise.local which works correctly with HTTP
  • If you must use a different TLD, check if it’s on the HSTS preload list at https://hstspreload.org/

“Access denied” or 403 Forbidden error:

  • User is authenticated but not authorized for this project
  • Check project ownership: rise project show <project-name>
  • Add user to project’s team if needed

“No session cookie” error:

  • Cookie expired or not set
  • Cookie domain mismatch
  • Browser blocking third-party cookies
  • Check cookie_domain configuration

Private projects accessible without authentication:

  • Check ingress controller logs for auth subrequest errors: kubectl logs -n ingress-nginx <ingress-controller-pod>
  • Verify auth_backend_url in config includes the correct service URL and port
  • Ensure the auth service is reachable from the ingress controller (test with curl from ingress pod)
  • Check that ingress annotations are correctly set: kubectl get ingress -n rise-<project> -o yaml
  • All auth endpoints are under /api/v1 prefix (e.g., /api/v1/auth/ingress)

Authentication succeeds but access denied:

  • User is authenticated but not authorized for this project
  • Check project ownership: rise project show <project-name>
  • Add user to project’s team if needed

JWT signing errors in logs:

Error: Failed to initialize JWT signer: Invalid base64
  • JWT signing secret is not valid base64
  • Regenerate with: openssl rand -base64 32
  • Ensure secret is at least 32 bytes when decoded

Blue/Green Deployments

The controller implements blue/green deployments using Service selector updates:

  1. Deploy new Deployment: Create new Deployment with deployment-specific labels
  2. Wait for health: Wait until new Deployment pods are ready and pass health checks
  3. Switch traffic: Update Service selector to point to new deployment labels
  4. Previous deployment: Old Deployment remains but receives no traffic

This ensures zero-downtime deployments with instant rollback capability.
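
You can observe which deployment a group's Service currently targets by inspecting its selector, for example:

# Show the selector of the "default" group Service for project my-app
kubectl get service default -n rise-my-app -o jsonpath='{.spec.selector}'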

Labels

All resources are labeled for management and selection:

labels:
  rise.dev/managed-by: "rise"
  rise.dev/project: "my-app"
  rise.dev/deployment-group: "default"
  rise.dev/deployment-id: "20251207-143022"
  rise.dev/deployment-uuid: "550e8400-e29b-41d4-a716-446655440000"

Custom Domains and TLS

Rise supports custom domains for projects, allowing you to serve your application at your own domain names (e.g., app.example.com) instead of or in addition to the default project URL pattern.

Overview

When custom domains are configured for a project:

  • Rise creates a separate Ingress resource specifically for custom domains
  • Custom domains always route to the root path (/) regardless of the default ingress URL pattern
  • TLS certificates can be automatically provisioned using cert-manager integration

TLS Certificate Management

Rise provides two modes for TLS certificate management on custom domains:

Per-Domain Mode (Recommended for cert-manager)

When custom_domain_tls_mode is set to per-domain (the default), each custom domain gets its own TLS secret named tls-{domain}. This mode is designed to work with cert-manager for automatic certificate provisioning:

deployment_controller:
  type: kubernetes
  # ... other settings ...
  
  # TLS mode - per-domain creates separate secrets for each custom domain
  custom_domain_tls_mode: "per-domain"  # Default
  
  # Annotations to apply to custom domain ingresses (for cert-manager)
  custom_domain_ingress_annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    # Or use a specific issuer per namespace:
    # cert-manager.io/issuer: "letsencrypt-prod"

With this configuration:

  • Each custom domain (e.g., app.example.com) will have its own TLS secret (tls-app.example.com)
  • cert-manager will automatically provision Let’s Encrypt certificates
  • Certificates are automatically renewed by cert-manager
  • No manual TLS secret management required

Shared Mode

When custom_domain_tls_mode is set to shared, all custom domains share the same TLS secret specified by ingress_tls_secret_name:

deployment_controller:
  type: kubernetes
  # ... other settings ...
  
  # Shared TLS secret for all hosts (primary + custom domains)
  ingress_tls_secret_name: "my-wildcard-cert"
  
  # Use shared mode
  custom_domain_tls_mode: "shared"

This mode is useful when you have a wildcard certificate or want to manage certificates externally.

Cert-Manager Setup

To use cert-manager with Rise custom domains:

  1. Install cert-manager in your cluster:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml
  2. Create a ClusterIssuer for Let’s Encrypt:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production server
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your-email@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: nginx
  3. Configure Rise to use cert-manager:
deployment_controller:
  type: kubernetes
  # ... other settings ...
  
  custom_domain_tls_mode: "per-domain"
  custom_domain_ingress_annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  4. Add a custom domain to your project:
rise domain add my-project custom-domain.example.com

cert-manager will automatically:

  • Create an ACME challenge
  • Validate domain ownership
  • Issue a Let’s Encrypt certificate
  • Store it in the tls-custom-domain.example.com secret
  • Automatically renew certificates before expiration

DNS Configuration

For custom domains to work, you must configure DNS records to point to your Kubernetes ingress:

custom-domain.example.com.  A  <ingress-ip-address>

Or for CNAMEs:

custom-domain.example.com.  CNAME  <ingress-hostname>

Troubleshooting Custom Domain TLS

Certificate not being issued:

  • Check cert-manager logs: kubectl logs -n cert-manager deployment/cert-manager
  • Check certificate status: kubectl get certificate -n rise-<project>
  • Verify DNS is correctly configured and resolves to your ingress
  • Check ClusterIssuer/Issuer status: kubectl describe clusterissuer letsencrypt-prod

“Certificate not ready” error:

  • cert-manager is still working on the challenge - wait a few minutes
  • Check challenge status: kubectl get challenges -n rise-<project>
  • Verify ingress controller can handle ACME challenges

Multiple certificate requests:

  • Check that custom_domain_ingress_annotations are correctly configured
  • Verify you’re not mixing cert-manager annotations in ingress_annotations and custom_domain_ingress_annotations

Kubernetes Resources

Namespace

Created once per project:

apiVersion: v1
kind: Namespace
metadata:
  name: rise-my-app
  labels:
    rise.dev/managed-by: "rise"
    rise.dev/project: "my-app"

Secret (Image Pull Credentials)

Created/refreshed automatically for private registries:

apiVersion: v1
kind: Secret
metadata:
  name: rise-registry-creds
  namespace: rise-my-app
  annotations:
    rise.dev/last-refresh: "2025-12-07T14:30:22Z"
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded-docker-config>

Auto-refresh: Secrets are automatically refreshed every hour to handle short-lived credentials (e.g., ECR tokens expire after 12 hours).

Configuring Image Pull Secrets

The Kubernetes controller supports three modes for managing image pull secrets:

1. Automatic Management (with registry provider)

  • When a registry provider is configured (e.g., AWS ECR), the controller automatically creates and refreshes the rise-registry-creds secret in each project namespace
  • Credentials are fetched from the registry provider on-demand
  • Secrets are automatically refreshed every hour
  • No additional configuration needed

2. External Secret Reference

  • For static Docker registries where credentials are managed externally (e.g., manually created secrets, sealed-secrets, external-secrets operator)
  • Configure the secret name in the deployment controller settings:
deployment_controller:
  type: kubernetes
  # ... other settings ...
  image_pull_secret_name: "my-registry-secret"
  • The controller will reference this secret name in all Deployments
  • The secret must exist in each project namespace before deployments can succeed
  • The controller will NOT create or manage this secret
  • Useful when:
    • Using a static registry that doesn’t support dynamic credential generation
    • Managing secrets through GitOps tools like sealed-secrets or external-secrets operator
    • Using a cluster-wide image pull secret that’s pre-configured in all namespaces

3. No Image Pull Secret

  • When no registry provider is configured and no image_pull_secret_name is set
  • Deployments will not include any imagePullSecrets field
  • Only works with public container images or when using Kubernetes cluster defaults

Example configurations:

Using AWS ECR (automatic):

registry:
  type: ecr
  region: us-east-1
  account_id: "123456789012"
  # ... other ECR settings ...

deployment_controller:
  type: kubernetes
  # No image_pull_secret_name needed - automatically managed

Using external secret:

registry:
  type: oci-client-auth
  registry_url: "registry.example.com"
  # ... other registry settings ...

deployment_controller:
  type: kubernetes
  # ... other settings ...
  image_pull_secret_name: "my-registry-secret"

For external secrets, ensure the secret exists in each namespace:

# Create secret in namespace
kubectl create secret docker-registry my-registry-secret \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword \
  -n rise-my-app

Deployment

One per deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-20251207-143022
  namespace: rise-my-app
  labels:
    rise.dev/managed-by: "rise"
    rise.dev/project: "my-app"
    rise.dev/deployment-group: "default"
    rise.dev/deployment-id: "20251207-143022"
    rise.dev/deployment-uuid: "550e8400-e29b-41d4-a716-446655440000"
spec:
  replicas: 1
  selector:
    matchLabels:
      rise.dev/project: "my-app"
      rise.dev/deployment-group: "default"
      rise.dev/deployment-id: "20251207-143022"
      rise.dev/deployment-uuid: "550e8400-e29b-41d4-a716-446655440000"
  template:
    metadata:
      labels:
        rise.dev/project: "my-app"
        rise.dev/deployment-group: "default"
        rise.dev/deployment-id: "20251207-143022"
        rise.dev/deployment-uuid: "550e8400-e29b-41d4-a716-446655440000"
    spec:
      imagePullSecrets:
        - name: rise-registry-creds
      containers:
        - name: app
          image: registry.example.com/my-app@sha256:abc123...
          ports:
            - containerPort: 8080

Service

One per deployment group (updated via server-side apply):

apiVersion: v1
kind: Service
metadata:
  name: default
  namespace: rise-my-app
  labels:
    rise.dev/managed-by: "rise"
    rise.dev/project: "my-app"
spec:
  type: ClusterIP
  selector:
    rise.dev/project: "my-app"
    rise.dev/deployment-group: "default"
    rise.dev/deployment-id: "20251207-143022"  # Updated on traffic switch
    rise.dev/deployment-uuid: "550e8400-e29b-41d4-a716-446655440000"  # Updated on traffic switch
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP

Ingress

One per deployment group:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: default
  namespace: rise-my-app
  labels:
    rise.dev/managed-by: "rise"
    rise.dev/project: "my-app"
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: my-app.apps.rise.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: default
                port:
                  number: 80

Running the Controller

Starting the Controller

# Run Kubernetes deployment controller
rise backend controller deployment-kubernetes

The controller will:

  1. Connect to Kubernetes using configured kubeconfig or in-cluster credentials
  2. Start reconciliation loop for deployments in Pushed, Deploying, Healthy, or Unhealthy status
  3. Start image pull secret refresh loop (runs hourly)
  4. Process deployments continuously until stopped

Required RBAC Permissions

The controller requires the following Kubernetes permissions:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rise-controller
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets", "services"]
    verbs: ["get", "list", "create", "update", "patch", "delete"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "create", "update", "patch", "delete"]

Basic Troubleshooting

Permission errors:

Error: Forbidden (403): namespaces is forbidden
  • Verify service account has required RBAC permissions
  • Check kubectl auth can-i for each required verb/resource

Connection errors:

Error: Failed to connect to Kubernetes API
  • Verify kubeconfig path is correct
  • Check network connectivity to API server
  • Ensure credentials are valid

Image pull failures:

Pod status: ImagePullBackOff
  • Check secret exists: kubectl get secret rise-registry-creds -n rise-{project}
  • Verify registry credentials are valid
  • Check secret refresh logs in controller output
  • Ensure image reference is correct

Pods not becoming ready:

  • Check pod logs: kubectl logs -n rise-{project} {pod-name}
  • Check pod events: kubectl describe pod -n rise-{project} {pod-name}
  • Verify application listens on configured HTTP port
  • Check resource limits and node capacity

Production Setup

Guidelines for deploying Rise in production environments.

Overview

This guide covers security, configuration, database setup, monitoring, and operational considerations for running Rise in production.

Deployment Backend

Rise deploys applications to Kubernetes clusters. For local development, use Minikube:

mise minikube:launch

See Kubernetes Backend for Kubernetes-specific configuration and operation.

Security Best Practices

Registry Credentials

  • Use IAM roles (IRSA on EKS, instance profiles on EC2)
  • Avoid long-lived IAM user credentials
  • Rise generates scoped push credentials (single repo, 12-hour max)

Network Isolation

  • Deploy backend in private subnets with ALB in public subnets
  • Enable TLS (HTTPS), terminate at load balancer
  • Restrict database to backend security group only

Secrets Management

Use AWS Secrets Manager or HashiCorp Vault for: DATABASE_URL, OAuth2 client secrets, registry credentials, JWT signing keys.

Authentication

  • Use trusted OIDC providers (Dex, Auth0, Okta)
  • Configure redirect URLs, enable PKCE
  • Dex production: external storage backend (PostgreSQL/etcd), configure SSO connectors, enable TLS. See Dex docs

Environment Variables

Key environment variables for production:

# Database (explicitly supported)
DATABASE_URL="postgres://rise:password@rds-endpoint:5432/rise"

# Configuration system
RISE_CONFIG_DIR="/etc/rise/config"         # Path to config directory
RISE_CONFIG_RUN_MODE="production"          # Which config file to load (production.yaml)

Note: Additional configuration should be placed in YAML config files rather than environment variables. See the configuration files in config/ directory for all available options (registry, Kubernetes, auth, etc.).

Database Setup

PostgreSQL Configuration

Use managed database (AWS RDS, Cloud SQL, Azure Database).

RDS settings: db.t3.medium+, 100GB GP3 with autoscaling, Multi-AZ, 7-30 day backup retention, encryption, PostgreSQL 16+

Running Migrations

export DATABASE_URL="postgres://rise:password@rds-endpoint:5432/rise"
sqlx migrate run

Database Backups

Enable automated backups (7+ days), take manual snapshots before major changes, enable point-in-time recovery.

Connection Pooling

Rise uses SQLx with connection pooling. Configure pool size based on load in config/production.toml if needed.

High Availability

Multi-Process Architecture

| Process | Purpose | Scaling |
|---------|---------|---------|
| backend-server | HTTP API, OAuth | Horizontal |
| backend-deployment | Deployment controller | Single instance* |
| backend-project | Project lifecycle | Single instance* |
| backend-ecr | ECR management | Single instance* |

*Leader election for controllers is planned for the future.

Health Checks

  • GET /health - Overall health
  • GET /ready - Readiness (database connectivity)

LB config: /health, 30s interval, 2/3 thresholds, 5s timeout
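
A quick smoke test of both endpoints (backend host assumed to be localhost:3000, as in the development config):

curl -f http://localhost:3000/health
curl -f http://localhost:3000/ready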

Database Failover

RDS Multi-AZ: automatic failover (1-2 min), backend reconnects automatically.

Monitoring

Key Metrics

  • Request rate/latency (P50, P95, P99), error rate (4xx/5xx)
  • Active deployments, build/push times
  • CPU/memory, DB connection pool, disk I/O
  • Projects created, deployments/day, active users

Logging

Rise uses structured JSON logs. Aggregate with CloudWatch, Cloud Logging, ELK, or Loki+Grafana.

Alerting

Critical: DB connection failures, >5% 5xx rate, controller not reconciling, low disk space

Warning: Slow queries (>1s), high CPU (>80%), memory leaks, old deployments

Disaster Recovery

Backup Strategy

Backup: Database (RDS snapshots), config (git), secrets (Secrets Manager)

Don’t backup: Container images (in ECR), credentials, binaries

Recovery

  1. Restore database from snapshot
  2. Deploy backend from git
  3. Run migrations: sqlx migrate run
  4. Restore config/secrets
  5. Start processes, verify health

Operational Tasks

Updating Rise

git pull origin main
cargo build --release --bin rise
sqlx migrate run
# Restart processes (method depends on deployment: systemd, K8s, etc.)

Cleanup

Deployments with --expire auto-delete. Manual: rise deployment stop my-app:20241105-1234

Monitoring Database Size

SELECT pg_size_pretty(pg_database_size('rise'));
SELECT tablename, pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename))
FROM pg_tables WHERE schemaname = 'public' ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;

Cost Optimization

  • Database: Right-size instances (start db.t3.medium), auto-scale storage, use Reserved Instances
  • ECR: Lifecycle policies, image compression, cleanup unused repos
  • Compute: Right-size instances/nodes, spot instances for non-critical, auto-scaling

Next Steps

Database

Rise uses PostgreSQL for data storage with SQLX for compile-time verified SQL queries and migrations.

Overview

Schema: Projects, Teams, Deployments, Service Accounts, Users

Schema Management

Rise uses SQLX migrations for database schema versioning.

Migrations Directory

Migrations in ./migrations/ (project root) with timestamp-based names.

Creating Migrations

sqlx migrate add <description>

Creates migrations/<timestamp>_<description>.sql. Edit and add SQL.
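
For example, a migration adding the visibility column used later in this guide might look like this (illustrative):

-- migrations/<timestamp>_add_project_visibility.sql
CREATE TYPE visibility AS ENUM ('public', 'private');
ALTER TABLE projects ADD COLUMN visibility visibility NOT NULL DEFAULT 'public';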

Running Migrations

Development: mise db:migrate (auto-run by mise backend:run)

Production: sqlx migrate run

Migration Best Practices

  1. Test on production copy first
  2. Use CREATE INDEX CONCURRENTLY in PostgreSQL
  3. Avoid blocking operations on large tables
  4. Test rollback procedures

SQLX Compile-Time Verification

.sqlx/ directory contains query metadata for offline builds.

cargo sqlx prepare              # Generate metadata
cargo sqlx prepare --check      # Verify cache

Regenerate after migrations or SQL query changes.

Writing Queries

Use sqlx::query! macro for compile-time verification (syntax, types, columns).

Database Access

Development

Connect to the local PostgreSQL database:

# Using psql
docker-compose exec postgres psql -U rise -d rise

# Or with connection string
psql postgres://rise:rise123@localhost:5432/rise

Common queries:

-- List all projects
SELECT * FROM projects;

-- Show deployment status
SELECT name, status, created_at FROM deployments ORDER BY created_at DESC LIMIT 10;

-- Count users
SELECT COUNT(*) FROM users;

-- Show team membership
SELECT t.name, u.email
FROM teams t
JOIN team_members tm ON t.id = tm.team_id
JOIN users u ON tm.user_id = u.id;

Production

Use read-only access for debugging:

# Connect with read-only user
psql postgres://rise_readonly:password@rds-endpoint:5432/rise

# Limit query results
\set LIMIT 100
SELECT * FROM projects LIMIT :LIMIT;

Never run write queries directly on production. Use migrations instead.

Resetting the Database

Development

Completely reset the development database:

# Remove database volume
docker-compose down -v

# Start fresh
mise backend:run

This deletes all data and re-runs migrations.

Soft Reset (Keep Schema)

Delete data without removing the schema:

# Connect to database
psql postgres://rise:rise123@localhost:5432/rise

# Truncate tables (preserves schema)
TRUNCATE deployments, projects, teams, team_members, users, service_accounts RESTART IDENTITY CASCADE;

Common Patterns

Transactions

Use transactions for multi-step operations:

#![allow(unused)]
fn main() {
let mut tx = pool.begin().await?;

sqlx::query!(
    "INSERT INTO projects (name, owner_type, owner_id) VALUES ($1, $2, $3)",
    name,
    "user",
    user_id
)
.execute(&mut *tx)
.await?;

sqlx::query!(
    "INSERT INTO audit_log (action, user_id) VALUES ($1, $2)",
    "create_project",
    user_id
)
.execute(&mut *tx)
.await?;

tx.commit().await?;
}

Optional Fields

Handle NULL columns:

#![allow(unused)]
fn main() {
let deployment = sqlx::query!(
    r#"
    SELECT id, name, expires_at
    FROM deployments
    WHERE id = $1
    "#,
    deployment_id
)
.fetch_one(&pool)
.await?;

// expires_at is Option<DateTime<Utc>>
if let Some(expiry) = deployment.expires_at {
    println!("Expires at: {}", expiry);
}
}

Custom Types

Use Postgres ENUM types:

CREATE TYPE visibility AS ENUM ('public', 'private');

ALTER TABLE projects ADD COLUMN visibility visibility NOT NULL DEFAULT 'public';

In Rust:

#![allow(unused)]
fn main() {
#[derive(Debug, sqlx::Type)]
#[sqlx(type_name = "visibility", rename_all = "lowercase")]
enum Visibility {
    Public,
    Private,
}
}

Performance Considerations

Indexes

Create indexes for frequently queried columns:

-- Lookups by owner
CREATE INDEX idx_projects_owner ON projects(owner_type, owner_id);

-- Deployment status queries
CREATE INDEX idx_deployments_status ON deployments(status) WHERE status != 'stopped';

-- Expiration cleanup
CREATE INDEX idx_deployments_expires_at ON deployments(expires_at) WHERE expires_at IS NOT NULL;

Connection Pooling

Configure connection pool size in config/production.toml based on load and database limits.

Query Optimization

Use EXPLAIN ANALYZE to optimize slow queries:

EXPLAIN ANALYZE
SELECT * FROM deployments
WHERE project_id = 123 AND status = 'running'
ORDER BY created_at DESC;

Troubleshooting

“Migrations have not been run”

Problem: Backend can’t start because migrations are pending.

Solution:

mise db:migrate

“SQLX cache is out of date”

Problem: Query metadata doesn’t match actual database schema.

Solution:

cargo sqlx prepare

“Connection refused”

Problem: Can’t connect to PostgreSQL.

Solution:

# Check if PostgreSQL is running
docker-compose ps postgres

# Check logs
docker-compose logs postgres

# Restart
docker-compose restart postgres

Deadlocks

Problem: Transactions blocking each other.

Solution:

  • Keep transactions short
  • Always acquire locks in the same order
  • Use SELECT ... FOR UPDATE NOWAIT to fail fast

Next Steps

Testing

Guidelines for testing Rise components.

Overview

Rise uses multiple testing strategies:

  • Unit tests: Test individual functions and modules
  • Integration tests: Test API endpoints and database interactions
  • End-to-end tests: Test full workflows via CLI

Integration Tests

Integration tests are in the tests/ directory and test API endpoints with a real database.

Setup

Integration tests use a test database:

#![allow(unused)]
fn main() {
// tests/common.rs
pub async fn setup_test_db() -> PgPool {
    let database_url = std::env::var("DATABASE_URL")
        .unwrap_or_else(|_| "postgres://rise:rise123@localhost:5432/rise_test".to_string());

    let pool = PgPool::connect(&database_url).await.unwrap();

    // Run migrations
    sqlx::migrate!("../migrations")
        .run(&pool)
        .await
        .unwrap();

    pool
}

pub async fn cleanup_test_db(pool: &PgPool) {
    sqlx::query("TRUNCATE projects, deployments, teams, users CASCADE")
        .execute(pool)
        .await
        .unwrap();
}
}

Example Integration Test

// tests/projects_api.rs
mod common;

use axum::{
    body::Body,
    http::{Request, StatusCode},
};
use tower::ServiceExt; // for `oneshot`

use rise_backend::app;

#[tokio::test]
async fn test_create_project() {
    let pool = common::setup_test_db().await;
    let app = app(pool.clone()).await;

    let response = app
        .oneshot(
            Request::builder()
                .method("POST")
                .uri("/api/v1/projects")
                .header("content-type", "application/json")
                .body(Body::from(r#"{"name": "test-app", "visibility": "public"}"#))
                .unwrap(),
        )
        .await
        .unwrap();

    assert_eq!(response.status(), StatusCode::CREATED);

    common::cleanup_test_db(&pool).await;
}

Best Practices

  • Use test database: Never test against production or development databases
  • Clean up after tests: Truncate tables or use transactions
  • Test authentication: Mock JWT tokens for protected endpoints
  • Test error responses: Verify 400, 401, 404 responses

End-to-End Tests

Test full workflows using the CLI:

#!/bin/bash
# tests/e2e/deploy_workflow.sh

# Login
rise login --email test@example.com --password password

# Create project
rise project create e2e-test --visibility public

# Deploy
rise deployment create e2e-test --image nginx:latest

# Verify deployment
STATUS=$(rise deployment show e2e-test:latest --format json | jq -r '.status')
if [ "$STATUS" != "running" ]; then
  echo "Deployment failed"
  exit 1
fi

# Cleanup
rise project delete e2e-test

Test Data

Development Accounts

Use these pre-configured accounts for testing:

Admin user:

  • Email: admin@example.com
  • Password: password

Test user:

  • Email: test@example.com
  • Password: password

Creating Test Projects

# Create test projects
rise project create test-app-1 --visibility public
rise project create test-app-2 --visibility private

Mock Data

For unit tests, create mock data:

use chrono::Utc;
use uuid::Uuid;

// Project, OwnerType, and Visibility are the backend's model types.
fn mock_project() -> Project {
    Project {
        id: Uuid::new_v4(),
        name: "test-app".to_string(),
        owner_type: OwnerType::User,
        owner_id: Uuid::new_v4(),
        visibility: Visibility::Public,
        created_at: Utc::now(),
        updated_at: Utc::now(),
    }
}
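
A unit test built on this mock might look like the following. It is illustrative only and assumes the Visibility enum shown above.

#[test]
fn mock_project_is_public() {
    let project = mock_project();

    assert_eq!(project.name, "test-app");
    // `matches!` avoids requiring PartialEq on the enum.
    assert!(matches!(project.visibility, Visibility::Public));
}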

Troubleshooting

“Database connection failed”

Ensure PostgreSQL is running:

docker-compose up -d postgres

Set DATABASE_URL:

export DATABASE_URL="postgres://rise:rise123@localhost:5432/rise_test"

“Migration not found”

Run migrations before tests:

cd rise-backend
sqlx migrate run

Tests are slow

Use cargo test --release for faster execution (but slower compilation).

Run specific tests instead of the full suite:

cargo test test_projects

Test Coverage

Measuring Coverage

Use cargo-tarpaulin for code coverage:

# Install tarpaulin
cargo install cargo-tarpaulin

# Run coverage
cargo tarpaulin --out Html --output-dir coverage

Open coverage/index.html to view results.

Coverage Goals

  • Critical paths: 90%+ coverage (authentication, deployments)
  • Utility functions: 80%+ coverage
  • Overall: 70%+ coverage

Next Steps

Troubleshooting

Common issues and solutions for Rise.

Build Issues

Buildpack: CA Certificate Verification Errors

Symptoms:

===> ANALYZING
[analyzer] ERROR: failed to initialize analyzer: validating registry read access to <registry>
ERROR: failed to build: executing lifecycle: failed with status code: 1

Cause: Pack lifecycle container cannot verify SSL certificates when accessing the registry.

Solution:

export SSL_CERT_FILE=/path/to/your/ca-cert.crt
rise deployment create my-app

The Rise CLI then automatically injects the certificate into the pack lifecycle container.

Manual workaround (pack CLI directly):

pack build my-image \
  --builder paketobuildpacks/builder-jammy-base \
  --env SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt \
  --volume $SSL_CERT_FILE:/etc/ssl/certs/ca-certificates.crt:ro

Still failing?

  1. Verify certificate format: openssl x509 -in /path/to/ca-cert.crt -text -noout
  2. Use verbose logging: rise deployment create my-app --verbose
  3. Test registry access: curl --cacert /path/to/ca-cert.crt https://your-registry.example.com/v2/

Railpack: BuildKit Experimental Feature Error

Symptom:

ERROR: failed to build: failed to solve: requested experimental feature mergeop has been disabled on the build server: only enabled with containerd image store backend

Cause: Docker Desktop’s default builder doesn’t support experimental features needed by Railpack.

Solution:

docker buildx create --use

See Building Images for more details on SSL certificates and the managed BuildKit daemon.

Authentication Issues

“Failed to start local callback server”

Cause: Ports 8765, 8766, and 8767 are all in use.

Solution:

  1. Close applications using these ports
  2. Use device flow (if using a compatible OAuth2 provider): rise login --device

“Code exchange failed”

Common causes:

  1. Backend is not running
  2. Dex is not configured properly
  3. Network connectivity issues

Check:

# Backend logs
docker-compose logs rise-backend

# Dex logs
docker-compose logs dex

Token Expired

Symptom: 401 Unauthorized on API requests.

Solution:

rise login

Tokens expire after 1 hour (default).

Service Account Issues

“The ‘aud’ claim is required”

Add --claim aud=<unique-value>:

rise sa create my-project \
  --issuer https://gitlab.com \
  --claim aud=rise-project-my-project \
  --claim project_path=myorg/myrepo

“At least one claim in addition to ‘aud’ is required”

Add authorization claims:

rise sa create my-project \
  --issuer https://gitlab.com \
  --claim aud=rise-project-my-project \
  --claim project_path=myorg/myrepo  # Required

“Multiple service accounts matched this token”

Cause: Ambiguous claim configuration.

Solution: Make claims unique:

# Unprotected branches
rise sa create dev --claim aud=rise-project-app-dev \
  --claim project_path=myorg/app --claim ref_protected=false

# Protected branches
rise sa create prod --claim aud=rise-project-app-prod \
  --claim project_path=myorg/app --claim ref_protected=true

“No service account matched the token claims”

Debug steps:

  1. Check token claims (CI/CD systems usually log them)
  2. Verify exact match (case-sensitive)
  3. Check issuer URL (must match exactly, no trailing slash)
  4. Ensure ALL service account claims are present in the token

“403 Forbidden” (Service Account)

Service accounts can only deploy, not manage projects. Use a regular user account for project operations.

Database Issues

“Connection refused” to PostgreSQL

Check if running:

docker-compose ps postgres
docker-compose logs postgres

Restart:

docker-compose restart postgres

Verify health:

docker-compose exec postgres pg_isready -U rise

Registry Issues

“Access Denied” when pushing (ECR)

Causes:

  1. Controller role can’t assume push role
  2. Push role permissions insufficient
  3. Repository doesn’t exist with correct prefix
  4. STS session policy scope incorrect

Debug:

# Check IAM role trust policy
aws iam get-role --role-name rise-backend-ecr-push

# Check repository exists
aws ecr describe-repositories --repository-names rise/my-app

“Connection refused” to registry (Docker)

Check if running:

docker-compose ps registry
docker-compose logs registry

Restart:

docker-compose restart registry

Images not persisting (Docker)

Check volume:

docker volume ls | grep registry

Warning: docker-compose down -v removes volumes and deletes all images!

Development Environment Issues

“Address already in use” on port 3000

Find process:

lsof -i :3000

Solution: Kill the process or change the port in config/local.toml.

Docker Compose services won’t start

Check logs:

docker-compose logs

Reset (deletes data):

docker-compose down -v
mise backend:deps

Getting Help

  • Check logs: docker-compose logs <service>
  • Verbose CLI output: rise <command> --verbose
  • Backend logs: RUST_LOG=debug cargo run --bin rise -- backend server
  • Report issues: https://github.com/anthropics/rise/issues