# Multi-Tenant Operator (MTO)
Multi-Tenant Operator (MTO) is the orchestration engine that enforces tenancy within SCO. It manages the full lifecycle of projects — creating namespaces, applying quotas, configuring network isolation, and binding RBAC — from a single Tenant custom resource.
## Role in SCO
Every project in SCO maps to a Tenant in MTO. When a consumer creates a project, SCO creates a Tenant resource on the management cluster. MTO takes that Tenant and drives all the downstream provisioning:
```text
Project claim (tenant.cloud.stakater.com/v1 Project)
        ↓
SCO controller creates Tenant (tenantoperator.stakater.com/v1alpha1)
        ↓
MTO reconciles:
  ├── Namespaces (with standard labels and annotations)
  ├── ResourceQuotas and LimitRanges
  ├── NetworkPolicies (inter-namespace isolation)
  ├── RBAC (RoleBindings scoped to the tenant)
  └── TemplateGroupInstances (standard tooling deployed into namespaces)
```
Platform operators configure MTO's behaviour through IntegrationConfig and namespace Templates. Consumers interact only with projects and claims; MTO is entirely internal to the platform.
## Core Concepts
### Tenant
A Tenant resource represents a single isolated unit of tenancy — an SCO project. It declares:
- Which namespaces belong to this tenant
- What resource quotas apply
- Which users and groups have access, and at what privilege level
- Which namespace templates to apply
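As a sketch, a Tenant covering those four declarations might look like the following. Field names such as `owners`, `editors`, and `templateInstances` are illustrative assumptions; the exact schema varies between MTO API versions, so consult the MTO API reference before copying this.

```yaml
# Hypothetical Tenant for the proj-frontend project.
# Field names are illustrative; check the MTO API reference for the real schema.
apiVersion: tenantoperator.stakater.com/v1alpha1
kind: Tenant
metadata:
  name: proj-frontend
spec:
  owners:                        # users with the highest privilege level
    users:
      - alice@example.com
  editors:                       # groups granted edit access
    groups:
      - frontend-devs
  quota: medium                  # references a platform-defined quota profile
  namespaces:                    # namespaces belonging to this tenant
    - proj-frontend-dev
    - proj-frontend-staging
    - proj-frontend-prod
  templateInstances:             # namespace templates to apply
    - spec:
        template: project-defaults
```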
MTO watches Tenant resources and reconciles the full set of downstream Kubernetes objects continuously. If a namespace is deleted out of band, MTO recreates it. If a quota is manually edited, MTO reverts it to the declared value.
### Namespace Templates
Template resources define reusable sets of objects to inject into every namespace belonging to a tenant. Platform operators use templates to standardise tooling across all projects:
- Network policies (default deny, allow within tenant)
- LimitRanges (per-container defaults)
- Standard RBAC roles
- Monitoring configuration
- Pre-provisioned secrets (image pull credentials, operator configuration)
Templates are versioned and can be selectively applied to specific tenant types.
### IntegrationConfig
IntegrationConfig is the global configuration resource for MTO. It controls platform-wide behaviour:
- Which namespace prefixes are reserved
- Default quota profiles
- Which cluster-scoped resources tenants can access
- Webhook enforcement settings
- Privileged namespaces (excluded from tenant management)
## Project Isolation Model
MTO enforces isolation at multiple levels simultaneously.
### Namespace isolation
Each project namespace carries MTO-managed labels that identify its owning tenant. Network policies generated by MTO allow traffic within the tenant's namespaces and deny traffic from other tenants by default.
```text
Tenant: proj-frontend
Namespaces: [proj-frontend-dev, proj-frontend-staging, proj-frontend-prod]
        ↓
NetworkPolicy in each namespace:
  - Allow: ingress from same tenant namespaces
  - Deny:  ingress from all other tenant namespaces
  - Allow: egress to platform services (DNS, monitoring)
```
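A generated policy in one tenant namespace might resemble this sketch. The tenant label key (`stakater.com/tenant` here) and the policy name are assumptions; the actual label MTO applies depends on the operator version.

```yaml
# Sketch of an MTO-generated intra-tenant ingress policy.
# The label key stakater.com/tenant is an assumed example.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-tenant        # illustrative name
  namespace: proj-frontend-dev
spec:
  podSelector: {}                # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Admit traffic only from namespaces carrying the same tenant label.
        - namespaceSelector:
            matchLabels:
              stakater.com/tenant: proj-frontend
```

Because the `namespaceSelector` matches on the tenant label rather than on namespace names, adding a namespace to the tenant automatically extends the allowed set without rewriting any policy.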
### Resource quotas
Every tenant has quota applied at the namespace level. Quota is declared in the Tenant spec and propagated by MTO to ResourceQuota objects in each namespace:
```yaml
quota:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    persistentvolumeclaims: "10"
    pods: "50"
```
Consumers cannot exceed these limits, regardless of what they submit through their project's API endpoint.
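In each namespace, the propagated result is an ordinary Kubernetes ResourceQuota object mirroring the Tenant spec. The object name below is illustrative; MTO chooses the actual name.

```yaml
# Sketch of the ResourceQuota MTO would create in one tenant namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota             # illustrative; MTO picks the real name
  namespace: proj-frontend-dev   # repeated in every tenant namespace
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    persistentvolumeclaims: "10"
    pods: "50"
```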
### RBAC scoping
MTO creates RoleBinding resources in tenant namespaces based on the user and group membership declared in the Tenant. Users are granted access only within their tenant's namespaces. No cross-tenant access is possible because the bindings are namespace-scoped and MTO prevents tenants from escalating their own privileges.
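For example, an owner mapped to the built-in `admin` ClusterRole would receive a namespace-scoped binding along these lines in each of the tenant's namespaces (binding name and user are illustrative):

```yaml
# Sketch of an MTO-generated RoleBinding: a ClusterRole referenced from a
# RoleBinding grants its permissions only within this one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-owner             # illustrative name
  namespace: proj-frontend-dev   # repeated in every tenant namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: alice@example.com
```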
## Tenant Lifecycle
### Creation
When a project claim is applied, SCO generates a Tenant resource. MTO responds by:
- Creating all declared namespaces with standard labels
- Applying `ResourceQuota` and `LimitRange` to each namespace
- Applying `NetworkPolicy` resources for intra-tenant and external-egress rules
- Creating `RoleBinding` resources for declared users and groups
- Applying all referenced namespace templates (deploying standard tooling)
The project is ready when all MTO-managed resources reach a healthy state.
### Updates
Tenant resources are continuously reconciled. Platform operators can update quotas, add namespaces, or modify RBAC by updating the Tenant spec. MTO applies changes and corrects any drift between the declared state and the cluster state.
Consumers trigger updates by modifying their project claim. SCO translates changes to the Tenant spec and MTO propagates them.
### Deletion
When a project is deleted, MTO removes all associated namespaces and their contained resources in the correct order. Quota, network policies, and RBAC are cleaned up automatically. No manual cleanup is required.
## Distributed Mode
MTO supports a distributed mode where a management cluster controls tenancy on spoke clusters. In this configuration:
- `Tenant` resources are declared on the management cluster
- MTO synchronises namespaces, quotas, network policies, and RBAC to the target spoke cluster
- Platform operators manage all tenancy from a single control plane
SCO uses this capability when provisioning hosted OpenShift clusters as a service. The hosted cluster is registered as an MTO spoke, and project-level tenancy on that cluster is managed centrally.
## Platform Operator Configuration
### IntegrationConfig
```yaml
apiVersion: tenantoperator.stakater.com/v1alpha1
kind: IntegrationConfig
metadata:
  name: tenant-operator-config
  namespace: stakater-tenant-operator
spec:
  tenantRoles:
    default:
      owner:
        clusterRoles:
          - admin
      editor:
        clusterRoles:
          - edit
      viewer:
        clusterRoles:
          - view
  quota:
    validRange:
      startRange: "0"
      endRange: "100Gi"
  namespaceAccessPolicy:
    deny:
      privilegedNamespaces:
        groups:
          - system:masters
        namespaces:
          - ^default$
          - ^kube-.*
          - ^openshift.*
          - ^stakater-.*
```
### Namespace Template example
```yaml
apiVersion: tenantoperator.stakater.com/v1alpha1
kind: Template
metadata:
  name: project-defaults
resources:
  manifests:
    - apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: default-deny-ingress
      spec:
        podSelector: {}
        policyTypes:
          - Ingress
    - apiVersion: v1
      kind: LimitRange
      metadata:
        name: container-limits
      spec:
        limits:
          - type: Container
            default:
              cpu: 500m
              memory: 512Mi
            defaultRequest:
              cpu: 100m
              memory: 128Mi
```
## What's Next?
- Project Architecture — How projects map to tenants
- Creating Projects — Consumer guide to creating projects
- Architecture — Overall platform architecture