FastAPI + PostgreSQL RLS: Multi-Tenant Done Right

When I started building LucraHub — a management platform for Amazon Brazil sellers — the first architectural decision that kept me up at night was: how do I isolate data between sellers without duplicating infrastructure?

The classic SaaS multi-tenant scenario: dozens (or hundreds) of companies using the same system, but each one seeing only their own data. There are basically three approaches:

| Strategy            | Isolation | Infra cost | Complexity          |
|---------------------|-----------|------------|---------------------|
| Database per tenant | Total     | High       | High (operational)  |
| Schema per tenant   | High      | Medium     | Medium              |
| Shared table + RLS  | High      | Low        | Low (after setup)   |

We chose Row Level Security (RLS) in PostgreSQL with shared tables. In this post I’ll show you exactly how we implemented it in LucraHub from scratch, with real code.

Context: LucraHub has a microservices architecture — each service has its own PostgreSQL database. RLS is applied per service individually; there’s no “central tenant database.” This scales well and avoids a single point of failure.

What is Row Level Security?

Row Level Security is a native PostgreSQL feature that lets you define per-row access policies directly in the database. Unlike application-level filters, the database physically refuses to return rows that don’t belong to the active tenant — even if a developer forgets to filter.

The core idea is simple: each row has a tenant_id, and PostgreSQL compares that field against a session variable we set before each query. No match — the row is invisible.
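In a psql session, the mechanic looks like this (the table and UUID are illustrative; a real schema follows in the next section):

```sql
BEGIN;
-- identify the tenant for this transaction
SET LOCAL app.current_tenant = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa';

-- only rows whose tenant_id matches the setting come back;
-- every other tenant's rows are simply invisible
SELECT id, asin FROM products;
COMMIT;
```

After COMMIT the setting is gone, so the next transaction on the same connection starts with no tenant at all.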

Initial database setup

1. Table structure

Every LucraHub model inherits from a mixin with a required tenant_id:

-- Example: products table
CREATE TABLE products (
    id          UUID        PRIMARY KEY DEFAULT gen_random_uuid(),
    tenant_id   UUID        NOT NULL,
    asin        TEXT        NOT NULL,
    title       TEXT,
    price       NUMERIC(10, 2),
    created_at  TIMESTAMPTZ DEFAULT now()
);

-- Composite index: tenant + most common query fields
CREATE INDEX idx_products_tenant
    ON products (tenant_id, created_at DESC);

2. Enabling RLS and creating the policy

-- Enable RLS on the table
ALTER TABLE products ENABLE ROW LEVEL SECURITY;

-- Access policy: only see rows belonging to the current tenant
CREATE POLICY tenant_isolation ON products
    USING (
        tenant_id = current_setting('app.current_tenant')::UUID
    );

-- INSERT policy: ensures new records belong to the correct tenant
CREATE POLICY tenant_insert ON products
    FOR INSERT WITH CHECK (
        tenant_id = current_setting('app.current_tenant')::UUID
    );

-- The app user CANNOT bypass RLS
-- (only superuser/BYPASSRLS can)
ALTER TABLE products FORCE ROW LEVEL SECURITY;

Important detail: FORCE ROW LEVEL SECURITY is essential. Without it, the table owner (usually the same role as the application) can bypass the policies. Always use it.

Integrating with FastAPI

The flow is: request arrives → middleware extracts tenant from JWT → before each query, we inject SET LOCAL app.current_tenant = '...' into the PostgreSQL session. RLS handles the rest.

Request (JWT) → Auth Middleware → get_db() → SET LOCAL app.current_tenant → Query (RLS filters)

SQLAlchemy model with tenant mixin

import uuid
from sqlalchemy import Column, Numeric, String
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import DeclarativeBase


class Base(DeclarativeBase):
    pass


class TenantMixin:
    """Mixin applied to all LucraHub models."""

    # Plain (unannotated) Column: with DeclarativeBase, a non-Mapped[]
    # type annotation on a mapped attribute is an error in SQLAlchemy 2.0
    tenant_id = Column(
        UUID(as_uuid=True),
        nullable=False,
        index=True,
        # no default: the app sets it explicitly on insert, and the
        # RLS WITH CHECK policy rejects any row whose tenant_id doesn't match
    )


class Product(TenantMixin, Base):
    __tablename__ = "products"

    id       = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    asin     = Column(String, nullable=False)
    title    = Column(String)
    price    = Column(Numeric(10, 2))

Dependency: injecting the tenant into the session

from typing import AsyncGenerator
from uuid import UUID

from sqlalchemy.ext.asyncio import (
    AsyncSession, async_sessionmaker, create_async_engine
)
from sqlalchemy import text

from app.config import settings

engine = create_async_engine(
    settings.DATABASE_URL,
    pool_size=10,
    max_overflow=5,
    pool_pre_ping=True,
)

AsyncSessionLocal = async_sessionmaker(
    engine,
    expire_on_commit=False,
    class_=AsyncSession,
)


async def get_db(
    tenant_id: UUID,           # comes from JWT via Depends()
) -> AsyncGenerator[AsyncSession, None]:
    """
    Dependency that yields a session already configured
    for the correct tenant. Postgres RLS handles the rest.
    """
    async with AsyncSessionLocal() as session:
        # set_config(name, value, is_local := true) is the SET LOCAL
        # equivalent that accepts bind parameters; a bare
        # "SET LOCAL x = :tid" statement cannot be parameterized
        await session.execute(
            text("SELECT set_config('app.current_tenant', :tid, true)"),
            {"tid": str(tenant_id)},
        )
        yield session

Why SET LOCAL and not SET? SET LOCAL is scoped to the transaction: when the transaction ends, the variable disappears. With connection pooling this is critical, because it guarantees a tenant never “leaks” into the next query from another user reusing the same connection. One caveat: the bare SET LOCAL statement does not accept bind parameters, so from application code the safe equivalent is the function form, SELECT set_config('app.current_tenant', $1, true), where the third argument true means “transaction-local”.
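The leak scenario is easiest to see with a toy model of a pooled connection (pure Python, no database; it only illustrates the two scoping rules):

```python
class FakeConnection:
    """Toy model of a pooled PostgreSQL connection, for illustration only."""

    def __init__(self):
        self.session_vars = {}  # survives across transactions (SET)
        self.txn_vars = {}      # cleared at COMMIT/ROLLBACK (SET LOCAL)

    def set(self, key, value):        # models: SET app.current_tenant = ...
        self.session_vars[key] = value

    def set_local(self, key, value):  # models: SET LOCAL app.current_tenant = ...
        self.txn_vars[key] = value

    def current_setting(self, key):
        # a transaction-local value shadows the session-level one
        return self.txn_vars.get(key, self.session_vars.get(key))

    def end_transaction(self):
        self.txn_vars.clear()         # SET LOCAL values vanish here


# Tenant A's request uses session-scoped SET: the value survives
# the transaction and would leak to whoever gets this connection next
conn = FakeConnection()
conn.set("app.current_tenant", "tenant-a")
conn.end_transaction()
leaked = conn.current_setting("app.current_tenant")  # still "tenant-a"!

# Same flow with SET LOCAL: nothing survives the transaction
conn2 = FakeConnection()
conn2.set_local("app.current_tenant", "tenant-a")
conn2.end_transaction()
safe = conn2.current_setting("app.current_tenant")   # None
```

This is exactly why the dependency uses the transaction-scoped form: a connection returned to the pool carries no tenant state.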

Extracting the tenant from the JWT

from typing import AsyncGenerator
from uuid import UUID

from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
from sqlalchemy.ext.asyncio import AsyncSession
import jwt

from app.config import settings
# get_db comes from the database module shown in the previous section

oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/auth/token")


async def get_current_tenant(
    token: str = Depends(oauth2_scheme),
) -> UUID:
    try:
        payload = jwt.decode(
            token,
            settings.JWT_SECRET,
            algorithms=["HS256"],
        )
        tenant_id = payload.get("tenant_id")
        if not tenant_id:
            raise ValueError
        return UUID(tenant_id)

    except (jwt.InvalidTokenError, ValueError):
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid or expired token",
        )


# Combined dependency: tenant_id + configured session
async def get_tenant_db(
    tenant_id: UUID = Depends(get_current_tenant),
) -> AsyncGenerator[AsyncSession, None]:
    async for db in get_db(tenant_id):
        yield db

Using it in endpoints

from uuid import UUID

from fastapi import APIRouter, Depends
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

from app.auth.dependencies import get_current_tenant, get_tenant_db
from app.models import Product
from app.schemas import ProductCreate  # Pydantic input schema

router = APIRouter(prefix="/products")


@router.get("/")
async def list_products(
    db: AsyncSession = Depends(get_tenant_db),
):
    # Zero manual filtering: RLS guarantees isolation
    result = await db.execute(select(Product))
    return result.scalars().all()


@router.post("/")
async def create_product(
    payload: ProductCreate,
    db: AsyncSession = Depends(get_tenant_db),
    tenant_id: UUID = Depends(get_current_tenant),
):
    product = Product(
        **payload.model_dump(),
        tenant_id=tenant_id,  # explicit on insert
    )
    db.add(product)
    await db.commit()
    return product

Gotchas that caught us off guard

Background tasks and workers

Celery, ARQ, async jobs — any processing outside the request/response cycle has no JWT. In those cases, the tenant_id must come from the task payload, and you inject it manually with SET LOCAL before queries. Never rely on RLS “automatically” in workers.
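A sketch of what that looks like in an ARQ/Celery-style task (function names and payload shape are illustrative, not LucraHub's real task code). The two essential steps: validate the tenant_id from the payload, then issue the transaction-scoped setting via set_config(..., true), the bind-parameter-friendly form of SET LOCAL, before any query runs:

```python
from uuid import UUID


def tenant_guard(raw_tenant_id: str) -> str:
    """Validate a tenant id coming from a task payload.

    Worker payloads are not JWTs: nothing upstream guarantees the
    value is even a UUID, so validate before touching the database.
    """
    return str(UUID(raw_tenant_id))  # raises ValueError on garbage


async def process_sync_task(session_factory, payload: dict) -> None:
    """Hypothetical worker task body: the tenant comes from the payload."""
    from sqlalchemy import text  # imported here so the sketch above is dependency-free

    tid = tenant_guard(payload["tenant_id"])
    async with session_factory() as session:
        # same transaction-scoped setting the HTTP dependency uses
        await session.execute(
            text("SELECT set_config('app.current_tenant', :tid, true)"),
            {"tid": tid},
        )
        ...  # queries here are RLS-filtered exactly like request queries
```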

Migrations with Alembic

Alembic uses a different role (usually superuser or BYPASSRLS). That’s fine for running migrations, but it means your seed/fixtures scripts can silently bypass RLS if you use the same connection string. Use separate roles per responsibility.

Administrative queries

Internal operations — consolidated reports, billing, support — need to see data across all tenants. For that, create a separate role with BYPASSRLS and never expose it in the main application.

-- Application role: subject to RLS
CREATE ROLE lucrahub_app LOGIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO lucrahub_app;

-- Admin role: bypasses RLS
CREATE ROLE lucrahub_admin LOGIN BYPASSRLS;
GRANT ALL ON ALL TABLES IN SCHEMA public TO lucrahub_admin;

Testing isolation

Just as important as implementing it is having tests that prove isolation works. At LucraHub we have a dedicated suite for this:

import pytest
from uuid import uuid4

from sqlalchemy import select

from app.models import Product


@pytest.mark.asyncio
async def test_tenant_cannot_see_other_tenant_data(db_factory):
    tenant_a = uuid4()
    tenant_b = uuid4()

    # Create a product under tenant A
    async with db_factory(tenant_a) as db:
        product = Product(
            asin="B08XYZ",
            title="Tenant A Product",
            tenant_id=tenant_a,
        )
        db.add(product)
        await db.commit()

    # Tenant B must not see tenant A's product
    async with db_factory(tenant_b) as db:
        result = await db.execute(select(Product))
        products = result.scalars().all()

    assert len(products) == 0, "Tenant B should not see Tenant A's data!"

Performance: does RLS add overhead?

The short answer: negligible, if your indexes are right.

PostgreSQL evaluates the RLS policy as an additional predicate on the query — it’s equivalent to you manually adding WHERE tenant_id = $1. With an index on (tenant_id, sort_field), the planner uses an index scan as normal.

At LucraHub, after adding the correct composite indexes, the overhead observed on the most frequent queries was under 2ms compared to the no-RLS baseline.

Conclusion

Row Level Security is one of PostgreSQL’s most underrated features. When implemented well, it turns the database into an active guardian of your data — eliminating an entire class of bugs where a developer forgets to filter by tenant.

At LucraHub, the FastAPI + RLS combo gave us:

  • ✅ Isolation enforced by the database, not the application
  • ✅ Clean endpoint code, with no WHERE tenant_id = scattered everywhere
  • ✅ Performance equivalent to normal queries with correct indexes
  • ✅ Clear separation of concerns: authentication in FastAPI, data authorization in Postgres

The setup cost is real — migrations, roles, isolation tests — but it pays dividends as the product grows. Absolutely worth it.