Hi Derrick,
Thanks for your response!
rfcs
I’m not sure what you mean by this, as I’m not requesting a code change but rather, as you mentioned, just some information/insights. I’m trying to figure out what would happen, from Strapi’s perspective, if I made those changes in the PG DB.
composite unique constraints
As Strapi works with surrogate primary keys, the way to avoid duplicates in place of a composite natural PK would be a composite unique constraint. For example, consider a system where we have multiple tenants, and each tenant has classes:
class:
| Key | Column |
|---|---|
| PK | class_id |
|  | tenant_id |
|  | class_name |
For each tenant, class_name should be unique. A natural candidate key here is therefore the pair (tenant_id, class_name), and since we use a surrogate PK instead, the matching composite unique constraint covers that same pair. To elaborate further:
In the following table, you can have:
| class_id | tenant_id | class_name |
|---|---|---|
| 1 | t1 | “morning yoga” |
| 2 | t2 | “morning yoga” |
but if you then try to add (3, t1, “morning yoga”), it should throw an error, despite the row having a unique PK.
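To make this concrete, here is roughly what that would look like in Postgres DDL. This is just a sketch following the example table above; the table/column names and types are illustrative, not Strapi’s generated schema:

```sql
-- Surrogate PK plus a composite unique constraint on the natural key
CREATE TABLE classes (
    class_id   serial PRIMARY KEY,
    tenant_id  text NOT NULL,
    class_name text NOT NULL,
    CONSTRAINT classes_tenant_class_name_uq UNIQUE (tenant_id, class_name)
);

-- These two rows are fine: same class_name, different tenants
INSERT INTO classes (tenant_id, class_name) VALUES ('t1', 'morning yoga');
INSERT INTO classes (tenant_id, class_name) VALUES ('t2', 'morning yoga');

-- This one violates the composite unique constraint and fails,
-- even though it would get its own unique class_id
INSERT INTO classes (tenant_id, class_name) VALUES ('t1', 'morning yoga');
```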
postgres specifics
I understand that you want to keep the same functionality across DBs, but Postgres does have some features that make it very advanced - row-level security being one of them.
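Just to illustrate what I mean (the table, policy, and setting names here are hypothetical, and this is not how Strapi wires things up today), RLS-based tenant isolation could look something like:

```sql
-- Restrict each session to rows belonging to its own tenant
ALTER TABLE classes ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON classes
    USING (tenant_id = current_setting('app.current_tenant'));

-- The application sets the tenant for the connection before querying
SET app.current_tenant = 't1';

-- Only returns rows where tenant_id = 't1'
-- (assuming the connecting role is not the table owner, which bypasses RLS by default)
SELECT * FROM classes;
```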
How would you recommend that we expand upon the Postgres-specific database layer? If not to contribute back to the community, then at least for our own use.
sharding
My comment about scalability was not about read replicas, but rather about partitioning of tables, or sharding. Having tenant_id as part of the PK would cause the sharding/partitioning mechanism to keep all of a tenant’s data in a single shard. I believe MariaDB has a similar mechanism, but I’m not sure.
Should tenant data be partitioned across shards, each transaction, join, or FK constraint update/cascade would require network I/O between the partitions/shards that contain the data.
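For reference, a minimal sketch of declarative partitioning by tenant in Postgres (names are again illustrative, and this reuses the example table from above). Note that Postgres requires the partition key to be part of any PK or unique constraint on a partitioned table, which is part of why tenant_id ends up in those keys:

```sql
-- Partition the table by tenant so each tenant's rows stay co-located
CREATE TABLE classes (
    class_id   bigserial,
    tenant_id  text NOT NULL,
    class_name text NOT NULL,
    -- The partition key (tenant_id) must be included in the PK and unique constraints
    PRIMARY KEY (tenant_id, class_id),
    UNIQUE (tenant_id, class_name)
) PARTITION BY LIST (tenant_id);

-- One partition per tenant; queries scoped to a tenant touch a single partition
CREATE TABLE classes_t1 PARTITION OF classes FOR VALUES IN ('t1');
CREATE TABLE classes_t2 PARTITION OF classes FOR VALUES IN ('t2');
```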