
SQL Database Pattern - Implement Centralized Data


I have read the SQL + Database Design Pattern Framework, but at this point I am still looking for the best way to centralize data.

One of the points made in the SQL Database Pattern Framework is that of "user-friendly data", which suggests using an auto-increment id rather than a GUID.

I agree with this, because I think GUIDs would be quite difficult to filter on.
I want to ask: how can I centralize the data while keeping the concept of "user-friendly data"?
I have also included images for easier reference.

Best Regards,
Eric Santoso

Eric Santoso, Jul 22, 2015
Reply 1
Hi Jack,

Thanks for the response,
I never thought of this approach for developing a centralized database. This could be a very good choice.
Thanks Jack.

Best Regards,
Eric Santoso
Eric Santoso, Jul 30, 2015
Reply 2
Hello Eric:

If I understand your scenario, you're sending data to a central repository. In that case you're pretty much stuck with a GUID. Should you decide to go that route, be sure your primary key index is not clustered. Otherwise, because GUIDs are generated in random order, each INSERT may force your table rows to be rearranged, and this is expensive.


Here is another elegant solution I can think of:

Give the identity fields in each database a different seed value and an increment of 100.
For example
  • Database A: seed 1, increment 100 (all tables)
  • Database B: seed 2, increment 100 (all tables)
  • Database C: seed 3, increment 100 (all tables)
This way, each database will have its own unique identity values.

Database A will have 1, 101, 201, 301, etc
Database B will have 2, 102, 202, 302, etc
Database C will have 3, 103, 203, 303, etc
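
As a sketch of the seed/increment idea (the table and column names here are hypothetical, not from Eric's schema), in SQL Server the IDENTITY property takes the seed and increment directly:

```sql
-- Database A: every table seeds at 1 and increments by 100.
CREATE TABLE dbo.Customer (
    CustomerId INT IDENTITY(1, 100) NOT NULL PRIMARY KEY,
    Name       NVARCHAR(100) NOT NULL
);

-- Database B would use the same definition with IDENTITY(2, 100),
-- Database C with IDENTITY(3, 100), and so on.
```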

With this approach, the last two digits represent the originating database: 01 is A, 02 is B, etc.
Using an increment of 100 allows you to have up to 100 separate databases. You could choose 10 if you know there will never be more than 10 databases.
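
Since the last two digits encode the source, the originating database can be recovered with a modulo. A sketch, assuming the seeds above and the hypothetical table from before:

```sql
-- Id % 100 yields the seed, and therefore the source database:
-- 1 => Database A, 2 => Database B, 3 => Database C.
SELECT CustomerId,
       CustomerId % 100 AS SourceDatabaseId
FROM   dbo.Customer;
```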

Anyhow, with an increment of 100 and an int identity column, the maximum number of rows per table will be about 21 million (2.1 billion divided by 100). If this is not enough, you can change to bigint, which gives you around 9E+16 unique numbers in each table (9E+18 / 100): a huge number.
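
The capacity arithmetic can be checked directly against the int and bigint maximums:

```sql
-- Rows available per database when the identity space is split 100 ways.
SELECT
    2147483647 / 100          AS MaxRowsPerDbInt,     -- ~21.4 million
    9223372036854775807 / 100 AS MaxRowsPerDbBigint;  -- ~9.2E+16
```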

A disadvantage is that for each database the data model's identity definitions are different. 

I don't see a reason why this should not work. However, I am not sure I would recommend it, as it seems fragile.
If anyone has thoughts about this approach, please let Eric and me know.

Hope this helps.

Jack Poorte, Jul 22, 2015
Reply 3
Sorry, there are mistakes in the picture regarding the locations of A, B, and C.

Eric Santoso
Eric Santoso, Jul 22, 2015