Thanks for the response,
I had never thought of it this way when developing a centralized database. This could be a very good choice.
Eric Santoso, Jul 30, 2015
If I understand your scenario correctly, you're sending data to a central repository. In that case you're pretty much stuck with a GUID. Should you decide to go that route, make sure your primary key index is not clustered: if it is, each INSERT will force your table rows to be rearranged, and that is expensive.
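The cost of clustering on a GUID can be sketched in a few lines. This is only an analogy, not SQL Server internals: a sorted Python list stands in for a clustered index, and we count how many inserts land in the middle of the "table" (forcing row movement) rather than at the end.

```python
import bisect
import uuid

def mid_table_inserts(keys):
    """Insert keys into a sorted list (a stand-in for a clustered index)
    and count how many inserts land strictly before the current end,
    i.e. would force existing rows to shift."""
    table, mid_inserts = [], 0
    for k in keys:
        pos = bisect.bisect_left(table, k)
        if pos < len(table):
            mid_inserts += 1
        table.insert(pos, k)
    return mid_inserts

# Sequential integer keys (like an identity column) always append:
print(mid_table_inserts(list(range(1000))))            # 0

# Random GUID keys scatter across the whole key range, so nearly
# every insert lands mid-table:
print(mid_table_inserts([uuid.uuid4().hex for _ in range(1000)]))
```

With sequential keys every insert is an append; with random GUIDs almost every insert falls somewhere in the middle, which is why a clustered index on a GUID primary key suffers.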
Here is another elegant solution I can think of:
Give the identity fields in each database a different seed value and an increment of 100.
Database A will have 1, 101, 201, 301, etc
Database B will have 2, 102, 202, 302, etc
Database C will have 3, 103, 203, 303, etc
With this approach, the last two digits represent the originating database: 01 is A, 02 is B, etc.
Using an increment of 100 allows you to have up to 100 separate databases. You could choose 10 if you know there will never be more than 10 databases.
Anyhow, with an increment of 100 and an int identity column, the maximum number of rows per table will be about 21 million (2.1 billion divided by 100). If that is not enough, you can switch to bigint, which gives you around 9E+16 unique numbers in each table (9E+18 / 100): a huge number.
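The scheme above can be sketched in a few lines. The seed values 1, 2, 3 and the database names A, B, C come from the example; everything else here (function names, the lookup dict) is just illustrative.

```python
INCREMENT = 100
SEEDS = {"A": 1, "B": 2, "C": 3}  # one distinct seed per database

def identity_sequence(seed, count, increment=INCREMENT):
    """First `count` identity values a database with this seed hands out."""
    return [seed + i * increment for i in range(count)]

def originating_database(row_id, increment=INCREMENT):
    """The last two digits (id mod 100) identify the source database."""
    remainder = row_id % increment
    for name, seed in SEEDS.items():
        if seed == remainder:
            return name
    raise ValueError(f"no database registered for seed {remainder}")

print(identity_sequence(SEEDS["A"], 4))   # [1, 101, 201, 301]
print(originating_database(203))          # C

# Capacity check: a 32-bit int tops out at 2,147,483,647, so each
# database gets roughly 2.1e9 / 100, about 21 million rows.
print(2_147_483_647 // INCREMENT)         # 21474836
```

Because the sequences never overlap, rows from all databases can be merged into the central repository without key collisions, and the origin of any row can be recovered from its id alone.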
A disadvantage is that the identity definitions in the data model differ from database to database.
I don't see a reason why this shouldn't work. However, I'm not sure I would recommend it, as it seems fragile.
If anyone has thoughts about this approach, please let Eric and me know.
Hope this helps.
Jack Poorte, Jul 22, 2015
Sorry, there are mistakes in the picture regarding the locations of A, B, and C.
Eric Santoso, Jul 22, 2015