Mar 24, 2014
Spark 4.5 is a data-first model. You start with a data model that adheres to some simple conventions. The domain model and highly optimized data access code can then be created quickly and easily (in particular with the PRO package, which includes a code generator).
In the case of an existing database, it also needs to adhere to the conventions, or else it will not work. For example, the primary key for each table must be an integer identity column named 'Id', and there are a couple of other simple naming standards. That is all.
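As a concrete illustration of the convention, a conforming table might look like the sketch below (SQL Server syntax assumed; the table and column names other than 'Id' are hypothetical examples, not part of SPARK itself):

```sql
-- A minimal convention-conforming table: singular name, and an
-- integer identity primary key named exactly 'Id'.
-- "Artist" and "Name" are illustrative; only the 'Id' rule is
-- the documented convention.
CREATE TABLE dbo.Artist
(
    Id   INT IDENTITY(1,1) PRIMARY KEY,  -- integer identity key named 'Id'
    Name NVARCHAR(100) NOT NULL
);
```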
Hope this helps.
Jun 18, 2014
From your answer, I guess it is better to use the Spark Framework on a brand NEW project started from scratch.
For example, if we have an existing table named "Artists" instead of "Artist", with a primary key of "ArtistID" instead of "Id", Spark will NOT work in this case, correct?
As far as I know, many developers work against existing databases and can't change anything to conform to the conventions, because other programs may be using the same tables.
In this case, is it possible to build a "MAP" between the existing database and Spark? If so, does this mapping layer hurt performance?
Jun 26, 2014
SPARK is an architecture and, as such, is difficult to apply to an existing system or architecture.
Specifically, the data model must adhere to some conventions, one of which is that each table has an integer identity key named 'Id'. Also, the ORM is highly optimized and replaces the Entity Framework entirely.
You can apply certain aspects of SPARK to existing projects, but it is most useful in 'green field' projects that need a sound architecture and need to be built quickly and effectively.
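For the legacy "Artists"/"ArtistID" case raised above, one common database technique for bridging a naming mismatch is a view that exposes the legacy table under convention-conforming names. This is a general SQL sketch, not a documented SPARK feature; whether SPARK's generator can bind to a view at all is an assumption you would need to verify:

```sql
-- Hypothetical mapping layer: expose the legacy "Artists" table
-- as "Artist" with its key renamed to 'Id'. The column list beyond
-- ArtistID is illustrative.
CREATE VIEW dbo.Artist AS
SELECT
    a.ArtistID AS Id,   -- legacy key aliased to the expected 'Id'
    a.Name
FROM dbo.Artists AS a;
```

A simple single-table view like this is updatable in SQL Server, so inserts and updates through it still reach the legacy table, and the performance cost is negligible because the view adds no extra query work.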
Hope this helps.