
Spark platform: how to perform a JOIN and get the results in a strongly typed collection?

Are there any examples on this in the Spark application?
May I know what's the best practice, and how to do so using Spark?
Parity Bit, Sep 17, 2013
Reply 1
Parity Bit: 

Great question. You probably discovered that the Spark sample application (called Art Shop) does not use a single JOIN!  This is by design: it keeps the application code very simple and it improves performance.

Spark suggests that, rather than JOINing, you cache the tables you would otherwise join in. This is true not only for lookup tables, but for others as well. Here is an example:

Suppose you're building a marketplace website (like eBay) where products can be sold.  When a user searches for products, the page displays a list of products with information about each seller, such as name, number of sales, and a star rating.  To avoid thousands and thousands of JOINs you can cache the seller data, so you only retrieve the product records from the database.  The seller records are stored in a Dictionary on the server and are then resolved against the SellerId in the product records easily and quickly. Again, this is extremely fast.

What is nice is that you don't have to cache all data for all sellers. All you need are the SellerId, Name, NumberSold, and StarRating. These are partial records, also called 'Ghost Records' (Ghost is actually a design pattern).
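To make the idea concrete, here is a minimal sketch of such a cache. The type and member names (SellerGhost, Product, SellerCache) are made up for illustration; Spark itself does not prescribe them:

```csharp
using System;
using System.Collections.Generic;

// Ghost record: only the seller fields the search page actually needs.
public class SellerGhost
{
    public int SellerId { get; set; }
    public string Name { get; set; }
    public int NumberSold { get; set; }
    public double StarRating { get; set; }
}

// Product record as retrieved from the database (no JOIN required).
public class Product
{
    public string Title { get; set; }
    public int SellerId { get; set; }
}

public static class SellerCache
{
    // Keyed by SellerId for O(1) lookups; loaded once, e.g. at startup.
    private static readonly Dictionary<int, SellerGhost> Cache =
        new Dictionary<int, SellerGhost>();

    public static void Load(IEnumerable<SellerGhost> sellers)
    {
        foreach (var s in sellers)
            Cache[s.SellerId] = s;
    }

    // Resolves a product's SellerId against the in-memory cache.
    public static SellerGhost Resolve(int sellerId)
    {
        SellerGhost ghost;
        return Cache.TryGetValue(sellerId, out ghost) ? ghost : null;
    }
}
```

When rendering the search results, each product's seller details come from `SellerCache.Resolve(product.SellerId)` rather than from a JOIN, so the database query touches only the Product table.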

So, Spark strongly recommends caching over JOINs. 


Suppose this does not work for you. Perhaps the number of records is prohibitive. How do you handle this?
First of all, the Spark platform comes with an API that allows you to execute any SQL (including JOINs) and returns dynamic objects that can hold any data type.  This works very well, but the problem is that these objects are not type safe, which increases the risk of run-time errors.  And because they are not type safe, IntelliSense does not work in Visual Studio either, which is another disadvantage.
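The risk is easy to demonstrate. Below the dynamic row is simulated with an ExpandoObject (Spark's actual dynamic result type may differ); the point is that a misspelled member name compiles cleanly and only fails at run time:

```csharp
using System;
using System.Dynamic;

// Simulate one row as a dynamic query API might return it.
dynamic row = new ExpandoObject();
row.Title = "Mont Sainte-Victoire";
row.FirstName = "Paul";

Console.WriteLine(row.Title);  // fine: the member exists

try
{
    // Typo: 'Titel' instead of 'Title'. The compiler accepts it and
    // IntelliSense cannot warn you; it blows up only when executed.
    Console.WriteLine(row.Titel);
}
catch (Microsoft.CSharp.RuntimeBinder.RuntimeBinderException)
{
    Console.WriteLine("run-time binder error");
}
```

With a strongly typed class, that same typo would be a compile-time error instead.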

So how do you return type-safe collections from a JOIN in Spark? Here is a good solution.

Using the Art database, suppose your SQL is something like this:

SELECT P.Title, A.FirstName, A.LastName
  FROM Product P JOIN Artist A ON P.ArtistId = A.Id
Next you create a matching domain object named Item, like so: 

public class Item
{
    public string Title { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public Item(dynamic dyn)
    {
        Title = dyn.Title;
        FirstName = dyn.FirstName;
        LastName = dyn.LastName;
    }
}
Notice the constructor, which accepts an argument of type dynamic.  The dynamic object's members must exactly match the columns returned by the SQL query, which in turn map onto the properties of Item.

Finally you execute the code like this:

string sql =
    @"SELECT P.Title, A.FirstName, A.LastName
        FROM Product P JOIN Artist A ON P.ArtistId = A.Id";

var items = ArtContext.Query(sql).Select(i => new Item(i)).ToList();

string first = items[0].FirstName;   // returns Paul
string last = items[0].LastName;     // returns Cezanne
Notice that the last two statements are fully type safe.
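If you map several JOIN results this way, the `Select(i => new Item(i))` step can be factored into a small helper. This is a hypothetical extension (QueryExtensions and MapTo are not part of the Spark API) that works for any class following the same dynamic-constructor pattern as Item:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class QueryExtensions
{
    // Materializes each dynamic row through T's constructor that accepts
    // a single dynamic argument -- the same pattern the Item class uses.
    public static List<T> MapTo<T>(this IEnumerable<dynamic> rows)
    {
        return rows
            .Select(r => (T)Activator.CreateInstance(typeof(T), new object[] { r }))
            .ToList();
    }
}
```

Assuming `ArtContext.Query(sql)` returns an `IEnumerable<dynamic>`, the query line then shrinks to `var items = ArtContext.Query(sql).MapTo<Item>();`.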

Of course, you cannot UPDATE or DELETE the Item instances; this would not make sense, since they are based on a table JOIN.

I hope this answers the question. 

Jack Poorte, Sep 18, 2013
© Data & Object Factory, LLC.