Suppose I have a chunk of SQL queries to extract transactionCount and warehouseID from a hundred or so warehouses:
--this is just an illustration and not the main part of my question
select transactionCount, warehouseID from warehouse1.itrn
union select transactionCount, warehouseID from warehouse2.itrn
union select transactionCount, warehouseID from warehouse3.itrn
.
.
.
union select transactionCount, warehouseID from warehouse117.itrn
From here, I then want to
a) group by country (warehouseID has an identifier for country code)
b) count the total transactions
c) get the top ten warehouses with the highest transactionCount
Each of a, b and c will be displayed on a separate chart (rough query sketches below)
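For concreteness, a, b and c would look roughly like this (a minimal sketch only: all_itrn is just a placeholder name for the union chunk above, and the SUBSTR country-code extraction is illustrative, not my actual logic):

-- a) total transactions per country
select substr(warehouseID, 1, 2) as countryCode, sum(transactionCount) as countryTotal
from all_itrn
group by substr(warehouseID, 1, 2)

-- b) total transactions overall
select sum(transactionCount) as grandTotal
from all_itrn

-- c) top ten warehouses by transactionCount
select *
from (select warehouseID, transactionCount from all_itrn order by transactionCount desc)
where rownum <= 10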
My question would be: which is faster,
1) having all of the above (chunk+a+b+c) in one master data set in BIRT and linking that data set to all the charts,
or
2) having chunk+a, chunk+b and chunk+c as three separate data sets and linking each one to its own chart?
#1 is easier to maintain if the chunk ever needs editing (e.g. a new warehouse is added), but #2 produces more readable table results (#1 would require FULL JOINing the results of a, b and c into one table), so I guess performance will be the deciding factor.
I guess a higher-level question would be how BIRT processes data sets: does it run each data set's query separately for every chart regardless of repeated queries, or does it "cache" query results so that it can skip repeated queries within the same view session?
For context: my database is Oracle, I am on BIRT 4.6.0, and I'm building chunk+a+b+c as
WITH x AS (...), y AS (...), z AS (...) clauses.
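Concretely, the #1 master data set is shaped roughly like this (again a sketch: the country-code SUBSTR and the final join condition are stand-ins for my real logic):

with all_itrn as (
  select transactionCount, warehouseID from warehouse1.itrn
  union select transactionCount, warehouseID from warehouse2.itrn
  -- ... repeated up to warehouse117.itrn
),
x as (
  -- a) per-country totals
  select substr(warehouseID, 1, 2) as countryCode, sum(transactionCount) as countryTotal
  from all_itrn
  group by substr(warehouseID, 1, 2)
),
y as (
  -- b) grand total
  select sum(transactionCount) as grandTotal from all_itrn
),
z as (
  -- c) top ten warehouses
  select *
  from (select warehouseID, transactionCount from all_itrn order by transactionCount desc)
  where rownum <= 10
)
select x.countryCode, x.countryTotal, y.grandTotal, z.warehouseID, z.transactionCount
from x
full outer join z on x.countryCode = substr(z.warehouseID, 1, 2)
cross join y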