Multiple tables based on one dataset
Tubal
Hi there. I'm fairly new to BIRT, but I think I have the basics down.
I'm having a problem with a report that I'm designing that uses multiple tables based on the same SQL-based dataset. The way I've got it now, the first table displays the complete result set, and then I've got 8 more tables underneath it that display a filtered version of the same result set.
The dataset is based on a somewhat complex query that takes several minutes to run, and can take longer depending on the report parameters the user passes.
What appears to be happening is the query runs once for each table. So it takes 5 minutes * 9 to complete the report.
Is there an easy way around this?
Comments
robbinma
Tubal, on 04 November 2011 - 10:01 PM, said:
> What appears to be happening is the query runs once for each table. So it takes 5 minutes * 9 to complete the report.
>
> Is there an easy way around this?

I am having the same problem.

I think this is a design limitation of BIRT. Have a look at this article about how event handlers fire: http://www.informit.com/articles/article.aspx?p=686171&seqNum=2

"Report body processing phase

BIRT processes a report body by processing all the report items that are not contained in other report items. BIRT processes the items, going from left to right and proceeding a row at a time toward the bottom right. A report item that is not contained in another report item is called a top-level report item. Every report has at least one top-level report item, usually a grid, a list, or a table. If a report has more than one top-level report item, BIRT processes the top-level items in order, from left to right and top to bottom."

Figure 8-8, "Table and list setup execution sequence", shows that the dataset is opened and fetched for each table.

If you do figure out a different way, I would be interested to hear about it.
robbinma
You don't say which version of BIRT you use, but this may be of interest:
http://www.birt-exchange.org/org/forum/index.php/blog/3/entry-303-cut-that-waiting-time/
It may offer a small time saving for each data set.
It refers to the commercial version, though, so I can't use it.
Tubal
robbinma, on 06 November 2011 - 03:27 AM, said:
> Figure 8-8, "Table and list setup execution sequence", shows that the dataset is opened and fetched for each table.
>
> If you do figure out a different way, I would be interested to hear about it.

I can't try it until tomorrow, but theoretically, I could nest all 9 tables inside a "master" table, and it would only run the query once for the master table.
johnw
BIRT "should" cache the results upon the first run of the dataset. From what you are describing, it is not. I would submit this as a bug.
In the meantime, you can get around this by adding your own caching. The idea is that you create a global variable containing a Java list or an array. You would create it in either the initialize event or the beforeFactory event, like so:
var copyOfDataSet = new Packages.java.util.ArrayList();
reportContext.setGlobalVariable("dsCopy", copyOfDataSet);
In your SQL data set, you copy all of your columns into either a List or an Array and add it to this global variable, creating a copy of the result set in memory. So, in the onFetch event, you would do something like:
var copy = reportContext.getGlobalVariable("dsCopy");
var columns = new Packages.java.util.ArrayList();
columns.add(row["column1"]);
columns.add(row["column2"]);
// ...and so on for each remaining column
copy.add(columns);
You would need to create a second data set, a scripted data set, that reads this in-memory copy.
So the open method would look like:
copyOfDataset = reportContext.getGlobalVariable("dsCopy");
count = 0;
and the fetch would look something like:
if (count < copyOfDataset.size())
{
    var currentRow = copyOfDataset.get(count);
    row["column1"] = currentRow.get(0);
    row["column2"] = currentRow.get(1);
    row["column3"] = currentRow.get(2);
    count++;
    return (true);
}
return (false);
The column definition of that scripted data set should match what you have for the SQL data set. Then you only need your first table to populate the copy in memory, and every subsequent table should use the scripted data set that reads the copy. That should be the basic idea, and it will at least let you run the query only once.
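Putting the pieces together, a minimal end-to-end sketch of this approach might look like the following (column1/column2/column3 are placeholder column names; the snippets go in the report's initialize event, the SQL data set's onFetch event, and the scripted data set's open and fetch events, respectively):

// Report initialize (or beforeFactory) event: create the shared in-memory cache.
var copyOfDataSet = new Packages.java.util.ArrayList();
reportContext.setGlobalVariable("dsCopy", copyOfDataSet);

// SQL data set, onFetch event: copy each row into the cache as it is read.
var copy = reportContext.getGlobalVariable("dsCopy");
var columns = new Packages.java.util.ArrayList();
columns.add(row["column1"]);
columns.add(row["column2"]);
columns.add(row["column3"]);
copy.add(columns);

// Scripted data set, open event: grab the cache and reset the row cursor.
copyOfDataset = reportContext.getGlobalVariable("dsCopy");
count = 0;

// Scripted data set, fetch event: replay the cached rows one at a time.
if (count < copyOfDataset.size())
{
    var currentRow = copyOfDataset.get(count);
    row["column1"] = currentRow.get(0);
    row["column2"] = currentRow.get(1);
    row["column3"] = currentRow.get(2);
    count++;
    return (true);
}
return (false);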
John
robbinma
johnw, on 07 November 2011 - 11:04 AM, said:
> BIRT "should" cache the results upon the first run of the dataset. From what you are describing, it is not. I would submit this as a bug.
>
> John

Thanks John.
In my case I am using a scripted dataset.

Would you expect the results to be cached for a scripted dataset, or is the type irrelevant?
johnw
I don't recall seeing the code to cache results in the ODA scripted data source packages, but that is an area I haven't really looked through. I wouldn't expect it to cache. If you're using the scripted data source, I would cache using a method similar to what I described.