Increased memory usage with BIRT 4.2.2?
Deros
Hi,
I've got a BIRT application I would like to update from BIRT 3.7.0 to 4.2.2.
But while testing with BIRT 4.2.2 I got some memory heap size errors. After increasing the memory for my Tomcat, I had a look at how much memory Tomcat now needs, and I was a little scared. The attached pic shows what I mean: with BIRT 3.7.2 it takes about 280 MB, with BIRT 4.2.2 nearly 900 MB.
I just changed the BIRT libs, nothing else. What could be the problem, or am I doing something completely wrong?
Perhaps some useful information: I'm using Tomcat 7.0.19 and an Oracle 11g database.
Greets
Michael
Comments
mwilliams
Are you using the viewer? Or are you using the APIs with Java? Are you running both runtimes at the same time? If so, can you remove the 3.7.2 version and just run the 4.2.2 version to see what your numbers show?
Deros
I'm using the Java API and only one runtime at a time. The little downward peak before generating the third document is the redeployment. But I run into the same memory problems even if I test my reports with the preview from the Eclipse plugin.
Deros
I've tested a while today and the problem seems to be the JDBC statement. Even the "Preview Results" in the DataSet takes over 700 MB of heap. The statement isn't extraordinary, but it returns 1 row with 34 columns from 15 tables. And it seems that just the number of joins takes the memory in BIRT 4.2.2.
I was using two different JDBC drivers with no noticeable difference (Oracle JDBC Driver version "11.1.0.7.0-Production" and Oracle JDBC Driver version "10.2.0.1.0").
Has nobody else seen this kind of problem?
mwilliams
I've not seen anyone else with this issue, yet. I talked with a colleague of mine and he hadn't either. Can you please log a bug for this in the eclipse.org/birt bugzilla? Be sure to attach your image from above in it. Also, could you post the bug info in here, for future reference?
CBR
Hi,
there are multiple reasons for increased heap:
1) JDBC is configured not to stream the results, so it will load all rows before even returning the first row.
2) You have set some value for DataEngine.MEMORY_BUFFER_SIZE. Starting with 2.6 the default is unlimited, allowing BIRT to use unlimited RAM before it starts swapping to disk; set some value to restrict RAM usage.
If the error occurs even in the preview inside Eclipse, it is quite likely that 1) is the issue.
Can you post a screenshot of the extended properties of the JDBC connection configured in BIRT? Are you using the query builder DTE connection or the normal JDBC connection?
Deros
Bugzilla entry: https://bugs.eclipse.org/bugs/show_bug.cgi?id=405368
cbrell wrote (10 April 2013 - 05:35 AM):

> Hi,
>
> there are multiple reasons for increased heap:
> 1) JDBC is configured not to stream the results, so it will load all rows before even returning the first row.
> 2) You have set some value for DataEngine.MEMORY_BUFFER_SIZE. Starting with 2.6 the default is unlimited, allowing BIRT to use unlimited RAM before it starts swapping to disk; set some value to restrict RAM usage.
>
> If the error occurs even in the preview inside Eclipse, it is quite likely that 1) is the issue.
> Can you post a screenshot of the extended properties of the JDBC connection configured in BIRT? Are you using the query builder DTE connection or the normal JDBC connection?
1) The result of the statement is just 1 row, so I don't think this can be the problem.
2) I didn't change any values. Even with a fresh Eclipse install with the BIRT plugin, a completely new report, and the same JDBC statement, it takes this amount of memory in "Preview Results" in the DataSet.

What do you mean by "extended properties"? I'm using the normal JDBC connection.
mwilliams
Thanks for posting the bug info, Deros.
Deros
Seems like I'm not the only one with the problem: https://bugs.eclipse.org/bugs/show_bug.cgi?id=402243

Setting the RowFetchSize to a value of 10 in the DataSet properties brings the memory usage back to the old level.
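For reference, the fetch-size setting made in the DataSet properties dialog is persisted in the report design file itself, so it can also be checked or edited there. A rough sketch of how it might look inside a .rptdesign file (element names and attributes are abbreviated and the property name `rowFetchSize` is an assumption based on the dialog label; verify against your own saved design):

```xml
<!-- Sketch: data-set entry in a .rptdesign file with the fetch size capped.
     Other child elements (query text, column hints, etc.) are omitted. -->
<oda-data-set extensionID="org.eclipse.birt.report.data.oda.jdbc.JdbcSelectDataSet"
              name="MyDataSet" id="7">
    <property name="rowFetchSize">10</property>
</oda-data-set>
```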
mwilliams
Thanks for the update. Glad it's working for you, now. Let us know whenever you have questions.
hvb
This is just another example of bugs 417084, 402243, 405368, 407299, and 406191, which probably all come from the default row fetch size being changed from the JDBC driver default to the ridiculously huge value 10000. That change was obviously introduced to improve performance for very high-volume reports with a simple select statement, but it causes much more trouble than it helps.

The OOM occurs because Oracle JDBC has to allocate enough memory in advance for the worst case, that is, a fetch result of 10000 [= row fetch size] rows, all filled with data to the maximum. If a query contains a varchar2 function call, the parser can make no assumption about the length of the function's result, so it assumes 4000 characters. That is 4000 * 10000 = 40 MB for each varchar2-function column (maybe even double that, because the length of a char is > 1 byte for UTF-8). Quite often, views are based on functions too, and thus sometimes contain several varchar2(4000) values as well (for example, think of a view made for language translation).

To be honest, I'm quite disappointed by some of the BIRT developers' responses... They seem to just not understand that this is actually a huge issue and makes BIRT 4.2.2 / 4.3.0 unusable for many of us.

Please vote for Bug 407299!
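The worst-case arithmetic above is easy to sanity-check in a few lines of Java. This is only an illustration of the estimate described in the post; the fetch size, column width, and bytes-per-character are parameters chosen here, not values read from the driver:

```java
public class FetchBufferEstimate {
    /**
     * Worst-case bytes a driver may reserve for one VARCHAR2(maxChars)
     * column when fetching fetchSize rows at once, at bytesPerChar
     * bytes per character (1 here; UTF-8 data can double it).
     */
    static long worstCaseBytes(int fetchSize, int maxChars, int bytesPerChar) {
        return (long) fetchSize * maxChars * bytesPerChar;
    }

    public static void main(String[] args) {
        // 10000-row fetch size, one VARCHAR2(4000) column, 1 byte/char:
        long bytes = worstCaseBytes(10_000, 4_000, 1);
        System.out.println(bytes / 1_000_000 + " MB per column"); // prints "40 MB per column"
    }
}
```

With several such columns in a 34-column join, the pre-allocated buffers alone can account for the hundreds of megabytes reported earlier in the thread, which is why dropping RowFetchSize to 10 restored the old footprint.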