Web CMS (TeamSite)
heap size errors in datadeploy on Solaris
bscott1
I am running TS 5.5 and TS 5.0.1 on Solaris (a conversion is underway) and have edited iwdd.ipl on the 5.0.1 server to allow the extra heap size for DataDeploy (as noted in Tech Note 2016). I have also modified my Perl script on the 5.5 server to pass the same figures on the command line:
my $min_heap= "-ms256m";
my $max_heap= "-mx512m";
I am still seeing the "too many open files" error along with a DataDeploy failure. Does anyone have thoughts on whether I need to set the maximum even higher? Has anyone else experienced this even after increasing the heap size?
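For context, the two Perl variables above would end up as flags on the java command line that launches DataDeploy, roughly like this (the class name and invocation below are assumptions for illustration, not taken from the actual iwdd.ipl; -ms/-mx are the old JDK 1.3 spellings of -Xms/-Xmx):

```shell
#!/bin/sh
# Sketch only: how the heap flags might be assembled onto the JVM
# command line. The main class name here is hypothetical.
MIN_HEAP="-ms256m"   # initial heap size (JDK 1.3 form of -Xms)
MAX_HEAP="-mx512m"   # maximum heap size (JDK 1.3 form of -Xmx)

CMD="java $MIN_HEAP $MAX_HEAP com.interwoven.DataDeploy"
echo "$CMD"
```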
Comments
Migrateduser
Can you post the DD log/output? Also, do you have all the Solaris patches required for JRE 1.3 installed?
bscott1
The DD output just says "datadeploy error" ... "too many open files." I have followed up with a case to IW and found that they have recommended Solaris patches. I will try those and see if that helps the problem. Is there a set value the max heap size should be? Does anyone have any recommendations or rules of thumb?
Migrateduser
Setting the heap size will not help with the "too many open files" problem. In general, you can set the max heap size to a value less than or equal to your available physical memory.
Regarding the "too many open files" problem: check your file descriptor limit by running ulimit -a, and increase the value if it is too small. You can also run the lsof command to check the number of open files while DD is running.
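As a quick sketch, checking and raising the descriptor limit for the current shell looks like this (the 1024 value is just an example; the soft limit can only be raised up to the hard limit):

```shell
#!/bin/sh
# Show all resource limits for this shell, including open files
ulimit -a

# Capture just the open-file (descriptor) soft limit
SOFT=$(ulimit -n)
echo "soft fd limit: $SOFT"

# Try to raise the soft limit for this shell session only;
# a permanent change requires /etc/system on Solaris
ulimit -n 1024 2>/dev/null || echo "could not raise limit past hard limit"
```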
bscott1
Thanks for the good info. I am currently researching the lsof tool to monitor the IW processes during a data deployment. Can you by chance tell me the syntax I should put to do that? I have never used it before. I currently have descriptor limits set at 64 on our production server and 1024 on test. Any thoughts on why I might still be getting to "too many open files" error on test with that high of a descriptor limit?
Migrateduser
You can just run lsof and see which files are open and by which application. Usage: lsof
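To watch a single process rather than the whole system, lsof can be limited to one PID with -p; a rough open-file count for the DD process would look like the sketch below (here the shell's own PID stands in for the DataDeploy PID, which you would look up with ps):

```shell
#!/bin/sh
# Count open file entries for one process. $$ (this shell's PID)
# is a stand-in; substitute the DataDeploy java PID from ps.
PID=$$
if command -v lsof >/dev/null 2>&1; then
    COUNT=$(lsof -p "$PID" 2>/dev/null | wc -l)
    echo "process $PID has roughly $COUNT open file entries"
else
    echo "lsof is not installed on this host"
fi
```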
Also, you might want to increase the number of file descriptors; 64 seems low. Set the following variables in the /etc/system file: set rlim_fd_max=4096 and set rlim_fd_cur=4096. You may need a reboot after setting these variables.
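For reference, the /etc/system entries would look like this (4096 is the value suggested above; comment lines in that file start with an asterisk):

```
* /etc/system -- Solaris kernel tunables for file descriptors
* rlim_fd_max is the hard limit, rlim_fd_cur the default soft limit
set rlim_fd_max=4096
set rlim_fd_cur=4096
```

After the reboot, ulimit -n in a fresh shell should report the new soft limit.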
bscott1
Thank you! That helps a lot. I will test it out.