Hi all - I'm guessing that nobody outside of OpenText makes use of the profiler functionality ( https://knowledge.opentext.com/knowledge/llisapi.dll/fetch/-15106263/15106294/15106295/16376487/67794710/67790529/-/documentation/profiler.html ). However, if you do make use of it: could you provide feedback on how you go about processing the profiling data?
I'm asking since we are looking into ways to make interpreting the profiler data easier. Step 1 of this process is to figure out what people do today (if anything). Hence: this request 🙂
Hi Dave,
You asked the question - and you know the answer - there's always going to be at least one person who's interested in, or using, the feature :)
We've used it a couple of times - usually around tracing potential performance concerns. When I say "used", a better description would be "muddled our way through". I'd say it's more a case for us of trying to understand how to read the information available and process it in a logical way.
For example - what is a "cycle"?
When I'm looking at a call graph, what do the numbers represent?
So far, I've also not figured out how to establish a call graph and costs for functions triggered by a specific action in Content Suite - e.g. viewing a folder (and again, this is likely down to my lack of familiarity with the profile format and the cachegrind tool).
I somewhat expect our use of the profiling will increase moving forward - though to what extent I can't predict.
If I were to do an exec summary: ideally, we'd be able to perform an action (e.g. browse to a folder, open a perspective) and then, from the profile data, isolate that user interaction and identify all functions called in producing the response, the time (ms) each function took and, if possible, how much of that involved DB queries etc.
It seems, rather, that the profile data covers the entire thread's duration - again, I may be mistaken, but so far I've only been able to get the total time the thread spent in a function/call, rather than that time relative to a specific user's request.
Regards,
David
Thanks for this info, especially the 'exec summary' bit - this is super helpful
Cycle detection is a feature in some callgrind viewers that 'hides' calls the viewer considers cyclic / recursive. You can turn this off in kcachegrind / qcachegrind via the "View -> Detect Cycles" menu option. Doing so will provide more information. Details @ https://valgrind.org/docs/manual/cl-manual.html#cl-manual.cycles
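To make the cycle idea concrete, here's a minimal hand-written profile in the callgrind format where two functions call each other (the names A and B, and all of the costs, are invented for illustration). With cycle detection enabled, a viewer folds A and B into a single cycle node, since their inclusive costs can't be cleanly attributed to either one:

events: Ticks

fn=A
1 100
cfn=B
calls=1 1
1 500

fn=B
1 200
cfn=A
calls=1 1
1 300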
Well, there are a few numbers, so a quick overview is as follows (this is how kcachegrind-style viewers label the call graph):
- the percentage inside a function's box is that function's inclusive cost - the time spent in it plus everything it called - relative to the whole profile (or to the selected function, depending on the viewer's settings);
- the percentage on an arrow between two boxes is the portion of cost flowing from that caller into that callee;
- the count shown with an arrow is the number of times the caller invoked the callee.
For example, in the following graph:
... does this make sense? If not lemme know 🙂
I've also not figured out how to establish a call graph and costs for functions triggered by a specific action in Content Suite
I'm guessing you're enabling the profiler by adding some lines in the opentext.ini file like the following:
ProfileFormat=<###>
Profile=<###>
... is that correct? If so then yeah, that's messy - you get data on everything while the server is (very, very slowly) running. Another option is to add Profiler.Start() & Stop() calls in functions of interest via CSIDE's module explorer. That way you can ensure the generated file contains everything that's of interest & nothing that isn't.
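For completeness, a rough sketch of what that instrumentation looks like (the function and its body here are made up for illustration - the only real calls are the Profiler.Start() / Profiler.Stop() pair mentioned above):

function Dynamic BrowseFolder( Object prgCtx, Record request )
	Dynamic result

	Profiler.Start()                        // begin collecting profile data
	result = .DoBrowse( prgCtx, request )   // only this call gets profiled
	Profiler.Stop()                         // stop collecting - later code is excluded

	return result
end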
One of the things I'm looking to do with the new feature is enable / disable the profiler via a right-click menu. That way you could turn it on, hit the URL(s) of interest in your browser, and then turn it off.
Yep - we use the Profile=### setting in the ot.ini file to enable it. I'd not thought about adding explicit profile start/stop calls - on the occasions we've used the profiler, we've been interested in the relationships between other functions and ours.
The ability to turn off/on via a right click menu sounds good.
Thanks for the explanation on the numbers in the graph. Mostly makes sense. Sometimes I see relative percentages > 100.
e.g.
But I think that only occurs when more time is spent in the functions called by the function I've got focus on than in that function itself.
I'm not sure if the callgrind format permits it, but I can envisage it being handy to have the min, max, avg and stdev of time spent in each function. If that were possible, it'd help identify functions that have a "spread" of performance - i.e. perhaps specific conditions that trigger additional overhead.
Sometimes I see relative percentages > 100.
That sounds funky. I'll see if I can trigger something similar over here. Alternatively: feel free to send the file to dcarpene@opentext.com & I'll see if I can figure out why this happens. Also of interest: the version specifics of the application that generated the graph.
it might be handy to have the min, max, avg and stdev
It is absolutely possible to include these sorts of things given the data in the file. It would be good to include this information - functions that are highly variant in their execution time might merit investigation on that basis alone. I'll see about adding these into the viewer. Thanks for this - appreciated.
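For reference, the sort of computation I have in mind is just a per-function reduction over the individual call times - a rough OScript-style sketch (TimingStats is a made-up name, and I'm assuming the profile file can be reduced to a List of per-call times per function; the real viewer code may differ):

function Assoc TimingStats( List times )
	// times: per-call durations (ms) for a single function
	Integer n = Length( times )
	Real    t
	Real    total = 0.0
	Real    minT = times[ 1 ]
	Real    maxT = times[ 1 ]

	for t in times
		total = total + t
		if t < minT
			minT = t
		end
		if t > maxT
			maxT = t
		end
	end

	Real avg = total / n

	// population standard deviation over the per-call times
	Real sumSq = 0.0
	for t in times
		sumSq = sumSq + ( t - avg ) * ( t - avg )
	end

	Assoc stats = Assoc.CreateAssoc()
	stats.min = minT
	stats.max = maxT
	stats.avg = avg
	stats.stdev = Math.Sqrt( sumSq / n )

	return stats
end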