Hi all,
I am working on a piece of code I want to run in the DA framework. My code uses nodeCrawler to walk a folder hierarchy, and for what I'm attempting to do, it works pretty nicely. Now my dilemma is that I have something which, on one hand, lends itself nicely to a job chain, but on the other, doesn't fit the job chain's method of breaking up the work, which for the most part assumes a min and a max DataID and chunks that range.
Since I'm using a node crawler, my process lends itself more to specifying a fixed number of items processed at a time, and more importantly, the exact total is not always known when you kick off the crawler - you keep running it until it says "complete".
Given that last point, I thought the best method may be to use a simple job and have that job schedule a new copy of itself if the node crawler isn't yet complete. For a large crawl, this could introduce a lot of daisy-chained simple jobs.
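To make sure I'm describing the pattern clearly: here is a rough, framework-independent sketch of what I mean, written in Python rather than OScript. All the names (`crawl_step`, `run_daisy_chain`, the in-memory job queue) are hypothetical stand-ins, not the actual DA API - the point is just the "process a fixed batch, then reschedule yourself if not complete" loop.

```python
from collections import deque

def crawl_step(crawler, batch_size):
    """Process up to batch_size items; return True when the crawl is complete.

    `crawler` is a stand-in for the node crawler: a deque of pending items.
    """
    for _ in range(batch_size):
        if not crawler:
            return True  # crawler reports "complete"
        item = crawler.popleft()
        # ... process item here ...
    return not crawler

def run_daisy_chain(crawler, batch_size=3):
    """Simulate a simple job that schedules a new copy of itself
    until the crawler reports complete; returns the number of job runs."""
    jobs = deque(["job-1"])  # stand-in for the DA job queue
    runs = 0
    while jobs:
        jobs.popleft()
        runs += 1
        if not crawl_step(crawler, batch_size):
            jobs.append(f"job-{runs + 1}")  # schedule a new copy of the job
    return runs

# 10 items at 3 per run -> 4 chained job executions
print(run_daisy_chain(deque(range(10))))  # 4
```

The total number of runs isn't known up front - each run only decides whether one more run is needed - which is why this doesn't map onto the min/max DataID chunking model.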
I'm currently working in a 10.0 environment but want this to work in 10.5 and 16 environments as well. Am I heading towards any pitfalls?
-Hugh