Can one break a single large global low-resolution docking run into smaller runs by using -run:constant_seed and -run:jran=######## and simply assigning a different ######## seed to each run?
Specifically, suppose I would like to generate 30,000 low-resolution decoys. Since I assume all decoys are produced from the random number generator, why not break the job into three separate runs of 10,000 (on three separate processors) running simultaneously, each assigned a different seed, rather than doing it as one docking run? Would that be equivalent to a single run generating 30,000 decoys? In fact, since I have access to over 1,000 single processors, why not do 1,000 runs of 30 decoys each? I'd have to come up with 1,000 distinct seeds (a rough sketch of what I have in mind is below). Can the seeds be larger than 7 digits? I assume it's probably better to spread the seed values apart as much as possible.
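To make the idea concrete, here is a small Python sketch of how I'd generate one command line per processor, each with its own seed. The executable name (docking_protocol), the @flags_lowres options file, and the output file names are just placeholders for whatever setup is already in use; only -run:constant_seed, -run:jran, and -nstruct come from the approach described above.

```python
#!/usr/bin/env python3
"""Sketch: write one docking command per processor, each with a unique seed.

Placeholders (not from any Rosetta documentation): the executable name
`docking_protocol`, the options file `flags_lowres`, and the output names.
"""
import random

N_JOBS = 1000          # number of independent runs (one per processor)
DECOYS_PER_JOB = 30    # -nstruct per run; 1000 * 30 = 30,000 decoys total

# Draw distinct seeds so no two runs follow the same random trajectory.
rng = random.Random(12345)                    # master seed, arbitrary
seeds = rng.sample(range(1, 10**7), N_JOBS)   # unique values, up to 7 digits

with open("joblist.txt", "w") as fh:
    for i, seed in enumerate(seeds):
        cmd = (
            "docking_protocol @flags_lowres "
            f"-run:constant_seed -run:jran {seed} "
            f"-nstruct {DECOYS_PER_JOB} "
            f"-out:file:silent decoys_{i:04d}.out"
        )
        fh.write(cmd + "\n")
```

Each line of joblist.txt could then be submitted to the cluster as an independent job, and the resulting silent files concatenated or scored together afterwards.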
Obviously, the time savings would help. But I'm not sure whether there are differences one should be aware of when choosing such a route. Or maybe I'm completely missing something.
Thanks in Advance