
Docking on mpi.

Hi guys

Our local cluster has recently, finally, helped me get the MPI Rosetta version installed.
RosettaDock seems to be working, though I get some error messages, none of which I observe on my local computer.
1st:
Created 6242 residue types
Number of residue types is greater than MAX_RESIDUE_TYPES. Rerun with -override_rsd_type_limit. Or if you have introduced a bunch of patches, consider declaring only the ones you want to use at the top of your app (with the options) with the command option[ chemical::include_patches ].push_back( ... ).

Could passing the "-override_rsd_type_limit" flag create any issue, and why does this happen on the MPI build and not on Rosetta 3.5?
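
(For reference, I would just be appending the flag to my usual docking command, roughly as below; the binary name and options file are only placeholders for my actual setup:

mpirun -np 8 docking_protocol.mpi.linuxgccrelease @docking_flags -override_rsd_type_limit

where docking_flags holds my normal input/output options.)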

2nd:
protocols.docking.DockMCMProtocol: in DockMCMProtocol.apply
basic.io.database: Database file opened: scoring/score_functions/disulfides/fa_SS_distance_score
core.pack.dunbrack: cannot find binary dunbrack library under name /appl/rosetta/2014.20.56383.mpi/database/rotamer/bbdep02.May.sortlib.Dunbrack02.lib.bin
core.pack.dunbrack: Dunbrack library took 5.51 seconds to load from ASCII
core.pack.dunbrack: Random tempname will be: /appl/rosetta/2014.20.56383.mpi/database/4dun02_binary
core.pack.dunbrack: Opening file /appl/rosetta/2014.20.56383.mpi/database/4dun02_binary for output.
core.pack.dunbrack: Unable to open temporary file in rosetta database for writing the binary version of the Dunbrack02 library.
protocols.docking.DockMCMProtocol: Using the DockingTaskFactory.
core.pack.rotamer_set.UnboundRotamersOperation: Adding 'unbound' rotamers from /zhome/b8/e/66822/1ijk_nosaxs/1IJK_unbound2_re.pdb

From this I gather that Rosetta cannot write temporary files to the location where it is installed. As I understand from the technician, this is not really an issue, as she wrote the following:

"However, it tries to write it in the subdirectory of the installation directory, as you noticed, and this is not allowed on a shared system as ours.
However, it it fails, as in this case, this should not give any error, but just a add a certain slowing down to your calculation.
It should be possible to tell the program not to try to write/read this file with the option -no_binary_dunlib"

Is she correct on this issue, or do we need to set it up in some other way?

My main concern, of course, is being sure that Rosetta performs exactly as it does on my local computer.

Best
Pernille

Mon, 2014-07-28 08:40
Pernille

On the first issue - we recently made changes to the default ResidueTypeSet to reduce the memory footprint of Rosetta, and that error message was added at the same time to keep things from bloating. Unfortunately, the ResidueTypeSet size reduction was done in the default Talaris context but not in the older score12 context, so if you use -restore_pre_talaris_2013_behavior, you'll get the error. There's no issue with supplying -override_rsd_type_limit on the command line, especially with -restore_pre_talaris_2013_behavior: you'll use the same amount of memory as you would with Rosetta 3.5, or with a weekly release run under -restore_pre_talaris_2013_behavior. The scientific results should not be affected. (Note that this is independent of the MPI issue - it's entirely due to the version of the Rosetta weekly release that's installed.)
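
In practice that just means appending the flag to whatever command you already run, something like the line below (the binary name and inputs are placeholders for your actual docking setup):

mpirun -np 8 docking_protocol.mpi.linuxgccrelease -s your_complex.pdb -nstruct 1000 -restore_pre_talaris_2013_behavior -override_rsd_type_limit

The extra flag only lifts the residue-type count check; as noted above, it shouldn't change the scientific results.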

On the second issue - Your technician is correct. The Dunbrack library takes a bit of time to parse and load (5.51 seconds in your case), so we normally store a "binary" form that's faster to load (typically 0.25 seconds or so). If the binary version can't be written, you'll just re-parse from text each time. The -no_binary_dunlib option will tell Rosetta to ignore reading/writing the binary format and go directly to text parsing.
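
If you'd rather avoid the write attempt altogether, just drop the option into the flags file you pass with @ (a sketch, assuming you use a plain options file):

# options file for the docking run
-no_binary_dunlib

That trades a few seconds of ASCII parsing at startup for never trying to write into the shared database directory.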

Note that the binary version only needs to be written once. If you can get someone with write access to run a short Rosetta protocol (a simple scoring run would suffice), that should write the binaries to the database for all future runs. (Note that you'd want to run it once with -restore_pre_talaris_2013_behavior and once without, to make binaries for both the dun02 and dun10 libraries.)
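
As a sketch of what that one-time run could look like (the exact binary name depends on the build, and the input PDB is just any small structure):

score_jd2.mpi.linuxgccrelease -s small_test.pdb -ignore_unrecognized_res
score_jd2.mpi.linuxgccrelease -s small_test.pdb -ignore_unrecognized_res -restore_pre_talaris_2013_behavior

Run from an account that can write under /appl/rosetta/2014.20.56383.mpi/database, the first command should cache the dun10 binary and the second the dun02 binary, after which everyone's jobs load the fast binary versions.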

All of this is strictly a performance issue, though, and should in no way affect the scientific results.

A final note on matching behavior - it sounds like the weekly release version on the cluster is different from the one you installed locally. If consistency between the two is a big issue for you, I'd suggest downloading and installing locally the same version that the cluster admins installed remotely. Depending on what you're doing, though, it might not be a big deal.

Mon, 2014-08-04 14:51
rmoretti