About loophash and batchrelax.


It stopped here. Is there anything wrong in my input file? Or do I just need to wait longer? See the attachment.

Attachment: QQ截图20130723145038.png (59.91 KB)
Mon, 2013-07-22 23:54
Run

Loophash and batchrelax are MPI applications, and are intended to be run under an MPI framework across many processors/nodes (dozens to thousands). You need to launch them with an MPI launcher program. (Something like "mpirun", but what you use depends on the MPI framework on your system.) It looks like you simply ran the executable by itself, not with an MPI launcher. Thus it only started node zero, which for these protocols is an "emperor" (or master) node that simply organizes the other nodes and doesn't do any processing itself. You don't have any other nodes running, so it just sits around waiting for the non-existent nodes to check in with it.

Talk to your local sysadmin or MPI guru and find out how to run MPI programs across multiple nodes on the cluster you're using. (Each cluster can be different, so you need to check details with your local cluster expert.)

Tue, 2013-07-23 11:28
rmoretti

But I did use mpirun. My command is: mpirun loophash_mpi.mpi.linuxrelease @flags. I have installed Open MPI on my personal computer.

Tue, 2013-07-23 22:36
Run

You need to pass the node specification to the mpirun command. How many processors do you want to use? Which computers are these to be run on? For example, on the one I use, I have to pass "-np 12" to mpirun to get it to spread 12 jobs across the current node. With loophash and batch relax you'll want to use many more processors (probably hundreds to thousands) spread across a large number of nodes. You'll need to pass options to split out the run onto those processors. If you don't, mpirun may just default to running a single instance on the local node, resulting in the errors you see.
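
For illustration only (the process count and hostfile name here are placeholders; the exact options depend on your MPI installation and cluster setup), an Open MPI launch spread across a set of nodes might look something like:

mpirun -np 128 --hostfile my_hostfile loophash_mpi.mpi.linuxgccrelease @flags

Other MPI implementations and cluster schedulers use different launchers and options, so check your local documentation.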

Wed, 2013-07-24 13:17
rmoretti

And in the flags file there is "-in:file:silent ## the starting population". What kind of file do I need to use as the input file? I can't understand the meaning of "Silent input filename(s). [FileVector]".

Wed, 2013-07-24 01:32
Run

The format is a "silent file". This is a Rosetta-specific file format. A large number of Rosetta protocols can take and output silent files. If you have a bunch of PDBs, the easiest way to create a silent file is probably with the score_jd2 application

score_jd2.default.linuxgccrelease -out:file:silent [silent file name] -out:file:silent_struct_type binary -s [list of PDBs to put into the silent file]

The "Silent input filename(s). [FileVector]" description simply means that the -in:file:silent takes a list of files in silent struct format. Generally you only pass a single file, though. (A single silent file can contain multiple different structures - although they should all be related. Basically, the same length, if not the same sequence.)

Wed, 2013-07-24 13:23
rmoretti

Thank you! I have solved some of my questions.
Here are my running details. It stopped here. Is there anything wrong? Or do I just need to wait for it to finish the calculation? I am running the MPI job on my own personal computer, which has just one CPU (dual core).
[run@localhost loophash]$ mpirun -np 2 loophash_mpi.mpi.linuxgccrelease @flags
core.init: (0) Mini-Rosetta version unknown from unknown
core.init: (0) command: loophash_mpi.mpi.linuxgccrelease @flags
core.init: (0) 'RNG device' seed mode, using '/dev/urandom', seed=-1186202399 seed_offset=0 real_seed=-1186202399
core.init.random: (0) RandomGenerator:init: Normal mode, seed=-1186202399 RG_type=mt19937
core.init: (1) Mini-Rosetta version unknown from unknown
core.init: (1) command: loophash_mpi.mpi.linuxgccrelease @flags
core.init: (1) 'RNG device' seed mode, using '/dev/urandom', seed=-1270611067 seed_offset=0 real_seed=-1186202398
core.init.random: (1) RandomGenerator:init: Normal mode, seed=-1186202398 RG_type=mt19937
core.chemical.ResidueTypeSet: (0) Finished initializing fa_standard residue type set. Created 6225 residue types
core.chemical.ResidueTypeSet: (1) Finished initializing fa_standard residue type set. Created 6225 residue types
core.pack.task: (0) Packer task: initialize from command line()
core.pack.task: (1) Packer task: initialize from command line()
core.chemical.ResidueTypeSet: (0) Finished initializing centroid residue type set. Created 1980 residue types
basic.io.database: (0) Database file opened: scoring/score_functions/rama/Rama_smooth_dyn.dat_ss_6.4
core.chemical.ResidueTypeSet: (1) Finished initializing centroid residue type set. Created 1980 residue types
basic.io.database: (1) Database file opened: scoring/score_functions/rama/Rama_smooth_dyn.dat_ss_6.4
basic.io.database: (0) Database file opened: scoring/score_functions/EnvPairPotential/env_log.txt
basic.io.database: (0) Database file opened: scoring/score_functions/EnvPairPotential/cbeta_den.txt
basic.io.database: (0) Database file opened: scoring/score_functions/EnvPairPotential/pair_log.txt
basic.io.database: (0) Database file opened: scoring/score_functions/EnvPairPotential/cenpack_log.txt
basic.io.database: (0) Database file opened: scoring/score_functions/SecondaryStructurePotential/phi.theta.36.HS.resmooth
basic.io.database: (0) Database file opened: scoring/score_functions/SecondaryStructurePotential/phi.theta.36.SS.resmooth
LoopHashLibrary: (0) HASHSIZE: 10
LoopHashMap: (0) Setting up hash_: Size: 10
LoopHashMap: (0) Setting up hash_: Size: 10
LoopHashLibrary: (0) HASHSIZE: 15
LoopHashMap: (0) Setting up hash_: Size: 15
LoopHashMap: (0) Setting up hash_: Size: 10
LoopHashLibrary: (0) HASHSIZE: 20
LoopHashMap: (0) Setting up hash_: Size: 20
LoopHashMap: (0) Setting up hash_: Size: 10
LoopHashLibrary: (0) Reading merged bbdb_ (BackboneDatabase) .part1of64 with extras
LoopHashLibrary: (0) Reading ./backbone.db
BackboneDB: (0) Reading in proteins 0 to 0 out of 4
BackboneDB: (0) Data_ size 4
LoopHashLibrary: (0) Reading loopdb (LoopHashDatabase) .part1of64 with loop size 10
LoopHashMap: (0) Loophashmap range 0 3
LoopHashLibrary: (0) Reading loopdb (LoopHashDatabase) .part1of64 with loop size 15
LoopHashMap: (0) Loophashmap range 0 3
LoopHashLibrary: (0) Reading loopdb (LoopHashDatabase) .part1of64 with loop size 20
LoopHashMap: (0) Loophashmap range 0 3
LoopHashLibrary: (0) Read MergedLoopHash Library from disk: 0 seconds
LoopHashLibrary: (0) Hash: 10
LoopHashMap: (0) loopdb_: 420 Size: 6720
LoopHashMap: (0) BackboneIndexMap: 420 Size: 5040
LoopHashLibrary: (0) Hash: 15
LoopHashMap: (0) loopdb_: 400 Size: 6400
LoopHashMap: (0) BackboneIndexMap: 400 Size: 4800
LoopHashLibrary: (0) Hash: 20
LoopHashMap: (0) loopdb_: 380 Size: 6080
LoopHashMap: (0) BackboneIndexMap: 380 Size: 4560
LoopHashLibrary: (0) BackboneDB: 196
MPI_WUM: (0) Starting MPI_WorkUnitManager..
MPI_WUM: (0) This is node 0 Nprocs: 2
MPI.LHR: (0) IDENT: myjob
MPI.LHR: (0) Interlace dumps: 1374753920 1374753012 908 3600
MPI.LHR.Emperor: (0) Init Master: 0
MPI.LHR.Emperor: (0) STARTLIB:
MPI.LHR: (0) N:0 ----
MPI.LHR.Emperor: (0) Emperor Node: Waiting for data ...
MPI.LHR: (0) STATL: 0s 0 0 Acc: 0.000 CPU: 0 hrs r/l: 0.00 LHav: 0s BRav: 0s MEM: 0 kB
basic.io.database: (1) Database file opened: scoring/score_functions/EnvPairPotential/env_log.txt
basic.io.database: (1) Database file opened: scoring/score_functions/EnvPairPotential/cbeta_den.txt
basic.io.database: (1) Database file opened: scoring/score_functions/EnvPairPotential/pair_log.txt
basic.io.database: (1) Database file opened: scoring/score_functions/EnvPairPotential/cenpack_log.txt
basic.io.database: (1) Database file opened: scoring/score_functions/SecondaryStructurePotential/phi.theta.36.HS.resmooth
basic.io.database: (1) Database file opened: scoring/score_functions/SecondaryStructurePotential/phi.theta.36.SS.resmooth
LoopHashLibrary: (1) HASHSIZE: 10
LoopHashMap: (1) Setting up hash_: Size: 10
LoopHashMap: (1) Setting up hash_: Size: 10
LoopHashLibrary: (1) HASHSIZE: 15
LoopHashMap: (1) Setting up hash_: Size: 15
LoopHashMap: (1) Setting up hash_: Size: 10
LoopHashLibrary: (1) HASHSIZE: 20
LoopHashMap: (1) Setting up hash_: Size: 20
LoopHashMap: (1) Setting up hash_: Size: 10
LoopHashLibrary: (1) Reading merged bbdb_ (BackboneDatabase) .part2of64 with extras
LoopHashLibrary: (1) Reading ./backbone.db
BackboneDB: (1) Reading in proteins 0 to 0 out of 4
BackboneDB: (1) Data_ size 4
LoopHashLibrary: (1) Reading loopdb (LoopHashDatabase) .part2of64 with loop size 10
LoopHashMap: (1) Loophashmap range 0 3
LoopHashLibrary: (1) Reading loopdb (LoopHashDatabase) .part2of64 with loop size 15
LoopHashMap: (1) Loophashmap range 0 3
LoopHashLibrary: (1) Reading loopdb (LoopHashDatabase) .part2of64 with loop size 20
LoopHashMap: (1) Loophashmap range 0 3
LoopHashLibrary: (1) Read MergedLoopHash Library from disk: 0 seconds
LoopHashLibrary: (1) Hash: 10
LoopHashMap: (1) loopdb_: 420 Size: 6720
LoopHashMap: (1) BackboneIndexMap: 420 Size: 5040
LoopHashLibrary: (1) Hash: 15
LoopHashMap: (1) loopdb_: 400 Size: 6400
LoopHashMap: (1) BackboneIndexMap: 400 Size: 4800
LoopHashLibrary: (1) Hash: 20
LoopHashMap: (1) loopdb_: 380 Size: 6080
LoopHashMap: (1) BackboneIndexMap: 380 Size: 4560
LoopHashLibrary: (1) BackboneDB: 196
MPI_WUM: (1) Starting MPI_WorkUnitManager..
MPI_WUM: (1) This is node 1 Nprocs: 2
MPI.LHR: (1) IDENT: myjob
MPI.LHR: (1) Interlace dumps: 1374756021 1374753012 3009 3600
MPI.LHR.Master: (1) Init Master: 1
MPI.LHR: (1) Reading in structures...
core.io.silent: (1) Reading all structures from ./inputfile/default.out
core.io.silent: (1) Finished reading 1 structures from ./inputfile/default.out
core.scoring.ScoreFunctionFactory: (1) SCOREFUNCTION: standard
core.scoring.ScoreFunctionFactory: (1) SCOREFUNCTION PATCH: score12
core.scoring.etable: (1) Starting energy table calculation
core.scoring.etable: (1) smooth_etable: changing atr/rep split to bottom of energy well
core.scoring.etable: (1) smooth_etable: spline smoothing lj etables (maxdis = 6)
core.scoring.etable: (1) smooth_etable: spline smoothing solvation etables (max_dis = 6)
core.scoring.etable: (1) Finished calculating energy tables.
basic.io.database: (1) Database file opened: scoring/score_functions/PairEPotential/pdb_pair_stats_fine
basic.io.database: (1) Database file opened: scoring/score_functions/hbonds/standard_params/HBPoly1D.csv
basic.io.database: (1) Database file opened: scoring/score_functions/hbonds/standard_params/HBFadeIntervals.csv
basic.io.database: (1) Database file opened: scoring/score_functions/hbonds/standard_params/HBEval.csv
basic.io.database: (1) Database file opened: scoring/score_functions/EnvPairPotential/env_log.txt
basic.io.database: (1) Database file opened: scoring/score_functions/EnvPairPotential/cbeta_den.txt
basic.io.database: (1) Database file opened: scoring/score_functions/EnvPairPotential/pair_log.txt
basic.io.database: (1) Database file opened: scoring/score_functions/EnvPairPotential/cenpack_log.txt
basic.io.database: (1) Database file opened: scoring/score_functions/EnvPairPotential/env_log.txt
basic.io.database: (1) Database file opened: scoring/score_functions/EnvPairPotential/cbeta_den.txt
basic.io.database: (1) Database file opened: scoring/score_functions/EnvPairPotential/pair_log.txt
basic.io.database: (1) Database file opened: scoring/score_functions/EnvPairPotential/cenpack_log.txt
basic.io.database: (1) Database file opened: scoring/score_functions/MembranePotential/CEN6_mem_env_log.txt
basic.io.database: (1) Database file opened: scoring/score_functions/MembranePotential/CEN10_mem_env_log.txt
basic.io.database: (1) Database file opened: scoring/score_functions/MembranePotential/memcbeta_den.txt
basic.io.database: (1) Database file opened: scoring/score_functions/MembranePotential/mem_pair_log.txt
basic.io.database: (1) Database file opened: scoring/score_functions/P_AA_pp/P_AA
basic.io.database: (1) Database file opened: scoring/score_functions/P_AA_pp/P_AA_n
basic.io.database: (1) Database file opened: scoring/score_functions/P_AA_pp/P_AA_pp
core.pack.dunbrack: (1) Dunbrack library took 0.45 seconds to load from binary
MPI.LHR: (1) Loaded 1 starting structures
MPI.LHR: (1) Added 1 structures to library
MPI.LHR.Master: (1) Using default sample weight of 50 for every residue
MPI.LHR.Master: (1) STARTLIB:
MPI.LHR: (1) N:1 ----
MPI.LHR: (1) LIB: [ 1 _0 | -264.9 | -264.9 | 0.0 | 12.3 | 0 | 0 | 1 |0]
MPI.LHR.Master: (1) Master Node: Waiting for job requests...
MPI.LHR: (1) STATL: 3s 0 0 Acc: 0.000 CPU: 0 hrs r/l: 0.00 LHav: 0s BRav: 0s MEM: 8 kB
MPI.LHR.Master: (1) Added 39 loophash WUs to queue. ssid=1
MPI.LHR.Master: (1) WARNING: 0 1

Thu, 2013-07-25 07:05
Run

Unfortunately for you, loophash (and batchrelax too) is a massively parallel application. Instead of just a single master node, it uses multiple masters and then a master-of-masters node (the emperor). You're running into the same problem as before, but instead of a single emperor node sitting around waiting for work, you now have an emperor and a master, both waiting for nonexistent worker nodes to show up.

When I said dozens to thousands of nodes, I meant it. You're really not going to be able to successfully run loophash and batchrelax on a single dual-core machine. They're applications which are intended to be used on large multi-node computational clusters.

If you absolutely need to run them on a dual-core machine, I can see if there's a way to fake it, but it will be painful and exceedingly slow. You're much better off trying to get access to a cluster. If you're at a university, there's probably one you can get time on; failing that, there are a number of contract clusters out there. (Rosetta@Cloud http://rosetta.insilicos.com/ is one such service tailored to Rosetta use, mentioned for informational purposes only with no endorsement intended or implied, but there are others out there which may fit your needs better.)

Thu, 2013-07-25 12:24
rmoretti

Thank you for your reply.
I am a sophomore, and I don't have access to a large multi-node computational cluster. I just have one dual-core machine; I have no choice. If I still want to run on a dual-core machine, what should I do? How can I fake it? I just want to run a small protein like enkephalin.

Thu, 2013-07-25 17:10
Run

Um, are you affiliated with any of the RosettaCommons labs, by any chance?

The reason I ask is that apparently neither loophash nor batchrelax has been released yet. The general public shouldn't have access to them. Only if you are a Rosetta developer or closely associated with a RosettaCommons lab would you be able to compile and run them in the first place.

Thu, 2013-07-25 18:15
rmoretti

I got the source files from someone in the lab, and I compiled them myself. I am not in the lab, and I am having difficulties running them, so I turned to you for help. I want to be a Rosetta developer, but I don't know how.

Thu, 2013-07-25 19:08
Run

Rosetta isn't a Free Software/Open Source program in the same model as something like Linux. Instead, it's explicitly licensed directly from the RosettaCommons: a no-cost license for academic users and a paid license for commercial use. Because of administrative issues, being a Rosetta developer isn't really open to the general public. You need to be a member of a RosettaCommons lab, or be closely associated with one of them. All official* Rosetta development needs to be funneled through one of the RosettaCommons member PIs (most of whom are listed on the left-hand side of https://www.rosettacommons.org/home). If you're interested in contributing to Rosetta in the long term (as opposed to simply using it), your best bet is to get a position or affiliation with one of those labs (e.g. as a graduate student or postdoc), or at the very least develop a close relationship with one of them.

For your current purposes, I would talk to the "someone in the lab" from whom you got the files. It could be that they got a pre-release version of loophash and batchrelax from a RosettaCommons member collaborator. If that's the case, they can put you in contact with their RosettaCommons contact, who could possibly provide you with the non-MPI versions of loophash and batchrelax. (Non-MPI versions of both are available in the development version, and will run suitably, though more slowly, on a single processor. I've actually been informed that the non-MPI batchrelax application was the first version of batchrelax to be developed, and works more reliably than the MPI version; I misspoke earlier when I said the protocol was by default an MPI application.)

*) "Official" meaning development which is included in the Rosetta releases. My understanding of the license (don't quote me; I'm not a lawyer) is that development and modification of Rosetta for your own internal use is permitted, as long as you don't go distributing it to other people without clearing it with the Rosetta licensing people first.

Fri, 2013-07-26 11:12
rmoretti

Thank you for your reply.

And do you have the non-MPI versions of loophash and batchrelax?

Fri, 2013-07-26 16:23
Run

I do have access to the developers' version of Rosetta, but as the loophash and batchrelax applications aren't mine, I don't feel comfortable distributing them to others. Again, talk to the person who provided you with the pre-release MPI versions of the programs; that Rosetta developer would be your best contact for obtaining the non-MPI versions.

Failing that, your best bet is to wait until the programs are officially released. (I unfortunately don't know when that will be.) In the meantime, you may want to consider other similar approaches. For example, batchrelax is an extension of the FastRelax protocol, and simply running FastRelax with many output structures can do more or less what batchrelax can do, albeit less efficiently. Similarly, there are a number of other loop remodeling protocols in Rosetta which can be applied to many of the same problems that loophash is used for. You may want to investigate those, and see if their output will be acceptable.
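
As a rough sketch (the filenames are placeholders, and you should check the option documentation for your particular Rosetta release), a plain FastRelax run producing a batch of relaxed structures would look something like:

relax.default.linuxgccrelease -s input.pdb -relax:fast -nstruct 100 -out:file:silent relaxed_structures.out -out:file:silent_struct_type binary

That gives you many independently relaxed copies of the input, which is the same general idea as batchrelax, just without its efficiency tricks.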

Mon, 2013-07-29 12:58
rmoretti

But I sent an email to the person who provided me with the pre-release MPI versions of the programs, and he hasn't replied for several days. He may be the person who wrote loophash and batchrelax. And I guess the non-MPI versions of loophash and batchrelax will not exist in the official version, if it is released. I hope you can help me. I just want to use it to optimize protein structures and make some contribution to this field. Thank you!

Wed, 2013-07-31 02:51
Run