Dear Rosetta community,
I'm getting some highly unrealistic scores when using RosettaLigand to model protein-peptide/inhibitor binding.
I'm following the Rosetta 3.5 guide/protocol from the Documentation page, along with a relax protocol. The steps I run are the following (command sketches appear after the list):
- python clean_pdb_keep_ligand.py your_structure_original.pdb -ignorechain
- Using PyMOL to split the structure into the apo protein and the ligand/peptide (see the PyMOL sketch further down).
- Open Babel to convert to SDF, then Avogadro to fix any mistakes.
- OpenEye Omega with the suggested flags.
- The 'assign charges' script.
- Then the 'mol-file to params' script (molfile_to_params.py).
- A relaxation step is run first (-extra_res_fa ligand.params -relax:constrain_relax_to_start_coords -relax:coord_constrain_sidechains -relax:ramp_constraints false), followed by the repack protocol, using the same -ex flags as for the docking along with the suggested flags.
- Choosing the lowest-energy pose, with either ligand conformation 0001 or 0002 depending on the purpose.
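For concreteness, here is the preparation part of the pipeline as a shell sketch. The file names (ligand.pdb, ligand.sdf, ligand_confs.sdf), the LIG residue name, and the Omega flags shown are placeholders, not necessarily the exact ones the tutorial suggests:

    # 1. Strip the PDB but keep the ligand HETATM records
    python clean_pdb_keep_ligand.py 2AOD.pdb -ignorechain

    # 2. Split apo protein / ligand in PyMOL (sketch further down)

    # 3. Convert the ligand to SDF; fix any bond/valence errors in Avogadro afterwards
    babel ligand.pdb ligand.sdf

    # 4. Build a conformer library with OpenEye Omega
    omega2 -in ligand.sdf -out ligand_confs.sdf -maxconfs 100

    # 5. Run the 'assign charges' script on the conformers, then generate the params
    python molfile_to_params.py -n LIG -p LIG ligand_confs.sdf
    # (depending on the Rosetta version, --conformers-in-one-file or a manual
    #  PDB_ROTAMERS line may be needed so all conformers end up as rotamers)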
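And the relax-then-dock stage, again only a sketch: the .linuxgccrelease suffix, the database path, complex.pdb/relaxed_complex.pdb, and the nstruct values are placeholders for whatever your build and setup use:

    # 6. Constrained relax of the complex, with the ligand params supplied
    relax.linuxgccrelease -database /path/to/rosetta_database \
        -s complex.pdb -extra_res_fa LIG.params \
        -relax:constrain_relax_to_start_coords \
        -relax:coord_constrain_sidechains \
        -relax:ramp_constraints false \
        -nstruct 10

    # 7. RosettaLigand docking on the best relaxed pose, reading the shared FLAGS file
    ligand_dock.linuxgccrelease -database /path/to/rosetta_database \
        @FLAGS -s relaxed_complex.pdb -extra_res_fa LIG.params \
        -nstruct 100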
I have included the FLAGS file along with the resulting scores.
The PDB I used is 2AOD.pdb.
My interface_delta is in the millions and fa_rep is also very high, which, from what I can gather browsing the forum, is far outside the normal range.
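For reference, this is how I read those numbers out of the score file (a shell sketch; it assumes the default score.sc name and looks the interface_delta and fa_rep columns up by name in the header, since column positions vary between runs):

    # List description, interface_delta, fa_rep per decoy, best interface_delta first
    awk '/^SCORE:/ {
        if (!id) {                       # first SCORE: line is the header
            for (i = 1; i <= NF; i++) {
                if ($i == "interface_delta") id = i
                if ($i == "fa_rep")          fr = i
            }
            next
        }
        print $NF, $id, $fr              # description is the last column
    }' score.sc | sort -g -k2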
I have tried several tweaks of the FLAGS file, and I've tried many different proteins, some with inhibitors and some with native peptides (HIV-1 protease); all results are of the same order of magnitude.
I'm leaving out the co-factors (waters, DMS, and GOL) from the PDB; I only use the ligand (2NC) and the protease.
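The PyMOL part of step two is just this, run headless (2NC is the ligand's residue name in 2AOD; DMS and GOL are the crystallization additives being discarded):

    pymol -cq 2AOD.pdb -d "remove solvent; remove resn DMS+GOL; save ligand.pdb, resn 2NC; save apo.pdb, polymer"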
Am I doing something completely wrong?
Thanks for your time.