I am currently developing a protocol to compute the destabilization of a protein-protein interface upon mutation. I am looking for a methodology that will (a) distinguish a perturbation introduced by a surface mutation from one introduced by a buried mutation, and (b) distinguish a perturbation that destabilizes the interface from one that does not. My general strategy is therefore to compute the binding energy for the wild type and for the mutant and take the difference. Because the initial structures come from crystal structures, I allow the sidechains of both the wild type and the mutant to repack before computing the binding energy.

From perusing the forums and the papers recommended there, I modified or created three different protocols. As far as I can tell, the primary distinction between them lies in the scoring or in the packing used, but I am not clear on the details. In principle they should produce the same trends, if not the same numbers; in practice they do not, and I am left a bit perplexed. I found that protocol 3 produces the qualitative behavior I expected, but at this point I am unsure how it differs from protocols 1 and 2. I know this question has been hashed and rehashed in one form or another, but having spent a lot of time trying to work out why these methods produce wildly different results, I thought it might be useful to clarify the matter in a single spot. Your thoughts and insight on how the protocols differ would be very welcome.
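In symbols, the quantity I am after is the standard difference-of-differences (the partner labels below are generic placeholders for the two sides of the interface):

```
dG_bind(X) = E(complex_X) - [ E(partner1_X) + E(partner2_X) ]
ddG(mut)   = dG_bind(mutant) - dG_bind(wildtype)
```

All three protocols below are, as far as I understand, different ways of estimating this same quantity.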
InterfaceAnalyzer.default.linuxgccrelease @options -s <FILENAME>
The initial sidechains are built externally and a prebuilt PDB is fed in. This was done because, as I understood it, the standalone InterfaceAnalyzer does not handle resfiles. As I understand it, it uses the default score12 weights. Although the mutation is made in an external program, I assumed that since the sidechain is repacked this would not introduce a significant issue.
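For reference, the @options file I use with the standalone app looks roughly like the following (the exact flag names should be checked against the InterfaceAnalyzer documentation for your Rosetta version; the values shown are my assumptions about a typical setup, not a verbatim copy):

```
# options file for the standalone InterfaceAnalyzer app
-database /path/to/rosetta_database
-pack_input true        # repack the complex before scoring
-pack_separated true    # repack the separated partners before scoring
-compute_packstat true  # also report the packstat metric
-use_jobname true
```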
rosetta_scripts.linuxgccrelease @options -s <FILENAME>
<InterfaceAnalyzerMover name="fullanalyze" scorefxn="s12_prime" packstat="1" pack_input="1" pack_separated="1" jump="1" tracer="0" use_jobname="1" resfile="0"/>
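For context, this mover sits inside a rosetta_scripts XML of roughly the following shape. The SCOREFXNS entry is a sketch: only the name s12_prime appears above, so I am assuming it is a score12-derived weight set.

```xml
<ROSETTASCRIPTS>
  <SCOREFXNS>
    <!-- assumed: s12_prime is some variant of the score12 weights -->
    <s12_prime weights="score12"/>
  </SCOREFXNS>
  <MOVERS>
    <InterfaceAnalyzerMover name="fullanalyze" scorefxn="s12_prime"
        packstat="1" pack_input="1" pack_separated="1"
        jump="1" tracer="0" use_jobname="1" resfile="0"/>
  </MOVERS>
  <PROTOCOLS>
    <Add mover="fullanalyze"/>
  </PROTOCOLS>
</ROSETTASCRIPTS>
```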
As I understand it, this protocol differs from the first primarily in that it may allow extra chi angles in the rotamer sampling.
Here we add two delta G filters to calculate the delta G of binding before and after the mutation is made. We specify a jump number of 3 because we want to calculate the delta G of binding the ligand, which is chain D in the given PDB file.
<Ddg name="dg_wt" threshold="1000" repeats="50" jump="1"/>
<Ddg name="dg_mut" threshold="1000" repeats="50" jump="1"/>
Here we add a task operation specifying that we only want to repack residues, without design:
<RestrictToRepacking name="repack_only"/>
Here we specify the location of the resfile to use for design:
<ReadResfile name="resfile" filename="mut.resfile"/>
Here is a mover to relax the crystal structure
Here we pack the rotamers without any design:
<PackRotamersMover name="pack" scorefxn="talaris2013" task_operations="repack_only"/>
Here we pack the rotamers with design, reading the resfile that contains the mutation we want to make:
<PackRotamersMover name="mut_and_pack" scorefxn="talaris2013" task_operations="resfile"/>
Here we include the movers and filters in the order in which we want them to run.
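The PROTOCOLS block itself did not make it into the post above; my assumption of the intended order, following the description (repack the wild type, score it, introduce and repack the mutation, score again), would be:

```xml
<PROTOCOLS>
  <Add mover="pack"/>         <!-- repack the wild-type structure -->
  <Add filter="dg_wt"/>       <!-- delta G of binding, wild type -->
  <Add mover="mut_and_pack"/> <!-- introduce the mutation and repack -->
  <Add filter="dg_mut"/>      <!-- delta G of binding, mutant -->
</PROTOCOLS>
```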
EX 1 EX 2
start
<RESIDUE NUMBER> A PIKAA <RESIDUE CHANGE>
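For concreteness, a filled-in resfile would look like the following. The residue number and target identity here are purely illustrative, and the NATAA default header line is my assumption so that residues not listed in the body are repacked but not designed:

```
NATAA            # assumed default: unlisted residues repack, no design
EX 1 EX 2        # extra rotamer sampling for chi1 and chi2
start
42 A PIKAA W     # illustrative only: mutate residue 42 on chain A to Trp
```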
The last protocol differs from the first two in that mutations are handled with a resfile rather than by reading in a pre-mutated PDB, and it allows several repeats of the initial sampling to be specified (50 in this case) rather than requiring repeated runs from the external command line (not shown) to obtain statistical sampling. Beyond these distinctions, however, I am not clear how exactly it differs from the preceding protocol. As I said above, thoughts and suggestions would be most welcome. Thank you in advance for your consideration.