The minimizer is the Rosetta module that performs gradient-descent minimization. In PyRosetta, this is most easily accessed using the MinMover (https://graylab.jhu.edu/PyRosetta.documentation/pyrosetta.rosetta.protocols.minimization_packing.html#pyrosetta.rosetta.protocols.minimization_packing.MinMover).
Thanks for your quick reply. Can I change this optimization method to something else like stochastic gradient descent?
Rosetta currently doesn't have an option for stochastic gradient descent; since it's very quick to compute the whole gradient vector, there was never any reason to use partial gradients. There are, however, several flavours of minimization, most of which differ in how they approximate the inverse of the second-derivative (Hessian) matrix. (True gradient descent using only gradients is implemented as the "linmin_iterated" minimization type, but it converges slowly and is recommended only for debugging. The default type is "lbfgs_armijo_nonmonotone", a quasi-Newton gradient descent method that uses the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm to approximate the inverse of the Hessian matrix; this converges much more quickly.)
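To see why approximating the inverse Hessian helps so much, here's a toy illustration outside Rosetta: on an ill-conditioned quadratic, plain gradient descent (the linmin-style approach, in spirit) needs thousands of steps, while a step scaled by the inverse Hessian (what L-BFGS approximates) converges almost immediately. The function and step sizes are made up for the demonstration:

```python
# Toy comparison (not Rosetta code): minimize f(x, y) = x^2 + 100*y^2,
# an ill-conditioned quadratic, by two methods.

def grad(p):
    """Gradient of f(x, y) = x^2 + 100*y^2."""
    x, y = p
    return (2.0 * x, 200.0 * y)

def gnorm(g):
    return (g[0] ** 2 + g[1] ** 2) ** 0.5

# 1) Plain gradient descent with a fixed small step.
#    The step must stay below 2/200 = 0.01 for stability, so progress
#    along the shallow x direction is painfully slow.
p = (1.0, 1.0)
gd_steps = 0
while gnorm(grad(p)) > 1e-6 and gd_steps < 100000:
    g = grad(p)
    p = (p[0] - 0.005 * g[0], p[1] - 0.005 * g[1])
    gd_steps += 1

# 2) Newton-style step: scale each gradient component by the inverse
#    Hessian (here diagonal, diag(2, 200)). This is the curvature
#    information that L-BFGS builds up an approximation to.
q = (1.0, 1.0)
newton_steps = 0
while gnorm(grad(q)) > 1e-6 and newton_steps < 100000:
    g = grad(q)
    q = (q[0] - g[0] / 2.0, q[1] - g[1] / 200.0)
    newton_steps += 1

print("gradient descent:", gd_steps, "steps")
print("Newton-style:", newton_steps, "steps")
```

On a quadratic the Newton step is exact, so it lands on the minimum in one iteration; real energy landscapes aren't quadratic, but near a minimum they are approximately so, which is why the L-BFGS flavour converges so much faster than pure gradient descent.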