InvariantPointAttention
Documentation for InvariantPointAttention.
InvariantPointAttention.BackboneUpdate
InvariantPointAttention.IPA
InvariantPointAttention.IPACache
InvariantPointAttention.IPAStructureModuleLayer
InvariantPointAttention.IPCrossA
InvariantPointAttention.IPCrossAStructureModuleLayer
InvariantPointAttention.IPA_settings
InvariantPointAttention.dotproducts
InvariantPointAttention.get_T
InvariantPointAttention.get_T_batch
InvariantPointAttention.get_rotation
InvariantPointAttention.get_translation
InvariantPointAttention.left_to_right_mask
InvariantPointAttention.right_to_left_mask
InvariantPointAttention.right_to_left_mask
InvariantPointAttention.softmax1
InvariantPointAttention.BackboneUpdate — Type
Projects the frame embedding down to 6 dimensions, and uses this to transform the input frames.
InvariantPointAttention.IPA — Type
Strictly self-IPA initialization.
InvariantPointAttention.IPACache — Method
IPACache(settings, batchsize)
Initialize an empty IPA cache.
InvariantPointAttention.IPAStructureModuleLayer — Type
Self-IPA partial structure module initialization (single layer), adapted from AF2.
InvariantPointAttention.IPCrossA — Type
IPCrossA(settings)
Invariant Point Cross Attention (IPCrossA). Information flows from L (Keys, Values) to R (Queries).
Get settings with IPA_settings.
InvariantPointAttention.IPCrossAStructureModuleLayer — Type
Cross-IPA partial structure module initialization (single layer), adapted from AF2. Information flows from left to right.
InvariantPointAttention.IPA_settings — Method
IPA_settings(
    dims;
    c = 16,
    N_head = 12,
    N_query_points = 4,
    N_point_values = 8,
    c_z = 0,
    Typ = Float32,
    use_softmax1 = false,
    scaling_qk = :default,
)
Returns a tuple of the IPA settings, with defaults for everything except dims. This can be passed to IPA and IPCrossAStructureModuleLayer.
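A minimal usage sketch, assuming the package is installed; the embedding dimension 64 is an illustrative value, and only the documented IPCrossA(settings) constructor is shown:

```julia
using InvariantPointAttention

# Settings for a 64-dimensional single representation (64 is illustrative);
# all other hyperparameters keep their AF2-style defaults above.
settings = IPA_settings(64)

# The settings tuple is then passed to an attention layer, e.g. the
# documented IPCrossA constructor:
crossattn = IPCrossA(settings)
```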
InvariantPointAttention.dotproducts — Method
RoPEdotproducts(iparope::IPARoPE, q, k; chain_diffs = nothing)
chain_diffs is either nothing or an array of 0s and 1s, where the entry at ij is 1 if the ij-pair belongs to the same chain, else 0.
InvariantPointAttention.get_T — Method
get_T(coords::Array{<:Real, 3})
Get the associated SE(3) frame for all residues in a protein backbone, represented as a 3x3xL array of coordinates.
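A sketch of calling get_T, assuming the package is installed; random coordinates stand in for a real backbone here, and the residue count 10 is illustrative:

```julia
using InvariantPointAttention

L = 10                            # number of residues (illustrative)
coords = randn(Float32, 3, 3, L)  # 3x3xL backbone coordinates, one 3x3 slice per residue
T = get_T(coords)                 # associated SE(3) frame for every residue
```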
InvariantPointAttention.get_T_batch — Method
Get the associated SE(3) frames for all residues in a batch of proteins.
InvariantPointAttention.get_rotation — Method
get_rotation([T=Float32,] dims...)
Generates random rotation matrices of given size.
InvariantPointAttention.get_translation — Method
get_translation([T=Float32,] dims...)
Generates random translations of given size.
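A sketch of generating random frames with these two helpers, assuming the package is installed; the dims values (10 residues, batch of 1) are illustrative, and the exact output shapes follow from the trailing dims... arguments:

```julia
using InvariantPointAttention

# One random rotation and translation per residue, for a
# length-10, batch-1 example (both sizes illustrative).
rot   = get_rotation(Float32, 10, 1)     # random rotation matrices
trans = get_translation(Float32, 10, 1)  # random translations
```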
InvariantPointAttention.left_to_right_mask — Method
left_to_right_mask([T=Float32,] L::Integer, R::Integer; step::Integer = 10)
InvariantPointAttention.right_to_left_mask — Method
right_to_left_mask([T=Float32,] L::Integer, R::Integer; step::Integer = 10)
InvariantPointAttention.right_to_left_mask — Method
right_to_left_mask([T=Float32,] N::Integer)
Create a right-to-left mask for the self-attention mechanism. The mask is an N x N matrix where the diagonal and the lower-triangular part are set to zero and the upper-triangular part is set to infinity.
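The single-argument right_to_left_mask is described precisely enough to sketch its semantics in a few lines; this is a self-contained illustration of the documented shape, not the package implementation:

```julia
# Sketch of the documented N x N mask: zero on and below the
# diagonal, infinity above it.
right_to_left_mask_sketch(N::Integer; T = Float32) =
    T[j > i ? T(Inf) : zero(T) for i in 1:N, j in 1:N]

mask = right_to_left_mask_sketch(4)
# mask[1, 2] is Inf (upper triangle); mask[2, 1] is 0 (lower triangle)
```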
InvariantPointAttention.softmax1 — Method
softmax1(x, dims = 1)
Behaves like softmax, but as though there were an additional logit of zero along dims (which is excluded from the output), so the values sum to a value between zero and one.
See https://www.evanmiller.org/attention-is-off-by-one.html
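The docstring pins down softmax1's semantics exactly; here is a self-contained sketch for a vector input (not the package implementation, which handles arbitrary dims and numerical stability):

```julia
# softmax1 over a vector: like softmax with an extra implicit zero
# logit, whose (excluded) slot absorbs part of the probability mass.
# exp(0) = 1 accounts for the extra 1 in the denominator.
softmax1_sketch(x::AbstractVector) = exp.(x) ./ (1 + sum(exp.(x)))

y = softmax1_sketch([1.0, 2.0, 3.0])
# sum(y) < 1, unlike standard softmax where it is exactly 1
```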