Hey Folks,
Could someone please share the correct implementation of backprop in Siamese networks? The explanation in the original paper is not super detailed.
I found a random implementation on GitHub (ref). The inputs are passed through the network one after the other, the loss is computed for the last two inputs, and the weights are updated afterwards. Is this the correct implementation?
Another implementation I could think of is to keep two copies of the same network, like a bi-encoder: the two inputs are passed simultaneously, the loss is backpropagated, the weights of both networks are updated, and then both networks' weights are replaced with an aggregate (mean) of the two before the next forward pass.
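To make the question concrete, here's a tiny sketch of what I understand "shared weights" to mean: a single weight matrix used for both branches, with the gradient contributions from each branch simply accumulating before one update. Everything here is a toy illustration (linear encoder, squared-distance loss on a "similar" pair), not code from the paper or the repo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shared "encoder": a single linear map W (illustrative only).
W = rng.standard_normal((4, 8)) * 0.1
x1 = rng.standard_normal(8)  # input pair
x2 = rng.standard_normal(8)

def forward(W, x1, x2):
    z1, z2 = W @ x1, W @ x2   # the SAME W is used for both branches
    diff = z1 - z2
    loss = float(diff @ diff) # squared-distance loss for a similar pair
    return loss, diff

loss, diff = forward(W, x1, x2)

# Backprop: the chain rule gives one gradient term per branch, and
# because the weights are shared the two terms simply add up.
grad_branch1 = 2.0 * np.outer(diff, x1)   # dL/dW through z1
grad_branch2 = -2.0 * np.outer(diff, x2)  # dL/dW through z2
grad_W = grad_branch1 + grad_branch2      # accumulated gradient on shared W

# Sanity check against a numerical gradient.
eps = 1e-6
num = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp = W.copy(); Wp[i, j] += eps
        Wm = W.copy(); Wm[i, j] -= eps
        num[i, j] = (forward(Wp, x1, x2)[0] - forward(Wm, x1, x2)[0]) / (2 * eps)
assert np.allclose(grad_W, num, atol=1e-4)

# One update of the ONE weight matrix -- no second copy to average with.
W -= 0.01 * grad_W
```

If gradients from both branches just accumulate on a single shared weight matrix like this, is the averaging step in my second idea even needed?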
Which one is correct?
Please clarify.