Right way to Fine Tune - Train a fully connected layer as a separate step
I'm fine-tuning CaffeNet and it works really well, but then I read this in a Keras blog entry on fine-tuning (they use a pre-trained VGG16 model):
"in order to perform fine-tuning, all layers should start with properly trained weights:
for instance you should not slap a randomly initialized fully-connected network on top of a pre-trained convolutional base.
This is because the large gradient updates triggered by the randomly initialized weights would wreck the learned weights in the convolutional base.
In our case this is why we first train the top-level classifier, and only then start fine-tuning convolutional weights alongside it."
So, as a separate step in fine-tuning, they save the output of the last layer before the fully connected layers (the "bottleneck features"), train a small fully connected model on those features, and only then put the newly trained fully connected model on top of the whole network and train the last convolutional block alongside it.
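The workflow they describe can be sketched in Keras. This is a minimal illustration, not the blog's actual code: the tiny convolutional base, the layer names, the random data, and the hyperparameters are all placeholder assumptions standing in for a pre-trained VGG16/CaffeNet and a real dataset.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Tiny stand-in for a pre-trained convolutional base (e.g. VGG16).
conv_base = keras.Sequential([
    layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3),
                  name="block1_conv"),
    layers.MaxPooling2D(),
    layers.Conv2D(16, 3, activation="relu", name="block2_conv"),
    layers.MaxPooling2D(),
])
conv_base.trainable = False  # freeze: pretend these weights are pre-trained

# Placeholder data standing in for a real dataset.
images = np.random.rand(8, 32, 32, 3).astype("float32")
labels = np.random.randint(0, 2, size=(8, 1))

# Step 1: run the data through the frozen base once and save the
# output -- the "bottleneck features".
bottleneck = conv_base.predict(images, verbose=0)

# Step 2: train a small fully connected classifier on those features,
# so the top starts from properly trained weights, not random ones.
top = keras.Sequential([
    layers.Flatten(input_shape=bottleneck.shape[1:]),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
top.compile(optimizer="rmsprop", loss="binary_crossentropy")
top.fit(bottleneck, labels, epochs=1, verbose=0)

# Step 3: stack the now-trained top on the base, unfreeze only the
# last conv block, and fine-tune end to end with a small learning rate.
model = keras.Sequential([conv_base, top])
conv_base.trainable = True  # re-enable, then freeze per layer below
for layer in conv_base.layers:
    layer.trainable = layer.name.startswith("block2")
model.compile(optimizer=keras.optimizers.SGD(learning_rate=1e-4),
              loss="binary_crossentropy")
model.fit(images, labels, epochs=1, verbose=0)
```

Note that `conv_base.trainable = True` must be restored before setting per-layer flags: in Keras, a layer is only effectively trainable if its own flag and all of its enclosing models' flags are true.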