Right way to fine-tune: train a fully connected layer as a separate step
I'm using fine-tuning with CaffeNet and it works really well, but then I read this in a Keras blog entry on fine-tuning (they use a pre-trained VGG16 model):
"in order to perform fine-tuning, all layers should start with properly trained weights:
for instance you should not slap a randomly initialized fully-connected network on top of a pre-trained convolutional base.
This is because the large gradient updates triggered by the randomly initialized weights would wreck the learned weights in the convolutional base.
In our case this is why we first train the top-level classifier, and only then start fine-tuning convolutional weights alongside it."
So, as a separate step in fine-tuning, they save the output of the last layer before the fully connected layers (the "bottleneck features"), train a "small fully-connected model" on those features, and only then put the newly trained fully connected model on top of the whole net and train the "last convolutional block" alongside it.
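For concreteness, here is a minimal sketch of that two-step procedure in Keras (TensorFlow 2 style). It assumes a pre-trained VGG16 base, a binary classification task, and dummy `train_images`/`train_labels` arrays that stand in for real data; the `block5` prefix comes from VGG16's layer naming and marks its last convolutional block:

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models, optimizers

# Dummy stand-in data; replace with your own dataset.
train_images = np.random.rand(64, 150, 150, 3).astype("float32")
train_labels = np.random.randint(0, 2, size=(64,))

# Step 1: run the frozen convolutional base once to get the
# "bottleneck features" (the conv base is never trained here).
conv_base = VGG16(weights="imagenet", include_top=False,
                  input_shape=(150, 150, 3))
bottleneck_features = conv_base.predict(train_images)  # shape (N, 4, 4, 512)

# Step 2: train a small fully connected model on those features alone,
# so its weights are no longer randomly initialized.
top_model = models.Sequential([
    layers.Flatten(input_shape=bottleneck_features.shape[1:]),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
top_model.compile(optimizer="rmsprop", loss="binary_crossentropy",
                  metrics=["accuracy"])
top_model.fit(bottleneck_features, train_labels, epochs=2, batch_size=32)

# Step 3: stack the now-trained top model on the conv base, unfreeze only
# the last convolutional block, and fine-tune with a low learning rate.
model = models.Sequential([conv_base, top_model])
conv_base.trainable = True
for layer in conv_base.layers:
    layer.trainable = layer.name.startswith("block5")  # last conv block only
model.compile(optimizer=optimizers.SGD(learning_rate=1e-4, momentum=0.9),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_images, train_labels, epochs=2, batch_size=32)
```

The point mirrors the quote: because the top classifier is first trained to convergence on fixed bottleneck features, the gradients it produces when stacked back onto the base are small enough not to wreck the pre-trained convolutional weights.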