- What would happen to the US economy if taxes exceeded 20% of GDP?
- What is the economic incentive to cheat? How does an experiment capture exogenous deviations?
- Can my character have a pet mammoth?
- Where is the mention of Surya Loka (सूर्य लोकं) in Hinduism?
- mysql_close(): why do few people use it?
- 25x25 slitherlink puzzle
- Is there a quick way to speed up ICP in Python using a cached KD-tree?
- HC-05 with Arduino Uno
- Removing lag in PS3 controller input sent via Bluetooth to an Arduino
- Using an ESP32 for secure sockets
- Best way to organize many PDFs?
- Is EPUB ready for most phones?
- Where is the fuel stored on an aircraft’s wing?
- What are the aileron lengths of commercial and military aircraft?
- Do jet aircraft have an emergency propeller?
- Are there different cabin crew seating configurations for a single aircraft model?
- How do PPL, CPL, and ATPL compare?
- To what extent is remuneration under a PPL enforced in the UK?
- What criteria are used for exiting an airplane in an emergency?
- Getting my dog ready for a new cat
Deep Learning: What are the differences between DeepMind's Learning to Learn method and a grid search of a network's hyperparameters?
If we have a meta-learner that trains an optimizer (which contains certain hyperparameters), and the optimizer is fine-tuned by the meta-learner depending on how it performs, how is this different from an ordinary grid search over the best hyperparameters? One difference I see is that the meta-learner can supposedly find the best optimizer 'intelligently', whereas a grid search is a brute-force method. On the other hand, a grid search would likely incorporate human knowledge about the range of hyperparameters in which the model is likely to perform well.
My current impression is that the meta-learner simply tweaks the optimizer's hyperparameters (which are usually held fixed in other settings) after a certain number of epochs, evaluates the performance, and then dynamically changes how the hyperparameters are tweaked. Is this what the authors of the paper did?
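To make the contrast concrete, here is a minimal toy sketch in Python. Part (1) is a plain grid search over a fixed, human-chosen set of learning rates. Part (2) adapts the learning rate *during* training based on observed progress; note this is a hypothetical heuristic of my own for illustration only, not the paper's actual method (the paper learns the entire update rule with a recurrent network rather than tuning named hyperparameters):

```python
# Toy objective: minimize f(x) = (x - 3)^2 with gradient descent,
# returning the final loss for a given learning rate.
def train(lr, steps=50, x0=0.0):
    x = x0
    for _ in range(steps):
        grad = 2 * (x - 3)  # df/dx
        x -= lr * grad
    return (x - 3) ** 2

# (1) Grid search: brute force over a human-chosen set of learning rates.
# Each candidate requires a full training run.
grid = [0.001, 0.01, 0.1, 0.5]
best_lr = min(grid, key=train)

# (2) A crude adaptive rule that adjusts the learning rate mid-training
# from observed progress -- a toy stand-in for a "meta" mechanism, NOT
# the paper's learned LSTM optimizer.
def meta_train(steps=50, x0=0.0, lr=0.001):
    x = x0
    prev_loss = (x - 3) ** 2
    for _ in range(steps):
        x -= lr * 2 * (x - 3)
        loss = (x - 3) ** 2
        # Grow the learning rate while loss improves, shrink it otherwise.
        lr *= 1.5 if loss < prev_loss else 0.5
        prev_loss = loss
    return prev_loss

print(best_lr, train(best_lr), meta_train())
```

The key distinction the sketch illustrates: grid search evaluates each fixed configuration with a complete training run, while the adaptive variant changes its behavior within a single run based on the training trajectory. The learned optimizer in the paper goes further still: the adaptation rule itself is the output of training, not a hand-written heuristic like the one above.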