Deep Learning: What are the differences between DeepMind's Learning to Learn method and a grid search of a network's hyperparameters?
If we have a meta-learner that trains an optimizer (which contains certain hyperparameters), and the optimizer is fine-tuned by the meta-learner depending on how it performs, how is this different from a usual grid search over the hyperparameters? One difference I can see is that the meta-learner can supposedly find a good optimizer 'intelligently', whereas a grid search is a brute-force method. On the other hand, a grid search would likely incorporate human knowledge about the range of hyperparameters in which the model is likely to perform well.

My current impression is that the meta-learner simply tweaks the hyperparameters of the optimizer (which are usually fixed in many other cases) after a certain number of epochs, evaluates the performance, and then dynamically changes how the hyperparameters should be tweaked. Is this what the authors of the paper did?
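To make the distinction I'm asking about concrete, here is a minimal toy sketch. This is *not* the method from the paper (there the optimizer is an LSTM mapping gradient history to parameter updates, trained by gradient descent through the unrolled optimization); here a single scalar update coefficient `a` stands in for the learned optimizer, and a finite-difference meta-gradient stands in for backpropagating through the unrolled loop. All names (`inner_loss`, `grid_search`, `meta_train`) are mine, purely for illustration:

```python
def inner_loss(a, steps=5):
    """Run `steps` updates w -= a * dL/dw on L(w) = (w - 3)^2, starting at w = 0.

    Returns the final loss, i.e. how well coefficient `a` performs
    as an update rule on this toy objective.
    """
    w = 0.0
    for _ in range(steps):
        w -= a * 2.0 * (w - 3.0)   # dL/dw = 2(w - 3)
    return (w - 3.0) ** 2

# --- Grid search: brute force over a human-chosen, fixed set of values ---
def grid_search(candidates):
    return min(candidates, key=inner_loss)

# --- Meta-learning (toy version): adjust `a` itself, guided by how the
# --- unrolled inner optimization performs, rather than testing a fixed list
def meta_train(a=0.1, meta_lr=0.01, meta_steps=100, eps=1e-4):
    for _ in range(meta_steps):
        # finite-difference estimate of d(inner_loss)/da
        meta_grad = (inner_loss(a + eps) - inner_loss(a - eps)) / (2 * eps)
        a -= meta_lr * meta_grad
    return a

best_fixed = grid_search([0.01, 0.1, 0.3, 0.5, 0.7])
learned = meta_train()
```

The grid search only ever evaluates the listed candidates, while `meta_train` moves continuously through the space, driven by feedback from the unrolled optimization itself; the paper goes further by making the update rule a learned function of the gradient history rather than a fixed scalar, but the "optimize the optimizer by its downstream performance" structure is the same.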