... you should consider grouping many jobs into one. [[META: A package for job farming|META]], [[GLOST]], and [[GNU Parallel]] are available to help you with this.
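As a minimal sketch of the job-grouping idea, a single Slurm job can use GNU Parallel to run many short tasks inside one allocation instead of submitting each task separately. The program name (`my_task`), the input file pattern, and the resource values below are all placeholders, not part of any real site configuration:

```shell
#!/bin/bash
# Hypothetical job script: one allocation runs many short tasks.
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=8
#SBATCH --mem-per-cpu=2G

# Run ./my_task once per input file, keeping at most
# $SLURM_CPUS_PER_TASK tasks running concurrently.
parallel --jobs "$SLURM_CPUS_PER_TASK" ./my_task {} ::: inputs/*.dat
```

This pattern replaces hundreds of tiny scheduler submissions with a single job, which is much friendlier to the scheduler.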
== Experiment tracking and hyperparameter optimization == <!--T:27-->

<!--T:28-->
Note that Comet and Wandb are not currently available on Graham.
== Large-scale machine learning (big data) == <!--T:40-->

<!--T:41-->
Modern deep learning packages like PyTorch and TensorFlow include utilities to handle large-scale training natively, and tutorials on how to do it abound. Scaling classic machine learning (i.e., not deep learning) methods, however, is not as widely discussed and can often be a frustrating problem to solve. [[Large_Scale_Machine_Learning_(Big_Data)|This guide]] contains ideas and practical options, along with tutorials, to tackle training classic ML models on very large datasets.
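One common approach to training classic ML models on data that does not fit in memory is incremental (out-of-core) learning: stream the dataset in chunks and update the model one chunk at a time. The sketch below illustrates the idea with a plain-Python SGD linear regression; the data generator and learning rate are purely illustrative. scikit-learn exposes the same pattern through the <code>partial_fit</code> method of estimators such as <code>SGDClassifier</code> and <code>SGDRegressor</code>.

```python
# Illustrative sketch of out-of-core (incremental) learning:
# only one chunk of the dataset is ever held in memory.
import random

def data_chunks(n_chunks, chunk_size, seed=0):
    """Simulate reading a large dataset chunk by chunk from disk.
    True relation: y = 3*x + 1, plus a little noise."""
    rng = random.Random(seed)
    for _ in range(n_chunks):
        xs = [rng.uniform(-1, 1) for _ in range(chunk_size)]
        ys = [3 * x + 1 + rng.gauss(0, 0.01) for x in xs]
        yield xs, ys

def incremental_sgd(chunks, lr=0.1):
    """One pass of SGD over streamed chunks; the model is updated
    after each sample, so no chunk needs to be revisited."""
    w, b = 0.0, 0.0
    for xs, ys in chunks:
        for x, y in zip(xs, ys):
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

w, b = incremental_sgd(data_chunks(n_chunks=50, chunk_size=100))
print(w, b)  # close to the true parameters 3 and 1
```

With scikit-learn, the inner update would be replaced by a call such as <code>model.partial_fit(X_chunk, y_chunk)</code> inside the chunk loop.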
== Troubleshooting == <!--T:31-->