Translations:AI and Machine Learning/16/en


If your computations are long, you should use checkpointing. For example, if your training time is 3 days, you could split it into three chunks of 24 hours each. This prevents you from losing all of your work in the event of an outage, and gives you an edge in terms of priority, since more nodes are available for short jobs. Most machine learning libraries natively support checkpointing; please see our suggestions about resubmitting jobs for long-running computations. If your program does not natively support checkpointing, we provide a general checkpointing solution.
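As a minimal sketch (not part of the original page), the following PyTorch snippet shows one common way to save a checkpoint after every epoch and resume from it after a restart. The file name <code>checkpoint.pt</code>, the placeholder model, and the epoch count are assumptions for illustration only; adapt them to your own training script and storage location.

<syntaxhighlight lang="python">
import os
import torch
import torch.nn as nn
import torch.optim as optim

CHECKPOINT_PATH = "checkpoint.pt"   # assumed path; point this at your own storage

model = nn.Linear(10, 1)                          # placeholder model for illustration
optimizer = optim.SGD(model.parameters(), lr=0.01)

start_epoch = 0
# Resume from the last checkpoint if one exists.
if os.path.exists(CHECKPOINT_PATH):
    ckpt = torch.load(CHECKPOINT_PATH)
    model.load_state_dict(ckpt["model_state"])
    optimizer.load_state_dict(ckpt["optimizer_state"])
    start_epoch = ckpt["epoch"] + 1

for epoch in range(start_epoch, 100):
    # Dummy training step standing in for a real epoch.
    inputs = torch.randn(32, 10)
    targets = torch.randn(32, 1)
    loss = nn.functional.mse_loss(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Save a checkpoint at the end of every epoch, so an outage
    # or a job time limit costs at most one epoch of work.
    torch.save(
        {
            "epoch": epoch,
            "model_state": model.state_dict(),
            "optimizer_state": optimizer.state_dict(),
        },
        CHECKPOINT_PATH,
    )
</syntaxhighlight>

With this pattern, resubmitting the same job after an outage or a 24-hour time limit simply picks up training from the last saved epoch rather than starting over.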