User contributions for Lucasn1
23 October 2024
- 20:43, 23 October 2024 diff hist +3,005 Apptainer No edit summary
17 September 2024
- 18:05, 17 September 2024 diff hist −1 m Huggingface →Downloading the Model and the Dataset
- 18:04, 17 September 2024 diff hist −4 m Huggingface →Downloading the Model and the Dataset
- 18:01, 17 September 2024 diff hist +12 m Huggingface →Downloading the Model and the Dataset
- 18:00, 17 September 2024 diff hist −12 m Huggingface →Downloading the Model and the Dataset
- 17:55, 17 September 2024 diff hist +5 m Huggingface →Downloading the Model and the Dataset
- 17:54, 17 September 2024 diff hist +7 m Huggingface →Downloading the Model and the Dataset
- 17:52, 17 September 2024 diff hist −11 m Huggingface →Downloading the Model and the Dataset
- 17:46, 17 September 2024 diff hist −2 m Huggingface →Downloading the Model and the Dataset
- 17:07, 17 September 2024 diff hist 0 m Huggingface →Training Large Language Models (LLMs)
- 17:07, 17 September 2024 diff hist −3 m Huggingface →Training Large Language Models (LLMs)
- 17:00, 17 September 2024 diff hist +6,432 Huggingface No edit summary
11 September 2024
- 14:46, 11 September 2024 diff hist −117 m PyTorch No edit summary
- 14:46, 11 September 2024 diff hist −117 m PyTorch No edit summary
- 14:45, 11 September 2024 diff hist −117 m PyTorch No edit summary
- 14:44, 11 September 2024 diff hist −315 m PyTorch No edit summary
18 July 2024
- 15:21, 18 July 2024 diff hist +5 m Dask →Single Node current
17 July 2024
- 15:37, 17 July 2024 diff hist +54 m PyTorch No edit summary
10 May 2024
- 19:34, 10 May 2024 diff hist +11 m Weights & Biases (wandb) No edit summary
12 February 2024
- 21:08, 12 February 2024 diff hist +11 m Dask →Single Node
17 January 2024
- 15:40, 17 January 2024 diff hist 0 m Huggingface →Multi-GPU & multi-node jobs with Accelerate
18 October 2023
- 15:13, 18 October 2023 diff hist +18 m PyTorch No edit summary
3 October 2023
- 18:22, 3 October 2023 diff hist +532 TensorFlow No edit summary
22 September 2023
- 18:11, 22 September 2023 diff hist +1 m Dask No edit summary
19 September 2023
- 15:17, 19 September 2023 diff hist +13 m Dask No edit summary
5 July 2023
- 19:08, 5 July 2023 diff hist −14 m PyTorch No edit summary
- 19:06, 5 July 2023 diff hist −7 m PyTorch No edit summary
- 19:03, 5 July 2023 diff hist −578 m PyTorch No edit summary
- 18:59, 5 July 2023 diff hist 0 m PyTorch No edit summary
- 18:59, 5 July 2023 diff hist 0 m PyTorch No edit summary
- 18:58, 5 July 2023 diff hist −304 m PyTorch No edit summary
- 18:52, 5 July 2023 diff hist −480 m PyTorch No edit summary
26 June 2023
- 15:32, 26 June 2023 diff hist +376 m PyTorch No edit summary
22 June 2023
- 18:09, 22 June 2023 diff hist +91 Weights & Biases (wandb) No edit summary
2 June 2023
- 20:24, 2 June 2023 diff hist +4 m Dask No edit summary
- 20:12, 2 June 2023 diff hist −1 Dask →Multiple Nodes
- 20:04, 2 June 2023 diff hist +5,379 N Dask Created page with "[https://docs.dask.org/en/stable/ Dask] is a flexible library for parallel computing in Python. It provides parallelized NumPy array and Pandas DataFrame objects, and it enables distributed computing in pure Python with access to the PyData stack. ==Installing our wheel== <!--T:15--> The preferred option is to install it using our provided Python [https://pythonwheels.com/ wheel] as follows: :1. Load a Python module, thus <..."
26 May 2023
- 13:02, 26 May 2023 diff hist −5 m Deepspeed →Multi-GPU and multi-node jobs with Deepspeed current
- 13:01, 26 May 2023 diff hist −1 m Deepspeed →Multi-GPU and multi-node jobs with Deepspeed
25 May 2023
- 18:41, 25 May 2023 diff hist +160 m Deepspeed →Multi-GPU and multi-node jobs with Deepspeed
- 18:39, 25 May 2023 diff hist −9 m Deepspeed →Multi-GPU and multi-node jobs with Deepspeed
- 18:16, 25 May 2023 diff hist −53 m Deepspeed →Multi-GPU and multi-node jobs with Deepspeed
- 18:15, 25 May 2023 diff hist +23 m Deepspeed →Multi-GPU and multi-node jobs with Deepspeed
- 18:14, 25 May 2023 diff hist −33 Deepspeed →Multi-GPU and multi-node jobs with Deepspeed
- 18:05, 25 May 2023 diff hist −38 m Deepspeed →Multi-GPU and multi-node jobs with Deepspeed
- 17:34, 25 May 2023 diff hist −204 m Deepspeed →Multi-GPU and multi-node jobs with Deepspeed
- 15:55, 25 May 2023 diff hist +7,809 N Deepspeed Created page with "DeepSpeed is a deep learning training optimization library, providing the means to train massive billion-parameter models at scale. Fully compatible with PyTorch, DeepSpeed features implementations of novel memory-efficient distributed training methods, based on the Zero Redundancy Optimizer (ZeRO) concept. Through the use of ZeRO, DeepSpeed enables distributed storage and computing of different elements of a training task - such as optimizer states, model weights, model..."
- 15:32, 25 May 2023 diff hist +547 Huggingface →Accelerate
- 15:17, 25 May 2023 diff hist +4,751 Huggingface No edit summary
23 May 2023
- 18:47, 23 May 2023 diff hist +226 Huggingface No edit summary