GPU Build
Project Information
- Project Title: GPU Build
- Owner: Oliver Chang, Kyran Adams
- Has project status: Active

Copyright © 2016 edegan.com. All Rights Reserved.
Single vs. Multi GPU
- GTX 1080 Ti Specs
- Since we are using TensorFlow, note that it does not scale well across multiple GPUs for a single model
- Which GPU for deep learning (04/09/2017)
- "I quickly found that it is not only very difficult to parallelize neural networks on multiple GPUs efficiently, but also that the speedup was only mediocre for dense neural networks. Small neural networks could be parallelized rather efficiently using data parallelism, but larger neural networks... received almost no speedup."
- Possible other use of multiple GPUs: training several different models simultaneously, "very useful for researchers, who want to try multiple versions of a new algorithm at the same time."
- This source recommends the GTX 1080 Ti and includes a cost analysis.
- If the network doesn't fit in the memory of one GPU (11 GB), it must be split across multiple GPUs (model parallelism)
- Consider two graphics cards: one dedicated to development/training, and a cheap one to drive the operating system's display [1]
- Intra-model parallelism: If a model has long, independent computation paths, then you can split the model across multiple GPUs and have each compute a part of it. This requires careful understanding of the model and the computational dependencies.
- Replicated training: Start up multiple copies of the model, train them, and then synchronize their learning (the gradients applied to their weights & biases).
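Replicated training can be sketched without any framework at all. The toy below, assuming a simple linear model and NumPy in place of real GPU kernels, shows the compute-then-synchronize loop: each replica computes a gradient on its own mini-batch, and the averaged gradient is applied to the shared weights.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)  # shared weights, identical across replicas

def replica_gradient(w, X, y):
    # Mean-squared-error gradient for one replica's mini-batch.
    return 2 * X.T @ (X @ w - y) / len(y)

# Two "GPUs", each holding its own mini-batch.
batches = [(rng.normal(size=(8, 3)), rng.normal(size=8)) for _ in range(2)]

for step in range(200):
    # On real hardware, the replicas compute these gradients in parallel.
    grads = [replica_gradient(w, X, y) for X, y in batches]
    # Synchronization step: average the gradients, then update the shared weights.
    w -= 0.05 * np.mean(grads, axis=0)
```

The averaging step is the synchronization cost that limits speedup: every replica must finish its batch before the update can be applied.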
TL;DR
Pros of multiple GPUs:
- Able to train multiple networks at once (either copies of the same network or modified networks). Allows long experiments to keep running while new ones are started
- Possible speedups if the network is big enough and can be split up, but TensorFlow is not great at this
- More memory for huge batches (not sure if necessary)
Cons of multiple GPUs:
- Adds a lot of complexity.
Misc. Parts
- Cases: Rosewill 1.0 mm Thickness 4U Rackmount Server Chassis, Black Metal/Steel RSV-L4000[2]
- DVDRW (Needed?): Asus 24x DVD-RW Serial-ATA Internal OEM Optical Drive DRW-24B1ST [3]
- Keyboard and Mouse: AmazonBasics Wired Keyboard and Wired Mouse Bundle Pack [4]
- Optical drive: HP - DVD1265I DVD/CD Writer [5]
Other Builds/Guides
- Deep learning box for $1700 (Discussion)
- A Full Hardware Guide to Deep Learning
- Cheap build
- How to build a GPU deep learning machine
- Deep Learning Computer Build (useful tips, but long)
- Another box
Questions to ask:
- Approx. dataset/batch size
- Network card?
Double GPU Build
Motherboard
- Should have enough PCIe slots
- Motherboards: MSI - Z170A GAMING M7 ATX LGA1151 Motherboard [6]
CPU/Fan
- At least one core (two threads) per GPU
- Chips: Intel - Core i7-6700 3.4GHz Quad-Core Processor [7]
- CPU Fans: Cooler Master - Hyper 212 EVO 82.9 CFM Sleeve Bearing CPU Cooler [8]
- Chosen because it is very cheap given its strong reviews, and the stock cooler for this CPU has mixed reviews
GPU
- 2x GTX 1080 Ti [9]
- Integrated graphics on CPU: Intel HD Graphics 530
RAM
- At least as much system RAM as total GPU memory (2 × 11 GB [GTX 1080 Ti] = 22 GB, so 32 GB)
- Does not have to be fast for deep learning: "CPU-RAM-to-GPU-RAM is the true bottleneck – this step makes use of direct memory access (DMA). As quoted above, the memory bandwidth for my RAM modules are 51.2GB/s, but the DMA bandwidth is only 12GB/s!"[10]
- Crucial - 32GB (2 x 16GB) DDR4-2133 Memory [11]
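The sizing arithmetic behind the two points above can be checked directly. The 12 GB/s DMA figure is the one quoted from the source; the batch shape is an illustrative assumption.

```python
GPU_MEM_GB = 11                 # GTX 1080 Ti memory, per card
N_GPUS = 2

# RAM rule of thumb: match total GPU memory, then round up to a standard kit.
min_ram_gb = N_GPUS * GPU_MEM_GB         # 22 GB, so buy a 32 GB kit

# Time to push one batch of 256 float32 images (224x224x3, an assumed
# shape) from CPU RAM to GPU RAM at the quoted 12 GB/s DMA bandwidth.
batch_bytes = 256 * 224 * 224 * 3 * 4    # ~154 MB
dma_seconds = batch_bytes / 12e9         # ~13 ms per batch
```

At roughly 13 ms per batch, the transfer is comparable to the compute time of a small network, which is why DMA bandwidth, not RAM speed, is the bottleneck.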
PSU
- Some say the PSU should be rated at 1.5x-2x the system wattage; others say system wattage + 100W
- I picked a rating that leaves room to add another GPU or other components without replacing the PSU
- PSU: Rosewill - 1200W 80+ Platinum Certified Fully-Modular ATX Power Supply [12]
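Applying the 1.5x rule to this build's nominal TDPs (250 W per GTX 1080 Ti, 65 W for the i7-6700; the 100 W allowance for everything else is a rough assumption) shows why a 1200 W unit leaves headroom for a third GPU.

```python
GPU_TDP_W = 250     # GTX 1080 Ti, per card (manufacturer TDP)
CPU_TDP_W = 65      # Intel Core i7-6700 (manufacturer TDP)
OTHER_W = 100       # motherboard, RAM, drives, fans (rough allowance)

system_w = 2 * GPU_TDP_W + CPU_TDP_W + OTHER_W   # 665 W
recommended_w = 1.5 * system_w                   # ~1000 W by the 1.5x rule

# A third GPU would raise the nominal load to ~915 W, still under 1200 W.
three_gpu_w = system_w + GPU_TDP_W
```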