Nand Kishor is the Product Manager of House of Bots. After finishing his studies in computer science, he ideated and re-launched a Real Estate Business Intelligence Tool, where he created one of the leading business intelligence tools for property price analysis in 2012. He also writes, researches, and shares knowledge about Artificial Intelligence (AI), Machine Learning (ML), Data Science, Big Data, Python, etc.
Deep Confusion: Misadventures in Building a Deep Learning Machine
I remember when I was first inspired to build a dedicated deep learning box.
I had just stumbled across Lukas Biewald's post from the O'Reilly AI newsletter on how to "build a super fast deep learning machine for under $1000." The thought of building a dedicated machine hadn't even occurred to me. At the time, I didn't know how to use TensorFlow and couldn't properly explain to you the mathematics of backprop. But $1000 seemed like a reasonable and reachable budget for experimentation.
Shortly after, Jeremy Howard and Rachel Thomas of Fast.ai took a chance on me and offered me a spot in their Deep Learning Part II course.
This was a massive leap of faith on their part. The prerequisites for the course were Deep Learning Part I - which covered common CNN architectures like VGG, Inception, and ResNet, as well as word embeddings, RNNs, and basic NLP tasks - on top of a minimum of one year working in a coding-based position.
As an experience designer and product strategist, my professional work involves empathizing with users who are frustrated with technology and conducting IDEO-style "design jams" to make them less frustrated. My recent coding experience was limited to tweaking WordPress themes, cobbling together an iOS app that barely eked past Apple quality control, and shipping crappy Alexa Skills that inflict corny science jokes on unsuspecting Amazon Echo owners.
Oh, and I knew jack squat about deep learning.
Given this (lack of) background, you might understand why taking a "hardcore AI" class intimidated me. I desperately marathoned through all the Deep Learning Part I MOOC videos in a weekend and watched an inordinate number of Khan Academy lessons to remind myself of pesky math principles I'd long forgotten.
But if I can do it, you can do it too. Even before class started, Jeremy encouraged students "who had gotten this far in deep learning" to build their own servers and avoid forking over hundreds of dollars each month to AWS for their slow-ass P2 instances.
Thus began my epic journey to build Deep Confusion, a box which I named after my typical mental experience when I try to understand anything Geoffrey Hinton says.
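To get a feel for the economics behind Jeremy's advice, here's a rough break-even sketch comparing a dedicated box to renting AWS capacity. All figures are assumptions for illustration: a ~$3,000 Titan X build and the circa-2017 on-demand rate of roughly $0.90/hour for a p2.xlarge.

```python
# Rough break-even estimate: dedicated deep learning box vs. renting AWS P2 time.
# All numbers are assumptions: ~$3,000 box cost, ~$0.90/hr p2.xlarge on-demand.

def breakeven_months(box_cost, aws_hourly_rate, hours_per_month):
    """Months of AWS usage after which the box pays for itself."""
    monthly_aws_cost = aws_hourly_rate * hours_per_month
    return box_cost / monthly_aws_cost

# e.g. training ~200 hours a month:
months = breakeven_months(box_cost=3000, aws_hourly_rate=0.90, hours_per_month=200)
print(f"Break-even after ~{months:.1f} months")  # ~16.7 months
```

Heavier usage shortens the payback period, which is why students running long training jobs came out ahead with their own hardware.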
DOING THE RESEARCH
Luckily, many others have gone through the process and shared their wisdom in detailed articles. Here are the resources I based my choices on:
Lukas Biewald - Build a super fast deep learning machine for under $1,000
Tim Dettmers - A full hardware guide to deep learning
Roelof Pieters - Building a deep learning (dream) machine
Brendan Fortuner - Building your own deep learning box
Joseph Redmon - Hardware guide: neural networks on GPUs
My fellow Fast.ai students warned me that the hardest part is picking the parts. I didn't believe them since there were plenty of detailed configuration lists online, but hardware moves so fast that you'll want to conduct your own research before making any commitments. NVIDIA's 1080 Ti was announced soon after many of my compatriots had already made their GPU choices, causing much buyer's remorse.
BUYING THE PARTS
So, for what it's worth, here's my final parts list. I started off with a single Titan X Pascal GPU, but designed the computer to accommodate multiple GPUs in the future. By the time you read this, you'll likely be able to find superior hardware configurations, so don't neglect your research!
GPU - NVIDIA GeForce Titan X Pascal
(Warning, this Batmobile of a consumer GPU is so monolithic that it blocks multiple PCIe lanes on my motherboard when installed. In theory, my mobo supports up to 4x GPUs. In reality, it can only accommodate a maximum of 2x Titan X Pascals)
CPU - Intel Core i7-5820K Haswell-E 3.3 GHz
CPU Cooler - Cooler Master Hyper 212 EVO
Motherboard - MSI X99A SLI Plus
Memory - Corsair Vengeance 16GB (2 x 8GB)
SSD - Samsung 850 EVO 2TB
(I decided to splurge on a larger SSD since I plan to do vision work with ImageNet and liked the idea of fitting the entire dataset on one fast drive)
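For a sense of how much headroom that 2 TB buys, here's a back-of-the-envelope capacity check. The dataset size is an assumption: the ILSVRC2012 subset of ImageNet is roughly 150 GB of JPEGs, and a decompressed or augmented working copy might be several times that.

```python
# Back-of-the-envelope check: does a dataset fit on the SSD with room to spare?
# Sizes are assumptions: ILSVRC2012 ImageNet is roughly 150 GB (train + val JPEGs).

def fits_on_drive(dataset_gb, drive_tb, reserve_fraction=0.1):
    """True if the dataset fits, leaving a reserve fraction of the drive free."""
    usable_gb = drive_tb * 1000 * (1 - reserve_fraction)
    return dataset_gb <= usable_gb

print(fits_on_drive(dataset_gb=150, drive_tb=2))      # True
print(fits_on_drive(dataset_gb=150 * 4, drive_tb=2))  # True, even 4x working copies
```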
HD - WD Red 3TB
Power Supply - EVGA 1000GQ 80+ Gold
Case - Rosewill Thor 2 ATX Full Tower
(This roomy case is also great for VR setups. Soon I plan to buy a Vive and make Deep Confusion a box for both deep learning and deep forgetting)
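Once a box like this boots, running `nvidia-smi` is the usual first sanity check that the GPU is recognized. A minimal sketch of parsing the output of a query like `nvidia-smi --query-gpu=name,memory.total --format=csv,noheader` - the sample output below is hardcoded as an assumption, so no GPU is needed to run it:

```python
import csv
import io

def parse_gpu_query(output):
    """Parse nvidia-smi CSV query output into (name, total_memory) tuples."""
    reader = csv.reader(io.StringIO(output))
    return [(row[0].strip(), row[1].strip()) for row in reader if row]

# Hypothetical sample output for a single Titan X Pascal:
sample = "TITAN X (Pascal), 12288 MiB\n"
print(parse_gpu_query(sample))  # [('TITAN X (Pascal)', '12288 MiB')]
```

If the card shows up here with its full memory, the physical install and driver setup are in good shape.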