How to Implement the Deep Learning Workflow in Your Organisation

By Jyoti Nigania | Feb 14, 2019

Deep learning, or using massive amounts of data to build intelligent models, is a hot topic. Many companies, now seeing the benefits of AI materialize, have decided they need to get started on deep learning or risk getting left behind.
Here are the five main steps of deploying a deep learning workflow.
1. Identify Business Problems: Computers are great at optimizing models, but not so great at setting strategic goals. This is where humans add a lot of value right at the beginning. If you're in a big organization, it's important to get buy-in on the goals from your executives and data scientists. Together, you can look for areas or use cases that can benefit from deep learning, whether in support of solving a pressing challenge or tapping into a new opportunity.
After coming up with one to three pilot projects, put together goals that are as specific, relevant, attainable, measurable, and timely as possible. The reason to generate multiple ideas up front is to leverage the experience of the gathered team and decide on a strategy to pivot quickly, in case results are not forthcoming from the first pilot.
Stay mindful of any regulatory aspects of your project, which is especially important with the implementation of GDPR. This affects not only what data you can collect but also how long you can store it. For example, if you're in insurance, industry policies require you to explain why your tools reject a client's claim, so you might want to use a deep learning tool together with decision-tree models, a traditional machine learning method, to help with the explainability and interpretability of the decision.
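For illustration, here is a minimal sketch of the kind of interpretable decision-tree model that could sit alongside a deep learning system, assuming scikit-learn and a purely hypothetical claims data set:

```python
# A minimal sketch: training an interpretable decision tree on hypothetical
# claims data so individual rejections can be explained with plain rules.
# Assumes scikit-learn is installed; features and labels are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["claim_amount", "policy_age_years", "prior_claims"]

# Hypothetical historical claims: features and approve (1) / reject (0) labels.
X = np.array([[1200, 3, 0], [9800, 1, 4], [450, 7, 1], [15000, 0, 2]])
y = np.array([1, 0, 1, 0])

# A shallow tree keeps the decision logic small enough to read and audit.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

# Human-readable rules that can accompany each automated decision.
print(export_text(tree, feature_names=feature_names))
```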
2. Build a Data Strategy: Having the right data strategy has always been one of the most challenging steps. Data is the experience from which models learn. There are two ways we can gather data:
Leverage publicly available data sets. There are many public data sets on GitHub and Kaggle that you can use to get started. Some developers mistrust public data sets, with skepticism around the quality of such data.
Build your own using available domain expertise or outsource that effort.
If the quality and relevance of the data set are important to you, you need to be the curator of your own experience. You can leverage domain experts to label existing data or outsource to startups for data labeling. Build the data culture that's relevant to you.
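As a starting point, here is a minimal sketch of pulling in a public data set and running basic quality checks, assuming pandas and a CSV downloaded from Kaggle or GitHub (the file name and column names are hypothetical):

```python
# A minimal sketch: loading a public data set and running basic quality checks
# before deciding whether it needs further curation or labeling.
# Assumes pandas is installed; "claims.csv" and its columns are hypothetical.
import pandas as pd

df = pd.read_csv("claims.csv")

# Quick look at size, column types, and missing values.
print(df.shape)
print(df.dtypes)
print(df.isna().sum())

# Drop exact duplicates and rows missing the label column before training.
df = df.drop_duplicates()
df = df.dropna(subset=["label"])
```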
In the deep learning space, data curation, the management of data throughout its life cycle from creation to storage to archiving, is an iterative process, especially during evaluation. That means where you curate data is now less important compared with traditional big data and analytics best practices. It's more important to make sure you have the right set-up and filters for how you work with the data.
3. Build and Train Models: After you have data, the next step is to build and train models. Depending on the expertise within your team, you may be able to develop a new model architecture that is ideally suited to your business challenge. More often, data scientists will scour the literature and open-source community for existing model building blocks that let them quickly experiment, understand the weaknesses of existing solutions, and then build a novel, performant architecture.
One simple rule of thumb for guiding experimentation is to start with a large model to prove that good results are possible. Once that milestone is reached, it is much simpler to iteratively scale back the size of the model while maintaining high-quality performance.
Fortunately, for modern data scientists, it has never been easier to get started with model building, training, and evaluation. Deep learning frameworks (e.g., TensorFlow, PyTorch, MXNet), together with NVIDIA software libraries, offer high-level programming interfaces that abstract the hardware and make building neural network architecture graphs very intuitive.
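As an illustration, here is a minimal sketch of defining and training a small network with tf.keras, one of the high-level interfaces mentioned above; the data here is random and purely for demonstration:

```python
# A minimal sketch: building and training a small network with tf.keras.
# The data is random and purely illustrative; a real project would start from
# the curated data set assembled in step 2.
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 1,000 samples with 20 features, binary labels.
X = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000,))

# Start with a comfortably large model to prove good results are possible,
# then iterate on a smaller architecture once that milestone is reached.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```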
4. Evaluate Model Accuracy: As soon as a model is built and trained, it is critical to evaluate its quality. This is where data scientists will need to look for patterns in the types of errors made by the model to understand how to iteratively improve its performance.
Two simple questions to ask early include:
Is your model failing to learn on the training set? If so, it may be a case of underfitting, and the remedy may be a larger model or perhaps a novel architecture family.
Is your model doing well on training data but failing to generalize to unseen test data? If so, it is likely overfitting, and it may be important to increase the size of your training data. A minimal sketch of this train-versus-test comparison follows the list.
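Here is that sketch, assuming the tf.keras model from the previous step together with illustrative held-out arrays X_test and y_test:

```python
# A minimal sketch: comparing training and test metrics to spot underfitting
# (poor on both) versus overfitting (good on train, poor on test).
# Assumes the tf.keras `model` from step 3 and illustrative held-out arrays.
train_loss, train_acc = model.evaluate(X, y, verbose=0)
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=0)

print(f"train accuracy: {train_acc:.3f}, test accuracy: {test_acc:.3f}")

# The thresholds below are illustrative, not fixed rules.
if train_acc < 0.70:
    print("Low training accuracy: possible underfitting; try a larger model.")
elif train_acc - test_acc > 0.10:
    print("Large train/test gap: possible overfitting; gather more data.")
```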
5. Deploy Trained Models: After you've gathered the data, built the models, and evaluated them, then comes deployment. At this step, there's a lot to consider: Do you deploy the solution on-premises or in the cloud? You might want to deploy in the cloud if you're targeting a global presence; you may prefer on-premises for data privacy and cost reasons.
What platform do I optimize for? Mobile? Medical equipment? In a car? Take the example of deploying on mobile phones. One of the biggest limitations is battery life. If you're trying to fit within that limited battery life, you might want to reduce your model's complexity or adjust its parameters, as sketched below. Sometimes you can get a 5-20x decrease in parameters with only a 1-2% performance hit.
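One common way to shrink a model for phones is post-training quantization. Here is a minimal sketch with the TensorFlow Lite converter, assuming the trained model has been exported as a SavedModel (the directory name is hypothetical):

```python
# A minimal sketch: post-training quantization with TensorFlow Lite to reduce
# model size and power draw for mobile deployment.
# Assumes a SavedModel exported to "saved_model_dir" (name is hypothetical).
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization

tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```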
One widely used deployment tool is NVIDIA TensorRT, a high-performance deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications.
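Here is a minimal sketch of handing a trained TensorFlow SavedModel to TensorRT through the TF-TRT integration, assuming an NVIDIA GPU with TensorRT installed (directory names are hypothetical):

```python
# A minimal sketch: optimizing a TensorFlow SavedModel with TensorRT via the
# TF-TRT integration. Assumes an NVIDIA GPU with TensorRT installed; the
# directory names are hypothetical.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(input_saved_model_dir="saved_model_dir")
converter.convert()                    # builds a TensorRT-optimized graph
converter.save("trt_saved_model_dir")  # ready to load for low-latency serving
```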

Source: HOB