Making Deep Learning User-Friendly, Possible?

By Nand Kishor | Apr 7, 2018

Here, I will present efforts being made to make Deep Learning (part of Machine Learning) more user-friendly, so that it becomes easier for companies to use. Hopefully these efforts will help reduce the "struggle" companies face when they dip into the depths of Deep Learning.

So, is it possible to make Deep Learning (MLP, CNN, RNN, LSTM, GAN, DRL, ...) more user-friendly? More user-friendly in the way that going from MS-DOS to Windows 2.0 was (remember MS-DOS? the CP/M clone), or going from an Integrated Development Environment (IDE) to a Graphical Programming Environment (GPE).

Well, everything is possible ... if you put enough effort into it.

Deep Learning has already been made a lot easier to use
A lot of effort has already been made to make Deep Learning easier to use:

  • DL frameworks (Theano, Caffe, Tensorflow, MXNet)
  • Meta-frameworks or APIs (Keras, ONNX?)
  • Open models (ResNet, LeNet, VGG16, SqueezeNet, ...) that can be used for transfer learning
  • Jupyter Notebooks


Thanks to the pervasive Open Source culture in Deep Learning, all these are readily and freely available. And now that the Open culture has extended to DL models, very high-performance neural networks have been made available to everyone through Deep Learning model zoos. These make transfer learning a "breeze".
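
As an illustration of how "breezy" this has become, here is a minimal transfer-learning sketch in Keras, using the pre-trained VGG16 model from its model zoo (the 10-class head is a hypothetical target task, and a TensorFlow backend is assumed):

```python
# A minimal transfer-learning sketch with Keras and a model-zoo network
# (assumes a TensorFlow backend; the 10-class head is a hypothetical target task).
from keras.applications import VGG16
from keras.layers import Dense, Flatten
from keras.models import Model

num_classes = 10  # hypothetical number of classes in the target task

# Load VGG16 pre-trained on ImageNet, without its classification head.
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Freeze the convolutional base so only the new head gets trained.
for layer in base.layers:
    layer.trainable = False

# Add a fresh classifier head for the target task.
x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
predictions = Dense(num_classes, activation='softmax')(x)

model = Model(inputs=base.input, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```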

There are also other efforts to make it very easy to use Deep Learning's potential without any development effort, or any need for a DS/DL department or dedicated engineers:

  • Object recognition Web Services (YOLO, ...)
  • Google AutoML service

But these provide Black Box services ... well, actually a Black Box inside another Black Box. Some companies might not care, and will find the service very useful/economical, but some will care and will want to master the Deep Learning process.

User-friendliness? Cool!!! But what for?
Still, right now, building a Deep Learning model involves quite a bit of programming with one of the current Deep Learning frameworks (Theano, Tensorflow, PyTorch, CNTK, Caffe2, ...) or meta-APIs (Keras) and a programming language (Python, Java, R, ...). It is usually done by fairly advanced users who have been specifically trained.
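
As a point of reference, even a small CNN written with Keras, one of the friendlier APIs, already looks like this (a minimal sketch assuming MNIST-style 28x28 grayscale inputs and 10 classes):

```python
# A minimal CNN in Keras, to give an idea of the programming involved
# (sketch assumes MNIST-style 28x28 grayscale inputs and 10 classes).
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
# ... and that is before loading the data, training, monitoring and saving.
```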

So Deep Learning user-friendliness could take the form of:

  • No programming -> better, especially for people who don't program much or don't know the specific Deep Learning frameworks
  • An intuitive graphical editor to design/modify Deep Learning architectures -> easier to use and better suited to reach less advanced users

Other motivations for going the user-friendly way are that it could yield:

  • better productivity - users are more efficient and creative
  • models that are more easily understandable and easier to modify/maintain.

That means user-friendliness could also come from:

  • having an efficient development/experiment environment
  • an integrated environment to perform model inference
  • DL model performance monitoring
  • easy deployment of the trained models
  • automation in the management of the files produced by the DL model construction process.

Target Users
User-friendliness comes with restricted flexibility in building custom DL models. It can work if the cursor between the two opposing aims, customizability and ease of use, is properly positioned. But the position of the cursor depends on the type of user.

Who's likely to need user-friendliness:

  • students / self-learners / teachers
  • SME companies
  • Engineers / Researchers who want to master the technology but need a tool that will make them more productive

-> they need to build new DL models faster and modify them faster
-> they need to run a lot of experiments: trying different architectures, optimizing hyper-parameters, tuning datasets, ...

Who is less likely to need it:

  • Researchers in Deep Learning
  • Deep Learning engineers, especially in advanced uses and production contexts

The User-Friendly Deep Learning tools
So far, a few attempts have been made to make Deep Learning more user-friendly. Here, I will present three that have gone quite a long way toward getting there:

  • Deep Learning Studio from DEEP COGNITION
  • Neural Network Console from SONY
  • Neural Network Modeler by IBM (previous names: IBM Deep Learning IDE and DARVIZ)

All of these tools have a graphical editor to edit Deep Learning models. They all allow building a Deep Learning architecture from DL layers like 2D-Convolution, 2D-MaxPooling, ...

Two other candidates for user-friendliness will not be reviewed since, as of now, they don't provide a graphical editor to edit DL models:

  • NVIDIA Digits
  • Tensorboard (part of the Tensorflow environment)

First, a bit of History
Back in the old days (the 90s), when Neural Networks were still in the SHALLOW era, there were Neural Network simulators with GUIs like:

  • SN Neural Network Simulator from Neuristique (http://leon.bottou.org/projects/neuristique), in which Yann LeCun was involved
  • SNNS (http://www.ra.cs.uni-tuebingen.de/SNNS/) from the Institute for Parallel and Distributed High Performance Systems (IPVR) of the University of Stuttgart
  • and others.

These already tried to make Neural Network modeling more user-friendly. I actually used SNNS for my first steps into the SHALLOW NN field. It was a great Open Source tool to play around with.

After the SHALLOW era, neural networks jumped in at the deep end and entered the DEEP LEARNING era.

DEEP LEARNING STUDIO (DLS) by Deep Cognition
Deep Learning Studio is a very interesting platform that has two operation modes: cloud and desktop.

DLS is available here: http://deepcognition.ai/

Deep Learning Studio GRAPHICAL EDITOR
DLS has the essential user-friendly ingredient -> a graphical Deep Learning model editor.

The basic ingredients of the DL model editor are the layers (see left panel) that make up a DL model. To build the convolution part of a CNN, layers like Convolution2D, MaxPooling2D and BatchNormalization are available. All the layers defined in the KERAS API are available, plus a few others. The model is built by dragging and dropping these layers onto the editor workspace and defining the connection graph between them. The parameters of each layer can be set by selecting the layer and then setting the values in a side panel on the right of the editor screen.

An example of building a DL model with DLS can be seen in this video: https://player.vimeo.com/video/198088116

Each time a layer is added or a parameter of a layer is changed, a background process checks that the network is "coherent". That way, one is warned early on if one is side-tracking into building an "impossible" model.
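
To give a flavour of what such a coherence check involves (this is a hypothetical sketch of the general idea, not DLS internals), consider validating that stacked convolutions still produce a valid output shape:

```python
# Hypothetical sketch of an incremental "coherence" check (NOT DLS internals):
# verify that a stack of 2D convolutions still yields a valid output shape.
def conv2d_output_shape(h, w, kernel, stride=1, padding=0):
    out_h = (h + 2 * padding - kernel) // stride + 1
    out_w = (w + 2 * padding - kernel) // stride + 1
    return out_h, out_w

def check_conv2d(h, w, kernel, stride=1, padding=0):
    out_h, out_w = conv2d_output_shape(h, w, kernel, stride, padding)
    if out_h <= 0 or out_w <= 0:
        raise ValueError("Incoherent model: %dx%d input is too small "
                         "for a %dx%d kernel" % (h, w, kernel, kernel))
    return out_h, out_w

# Stacking unpadded 5x5 convolutions on a 12x12 input: 12 -> 8 -> 4 -> error,
# so the editor can flag the third layer as soon as it is dropped in.
h, w = 12, 12
for i in range(3):
    h, w = check_conv2d(h, w, kernel=5)  # raises on the third layer
```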

So Deep Learning Studio has what it takes to make Deep Learning more user-friendly.

Basic characteristics
  • Makes designing/modifying DL architectures easier
  • Provides all the DL layers of the Keras API, plus a few technical ones (e.g. merge)
  • Drag'n'drop and copy/paste help in building large networks
  • Allows for easy configuration of each layer
  • Automatic checking of the coherence of the constructed DL network
  • Pre-trained KERAS layers. SqueezeNet is available! The last fully connected layer can be retrained, making it easy to implement transfer learning

Advanced characteristics
But DLS has more features that go beyond providing a simple editor and provide tight integration with the rest of the environment. Some of these are:

  • The Restore Model feature (which actually also restores the data preprocessing and hyper-parameters) is very useful when making several trials of data preprocessing, architecture and learning hyper-parameters
  • integration of pre-trained Keras models
  • AutoML (see further down)

The DLS ENVIRONMENT
The graphical editor does not come alone. There are five more parts (accessible as tabs) in the DLS environment:

  • DATA - to load the datasets and pre-process them
  • HYPERPARAMETERS - to edit the training hyper-parameters
  • TRAINING - to start/stop and monitor training
  • RESULTS - to analyse and compare results from several experiments
  • INFERENCE / DEPLOY - to test the model and deploy it

The graphical DL model editor itself is in the MODEL tab. All these functionalities deal with building DL models.

Further to that, there are other sections of the environment that provide the extra touch that takes the tool into another dimension:

  • Notebook - to use/program Jupyter notebooks
  • Environments - to manage and use ML environments from the command line
  • Deployments - to manage the deployed DL models constructed with DLS
  • plus a few more practically oriented sections:

  • Projects - to browse and access projects
  • File Browser - to manage the files in the environment (cloud or local)
  • Datasets - to preprocess and load datasets so they are available as model inputs
  • Forum - to get help from support and other DLS users
  • Videos - to access training videos
  • Support - to get support from Deep Cognition

So the DLS solution does provide quite a bit more than just a DL model editor. This is not the place to present the full DLS environment, so I will just point out a few functionalities that are well thought-out and pretty useful:

  • Having several projects training at the same time while still being able to work on other projects
  • AutoML
  • Jupyter notebooks and "command line" programming are provided
  • the possibility to define custom loss functions (objective functions) - see the sketch after this list
  • the cloud and desktop versions are the "same" (apart from maybe a few technical differences) and, further to that, the desktop version is available for FREE.
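
Custom loss functions are normally one of the things that force you back into code; in Keras, such a loss is just a short function over backend tensors (a minimal sketch; the squared-log error here is purely an illustration):

```python
# A minimal sketch of a custom loss function in Keras
# (a squared-log error, used here purely as an illustration).
from keras import backend as K

def squared_log_error(y_true, y_pred):
    # Penalizes large errors less aggressively than plain MSE.
    return K.mean(K.square(K.log(1.0 + y_true) - K.log(1.0 + y_pred)), axis=-1)

# The custom loss is then passed to compile like any built-in one:
# model.compile(optimizer='adam', loss=squared_log_error)
```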

AutoML
It automatically generates an easily trainable DL architecture for a specific dataset. It's not the same approach as Google AutoML (https://techcrunch.com/2018/01/17/googles-automl-lets-you-train-custom-machine-learning-models-without-having-to-code/). I was able to test it on a leaf shape recognition dataset and it works very well.

Production time - Single-Click REST-API Deployment
Once the model is built, DLS allows deploying it as a REST API. In addition to the deployed REST API, a simple form-based web application is also generated and deployed for rapid testing and sharing. Deployed models can be managed from the deployment menu.
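
Once deployed, such a REST API can be queried from any HTTP client, for instance with Python's requests library (the URL and payload below are hypothetical placeholders, not the actual DLS endpoint format):

```python
# Hypothetical client call to a deployed model's REST API; the URL and
# JSON payload below are illustrative placeholders, not the actual DLS format.
import requests

resp = requests.post(
    'https://example.com/models/leaf-classifier/predict',  # placeholder URL
    json={'inputs': [[0.12, 0.48, 0.33]]},                 # placeholder features
)
resp.raise_for_status()
print(resp.json())  # e.g. predicted class and probabilities
```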


Jupyter Notebook and Pre-configured Environments
Jupyter Notebook
DLS provides the possibility to program inside a Jupyter Notebook or to run existing notebooks in the environments provided (desktop or cloud).


Pre-configured Environments
Deep Cognition has introduced pre-configured environments for deep learning programmers. This feature frees AI developers from the headache of setting up development environments. This is especially important as many deep learning frameworks and libraries require different versions of packages. Conflicts between these package versions often lead to time wasted in debugging.

Currently the latest versions of Tensorflow, Keras, Caffe 2, Chainer, PyTorch, MXNet, and Caffe are available. These enable developers to get various GitHub AI projects running very fast. These environments are isolated and support both CPU and GPU computing.

Ultimately these free up developers' time from devops work and help them focus on real AI model building and optimization work.

Pre-configured environments in Deep Learning Studio give access not only to a terminal but also to a full-fledged web-based IDE based on open-source components from VS Code.

These two features (Jupyter notebooks and pre-configured environments) are a real asset. They make it possible to use the Deep Cognition cloud and GPUs for any Deep Learning, Machine Learning or Data Science task -> they don't lock people into an editor-only solution.

What comes out
  • Trained models saved in the Keras H5 format
  • A DLS model in .yaml format containing a DLS-specific description of the model
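
Since the trained model comes out in the standard Keras H5 format, it can be reloaded outside DLS with plain Keras ('model.h5' below is a placeholder filename):

```python
# Reload a model trained in DLS from its Keras H5 file
# ('model.h5' is a placeholder filename).
from keras.models import load_model

model = load_model('model.h5')
model.summary()
```
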
What is the technology behind DLS
  • Apache MXNet
  • A Keras-like API
What is missing ... for now
  • the possibility to load other pre-trained Keras models -> one can't make use of models from the model zoos
  • export of the model to Python/Keras code
  • viewing performance with a confusion matrix for classification problems (a workaround is sketched after this list)
  • detailed documentation (not a big problem since the environment is pretty intuitive to use)
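
Since the trained model can be reloaded in Keras (see above), a confusion matrix can in the meantime be computed outside DLS, for instance with scikit-learn (x_test and y_test below are placeholder test data):

```python
# Workaround sketch: compute a confusion matrix outside DLS with scikit-learn.
# 'model' is a Keras model reloaded from the exported H5 file;
# x_test and y_test are placeholder test data (y_test one-hot encoded).
import numpy as np
from sklearn.metrics import confusion_matrix

y_pred = np.argmax(model.predict(x_test), axis=1)  # predicted class indices
y_true = np.argmax(y_test, axis=1)                 # true class indices
print(confusion_matrix(y_true, y_pred))
```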

NEURAL NETWORK CONSOLE by SONY
SONY's Neural Network Console (NNC) seems to have initially been an internal tool that was made into a product, together with the associated in-house DL framework Neural Network Libraries (https://nnabla.org/), released as Open Source at https://github.com/sony/nnabla. So NNC should benefit from that internal experience.

NNC is available here: https://dl.sony.com/

Neural Network Console GRAPHICAL EDITOR

The model editor in the NNC works pretty much the same way as the editor in DLS. The layers that can be added to the DL model are specific to SONY's DL framework Neural Network Libraries (NNL).

Here are a few specificities and some differences with DLS:

  • logical processing layers are provided - LogicalAnd, LogicalOr, ...
  • loop layers are provided (but no example is given) - useful to build residual networks and recurrent neural networks
  • different data-preprocessing can be done with layers in the model or in the Dataset tab.
  • the hyper-parameters can be set in the CONFIG tab
  • the Evaluation tab has a useful confusion matrix for classification problems

Neural Network Console ENVIRONMENT
Similarly to DLS, NNC provides an environment that goes beyond the DL model editor.

The graphical editor does not come alone. There are a few more panels in the NNC environment:

  • DASHBOARD - to monitor resource usage
  • DATASET - to preprocess and load datasets so they are available as model inputs
  • JOB HISTORY - to monitor completed training jobs

NNC Drawbacks
  • the underlying framework is, for now, pretty "confidential"; there is hardly any echo of it being used outside SONY
  • the list of layers is restricted and "non-standard" (in the way the Keras layer API can be considered standard)
  • pre-trained DL models are not available as layers, so transfer learning might not be that easy to set up
  • loading trained KERAS models is not possible
  • no mechanism for easily deploying the DL models is provided

For a video on NNC see: https://www.youtube.com/watch?v=-lXjnaUSEtM

NEURAL NETWORK MODELER by IBM
Neural Network Modeler has already had a few names: DARVIZ and IBM Deep Learning IDE (https://darviz.mybluemix.net/#/). Bravo, IBM marketing... or shall we call it confusing? It is now part of the Watson Studio suite.

The aim of NNM is different from that of the previous two tools. It is meant to produce code in the end (Theano, Tensorflow/Keras, Caffe 1). That code is then to be used by other tools in the Watson Studio suite, AFAIU.

NNM is available here: https://darviz.mybluemix.net/#/dashboard or in Watson Studio: https://dataplatform.ibm.com/docs/content/analyze-data/ml_dlaas.html?audience=dr&context=refinery

Neural Network Modeler GRAPHICAL EDITOR
To do that, NNM provides a nice graphical DL model editor.

Here are a few specificities and some differences with DLS:

  • automatic optimal architecture search
  • different data-preprocessing can be done with layers in the model or in the Dataset tab.
  • the hyper-parameters can be optimized by a tool in the Watson Studio suite

NNM Assets
  • integrated into a Data Science/Machine Learning suite, so it could potentially benefit from other functionalities of the suite, like its deployment mechanisms

NNM Drawbacks
  • only produces the code of the DL model; all the other functionalities are somewhere in the Watson Studio suite (you have to have, and know, the suite)
  • only available in the cloud (and maybe soon only available in the Watson Studio suite?)

For more info on the NNM see this Medium article: https://medium.com/ibm-watson/accelerate-your-deep-learning-experiments-with-ibms-neural-network-modeler-dd0c92fba814

For a video on DARVIZ (the old name of NNM) see: https://www.youtube.com/watch?v=mmRw_MuMPC4. Worth seeing for the "kid" (yes!) presenting it. Great communication and commercial skills!

Conclusion
Deep Learning user-friendliness is on its way, but it could get much further in the near future.
SONY's NNC is a good solution to build DL models quickly and provides quite a complete environment.
IBM's NNM is a more restricted solution whose main aim is to produce code for use elsewhere, that elsewhere being the Watson Studio suite; its efficiency will therefore depend on that of the WS suite and on its integration within it.

DEEP COGNITION's DLS is a very well-thought-out solution that provides a very complete environment and is not limited to the graphical editing of Deep Learning models. Its orientation towards "de facto" standards makes it more interesting than SONY's NNC, which is based on a "confidential" framework.

Feature comparison table
I have initiated a feature comparison table on the socialcompare.com site. The aim is to provide an easily readable table summarizing which features are present in which product. It is open for other people to contribute.

Here is a screenshot as of the end of April 2018:




Source: TDS