Demonstration of Deep Learning models with Apache Spark and TensorFlow
- Number of neurons in each layer: Too few neurons will reduce the expressive power of the network, but too many will substantially increase the running time and return noisy estimates.
- Learning rate: If it is too high, the neural network will only focus on the last few samples seen and disregard all the experience accumulated before. If it is too low, it will take too long to reach a good state.
- The learning rate is critical: if it is too low, the neural network does not learn anything (high test error). If it is too high, the training process may oscillate randomly and even diverge in some configurations.
- The number of neurons is not as important for achieving good performance, and networks with many neurons are much more sensitive to the learning rate. This is Occam's Razor at work: simpler models tend to be "good enough" for most purposes. If you have the time and resources to go after the missing 1% of test error, you must be willing to invest heavily in training and in finding the hyperparameters that will make the difference (the first sketch after this list shows where these two knobs appear in code).
- Distributed processing of images using TensorFlow
- Testing the distributed processing of images using TensorFlow (a second sketch of this setup with Spark and TensorFlow follows below)
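The following is a minimal sketch, not taken from the original demo, of how the two hyperparameters discussed above can be exposed and swept with tf.keras. The dataset (MNIST), the layer sizes, the learning-rate grid, and the epoch count are illustrative assumptions.

```python
import tensorflow as tf

def build_and_train(hidden_units=64, learning_rate=1e-3, epochs=1):
    # Small benchmark dataset, used purely for illustration.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # hidden_units controls the expressive power of the network:
    # too few units underfit, too many slow training and add noise.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(hidden_units, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # learning_rate is the critical knob: too low and the test error stays
    # high, too high and training oscillates or diverges.
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    model.fit(x_train, y_train, epochs=epochs,
              validation_data=(x_test, y_test))
    return model

# Illustrative sweep over the two hyperparameters.
for units in (16, 64, 256):
    for lr in (1e-4, 1e-3, 1e-2):
        build_and_train(hidden_units=units, learning_rate=lr)
```

Comparing the validation accuracy across this grid makes the trade-off above visible: the learning rate usually moves the result far more than the layer width does.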
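Below is a minimal sketch of distributing image classification across a Spark cluster with TensorFlow, assuming PySpark and TensorFlow 2.x are installed on the workers. The images/ path, the pretrained MobileNetV2 model, and the per-partition loading strategy are illustrative assumptions rather than the original demo code.

```python
import tensorflow as tf
from pyspark.sql import SparkSession

def classify_partition(partition):
    # Each worker loads its own copy of the pretrained model once per
    # partition, then scores every image assigned to that partition.
    model = tf.keras.applications.MobileNetV2(weights="imagenet")
    for path, raw_bytes in partition:
        img = tf.io.decode_image(raw_bytes, channels=3, expand_animations=False)
        img = tf.image.resize(img, (224, 224))
        img = tf.keras.applications.mobilenet_v2.preprocess_input(img[tf.newaxis, ...])
        preds = model.predict(img, verbose=0)
        label = tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=1)[0][0][1]
        yield path, label

spark = SparkSession.builder.appName("tf-image-demo").getOrCreate()

# binaryFiles spreads (path, bytes) pairs across the cluster, so the
# TensorFlow inference in classify_partition runs in parallel on the workers.
images = spark.sparkContext.binaryFiles("images/*.jpg")
results = images.mapPartitions(classify_partition).collect()

for path, label in results:
    print(path, "->", label)
```

Loading the model inside classify_partition avoids shipping TensorFlow objects through Spark's serializer; the cost is one model load per partition, which is acceptable when partitions hold many images.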