Artificial Intelligence Is Making Us Data Hoarders! Is It True?

By ridhigrg | Feb 19, 2019

Artificial intelligence is being driven forward by machine learning and deep learning. We benefit from these approaches, but they are data-hungry ones. The question is: are we getting hooked on data?
We see folks everywhere creating data lakes, soon to be data oceans, that we are going to boil later on. Meanwhile, we pay homage to expensive Data Czars and Data Scientists because we want to keep more data for the future, trusting that somehow AI will make sense of it later. I am not so sure this is a strategy that will keep us competitive with the rest of the world. A parallel approach is suggested below:
Turn It Around
Instead of just digging around in data for brilliant decisions and actions, why don't we start with the smart decision makers we are already paying the big bucks for and have them point out the critical data sources that would be good tributaries for critical decisions? Let us turn our machine learning and our scarce Data Scientists onto those sources first, to get some real outcomes to build on for the future. The Data Czars would be glad for some guidance and prioritization: their job is near impossible right now, and it is only going to get worse as data volumes and types grow exponentially.
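As a concrete illustration, here is a minimal sketch in Python of that prioritization step. Every name in it (the decisions, their weights, and the source lists) is a hypothetical placeholder; the idea is simply that each data source inherits the weight of every critical decision it feeds, and machine learning effort goes to the top of the resulting list.

```python
# Hypothetical sketch: score data sources by the weighted decisions they feed.
from collections import defaultdict

# Placeholder input: decisions named by the business, each with a priority
# weight and the data sources (tributaries) it depends on.
decisions = [
    {"name": "quarterly_pricing",  "weight": 5, "sources": ["sales_db", "competitor_feed"]},
    {"name": "churn_intervention", "weight": 3, "sources": ["crm", "support_tickets"]},
    {"name": "inventory_restock",  "weight": 2, "sources": ["sales_db", "warehouse_logs"]},
]

def prioritize_sources(decisions):
    """Score each data source by the total weight of decisions it feeds."""
    scores = defaultdict(int)
    for decision in decisions:
        for source in decision["sources"]:
            scores[source] += decision["weight"]
    # The highest-scoring sources are where scarce ML effort should go first.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for source, score in prioritize_sources(decisions):
        print(f"{source}: {score}")
```

In this toy input, sales_db comes out on top because two weighted decisions depend on it; the real value is that the ranking comes from decision makers, not from the size of the data.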

Experiment in a Focused Fashion
Now that the data ocean is a data pool, let's research how this kind of data can be used to outmaneuver the others in the industry who want to command the future. Scour the industry groups, and even let AI mine text for new and appropriate approaches. This is where our Data Scientists can leverage their skills to find combinations of AI and algorithms that solve critical problems for us. This is a decision-driven experimentation effort, operated in a somewhat constrained sandbox: if a solution is identified, it is known to solve a critical issue tied to important business outcomes.
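For the "let AI mine text" piece, a hedged sketch using scikit-learn's TF-IDF vectorizer shows the flavor of the idea: surface the most distinctive terms in a small corpus of industry write-ups. The documents here are invented placeholders, not a real corpus.

```python
# Sketch: use TF-IDF to surface distinctive terms from industry write-ups.
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder documents standing in for scraped industry-group material.
docs = [
    "Gradient boosting outperformed the legacy scoring model on churn.",
    "Graph embeddings helped link supplier risk across business units.",
    "A demand forecasting pipeline cut warehouse overstock by a third.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

# For each document, print its highest-weighted (most distinctive) terms,
# a crude pointer to the approach each write-up is really about.
for i in range(len(docs)):
    row = matrix[i].toarray().ravel()
    top = row.argsort()[::-1][:3]
    print(f"doc {i}: {[terms[j] for j in top]}")
```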

Adapt While Learning
Few of us are brilliant enough to get it right the first time, every time. That means there will be adaptations and corrections along the way, tapping new data sources related to the problem set the organization is focused on at the moment. If these efforts run in parallel, each problem can be worked in isolation and solved for its critical outcome. There will likely be some integration issues later, but creating an architecture for those integrations up front will save time on the back end, and the fact that the outcomes are focused and critical will pay for some of that integration effort.
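One way to picture the parallel, isolated experiments is the sketch below, using Python's concurrent.futures. The run_experiment function and the experiment list are stand-ins for real training and evaluation; the point is that each problem runs on its own while a shared results record keeps the later integration step manageable.

```python
# Sketch: run isolated, problem-focused experiments in parallel.
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_experiment(problem, data_source):
    """Stand-in for a real experiment: load data, fit a model, return a score."""
    # Here we fake a metric so the sketch runs anywhere.
    return {"problem": problem, "source": data_source, "score": len(data_source) / 10}

# Hypothetical problem/source pairs, one per parallel effort.
experiments = [
    ("churn", "crm"),
    ("pricing", "sales_db"),
    ("restock", "warehouse_logs"),
]

if __name__ == "__main__":
    results = []
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(run_experiment, p, s) for p, s in experiments]
        for future in as_completed(futures):
            results.append(future.result())  # each experiment stays isolated
    # A single, shared results record is what makes later integration cheaper.
    for r in sorted(results, key=lambda r: r["score"], reverse=True):
        print(r)
```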
Net, Net
Why should we dig through data frantically, hoping to find a gem of a pattern that will help us make better decisions and take the next best action, even predictively? Why not start with known success factors and work back to the data worth saving, so AI and poly-analytics can assist us toward brilliant outcomes? In parallel, we can collect additional data, especially as we learn from our active experimentation and adaptation.

Source: HOB