
AI systems are still hard to build

By satyamkapoor | Jan 2, 2018

Although there are multiple AI frameworks and toolkits, such as TensorFlow or OpenAI Gym, building artificial intelligence still requires a much deeper understanding of the field than mainstream development does. If you have managed to build a working prototype, you are perhaps the smartest person in the room and a member of an elite club.
By participating in contests on Kaggle, you can even earn a handsome amount by solving real-world problems. But is that enough to build a business? Market mechanics are hard to change. From a business point of view, AI is just another way of solving existing problems. Customers care about value, not the intricacies of implementation. This means that no matter how fancy your AI is, in the end it has to deliver value to the customer.
VCs, unlike customers, do care about your AI. So does the press. That difference in attention can create a dangerous reality-distortion field for startups. But don't be fooled: unless you create a universal, multipurpose AI, there is no free lunch. Even if you are the VCs' darling, you have to go the last mile for your customers. So let's get into the driver's seat and look at how we can prepare for future scenarios.

The mainstream AI train
AI seems to be different from other megatrends like blockchain, IoT or FinTech. Sure, its future is highly unpredictable, but that is true for almost any technology. The difference is that it is not only other businesses whose value proposition is in danger, but our own value proposition as human beings. Our value as decision-makers and creatives is under review. That evokes an emotional response. We don't know how to position ourselves.
There are a very limited number of basic technologies, most of which fall under the umbrella term "deep learning", that form the basis of almost every application out there: convolutional and recurrent neural networks, LSTMs, autoencoders, random forests, gradient boosting and very few others.
AI offers many other approaches, but these core mechanisms have proven overwhelmingly successful lately. A majority of researchers believe that progress in AI will come from improvements to these technologies, not from fundamentally different approaches. Let's call this "mainstream AI research" for that reason.
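To see how commoditized these building blocks have become, here is a minimal sketch, assuming the Keras API that ships with TensorFlow; the architecture and input shape are arbitrary illustrations, not a recipe:

    # A small convolutional image classifier in a handful of lines.
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(x_train, y_train, epochs=5)  # given labelled data

Nothing in those lines is proprietary; the same snippet is available to every competitor.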
Any real-world solution consists of these core algorithms and a non-AI shell that prepares and processes data (data preparation, feature engineering, world modelling). Improvements in the AI part tend to make the non-AI part unnecessary. That is in the very nature of AI, almost its definition: making problem-specific efforts obsolete. But exactly this non-AI part is often the real value proposition of AI-driven companies. It is their secret sauce.
Every improvement in AI makes it more likely that this competitive advantage is open-sourced and available to everyone, with disastrous consequences. As Frederick Jelinek once said: "Every time I fire a linguist, the performance of the speech recognizer goes up."
Machine learning has essentially introduced the next phase of redundancy reduction: code is reduced to data. Almost all model-based, probability-based and rule-based recognition technologies were washed out by deep learning algorithms in the 2010s.
Domain expertise, feature modeling and hundreds of thousands of lines of code can now be beaten with a few hundred lines of scripting (plus a decent amount of data). As mentioned above, that means proprietary code is no longer a defensible asset when it sits in the path of the mainstream AI train.
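A minimal sketch of that split, assuming a scikit-learn-style pipeline; the clean_text helper is hypothetical and stands in for whatever domain-specific preprocessing a company considers its secret sauce:

    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import GradientBoostingClassifier

    def clean_text(doc):
        # the non-AI shell: domain-specific preprocessing, the part
        # that progress in mainstream AI tends to make obsolete
        return doc.lower().strip()

    pipeline = Pipeline([
        ("features", TfidfVectorizer(preprocessor=clean_text)),  # hand-crafted shell
        ("model", GradientBoostingClassifier()),                 # commodity core
    ])
    # pipeline.fit(train_docs, train_labels)  # given labelled text data

The better the commodity core gets, the less the hand-crafted shell is worth.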

Significant contributions are very rare. Real breakthroughs or new developments, even a new combination of the basic components, are only possible for a very limited number of researchers. This inner circle is much smaller than you might think (certainly fewer than 100 developers).
Why is that? Maybe it is rooted in the core algorithm: backpropagation. Nearly every neural network is trained by this method. The simplest form of backpropagation can be formulated with first-semester calculus; nothing sophisticated at all (though no grade-school stuff either). Despite this simplicity, or maybe for that very reason, in more than 50 years of an interesting and colorful history only a few people have looked behind the curtain and questioned its main architecture.
If backpropagation had had the visibility then that it has today, we might be 10 years ahead now (computing power aside).
The steps from the plain-vanilla neural networks of the 70s, to recurrent networks, to the LSTMs of today were earthquakes for the AI space. And yet each needs only a few dozen lines of code! Generations of students and researchers have worked through the math, calculated gradient descents and proved its correctness. But most of them finally nodded, said "just a form of optimization" and moved on. Analytical understanding is not enough. You need some form of inventor's intuition to make a difference.
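To make the "few dozen lines" claim concrete, here is a sketch of backpropagation for a tiny two-layer network in plain NumPy; nothing beyond the chain rule. The network size, learning rate and toy dataset are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 3))              # 100 samples, 3 features
    y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # toy binary target

    W1, b1 = 0.1 * rng.standard_normal((3, 8)), np.zeros(8)
    W2, b2 = 0.1 * rng.standard_normal((8, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(1000):
        # forward pass
        h = np.tanh(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # backward pass: the chain rule, layer by layer
        dz2 = (p - y) / len(X)               # gradient of cross-entropy loss
        dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
        dz1 = (dz2 @ W2.T) * (1 - h ** 2)    # tanh'(a) = 1 - tanh(a)^2
        dW1, db1 = X.T @ dz1, dz1.sum(axis=0)
        # plain gradient-descent update
        for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
            param -= 0.5 * grad

That is the whole training algorithm; everything else in modern frameworks is engineering around it.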
Since it is very rare to be on top of research, for 99.9% of all companies a passenger's seat is all they can get. The core technology is provided by the industry's major players in open-source toolsets and frameworks, and to stay at the latest level, proprietary approaches vanish over time. In this sense, the overwhelming majority of all AI companies are consumers of these core products and technologies.

Where are we heading?
AI (and the data it requires) has been compared to many things: electricity, coal, gold. It shows how eager the tech world is to find patterns and trends, because that knowledge is absolutely essential for hedging your business or your investments against one simple fact: if you build your business in the path of the mainstream AI train, nothing can save you.
Because of the engine that is already hurtling down the tracks toward business, there are a few scenarios that are important to consider. In the first, mainstream AI research slows significantly or has already stopped. That means no new problem classes can be addressed, so we get off the train and have to walk the "last mile" for our customers ourselves. This would be a big chance for startups, because they get the opportunity to build proprietary technology with the chance of creating a sustainable business.

The second scenario has the mainstream train rolling along at its current clip. Then it is all the more difficult to get out of the way or get off the train. At high speed, the domain knowledge behind individual approaches is in great danger of being "open-sourced" by the big players, and all the efforts of the past may be worthless. At present, systems like AlphaGo require a very high percentage of proprietary technology apart from the standard ("vanilla") functionality offered by open-source frameworks. I would not be surprised if we see basic scripts with the same capabilities in the very near future. The "unknown unknown", though, is what kind of problem classes can be solved by the next wave. Autoencoders and attention-based systems are promising candidates, and no one can imagine which verticals they might open up. Probability: possible.
In the third scenario, the train gains even more speed. Then, finally: "The singularity is near." Books have been written about it, billionaires have fought about it, and I will probably write another article about it. The end game here is Artificial General Intelligence. If we achieve this, all bets are off.
Finally, there is the black swan scenario: someone in a garage discovers the next generation of algorithms, away from the mainstream. If this lone rider can exploit it for themselves, we might see the first self-made trillionaire. But where would this come from? I doubt it could happen out of the blue. More likely it would be a combination of mainstream techniques and abandoned model-based algorithms. With the rise of neural networks in the 2010s, some once-promising approaches (symbolic approaches, for instance) lost parts of their research base. The current run on AI also revives other, related research fields, and it is becoming difficult to find an "unpopular" technique or algorithm that isn't already swarming with researchers. Nevertheless, there might be an outsider who finds or revives an approach that changes the game.

Who is winning?
Let's put all of this together and finally ask the million-dollar question. The answer depends not only on the scenarios above, but foremost on who you are. A business's starting position is a crucial factor in this equation, because its resources and existing assets are key to the strategies it can deploy.
In the AI champions league are a few companies with deep pockets that can attract critical talent. Since this is a rather "endothermic" process right now, you need other sources of income, which limits the players to the well-known Google, Facebook, Microsoft and IBM club. They build huge proprietary systems apart from the status-quo open-source stacks in order to reach new problem classes, and some time later put the results into the next generation of open-source frameworks to build a vivid community.
These players also have existing platforms that lend themselves to training better algorithms. AI may be a megatrend, but its application by these companies in the daily businesses they have built is also critical to their success. Platforms like Amazon, Facebook, Google Apps, Netflix and even Quora use AI to defend and strengthen their core business model. They find ways to serve their customers better through AI, but they are careful to keep their core business distinct from their work on artificial intelligence (at least publicly).
Some emerging platforms have found ways to adopt this strategy for their own toolsets. These companies have staked a claim that AI only made possible, or monetizable, in the first place. One example is the grammar checker Grammarly.
At first glance you could think of it as a nice add-on that existing vendors could easily build themselves. But there is more: they are building two assets here, a community-generated dataset for further quality improvements and, more sustainably, an incredibly personalized marketplace for advertising partners.
Then there are the tool-makers. As Mark Twain suggested, let others dig for the gold while you stand at the sideline and sell them the shovels. That worked in the past, and it might work here as well: providing data, hosting contests, trading talent, educating people. The blueprint has been to find something that every AI aspirant needs (or wants), then charge for it.

Udemy teaches AI courses, and Kaggle runs AI competitions that help other companies while letting data scientists build their skills. Neither needs to build a core competency in AI itself. Companies also need petabytes of data to be successful; most of them use supervised learning, so there has to be someone who does the supervising.
Finally, there are the companies that have found their niche in AI consulting, because even standing on the shoulders of the giants' open-source frameworks there is still a lot of work to do.
Companies like Element AI have been able to put parts of that extra work into a product and offer it as a service. Indeed, their recent $102 million investment makes sure they have the deep pockets needed to succeed.
There are other companies out there ready to replace existing business processes with their targeted solutions. They face challenges on two fronts: open-source projects have the potential to solve the same problems, and existing vendors are investing heavily in more automated solutions.
What matters most is the speed of mainstream AI research, which happens in only a few small groups. In little time, their results are open-sourced in frameworks developed by the AI champions. The rest of us are either obstacles or passengers in their path. Ultimately, it is all about positioning. Companies that keep this in mind will reach their destination.

Source: HOB Team