The majority of artificial intelligence (AI) research to date has been focused on vision. Thanks to machine learning, and in particular deep learning, we now have robots and devices with a pretty good visual understanding of their environment. But let's not forget, sight is just one of the human biological senses. For algorithms that better mimic human intelligence, researchers are now focusing on datasets that draw from sensorimotor systems and tactile feedback. With this extra sense to draw on, future robots and AI devices will have an even greater awareness of their physical surroundings, opening up new use cases and possibilities. - Jason Toy, Somatic founder and AI researcher
Jason Toy, AI enthusiast, technologist, and founder of deep learning and natural language processing specialist Somatic, recently set up a project focused on training AI systems to interact with the environment based on haptic input. Called SenseNet: 3D Objects Database and Tactile Simulator, the project focuses on expanding robots' mapping of their surroundings beyond the visual to include contours, textures, shapes, hardness, and object recognition by touch.
Toy's initial aim for the project is to spark a wave of AI research into sensorimotor systems and tactile feedback. Beyond that, he imagines haptic training could eventually be used to develop robotic hands for factories and distribution centers, performing bin packing, parts retrieval, order fulfillment, and sorting. Other possible applications include robotic hands for food preparation, household tasks, and assembling components.
Robotics And Deep Reinforcement Learning:
The SenseNet project depends on deep reinforcement learning (RL), a branch of machine learning that draws on both supervised and unsupervised learning techniques and relies on a system of rewards, based on monitored interactions, to iteratively find better ways to improve results.
Many believe that reinforcement learning offers a pathway to developing autonomous robots that can master certain independent behaviors with minimal human intervention. For example, initial evaluations of deep RL techniques indicate that it is possible to use simulation to develop dexterous 3D manipulation skills without having to manually engineer representations.
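The reward-driven loop described above can be sketched with a minimal tabular Q-learning example. The toy environment, states, and hyperparameters below are invented for illustration and are not part of SenseNet; real deep RL work replaces the lookup table with a neural network and the 1-D track with a physics simulation.

```python
import random

# Minimal tabular Q-learning on a toy task: move along a 1-D track to a goal.
# The agent is rewarded only when it reaches the goal state, and it
# iteratively improves its action-value estimates from those rewards.
N_STATES = 5          # positions 0..4; position 4 is the goal
ACTIONS = (-1, 1)     # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(state, action):
    """Apply an action; a reward of 1.0 is given only upon reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def greedy(q, s):
    # Break ties randomly so the untrained agent explores in both directions.
    best = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == best])

def train(episodes=500, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            a = random.choice(ACTIONS) if random.random() < EPS else greedy(q, s)
            nxt, reward, done = step(s, a)
            target = reward + GAMMA * max(q[(nxt, x)] for x in ACTIONS)
            q[(s, a)] += ALPHA * (target - q[(s, a)])
            s = nxt
    return q

q = train()
# After training, the learned policy moves right (toward the goal) from every state.
policy = {s: greedy(q, s) for s in range(N_STATES - 1)}
```

No reward schedule or target behavior is hand-coded beyond the single goal reward; the policy emerges from trial and error, which is the property that makes RL attractive for manipulation tasks.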
Using The SenseNet Dataset:
SenseNet and its supporting resources are designed to overcome many of the common challenges researchers face when embarking on touch-based AI projects. It provides an open source dataset of shapes, most of which can be 3D printed, along with a touch simulator, allowing AI researchers to accelerate project work.
The SenseNet repository on GitHub provides numerous resources beyond the 3D object dataset, including training examples, classification tests, benchmarks, Python code samples, and more. The dataset is made even more useful through the addition of a simulator that lets researchers load and manipulate the objects. Toy explains: "We have built a layer upon the Bullet physics engine. Bullet is a widely used physics engine in games, movies, and, most recently, robotics and machine learning research. It is a real-time physics engine that simulates soft and rigid bodies, collision detection, and gravity. We include a robotic hand called the MPL that allows for a full range of motion in the fingers, and we have embedded a touch sensor on the tip of the index finger that allows the hand to simulate touch."
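To illustrate the kind of signal a simulated fingertip sensor produces, here is a minimal, self-contained sketch: contact is registered when the fingertip penetrates an object's surface, and a force reading is derived from the penetration depth. The classes, names, and numbers below are invented for illustration; they are not SenseNet's or Bullet's API.

```python
import math

class Sphere:
    """A rigid object described by a signed-distance function."""
    def __init__(self, center, radius):
        self.center, self.radius = center, radius

    def signed_distance(self, point):
        # Negative values mean the point is inside the surface.
        return math.dist(point, self.center) - self.radius

class TouchSensor:
    """Fingertip sensor: derives a contact force from surface penetration."""
    def __init__(self, stiffness=100.0):
        self.stiffness = stiffness  # spring constant for the contact model

    def read(self, tip_position, obj):
        penetration = max(0.0, -obj.signed_distance(tip_position))
        return {"contact": penetration > 0.0,
                "force": self.stiffness * penetration}

ball = Sphere(center=(0.0, 0.0, 0.0), radius=0.05)   # a 5 cm ball at the origin
sensor = TouchSensor()

no_touch = sensor.read((0.0, 0.0, 0.10), ball)  # fingertip hovering above the ball
touch = sensor.read((0.0, 0.0, 0.04), ball)     # fingertip pressed 1 cm into it
```

In the real simulator, the physics engine reports contacts between the MPL hand's index fingertip and the loaded 3D objects; the sketch above only mimics that readout for a single point against a single shape.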