Touch has been shown to be important for dexterous manipulation in robotics. Recently, the GelSight sensor has attracted significant interest in learning-based robotics because of its low cost and rich signal. Learning-based systems pair well with GelSight sensors because these sensors output high-resolution tactile images from which a wide range of features, such as object geometry, surface texture, and normal and shear forces, can be estimated, and these features are often crucial for robotic control. The tactile images can be fed into standard CNN-based computer vision pipelines, enabling a variety of learning-based techniques: in Calandra et al. 2017, a grasp-success classifier is trained on GelSight data collected in a self-supervised manner; in Tian et al. 2019, Visual Foresight, a video-prediction-based control algorithm, is used to make a robot roll a die purely based on tactile images; and in Lambeta et al. 2020, a model-based RL algorithm is applied to in-hand manipulation using GelSight images.

Unfortunately, applying GelSight sensors in practical real-world scenarios remains difficult because of their large size and the fact that they are sensitive on only one side. Here we introduce a new, more compact tactile sensor design based on GelSight that enables omnidirectional sensing, i.e., it makes the sensor sensitive on all sides, just like a human finger, and we show how this opens up new opportunities for sensorimotor learning. We demonstrate this by teaching a robot to pick up electrical plugs and insert them based purely on tactile feedback.

The inner surface of the gel skin is illuminated with colored LEDs, providing sufficient lighting for the tactile image.

Comparison of a GelSight-style sensor (left) to our OmniTact sensor.

Existing GelSight designs are either flat, have small sensitive fields, or provide only low-resolution signals.
For example, earlier versions of the GelSight sensor provide high-resolution (400×400 pixel) images but are large and flat, providing sensitivity on only one side, while the commercial OptoForce sensor (recently discontinued by OnRobot) is curved, but only provides force readings as a single 3-dimensional force vector. Our OmniTact sensor design aims to address these limitations. It provides both multi-directional and high-resolution sensing on its curved surface in a compact form factor. Like GelSight, OmniTact uses cameras embedded in a silicone gel skin to capture deformations of the skin, providing a rich signal from which a wide range of features such as shear and normal forces, object pose, geometry, and material properties can be inferred. OmniTact uses multiple cameras, giving it both high-resolution and multi-directional capabilities. The sensor itself can be used as a "finger" and can be integrated into a gripper or robotic hand. It is more compact than previous GelSight sensors, which is accomplished by using micro-cameras typically found in endoscopes and by casting the silicone gel directly onto the cameras. Tactile images from OmniTact are shown in the figures below.

Tactile readings from OmniTact with various objects. From left to right: M3 screw head, M3 screw threads, combination lock with the numbers 4 3 9, printed circuit board (PCB), wireless mouse USB. All images are taken from the upward-facing camera.

Tactile readings from OmniTact being rolled over a gear rack. The multi-directional capabilities of OmniTact keep the gear rack in view as the sensor is rotated.

Design Highlights

One of our main goals during the design process was to make OmniTact as compact as possible. To accomplish this, we used micro-cameras with large viewing angles and a small focal distance.
Specifically, we chose cameras that are commonly used in medical endoscopes, measuring only 1.35 × 1.35 × 5 mm with a focal distance of 5 mm. These cameras were arranged in a 3D-printed camera mount, shown in the figure below, which allowed us to minimize blind spots on the surface of the sensor and to reduce the diameter (D) of the sensor to 30 mm.

This image shows the fields of view and the arrangement of the 5 micro-cameras inside the sensor. With this arrangement, most of the fingertip can effectively be made sensitive. In the vertical plane, shown in A, we obtain α = 270 degrees of sensitivity. In the horizontal plane, shown in B, we obtain 360 degrees of sensitivity, except for small blind spots between the fields of view.

Electrical Connector Insertion Task

We show that OmniTact's multi-directional tactile sensing capabilities can be leveraged to solve a challenging robotic control problem: inserting an electrical connector blindly into a wall outlet based purely on information from the multi-directional touch sensor (shown in the figure below). This task is challenging because it requires localizing the electrical connector relative to the gripper, and localizing the gripper relative to the wall outlet. To learn the insertion task, we used a simple imitation learning algorithm that estimates the end-effector displacement needed to insert the plug into the outlet based on the tactile images from the OmniTact sensor. Our model was trained with just 100 demonstrations of insertion, collected by controlling the robot via keyboard control. Successful insertions obtained by running the trained policy are shown in the gifs below. As shown in the table below, using the multi-directional capabilities of the sensor (both the top and side cameras) allowed for the highest success rate (80%) compared to using only a single camera, indicating that multi-directional tactile sensing is indeed crucial for solving this task.
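The imitation learning setup described above amounts to behavior cloning: regress from a tactile image to the end-effector displacement the human operator commanded during the demonstration. The sketch below illustrates this idea with hypothetical dimensions and synthetic stand-in data, using ridge regression on flattened images for simplicity (the actual policy operates on real multi-camera tactile images, where a CNN would typically replace the linear regressor):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 100 demonstrations pairing an 8x8 grayscale tactile
# image with a 2-D end-effector displacement label (dx, dy).
n_demos, img_h, img_w = 100, 8, 8

# Synthetic stand-in for demonstration data: here the expert's commanded
# displacement is, by construction, a linear function of the image.
images = rng.random((n_demos, img_h, img_w))
X = images.reshape(n_demos, -1)               # flatten tactile images
true_W = rng.normal(size=(img_h * img_w, 2))  # unknown expert mapping
Y = X @ true_W                                # expert displacement labels

# Behavior cloning as regularized least squares:
#   W = argmin_W ||X W - Y||^2 + lam * ||W||^2
lam = 1e-6
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# At execution time: predict the displacement for a new tactile image,
# then command the end-effector to move by that amount.
new_img = rng.random((img_h, img_w))
pred_displacement = new_img.reshape(1, -1) @ W  # shape (1, 2): (dx, dy)
```

Swapping the linear regressor for a small CNN keeps the same interface, tactile image in, displacement out, and concatenating images from the top and side cameras before regression is one simple way to exploit the sensor's multi-directional signal.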
We also compared performance with another multi-directional tactile sensor, the OptoForce sensor, which achieved a success rate of only 17%.

What's Next?

We believe that compact, high-resolution, and multi-directional touch sensing has the potential to transform the capabilities of current robotic manipulation systems. We suspect that multi-directional tactile sensing could be an essential element in general-purpose robotic manipulation, as well as in applications such as robotic teleoperation in surgery and in sea and space missions. In the future, we plan to make OmniTact cheaper and more compact, allowing it to be used in a wider range of tasks. Our team also plans to conduct further robotic manipulation research, which will inform future generations of tactile sensors.