Surface estimation from multi-modal tactile data
Abstract
The increasing popularity of robotic applications has seen robots adopted in healthcare, in surgery, and as industrial tools. These robots are expected to make physical contact with objects in their environment, which enables tasks such as grasping and manipulation while also yielding information about object properties such as shape, texture, and hardness. In an ideal world, a complete model of the environment would be known beforehand, and robots would not need to explore objects and surfaces because this information would already be available in the world model. In the real world, most environments are unstructured, and robots must operate safely, without harming themselves or the objects around them, while accounting for environmental uncertainty and building models of the environment and its objects.
To address this, the trend has been to use computer vision to detect objects in the environment. Although computer vision has advanced greatly in this regard, some problems cannot be solved with vision alone. Objects that are occluded, transparent, or lacking in rich visual features cannot be detected visually, nor can properties such as hardness or tactile texture be estimated from vision. To this end, we use a bio-inspired tactile sensor consisting of
a compliant structure, a MARG (magnetic, angular rate, and gravity) sensor, and a pressure sensor, together with a robotic manipulator, to explore surfaces under the sole assumption that the general location of the surface is known. This sensing module allows the robotic manipulator to maintain a predetermined angle of approach, which is essential when exploring unseen surfaces. [...]