In preliminary work, we have begun to define the field by developing the robotic capability to distinguish a wide variety of materials. We discriminate among materials by actively contacting and probing them and by sensing the resulting forces, displacements, and sounds. Here, we report on three of our preliminary investigations: step and feel, hit and listen, and whack and watch.
The observed vertical force-displacement response differed dramatically across the three materials tested, demonstrating that the samples can be readily discriminated.
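As a concrete illustration of how a force-displacement curve can be reduced to a discriminating feature, the minimal sketch below fits an effective stiffness (the least-squares slope of the curve) to each sample. The function name and the choice of a single slope feature are illustrative assumptions, not the procedure used in the experiment.

```python
import numpy as np

def effective_stiffness(displacement, force):
    """Least-squares slope of the force-displacement curve: a single
    scalar that separates stiff samples from compliant ones."""
    slope, _intercept = np.polyfit(displacement, force, 1)
    return slope
```

Stiffer materials yield a steeper slope, so even this one-number summary separates samples whose responses "differ dramatically."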
We digitized the microphone signal and extracted spike features from its power spectrum. Based on these features, we classified the test object as one of the five objects using a hybrid minimum-distance and decision-tree classifier. The classifier achieved 97 percent accuracy on the 580 training samples and 94 percent accuracy on the 240 test samples.
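A minimal sketch of this acoustic pipeline is shown below. It assumes the spike features are the frequencies and powers of the strongest peaks in the power spectrum and shows only the minimum-distance stage of the hybrid classifier; the function names and the choice of four peaks are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def spike_features(signal, fs, n_peaks=4):
    """Frequencies and powers of the n_peaks strongest spectral peaks
    of one impact recording (illustrative feature definition)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    peaks, props = find_peaks(power, height=0.0)
    top = peaks[np.argsort(props["peak_heights"])[-n_peaks:]]
    top = np.sort(top)                      # keep peaks in frequency order
    return np.concatenate([freqs[top], power[top]])

def train_centroids(features, labels):
    """Mean feature vector per class -- the minimum-distance stage."""
    return {c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in set(labels)}

def classify(feature, centroids):
    """Label of the nearest class centroid in feature space."""
    return min(centroids, key=lambda c: np.linalg.norm(feature - centroids[c]))
```

In the actual system, a decision tree refines the minimum-distance assignment; that stage is omitted here for brevity.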
We acquired a sequence of images (see figure) of the pendulum making contact and of the object sliding. For each image, we employed basic blob-finding techniques to determine the position of the struck object. From the sequence of positions we derived the velocity and acceleration of the object.
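The sketch below illustrates one way to carry out this vision step, assuming a thresholded grayscale image and a fixed frame interval dt. Centroid-of-foreground blob finding and finite differences stand in for the specific techniques used; they are assumptions for illustration.

```python
import numpy as np

def blob_centroid(image, threshold=0.5):
    """Position of the struck object in one frame: threshold the image
    and take the centroid of the foreground pixels (basic blob finding)."""
    ys, xs = np.nonzero(image > threshold)
    return np.array([xs.mean(), ys.mean()])          # (x, y) in pixels

def velocity_and_acceleration(positions, dt):
    """Finite-difference velocity and acceleration from a sequence of
    positions sampled every dt seconds."""
    positions = np.asarray(positions, dtype=float)
    velocity = np.gradient(positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    return velocity, acceleration
```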
From the expected forces and the estimated accelerations, we computed the mass of the object using Newton's second law and the coefficient of sliding friction using the equation of motion under constant acceleration. The computed masses were within a factor of two of the true masses, and the coefficients of friction were within 25 percent of the true values.
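The two computations follow from F = ma during contact and from the observation that, while the object slides freely on a horizontal surface, friction is the only horizontal force, so the deceleration satisfies |a| = μg. A minimal sketch, with hypothetical input values chosen only for illustration:

```python
G = 9.81  # gravitational acceleration, m/s^2

def estimate_mass(contact_force, contact_acceleration):
    """Newton's second law during impact: F = m a  =>  m = F / a."""
    return contact_force / contact_acceleration

def estimate_friction_coefficient(sliding_deceleration):
    """While the object slides freely, friction is the only horizontal
    force: mu * m * g = m * |a|  =>  mu = |a| / g."""
    return abs(sliding_deceleration) / G

# Hypothetical numbers for illustration only:
mass = estimate_mass(contact_force=4.0, contact_acceleration=8.0)      # 0.5 kg
mu = estimate_friction_coefficient(sliding_deceleration=-2.45)         # ~0.25
```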