Zeestraten, M.J.A., Havoutis, I., Calinon, S. and Caldwell, D.G. (2017)
Learning Task-Space Synergies using Riemannian Geometry
In Proc. of the IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS), pp. 73-78.

Abstract

In the context of robotic control, synergies can form an elementary unit of behavior. By specifying task-dependent coordination behaviors at a low control level, one can achieve task-specific disturbance rejection. In this work we present an approach to learning such controllers by demonstration. We identify a synergy by extracting covariance information from demonstration data. The extracted synergy is used to derive a time-invariant state feedback controller through optimal control. To cope with the non-Euclidean nature of robot poses, we utilize Riemannian geometry, where both estimation of the covariance and the associated controller take into account the geometry of the pose manifold. We demonstrate the efficacy of the approach experimentally, in a bimanual manipulation task.
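As a concrete illustration of the Riemannian ingredient, the sketch below estimates the mean (and tangent-space covariance) of a set of orientations on the unit-quaternion manifold S^3 by iterative averaging with the exponential and logarithmic maps. This is a minimal toy example, not the authors' implementation; the helper names and the sample-generation setup are assumptions made for the example.

```python
import numpy as np

def q_mul(a, b):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def q_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def q_log(q):
    """Log map at the identity: unit quaternion -> 3-D tangent vector."""
    q = q / np.linalg.norm(q)
    if q[0] < 0:                      # resolve the double cover q ~ -q
        q = -q
    vnorm = np.linalg.norm(q[1:])
    if vnorm < 1e-12:
        return np.zeros(3)
    return np.arccos(np.clip(q[0], -1.0, 1.0)) * q[1:] / vnorm

def q_exp(u):
    """Exp map at the identity: 3-D tangent vector -> unit quaternion."""
    norm = np.linalg.norm(u)
    if norm < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate([[np.cos(norm)], np.sin(norm) * u / norm])

def riemannian_mean(quats, iters=20):
    """Karcher mean on S^3 via repeated tangent-space averaging."""
    mu = quats[0].copy()
    for _ in range(iters):
        # Map all samples into the tangent space at the current estimate.
        u = np.mean([q_log(q_mul(q_conj(mu), q)) for q in quats], axis=0)
        mu = q_mul(mu, q_exp(u))
        if np.linalg.norm(u) < 1e-10:
            break
    return mu

# Synthetic "demonstrations": orientations scattered around a known rotation.
rng = np.random.default_rng(1)
base = q_exp(np.array([0.3, -0.1, 0.2]))
quats = [q_mul(base, q_exp(0.05 * rng.standard_normal(3))) for _ in range(200)]

mu = riemannian_mean(quats)
# Covariance of the Gaussian, expressed in the tangent space at the mean.
sigma = np.cov(np.array([q_log(q_mul(q_conj(mu), q)) for q in quats]),
               rowvar=False)
```

Estimating the covariance in the tangent space at the mean is what lets a single Gaussian capture coordination patterns on a curved pose manifold.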

Bibtex reference

@inproceedings{Zeestraten17IROS,
	author="Zeestraten, M. J. A. and Havoutis, I. and Calinon, S. and Caldwell, D. G.", 
	title="Learning Task-Space Synergies using Riemannian Geometry",
	booktitle="Proc. {IEEE/RSJ} Intl Conf. on Intelligent Robots and Systems ({IROS})",
	year="2017",
	month="September",
	address="Vancouver, Canada",
	pages="73--78"
}

Video


This video illustrates an approach for learning task-space synergies by demonstration. To transfer the skill, a number of kinesthetic demonstrations are given. We encode the skill in a single Gaussian over the two end-effector poses, relying on Riemannian geometry. We then reproduce the demonstrated skill using LQR in a tangent space of the manifold.
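The step from Gaussian to controller can be sketched as follows. This is a minimal flat (Euclidean) toy, not the paper's tangent-space formulation: the demonstration covariance weights an infinite-horizon LQR tracking cost, so low-variance directions are tracked stiffly and high-variance directions compliantly. The double-integrator dynamics, cost weights, and Riccati iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "demonstrations": 2-D end-effector positions around a target,
# with much tighter variance along the first axis.
demos = rng.normal(loc=[0.5, -0.2], scale=[0.05, 0.2], size=(100, 2))
mu = demos.mean(axis=0)                 # Gaussian mean (attractor)
sigma = np.cov(demos, rowvar=False)     # Gaussian covariance (synergy)

# The precision matrix weights the tracking cost: directions with low
# demonstrated variance receive a high penalty (stiff tracking).
Q = np.linalg.inv(sigma)

# Discrete double-integrator dynamics: state x = [pos, vel], control = accel.
dt = 0.01
A = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
B = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)])

# Full state cost: position error weighted by the precision, small velocity
# damping term, and a small control penalty (illustrative values).
Qx = np.block([[Q, np.zeros((2, 2))],
               [np.zeros((2, 2)), 0.1 * np.eye(2)]])
R = 1e-4 * np.eye(2)

# Solve the discrete algebraic Riccati equation by fixed-point iteration.
P = Qx.copy()
for _ in range(5000):
    P = Qx + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(
        R + B.T @ P @ B, B.T @ P @ A)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain

# Simulate: the time-invariant feedback law drives the state to the mean.
x = np.zeros(4)
target = np.concatenate([mu, np.zeros(2)])
for _ in range(2000):
    u = -K @ (x - target)
    x = A @ x + B @ u
```

The resulting gain matrix is time-invariant, which is what gives the controller its task-specific disturbance-rejection behavior: perturbations along tightly demonstrated directions are corrected aggressively, others only loosely.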

Video credit: Martijn Zeestraten

Source codes

Source codes related to this publication are available as part of PbDlib.

