Projects

Soft catching an object in flight

Catching a fast-flying object is particularly challenging, as it consists of two tasks: it requires extremely precise estimation of the object's motion and control of the robot's motion. Any small imprecision may lead the fingers to close too abruptly, letting the object fly away from the hand before the grasp is complete. We present a strategy to compensate for sensorimotor imprecision by introducing softness into the catching approach. Soft catching consists of having the robot move with the object for a short period of time, so as to leave more time for the fingers to close on the object. We use a dynamical systems (DS) based control law to generate the appropriate reach-and-follow motion, which is expressed as a Linear Parameter Varying (LPV) system. We propose a method to approximate the parameters of the LPV system using Gaussian Mixture Models, based on a set of kinematically feasible demonstrations generated by an off-line optimal control framework. We show theoretically that the resulting DS intercepts the object at the intercept point, at the right time, and with the desired velocity direction.
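The GMM-based parameterization described above can be sketched as follows: the velocity is a convex combination of linear systems, weighted by Gaussian responsibilities of the current state. The matrices and mixture parameters below are hypothetical placeholders, not values learned from demonstrations as in the paper; a minimal sketch:

```python
import numpy as np

def gmm_weights(x, means, covs, priors):
    """Normalized Gaussian responsibilities gamma_k(x)."""
    w = np.array([
        p * np.exp(-0.5 * (x - m) @ np.linalg.solve(S, x - m))
        / np.sqrt(np.linalg.det(2.0 * np.pi * S))
        for m, S, p in zip(means, covs, priors)
    ])
    return w / w.sum()

def lpv_ds(x, A_list, means, covs, priors, attractor):
    """x_dot = sum_k gamma_k(x) A_k (x - x*): a GMM-scheduled LPV system."""
    g = gmm_weights(x, means, covs, priors)
    return sum(gk * (A @ (x - attractor)) for gk, A in zip(g, A_list))

# Toy 2-D example with two components whose symmetric parts are negative
# definite, so any convex combination is stable (illustrative values).
A_list = [np.array([[-1.0, 0.0], [0.0, -2.0]]),
          np.array([[-2.0, 1.0], [-1.0, -2.0]])]
means = [np.array([1.0, 1.0]), np.array([-1.0, -1.0])]
covs = [np.eye(2), np.eye(2)]
priors = [0.5, 0.5]
attractor = np.zeros(2)

# Forward-Euler rollout: the state converges to the attractor.
x = np.array([2.0, -1.5])
for _ in range(2000):
    x = x + 0.01 * lpv_ds(x, A_list, means, covs, priors, attractor)
```

Here the attractor is fixed at the origin; in the catching task it would be the (moving) intercept point, with the A_k chosen so that the DS arrives with the desired velocity direction.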

 

 

Coordinated multi-arm motion planning: Reaching for moving objects in the face of uncertainty

Coordinated control strategies for multi-robot systems are necessary for tasks that cannot be executed by a single robot. This encompasses tasks where the workspace of a single robot is too small or where the load is too heavy for one robot to handle. Using multiple robots makes the task feasible by extending the workspace and/or increasing the payload of the overall robotic system. In this paper, we consider two instances of such tasks: a co-worker scenario in which a human hands over a large object to a robot, and intercepting a large flying object. The problem is made difficult because the pick-up/intercept motion must take place while the object is in motion and because the object's motion is not deterministic. The challenge is then to adapt the motion of the robotic arms in coordination with one another and with the object. The pick-up/intercept point is determined by taking into account the workspace of the multi-arm system and is continuously recomputed to adapt to changes in the object's trajectory. We propose a dynamical systems (DS) based control law to generate autonomous and synchronized motions for a multi-arm robot system reaching for a moving object. We show theoretically that the resulting DS coordinates the motion of the robots with each other and with the object, while the system remains stable. We validate our approach on a dual-arm robotic system and demonstrate that it can re-synchronize and adapt the motion of each arm in a fraction of a second, even when the motion of the object is fast and not accurately predictable.
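The continuous recomputation of the intercept point can be illustrated with a simple sketch: given a predicted object trajectory, pick the earliest predicted point that lies inside every arm's reachable set and can be reached in time. The spherical workspace model, the constant-velocity prediction, and the speed bound are all crude simplifying assumptions, not the paper's actual workspace or prediction models:

```python
import numpy as np

def pick_intercept(obj_pos, obj_vel, bases, reach, max_speed, ee_pos,
                   horizon=2.0, dt=0.05):
    """Return (t, point) for the earliest feasible intercept, or None.

    Assumes a constant-velocity object prediction and a spherical
    workspace of radius `reach` around each arm base -- both stand-ins
    for the paper's workspace and prediction models. In a real system
    this is re-run at every control cycle with fresh object estimates.
    """
    for t in np.arange(dt, horizon, dt):
        p = obj_pos + t * obj_vel          # predicted object position
        in_ws = all(np.linalg.norm(p - b) <= reach for b in bases)
        in_time = all(np.linalg.norm(p - e) <= max_speed * t for e in ee_pos)
        if in_ws and in_time:
            return t, p
    return None
```

For example, an object at (3, 0, 0) moving at (-2, 0, 0) toward two arm bases at (0, ±0.5, 0) with reach 1.0 first becomes interceptable at roughly t = 1.1 s, at about x = 0.8.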

 

 

A Unified Framework for Coordinated Multi-Arm Motion Planning

Coordination is essential in the design of dynamic control strategies for multi-arm robotic systems. Given the complexity of the task and the dexterity of the system, coordination constraints can emerge from different levels of planning and control. Primarily, one must consider task-space coordination, where the robots must coordinate with each other, with an object or with a target of interest. Coordination is also necessary in joint-space, as the robots should avoid self-collisions at all times. Moreover, multi-arm task-space behaviors can be either synchronous or asynchronous. In this work, we define a synchronous behavior as one in which the robot arms must coordinate with each other and with a moving object such that they reach for it in synchrony, whereas an asynchronous behavior allows each robot to perform independent point-to-point reaching motions. In this paper, we build upon our previous work on coordinated multi-arm control (Salehian et al. 2016a) to propose a unified framework that endows a multi-arm system with both synchronous and asynchronous behaviors and the capability of smoothly transitioning between them, whilst avoiding self-collisions. To provide such smooth transitions, we introduce the notion of synchronization allocation: given the motion of the object and the joint workspace of the multi-arm system, each arm is continuously allocated to a desired behavior. While allocated to the synchronous behavior, control of the robots is taken over by the virtual object Dynamical System (DS), in which the dynamics of a virtual object is coupled to that of the robots' motions. While allocated to the asynchronous behavior, the robots are controlled independently, each with its own goal-directed stable DS. Both behaviors and their synchronization allocation are encoded in a single dynamical system.
Further, we provide coordination in joint-space by introducing a centralized inverse kinematics (IK) solver under self-collision avoidance constraints, formulated as a quadratic program (QP) and solved in real time. The space of free motion is modeled through a sparse non-linear kernel classification method in a data-driven learning approach. We validate our framework on a dual-arm robotic system and demonstrate that it can re-synchronize and adapt the motion of each arm within milliseconds, even when the motion of the object is fast and not accurately predictable.
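To illustrate the kind of differential-IK step described above, here is a heavily simplified sketch: the tracking objective is solved in closed form via damped least squares, and joint-velocity limits are enforced by clipping, whereas the paper's centralized solver handles the self-collision inequality constraints (from the learned kernel classifier) inside a true QP. All dimensions and limits below are made up:

```python
import numpy as np

def ik_step(J, v_des, qd_limit, damping=1e-2):
    """One differential-IK step: argmin ||J qd - v_des||^2 + damping ||qd||^2.

    Closed-form damped least squares stands in for the paper's QP; the
    self-collision constraints are omitted here and joint-velocity
    bounds are applied by clipping (a simplification).
    """
    n = J.shape[1]
    qd = np.linalg.solve(J.T @ J + damping * np.eye(n), J.T @ v_des)
    return np.clip(qd, -qd_limit, qd_limit)
```

In the centralized formulation, J would stack the Jacobians of all arms and the QP would add one linear inequality per active collision constraint, keeping the arms out of the classified collision region.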

 

 

A DS Based Approach for Controlling Robotic Manipulators During Non-contact/Contact Transitions

Many daily-life tasks require precise control when making contact with surfaces. Ensuring a smooth transition from free motion to contact is crucial, as incurring a large impact force may lead to unstable contact with the robot bouncing on the surface, i.e., chattering. Stabilizing the forces at contact is not possible, as the impact lasts less than a millisecond, leaving no time for the robot to react to the impact force. We present a strategy in which the robot adapts its dynamics before entering into contact: its speed is modulated so as to align with the surface. We leverage the properties of autonomous dynamical systems for immediate re-planning and handling of unforeseen perturbations, and exploit local modulations of the dynamics to control for smooth transitions at contact. We show theoretically and empirically that, by using the modulation framework, the robot can (i) stably touch the contact surface, even when the surface's location is uncertain, (ii) at a desired location, and finally (iii) leave the surface or stop on the surface at a desired point.
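The local modulation can be sketched as follows: a nominal linear DS is multiplied by a state-dependent matrix that progressively removes the velocity component normal to the surface as the distance to it shrinks, so the robot arrives aligned with the surface at near-zero normal velocity. The linear gain profile and the nominal DS are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def modulated_ds(x, target, normal, surface_pt):
    """x_dot = M(x) f(x): damp the normal velocity component near the surface."""
    f = -(x - target)                       # nominal DS toward the target
    d = float(normal @ (x - surface_pt))    # signed distance to the plane
    g = np.clip(d, 0.0, 1.0)                # scale normal motion by distance
    M = np.eye(len(x)) - (1.0 - g) * np.outer(normal, normal)
    return M @ f
```

Far from the surface (d >= 1) the modulation is the identity and the nominal DS acts unchanged; at the surface (d = 0) only the tangential component survives, which is what prevents bouncing even when the surface's exact location is uncertain.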

 

 

Learning Augmented Joint-Space Task-Oriented Dynamical Systems

In this paper, we propose an asymptotically stable joint-space dynamical system that captures desired behaviors in joint-space while stably converging towards a task-space attractor. To encode joint-space behaviors while meeting the stability criteria, the dynamical system is constructed as a Linear Parameter Varying (LPV) system combining different motor synergies, and we provide a method for learning these synergy matrices from demonstrations. Specifically, we use dimensionality reduction to find a low-dimensional embedding space for modulating joint synergies, and then estimate the parameters of the corresponding synergies by solving a convex semi-definite optimization problem that minimizes the joint velocity prediction error from the demonstrations. Our proposed approach is empirically validated on a variety of motions.
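The structure described above can be sketched as follows: joint velocities are a convex combination of synergy matrices, with mixing weights computed in a low-dimensional embedding of the joint configuration. Plain PCA (via SVD) and softmax distance weights stand in here for the paper's learned embedding and activation model, and the synergy matrices in the test are arbitrary stable placeholders rather than matrices learned by the semi-definite program:

```python
import numpy as np

def pca_embed(Q):
    """Fit a 2-D linear embedding of joint data Q (n_samples x n_joints)."""
    mean = Q.mean(axis=0)
    _, _, Vt = np.linalg.svd(Q - mean, full_matrices=False)
    return mean, Vt[:2]                      # projection onto top-2 components

def synergy_ds(q, q_goal, S_list, centers, mean, W, beta=4.0):
    """q_dot = sum_k w_k(z) S_k (q - q_goal), weights from embedding z."""
    z = W @ (q - mean)                       # low-dimensional coordinate
    logits = -beta * np.array([np.sum((z - c) ** 2) for c in centers])
    w = np.exp(logits - logits.max())
    w /= w.sum()                             # convex combination of synergies
    return sum(wk * (S @ (q - q_goal)) for wk, S in zip(w, S_list))
```

With every S_k stable, any convex combination drives the joints toward q_goal, which mirrors the role of the stability constraints enforced by the convex optimization in the paper.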

 

 

Transitioning with confidence during contact/non-contact scenarios

In this work, we propose a probabilistic approach for detecting transitions in hybrid control systems with limited sensing. Detecting the moment of transition is a particularly challenging problem as (i) multiple sources of sensory information are usually not available in robotic systems and (ii) the sensory information is noisy and requires calibration. The challenge increases significantly if the robot makes physical contact, as contact causes discontinuities in the dynamics. The proposed transition criterion addresses these shortcomings by studying the behavior of the robot and the environment over a short time horizon. We empirically validate our approach by detecting contact transitions in a hand-over scenario where a human operator brings a large object and hands it over to a pair of robotic arms that are not equipped with force or tactile sensors.
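A minimal sketch of a short-horizon, model-based transition test (the Gaussian residual models and the threshold here are illustrative assumptions, not the paper's criterion): maintain likelihood models of a residual signal, such as the velocity-tracking error, under the "free motion" and "in contact" regimes, and declare a transition when the log-likelihood ratio over a short sliding window crosses a threshold:

```python
import math

def log_lik(window, mu, sigma):
    """Gaussian log-likelihood of a window of residual samples."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (r - mu) ** 2 / (2 * sigma ** 2) for r in window)

def detect_transition(residuals, free=(0.0, 0.05), contact=(0.6, 0.1),
                      horizon=5, threshold=0.0):
    """Return the first index at which the contact model explains a short
    horizon of residuals better than the free-motion model, else None."""
    for i in range(horizon, len(residuals) + 1):
        w = residuals[i - horizon:i]
        if log_lik(w, *contact) - log_lik(w, *free) > threshold:
            return i - 1
    return None
```

Because the decision pools evidence over the whole window rather than thresholding a single sample, an isolated noisy reading does not trigger a false transition, which is the point of the short-horizon formulation.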