Dr Mehmet Dogar

Profile

I am an Associate Professor at the School of Computing, University of Leeds. I direct the Robot Manipulation Lab in the school.

My research focuses on autonomous robotic manipulation. I envision a future where robots autonomously perform complex manipulation tasks in human environments, such as grasping an object from the back of a cluttered shelf, or manufacturing and assembling a complex piece of furniture. I am particularly interested in the use of physics-based predictive models in manipulation planning.

Previously, I was a postdoctoral researcher at CSAIL, MIT. I received my PhD from the Robotics Institute at Carnegie Mellon University. 

I am a co-chair of the IEEE RAS Technical Committee on Mobile Manipulation. I have been an Area Chair for the Robotics: Science and Systems conference in 2017 and 2018, and I am an Associate Editor for IEEE Robotics and Automation Letters.

I was a finalist for the Best Paper Award at IEEE ICRA 2018, at IEEE ICRA 2015, and at IEEE IROS 2010. I received the Best Reviewer Award at IEEE ICRA 2014.

Publications

A list of my publications is at the very bottom of this page. Also see my Google Scholar page.

Research interests

My research focuses on autonomous robotic manipulation. I envision a future where robots autonomously perform complex manipulation tasks in human environments, such as grasping an object from the back of a cluttered shelf, or manufacturing and assembling a complex piece of furniture. My manipulation planners use physics-based predictive models. This challenges the existing paradigm, which is based on a geometric representation of the world and is limited to pick-and-place actions. The physics-based approach, on the other hand, enables a robot to interact with the environment through a rich set of actions such as pushing, tumbling, and throwing, as well as pick-and-place.
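To make this concrete, here is a minimal sketch of physics-based push planning under invented assumptions: a toy quasi-static push model predicts where a push will send an object, and the planner picks the candidate push whose predicted outcome lands closest to the goal. In a real planner the prediction step would query a physics engine or a learned model; the slip constant and candidate set below are purely illustrative.

```python
import numpy as np

def predict_push(obj_pos, push_dir, push_dist, slip=0.7):
    """Toy push model (invented): the object translates along the push
    direction, with slip shortening the travel. A real planner would
    query a physics engine here instead."""
    push_dir = push_dir / np.linalg.norm(push_dir)
    return obj_pos + push_dir * push_dist * slip

def plan_push(obj_pos, goal_pos, n_candidates=16, push_dist=0.10):
    """Pick the candidate push whose predicted outcome is closest to the goal."""
    best_dir, best_err = None, np.inf
    for theta in np.linspace(0.0, 2.0 * np.pi, n_candidates, endpoint=False):
        d = np.array([np.cos(theta), np.sin(theta)])
        err = np.linalg.norm(predict_push(obj_pos, d, push_dist) - goal_pos)
        if err < best_err:
            best_dir, best_err = d, err
    return best_dir, best_err

direction, err = plan_push(np.array([0.0, 0.0]), np.array([0.05, 0.03]))
print("best push direction:", direction, "predicted error:", err)
```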

I am also interested in collaborative manipulation planning, where the task is performed through the collaboration of human-robot or robot-robot teams.

Research Videos


We humans can predict the physical effects of our actions. For example, when you push an object with your finger, you have a prediction of how it will move. Robots need to make such predictions too, in order to plan their actions. But what kind of physics model should robots use? The traditional approach in robotic planning is to use physics engines; newer approaches learn approximate physics models. In our recent work, we have been investigating methods to combine coarse but fast physics predictions with accurate but slow ones, to achieve predictions that are both fast and accurate. Lead author: Wisdom Agboh, in collaboration with Daniel Ruprecht. arXiv preprints here; appeared at ISRR 2019.
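One classical way to combine a cheap integrator with an expensive one is the parallel-in-time Parareal iteration; the sketch below shows that update with toy one-dimensional dynamics standing in for both physics models. The dynamics, slice count, and step sizes are all invented for illustration, not taken from the paper.

```python
def fine(x, dt, steps=100):
    """Accurate but slow model: many small steps of the toy ODE dx/dt = 1 - x."""
    h = dt / steps
    for _ in range(steps):
        x = x + h * (1.0 - x)
    return x

def coarse(x, dt):
    """Fast but crude model: a single Euler step of the same toy ODE."""
    return x + dt * (1.0 - x)

def parareal(x0, dt, n_slices=5, n_iters=3):
    """Parareal update: a serial coarse sweep, corrected by fine-model
    evaluations of the previous iterate (those can run in parallel)."""
    U = [x0]
    for n in range(n_slices):            # initial guess: coarse only
        U.append(coarse(U[n], dt))
    for _ in range(n_iters):
        F = [fine(U[n], dt) for n in range(n_slices)]    # parallelisable
        G = [coarse(U[n], dt) for n in range(n_slices)]
        V = [x0]
        for n in range(n_slices):        # serial correction sweep
            V.append(coarse(V[n], dt) + F[n] - G[n])
        U = V
    return U

print(parareal(x0=0.0, dt=0.2))
```

The expensive fine evaluations are independent across time slices, which is where a parallel speed-up can come from.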



In our recent work, we have looked at learning policies for physics-based manipulation in clutter. arXiv preprints here and here (appeared at Humanoids 2018 and IROS 2019, respectively). Lead author: Wissam Bejjani, in collaboration with Matteo Leonetti.


We have been developing algorithms for reactive re-planning for manipulation in cluttered environments. The planners that have been developed for this problem have so far been open-loop. The video below compares the performance of our closed-loop approach. Appeared at IEEE-RAS Humanoids 2018; arXiv preprint (lead author: Wisdom Agboh) here.
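A minimal sketch of the closed-loop idea, with invented stand-ins for both the planner and the physics prediction: rather than executing one open-loop plan to completion, the robot re-plans from the latest observed state and executes only the first action of each plan.

```python
import numpy as np

rng = np.random.default_rng(0)

def execute_and_observe(state, action):
    """Stand-in for executing one action and observing the result; the
    noise models the mismatch between prediction and reality."""
    return state + action + rng.normal(0.0, 0.01, size=state.shape)

def plan(state, goal, horizon=5):
    """Stand-in planner: a straight-line action sequence toward the goal."""
    step = (goal - state) / horizon
    return [step] * horizon

def closed_loop(state, goal, tol=0.01, max_steps=50):
    """Re-plan from the latest observed state and execute only the first
    action, instead of running one open-loop plan to the end."""
    for _ in range(max_steps):
        if np.linalg.norm(goal - state) < tol:
            break
        actions = plan(state, goal)
        state = execute_and_observe(state, actions[0])
    return state

print(closed_loop(np.zeros(2), np.array([0.3, 0.1])))
```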


Imagine reaching into a fridge shelf that is crowded with fragile objects, such as glass jars and containers. You move slowly and carefully. However, if the fridge shelf is almost empty, with only a few hard-to-break plastic containers, you move faster and with less care. In this work (lead author: Wisdom Agboh), we develop a model predictive control framework for robot pushing that adapts to the required task accuracy, pushing fast when it can and slowing down when it needs to be more careful. Appeared at WAFR 2018; arXiv preprint here.
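A toy sketch of the speed-accuracy trade-off, with all constants invented: the clearance around the target sets how accurate the push outcome must be, and the controller picks the fastest speed whose predicted outcome uncertainty still fits within that tolerance.

```python
import numpy as np

def required_accuracy(clearance):
    """Invented mapping from free space around the target to how precise
    the push outcome must be."""
    return np.clip(clearance * 0.5, 0.005, 0.05)

def choose_speed(clearance, uncertainty_per_speed=0.02,
                 v_min=0.02, v_max=0.25):
    """Pick the fastest push speed whose predicted outcome uncertainty
    (assumed proportional to speed) fits the required accuracy."""
    acc = required_accuracy(clearance)
    v = acc / uncertainty_per_speed
    return float(np.clip(v, v_min, v_max))

for clearance in [0.01, 0.05, 0.20]:   # crowded ... nearly empty shelf
    print(f"clearance {clearance:.2f} m -> push speed {choose_speed(clearance):.2f} m/s")
```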


Imagine grasping a wooden board while your friend drills holes into it and cuts pieces off of it. You would predict the forces your friend will apply to the board and choose your grasps accordingly; for example, you would rest your palm firmly against the board to hold it stable against the large drilling forces. We developed (lead author: Lipeng Chen, appeared at IROS 2018) a manipulation planner that enables a robot to grasp objects similarly. arXiv preprint (2018) here.
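As a toy illustration of the idea (not the planner from the paper), the sketch below scores candidate grasps by how directly they resist a predicted task force; the grasp set, normals, and force numbers are invented, and a real planner would check full force closure rather than a single direction.

```python
import numpy as np

# Candidate grasps, each summarised by the unit direction along which it
# resists force best (hypothetical names and numbers).
grasps = {
    "pinch_edge":   np.array([0.0, 1.0, 0.0]),
    "palm_on_face": np.array([0.0, 0.0, 1.0]),
}

def resistance(grasp_normal, predicted_force):
    """Score a grasp by the component of the predicted task force it
    resists directly."""
    f = predicted_force / np.linalg.norm(predicted_force)
    return float(np.dot(grasp_normal, -f))

drill_force = np.array([0.0, 0.0, -40.0])   # drilling pushes down into the board
best = max(grasps, key=lambda g: resistance(grasps[g], drill_force))
print("chosen grasp:", best)   # palm_on_face resists the drilling force best
```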


At MIT, we developed a multi-robot system to perform multi-step assembly tasks. The system can perform collaborative operations, e.g. two robots carrying a heavy part together. The system is also able to shift between coarse manipulation operations, such as transport, and fine operations, such as part alignment and fastener insertion. Read more about this study in our ISER 2014 paper. Our paper about multi-robot manipulation planning was a finalist for the Best Paper Award and the Best Manipulation Paper Award at ICRA 2015! A draft is available here.


Tactile sensors provide contact information, essential for making physics-based predictions. In this work we use tactile sensing to localize objects during pushing actions.
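One standard way to fuse such contact measurements is a particle filter over the object pose. The sketch below is a generic illustration, not the estimator from our work: it assumes a disc of known radius whose centre we are estimating, so a sensed contact implies the centre lies roughly one radius from the fingertip; all motion and noise models are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

OBJECT_RADIUS = 0.05   # known disc radius (invented)

def motion_model(particles, push):
    """Propagate pose hypotheses through a toy push model with noise."""
    return particles + push + rng.normal(0.0, 0.005, size=particles.shape)

def contact_likelihood(particles, finger_pos, sigma=0.01):
    """A sensed contact implies the disc centre lies about one radius
    from the fingertip; weight hypotheses by that consistency."""
    d = np.linalg.norm(particles - finger_pos, axis=1)
    return np.exp(-0.5 * ((d - OBJECT_RADIUS) / sigma) ** 2)

# Start with broad uncertainty over the disc centre, then filter while pushing.
particles = rng.uniform(-0.1, 0.1, size=(500, 2))
finger = np.array([0.0, 0.0])
for _ in range(10):
    particles = motion_model(particles, push=np.array([0.01, 0.0]))
    w = contact_likelihood(particles, finger)
    w /= w.sum()
    particles = particles[rng.choice(len(particles), size=len(particles), p=w)]

print("estimated object centre:", particles.mean(axis=0))
```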


Human environments are cluttered and robots regularly need to solve manipulation problems by moving objects out of the way to reach other objects. The video below shows the PR2 using my push-grasp planner to contact multiple objects simultaneously and grasp objects through clutter:


I developed an algorithm to rearrange clutter using a library of actions including pushing. The planner can move objects that cannot be moved by pick-and-place actions, e.g. large or heavy objects. The video below shows HERB pushing a large box out of the way.
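The sketch below is a toy caricature of the action-library idea, with the object list, payload limit, and selection rule all invented: the planner clears each blocking object with pick-and-place when it can, and falls back to pushing for objects beyond the payload.

```python
# Toy rearrangement: clear blocking objects with whichever action in the
# library applies, falling back to pushing for heavy objects.

objects = [
    {"name": "mug",       "mass_kg": 0.3, "blocks_goal": True},
    {"name": "large_box", "mass_kg": 4.0, "blocks_goal": True},
    {"name": "bowl",      "mass_kg": 0.5, "blocks_goal": False},
]

MAX_PICK_MASS = 1.0   # payload limit for pick-and-place (invented)

def choose_action(obj):
    """Prefer pick-and-place; push when the object exceeds the payload."""
    return "pick_and_place" if obj["mass_kg"] <= MAX_PICK_MASS else "push"

plan = [(o["name"], choose_action(o)) for o in objects if o["blocks_goal"]]
plan.append(("target", "grasp"))
print(plan)   # [('mug', 'pick_and_place'), ('large_box', 'push'), ('target', 'grasp')]
```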


You can see HERB executing push-grasps in the following video. In contrast to the instantaneous approach to grasping, I model grasping as a process where the object is pushed along by the hand before the fingers close on it. Push-grasping is robust to large uncertainty in object pose, because the uncertainty can be funneled into the hand during pushing. My work showed that we can pre-compute these funnels, called capture regions. Read more about this study in our IROS 2010 paper.
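A toy sketch of how a pre-computed capture region can be used (the rectangular region and the uncertainty numbers are invented; the real regions are curved and computed offline): the planner commits to a push-grasp only if every object pose sampled from the pose uncertainty lies inside the region.

```python
import numpy as np

# A 2-D capture region in the hand frame, sketched as a rectangle in
# front of the palm (illustrative stand-in for the real curved region).
REGION_X = (0.00, 0.30)    # how far ahead of the palm the push can start
REGION_Y = (-0.06, 0.06)   # lateral band the fingers can funnel

def in_capture_region(obj_xy):
    x, y = obj_xy
    return REGION_X[0] <= x <= REGION_X[1] and REGION_Y[0] <= y <= REGION_Y[1]

def push_grasp_succeeds(pose_samples):
    """Commit to a push-grasp only if every sampled object pose lies
    inside the capture region."""
    return all(in_capture_region(p) for p in pose_samples)

rng = np.random.default_rng(1)
samples = rng.normal([0.15, 0.0], [0.03, 0.02], size=(100, 2))  # pose uncertainty
print("commit to push-grasp:", push_grasp_succeeds(samples))
```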


Here is a demo our lab at CMU put together where our robot HERB microwaves a meal: 


 

Student education

I teach introductory Robotics.

Research groups and institutes

  • Artificial Intelligence

Postgraduate research opportunities

We welcome enquiries from motivated and qualified applicants from all around the world who are interested in PhD study. Our research opportunities (https://phd.leeds.ac.uk) allow you to search for projects and scholarships.