Dr Mehmet Dogar

Profile

I am an Associate Professor at the School of Computing, University of Leeds.

Before joining Leeds, I was a postdoctoral researcher at MIT, USA (2013-2015). I received my PhD (2008-2013) from the Robotics Institute at Carnegie Mellon University, USA. I was a Visiting Professor at ETH Zurich in 2023.

Here is a talk I gave at UW-Seattle presenting an overview of my research interests:

Video: https://www.youtube.com/embed/-de8hEkOmNk

Research interests

My research focuses on autonomous robotic manipulation. I envision a future where robots autonomously perform complex manipulation tasks in human environments, such as grasping an object from the back of a cluttered shelf, or manufacturing and assembling a complex piece of furniture. My manipulation planners use physics-based predictive models. This challenges the existing paradigm, which is based on a geometric representation of the world and is limited to pick-and-place actions. The physics-based approach, in contrast, enables a robot to interact with the environment through a rich set of actions, such as pushing, tumbling, and throwing, as well as pick-and-place.
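To give a flavour of the difference, here is a toy sketch in Python (purely illustrative; this is not one of my planners' actual models, and all names and numbers are made up): a geometric planner would reject any motion that collides with an object, while a physics-based planner predicts where the contacted object ends up.

```python
import numpy as np

def predict_push(obj_xy, pusher_start, pusher_end, radius=0.05):
    """Toy quasi-static push model. If the pusher's straight-line motion
    sweeps through the object, the object is carried along the push
    direction; a purely geometric planner would reject this motion as a
    collision instead of predicting its outcome."""
    direction = pusher_end - pusher_start
    length = np.linalg.norm(direction)
    direction = direction / length
    t = np.clip(np.dot(obj_xy - pusher_start, direction), 0.0, length)
    closest = pusher_start + t * direction      # nearest point on the sweep
    if np.linalg.norm(obj_xy - closest) < radius:
        return pusher_end + radius * direction  # object pushed ahead of the hand
    return obj_xy                               # no contact: object stays put

print(predict_push(np.array([0.5, 0.0]),
                   np.array([0.0, 0.0]), np.array([1.0, 0.0])))
```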

I am also interested in collaborative manipulation planning, where the task is performed through the collaboration of human-robot or robot-robot teams.

Publications

A list of my publications appears at the very bottom of this page; also see my Google Scholar page.

Research videos


Tracking an object in clutter with a camera while a robot is manipulating it is particularly difficult: the cluttering objects and, inevitably, the robot hand block the camera's view. In our recent work, we use the robot's control information during non-prehensile manipulation to integrate physics-based predictions of the object's motion with the camera view. The result is smooth and consistent tracking of the object even under heavy occlusions (lead author: Zisong Xu). An arXiv version is available here; this work is under review.

Video: https://www.youtube.com/embed/EMBFYzkno64
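A minimal sketch of the fusion idea in Python (illustrative only; this is a generic constant-position Kalman filter, not the estimator in the paper): the physics-based model, driven by the robot's controls, predicts the object's motion at every step, and the camera update is simply skipped while the hand occludes the object, so the estimate coasts on physics alone.

```python
import numpy as np

def predict(x, P, u, Q):
    """Physics-based prediction: the control u pushes the object by u."""
    return x + u, P + Q

def update(x, P, z, R):
    """Camera correction with a per-axis scalar Kalman gain."""
    K = P / (P + R)
    return x + K * (z - x), (1.0 - K) * P

x, P = np.zeros(2), np.ones(2)             # belief over object (x, y)
Q, R = 0.01 * np.ones(2), 0.05 * np.ones(2)
for k in range(10):
    u = np.array([0.02, 0.0])              # push 2 cm along x per step
    x, P = predict(x, P, u, Q)
    if not 3 <= k <= 6:                    # steps 3-6: hand occludes the camera
        z = x + 0.05 * np.random.randn(2)  # simulated camera detection
        x, P = update(x, P, z, R)
print(x)
```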


We humans can easily pick up multiple objects at a time, e.g., think of picking up multiple glasses simultaneously when clearing a dinner table. This is a difficult problem for robots, which need to reason about how the objects will move and interact with each other during the pick. In collaboration with UC Berkeley, we developed a planner to pick multiple objects at a time. (This work was led by Wisdom Agboh, who was a PhD student in our group and is currently a visiting researcher at UC Berkeley.) It appeared at ISRR 2022, and the follow-up work is available on arXiv.

Video: https://www.youtube.com/embed/pEZpHX5FZIs
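As a toy illustration of the kind of feasibility reasoning involved (a hypothetical check, far simpler than the actual planner, with made-up dimensions): before attempting a multi-object pick, verify that the objects jointly fit within the gripper opening along the grasp axis.

```python
import numpy as np

def multi_pick_feasible(centers, radii, aperture):
    """Hypothetical feasibility check for grasping several objects at once:
    all objects (projected onto the grasp axis, here x) must fit within the
    gripper opening. A real planner would also predict how the objects
    shift and squeeze together as the fingers close."""
    lo = min(c[0] - r for c, r in zip(centers, radii))
    hi = max(c[0] + r for c, r in zip(centers, radii))
    return hi - lo <= aperture

glasses = [np.array([0.00, 0.0]), np.array([0.06, 0.01])]
print(multi_pick_feasible(glasses, [0.03, 0.03], aperture=0.15))
```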


Humans can provide valuable guidance to robot planning systems. In this series of works, led by Rafael Papallas, we developed human-in-the-loop planners that integrate high-level human guidance with robotic manipulation planning. We further investigated approaches where a single remote human operator can guide a group of robots (e.g., robots working in a warehouse) to increase their efficiency and success rates. This work appeared at RA-L 2020 and IROS 2022.

Video: https://www.youtube.com/embed/t3yrx-J8IRw
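A hypothetical sketch of the interaction pattern (the function names and probabilities below are illustrative, not from the papers): the operator supplies a high-level hint, which obstacle to move first, and the planner biases its sampling toward that suggestion.

```python
import random

def plan(obstacles, human_hint=None, budget=100):
    """Bias the search toward the operator's suggested obstacle, while
    still occasionally sampling the others."""
    order = list(obstacles)
    if human_hint in order:                 # put the suggested obstacle first
        order.remove(human_hint)
        order.insert(0, human_hint)
    for _ in range(budget):
        candidate = order[0] if random.random() < 0.8 else random.choice(order)
        if try_push(candidate):             # one push attempt per iteration
            return candidate
    return None

def try_push(obstacle):                     # stand-in for a physics rollout
    return random.random() < (0.3 if obstacle == "jar" else 0.05)

print(plan(["box", "jar", "can"], human_hint="jar"))
```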


To search for a hidden object, a robot needs two skills: (1) generating hypotheses about where the hidden object may be, and (2) manipulating the environment to reveal that part of the space and check whether the object is there. Recently, Wissam Bejjani, a PhD student in our group (in collaboration with Matteo Leonetti), trained a neural network to simultaneously generate hypotheses about hidden object positions and learn action-values for the partially observable states. Below is a video of the system in action; at the top right, you see the hypothesized positions of the target object in green. An arXiv preprint is here; this work appeared at IROS 2021.

Video: https://www.youtube.com/embed/4iSsogfCkMc
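A hypothetical stand-in for the learned components (illustrative only; the paper trains a neural network for both roles): hypotheses are sampled over the unobserved part of the scene, and an action is scored by how many hypotheses it would confirm or rule out.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hypotheses(visible_mask, n=100):
    """visible_mask: HxW boolean grid, True where the camera can see.
    Sample candidate cells for the hidden object in the occluded region."""
    occluded = np.argwhere(~visible_mask)
    return occluded[rng.integers(len(occluded), size=n)]

def action_value(revealed_cells, hypotheses):
    """Fraction of hypotheses an action would confirm or rule out."""
    revealed = {tuple(c) for c in revealed_cells}
    return sum(tuple(h) in revealed for h in hypotheses) / len(hypotheses)

mask = np.ones((10, 10), dtype=bool)
mask[2:8, 4:9] = False                     # region hidden behind an object
hyps = sample_hypotheses(mask)
print(action_value(np.argwhere(~mask)[:20], hyps))  # reveal 20 of 30 cells
```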


We humans can predict the physical effects of our actions. For example, when you push an object with your finger, you have a prediction of how it will move. Robots need to make such predictions as well, to plan their actions. But what kind of physics model should robots use? The traditional approach in robotic planning is to use physics engines; newer approaches instead learn approximate physics models. In our recent work, we have been investigating methods that combine coarse but fast physics predictions with accurate but slow ones, to achieve predictions that are both fast and accurate. Lead author: Wisdom Agboh, in collaboration with Daniel Ruprecht. arXiv preprints are here; this work appeared at ISRR 2019.

Video: https://www.youtube.com/embed/5e9oTeu4JOU
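To my understanding, this line of work builds on parallel-in-time integration (Parareal). Below is a minimal, generic Parareal iteration for a toy one-dimensional system (not the paper's implementation): a cheap one-step coarse model and an expensive fine integrator are combined so that the result converges to the fine solution, while the fine solves over the time slices are independent and hence parallelisable.

```python
def f(x):                 # toy dynamics standing in for pushed-object motion
    return -2.0 * x

def coarse(x, dt):        # fast, approximate physics: one big Euler step
    return x + dt * f(x)

def fine(x, dt, substeps=100):   # slow, accurate physics
    h = dt / substeps
    for _ in range(substeps):
        x = x + h * f(x)
    return x

def parareal(x0, T, n_slices=10, iters=3):
    dt = T / n_slices
    X = [x0]
    for _ in range(n_slices):            # initial sweep with the coarse model
        X.append(coarse(X[-1], dt))
    for _ in range(iters):
        F = [fine(X[i], dt) for i in range(n_slices)]   # parallelisable part
        Xn = [x0]
        for i in range(n_slices):        # serial coarse correction sweep
            Xn.append(coarse(Xn[-1], dt) + F[i] - coarse(X[i], dt))
        X = Xn
    return X[-1]

print(parareal(1.0, 1.0), fine(1.0, 1.0))   # converges toward the fine result
```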


In our recent work, we have also looked at learning policies for physics-based manipulation in clutter. arXiv preprints are here and here (appeared at Humanoids 2018 and IROS 2019, respectively). Lead author: Wissam Bejjani, in collaboration with Matteo Leonetti.


We have been developing algorithms for reactive re-planning for manipulation in cluttered environments. Planners developed for this problem have so far been open-loop. The video below compares the performance of our closed-loop approach. This work appeared at IEEE-RAS Humanoids 2018; an arXiv preprint (lead author: Wisdom Agboh) is here.
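The closed-loop skeleton, in a hypothetical minimal form (the planner and noise model below are toy stand-ins): plan from the current state, execute only the first control, observe the outcome, and re-plan.

```python
import random

def closed_loop(state, goal, max_steps=50):
    for _ in range(max_steps):
        if abs(state - goal) < 0.01:
            return True
        controls = plan(state, goal)        # fresh plan from the current state
        state = observe(state, controls[0]) # commit to one action only
    return False

def plan(state, goal):                      # toy planner: step toward the goal
    return [max(-0.1, min(0.1, goal - state))]

def observe(state, u):                      # execution is noisy in clutter
    return state + u + random.gauss(0.0, 0.01)

print(closed_loop(0.0, 0.5))
```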


Imagine reaching into a fridge shelf that is crowded with fragile objects, such as glass jars and containers. You move slowly and carefully. If, however, the shelf is almost empty, with only a few hard-to-break plastic containers, you move faster and with less care. In this work (lead author: Wisdom Agboh), we develop a model predictive control framework for robot pushing that adapts to the required task accuracy, pushing quickly when the task allows it and slowly when the robot needs to be more careful. This work appeared at WAFR 2018; an arXiv preprint is here.
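The adaptation can be pictured with a hypothetical speed schedule (the numbers are illustrative, not from the paper): the push speed scales with the clearance the nearest fragile obstacle leaves.

```python
def push_speed(clearance, v_min=0.02, v_max=0.25, c_safe=0.10):
    """clearance: distance (m) to the nearest obstacle along the push.
    Interpolate between a careful creep and a fast push."""
    scale = min(1.0, max(0.0, clearance / c_safe))
    return v_min + scale * (v_max - v_min)

print(push_speed(0.01))   # crowded fridge shelf: creep
print(push_speed(0.20))   # nearly empty shelf: move fast
```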


Imagine grasping a wooden board while your friend drills holes into it and cuts pieces off of it. You would predict the forces your friend will apply to the board and choose your grasps accordingly; for example, you would rest your palm firmly against the board to hold it stable against the large drilling forces. We developed a manipulation planner (lead author: Lipeng Chen; appeared at IROS 2018) that enables a robot to grasp objects similarly. An arXiv preprint (2018) is here.
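A hypothetical screening test in that spirit (a single-contact Coulomb friction check with made-up numbers; the actual planner reasons about full wrenches across the whole task sequence): keep a candidate grasp only if its contact can resist the predicted drilling force.

```python
import numpy as np

def resists(contact_normal, applied_force, mu=0.5):
    """Can a contact with friction coefficient mu supply the reaction
    force needed to cancel the applied force?"""
    n = contact_normal / np.linalg.norm(contact_normal)
    f = -applied_force                  # force the grasp must supply
    along = np.dot(f, n)
    if along <= 0:
        return False                    # contact would have to pull, not press
    tangential = np.linalg.norm(f - along * n)
    return tangential <= mu * along     # inside the friction cone

drill = np.array([0.0, 0.0, -30.0])     # 30 N drilling force, downward
palm = np.array([0.0, 0.0, 1.0])        # palm presses up under the board
print(resists(palm, drill))
```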


At MIT, we developed a multi-robot system to perform multi-step assembly tasks. The system can perform collaborative operations, e.g. two robots carrying a heavy part together. It can also shift between coarse manipulation operations, such as transport, and fine operations, such as part alignment and fastener insertion. Read more about this study in our ISER 2014 paper. Our paper on multi-robot manipulation planning was a finalist for the Best Paper Award and the Best Manipulation Paper Award at ICRA 2015! A draft is available here.


Tactile sensors provide contact information, essential for making physics-based predictions. In this work we use tactile sensing to localize objects during pushing actions.
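A minimal illustration of why contact is so informative (a generic particle-filter sketch, not the method in this work; all dimensions are made up): a single binary touch reading sharply re-weights a belief over object positions, collapsing it toward the contact surface.

```python
import numpy as np

rng = np.random.default_rng(1)
particles = rng.uniform(-0.1, 0.1, size=(500, 2))   # prior over object (x, y)

def contact_likelihood(p, finger_xy, felt_contact, radius=0.03):
    """Likelihood of the tactile reading given a hypothesized object at p."""
    touching = np.linalg.norm(p - finger_xy) < radius
    return 0.9 if touching == felt_contact else 0.1

finger = np.array([0.02, 0.0])
w = np.array([contact_likelihood(p, finger, True) for p in particles])
w /= w.sum()
particles = particles[rng.choice(len(particles), size=500, p=w)]  # resample
print(particles.mean(axis=0))           # belief concentrates near the finger
```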


Human environments are cluttered and robots regularly need to solve manipulation problems by moving objects out of the way to reach other objects. The video below shows the PR2 using my push-grasp planner to contact multiple objects simultaneously and grasp objects through clutter:


I developed an algorithm to rearrange clutter using a library of actions including pushing. The planner can move objects that are not movable by pick-and-place actions, e.g. large or heavy objects. Here is a video where HERB pushes a large box out of the way: 
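The action-library idea can be caricatured in a few lines (hypothetical fields and thresholds; the actual planner searches over full action sequences): for each blocking object, fall back from pick-and-place to pushing when the object cannot be lifted.

```python
def clear_object(obj, max_payload_kg=1.0):
    """Choose an action from the library for a blocking object: heavy or
    ungraspable objects are pushed aside instead of picked up."""
    if obj["mass"] <= max_payload_kg and obj["graspable"]:
        return "pick-and-place"
    return "push"

big_box = {"mass": 3.0, "graspable": False}
mug = {"mass": 0.3, "graspable": True}
print(clear_object(big_box), clear_object(mug))
```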


You can see HERB executing push-grasps in the following video. In contrast to the instantaneous approach to grasping, I model grasping as a process where the object is pushed along by the hand before the fingers close on it. Push-grasping is robust to large uncertainties in object pose, because the uncertainty can be funneled into the hand during pushing. My work showed that we can pre-compute these funnels, called capture regions. Read more about this study in our IROS 2010 paper.
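A toy version of the capture-region test (the rectangular region below is an idealisation; real capture regions have a more complex shape computed from the hand geometry): a push-grasp is accepted only if every pose hypothesis lies inside the pre-computed region, so that pushing funnels all of them into the hand.

```python
import numpy as np

CAPTURE_MIN = np.array([0.00, -0.05])   # idealised hand-frame rectangle (m)
CAPTURE_MAX = np.array([0.30,  0.05])

def push_grasp_succeeds(pose_hypotheses):
    """Accept the push-grasp only if all pose hypotheses fall inside the
    pre-computed capture region."""
    h = np.asarray(pose_hypotheses)
    return bool(np.all((h >= CAPTURE_MIN) & (h <= CAPTURE_MAX)))

rng = np.random.default_rng(2)
hypotheses = 0.01 * rng.standard_normal((100, 2)) + [0.15, 0.0]
print(push_grasp_succeeds(hypotheses))
```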


Here is a demo our lab at CMU put together where our robot HERB microwaves a meal: 



Research projects

Any research projects I'm currently working on will be listed below. Our list of all research projects (https://eps.leeds.ac.uk/dir/research-projects) allows you to view and search the full list of projects in the faculty.

Student education

I teach introductory Robotics.

Research groups and institutes

  • Artificial Intelligence

Current postgraduate researchers

Postgraduate research opportunities

We welcome enquiries from motivated and qualified applicants from all around the world who are interested in PhD study. Our research opportunities (https://phd.leeds.ac.uk) allow you to search for projects and scholarships.