ORCID Identifier(s)

0000-0001-6243-2229

Graduation Semester and Year

2018

Language

English

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Mechanical Engineering

Department

Mechanical and Aerospace Engineering

First Advisor

Panayiotis S. Shiakolas

Abstract

The human hand is one of the greatest tools known to mankind for grasping objects. So much so that researchers have been investigating the development of artificial biomimetic hands in an effort to mimic its functionality and dexterity, with the intent to apply the technology to various robotic platforms ranging from end effectors for industrial pick-and-place robots, to upper-limb prosthetics, to humanoids. Certain features make this endeavor challenging, such as the mechanical design, the actuation and sensorization, and the functionality, as well as the interaction from the viewpoints of both the end user and the environment. At UT Arlington, the Manufacturing Automation and Robotic Systems lab has developed a Human-Robot Interaction (HRI) software platform along with a 5-finger, 8-DOF biomimetic artificial hand (H2) for research purposes. This research focuses on the grasping component of HRI. The aim of this research is to investigate approaches and develop methodologies for autonomous to semi-autonomous object grasping in physical space. Grasp research has progressed from the investigation of pure kinematic grasping with a 5-finger, 5-DOF hand (H1) to the current improved 5-finger, 8-DOF artificial biomimetic hand with a dexterous 4-DOF thumb. The method used for grasp pattern prediction is based on Artificial Neural Networks trained with experimental data on the HRI platform. A methodology is developed for the prediction of grasp patterns for objects of non-uniform geometric features based on the object and artificial hand geometric dimensions. This information is used to establish normalized grasp and length ratios, which do not discriminate by object category and further allow for their integration in the training data sets for grasp learning.
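The normalization idea above can be sketched as follows. This is an illustrative example only, not the dissertation's exact formulation: the hand dimensions, the ratio definitions, and the function name are hypothetical placeholders chosen to show how dividing object dimensions by hand dimensions yields dimensionless, category-agnostic features for ANN training.

```python
# Hypothetical hand dimensions (mm) for an artificial hand such as H2.
# These values are assumed for illustration, not measured from the hand.
MAX_APERTURE_MM = 110.0   # widest graspable span between thumb and fingers
PALM_LENGTH_MM = 95.0     # palm length used to normalize object length

def grasp_features(object_width_mm, object_length_mm):
    """Return normalized (grasp_ratio, length_ratio) features.

    Dividing by the hand's own dimensions makes the features
    dimensionless, so objects of different categories but similar
    relative size map to similar inputs for the grasp ANN.
    """
    grasp_ratio = object_width_mm / MAX_APERTURE_MM
    length_ratio = object_length_mm / PALM_LENGTH_MM
    return grasp_ratio, length_ratio

# A 55 mm wide, 95 mm long object (e.g. a small bottle):
g, l = grasp_features(55.0, 95.0)
print(g, l)  # 0.5 1.0
```

Because the features are ratios rather than raw dimensions, a training set built from them need not be split by object category, which is what allows a single non-discriminatory network to be trained.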
It was observed that pure kinematic grasping produced accurate predictions based on object characteristics; however, it also exhibited under-grasping, which sometimes resulted in unsuccessful grasps. A two-stage, data/state-driven, event-based controller was proposed to address the unsuccessful grasp scenario. The event-based controller has been researched and developed to provide reliable grasping of low-compliance convex objects. The controller's first stage follows a kinematic objective of "properly" positioning the fingers for grasping based on the object. The final finger positions are predicted by a trained, non-discriminatory, 3-layer Artificial Neural Network based on the characteristics of the desired object. The controller's second stage incorporates torque/force sensor feedback to ameliorate under-grasping and reliably hold the object. This controller has been verified on the H2 platform with an over 95% success rate, and the controller algorithm has also been shown to be transferable by successfully performing on other robotic hands such as the H1.
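The two-stage scheme described above can be sketched as a simple control loop. This is a hedged sketch under stated assumptions, not the dissertation's implementation: the function names, thresholds, and per-finger closing loop are hypothetical, and the real controller runs on the H2 hand with the trained 3-layer ANN supplying the stage-1 setpoints.

```python
# Assumed tuning constants (illustrative values, not from the dissertation):
POSITION_STEP = 0.01   # rad, incremental closing step in stage 2
TORQUE_THRESH = 0.15   # N*m, per-finger contact torque threshold

def grasp(fingers, predicted_positions, read_torque, command_position):
    """Two-stage grasp: (1) kinematic pre-shape, (2) torque-driven close.

    fingers             -- list of finger names
    predicted_positions -- stage-1 setpoints (e.g. from a trained ANN)
    read_torque         -- callable: finger -> joint torque
    command_position    -- callable: (finger, position) -> None
    """
    # Stage 1: kinematic objective -- drive each finger to its
    # ANN-predicted grasp position.
    state = {f: predicted_positions[f] for f in fingers}
    for f in fingers:
        command_position(f, state[f])

    # Stage 2: event-based closing -- keep incrementing each finger
    # until its joint torque signals firm contact, curing under-grasp.
    done = set()
    while len(done) < len(fingers):
        for f in fingers:
            if f in done:
                continue
            if read_torque(f) >= TORQUE_THRESH:
                done.add(f)                    # contact event: hold here
            else:
                state[f] += POSITION_STEP      # close a little further
                command_position(f, state[f])
    return state  # final held joint positions
```

Treating each finger's contact as a separate event is what makes the second stage robust to an under-grasping first stage: fingers that already touch the object stop, while the others continue closing independently.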

Keywords

Human-robot interaction, Grasping, Machine learning, Controls, Artificial neural networks, Mechatronics

Disciplines

Aerospace Engineering | Engineering | Mechanical Engineering

Comments

Degree granted by The University of Texas at Arlington
