Vision Based Robot Control Using Machine Learning

Date

2019-06

Publisher

Addis Ababa University

Abstract

A new, simpler vision-based robot control system is proposed, characterized by a position-specific artificial neural network (ANN) and an end-effector-integrated camera system. The position-specific ANN avoids the difficulty of covering the whole joint space, with its changing parameters, using a single ANN, and the end-effector-integrated camera system keeps the image of an object consistent as the end-effector approaches it, so the object coordinates can be used directly as feedback. Most vision-based robot positioning techniques rely on analytical formulations of the relationship between the robot pose and the projected image coordinates of several geometric features of the observed scene. Feature-matching algorithms, camera calibration, models of the camera geometry, and object feature relationships are also necessary for pose determination. These steps are often computationally intensive and error-prone, and the complexity of the resulting formulations often limits the number of controllable degrees of freedom. This thesis presents a control mechanism for a parallel robot based on deep neural learning and position-based visual servoing that overcomes many of these limitations. The ROS/Gazebo simulator is used to model a delta-3 parallel robot. A training data set is collected from the model, and a multilayer feed-forward deep neural network is used to learn the complex implicit relationship between the pose displacements of the delta-3 robot and its joint angles. Three networks, each with three hidden layers but a different number of neurons per hidden layer, were trained and their performance evaluated. The simulation results show that a network with more neurons per hidden layer performs better. The trained network can then be used to move the robot from arbitrary initial positions to a desired pose with respect to the observed scene with an MSE of less than 0.05. Simulation results show that the system works smoothly and converges in a limited number of steps. The algorithm simplifies the model of a vision-based robot manipulator control system and improves control accuracy and response time.
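The following is a minimal illustrative sketch, not the thesis code, of the architecture the abstract describes: a feed-forward network with three hidden layers and ReLU activations, trained with supervised learning and an MSE loss to map end-effector pose displacements to the joint angles of a delta-3 robot. The layer widths, optimizer, data shapes, and the `servo_step` helper are assumptions added for illustration; only the overall structure (three hidden layers, ReLU, MSE, pose-displacement input, joint-angle output) follows the abstract and keywords.

```python
# Illustrative sketch under stated assumptions; not the thesis implementation.
import numpy as np
import tensorflow as tf


def build_pose_to_joint_model(hidden_units: int = 64) -> tf.keras.Model:
    """Feed-forward network with three hidden layers; `hidden_units` varies the
    neurons per hidden layer, mirroring the comparison of differently sized networks."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(3,)),              # assumed: Cartesian pose displacement (dx, dy, dz)
        tf.keras.layers.Dense(hidden_units, activation="relu"),
        tf.keras.layers.Dense(hidden_units, activation="relu"),
        tf.keras.layers.Dense(hidden_units, activation="relu"),
        tf.keras.layers.Dense(3),                        # assumed: three actuated joint angles of the delta-3 robot
    ])
    model.compile(optimizer="adam", loss="mse")          # MSE loss, as in the reported < 0.05 result
    return model


def servo_step(model, observed_xyz, target_xyz):
    """Hypothetical position-based visual-servoing step: the object coordinate seen
    by the end-effector camera is used directly as feedback, and the network maps
    the remaining displacement to joint angles."""
    displacement = np.asarray(target_xyz, np.float32) - np.asarray(observed_xyz, np.float32)
    joint_angles = model.predict(displacement[None, :], verbose=0)[0]
    return joint_angles, float(np.linalg.norm(displacement))


# Placeholder training data standing in for samples collected from the ROS/Gazebo model:
# X holds pose displacements, Y the corresponding joint angles.
X = np.random.uniform(-0.1, 0.1, size=(1000, 3)).astype("float32")
Y = np.random.uniform(-1.0, 1.0, size=(1000, 3)).astype("float32")

model = build_pose_to_joint_model(hidden_units=128)
model.fit(X, Y, epochs=50, batch_size=32, validation_split=0.2, verbose=0)
```

In a closed loop, `servo_step` would be called repeatedly with fresh camera observations until the displacement norm falls below a tolerance, which is one plausible reading of the abstract's claim that the system converges in a limited number of steps.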

Description

Keywords

Artificial Intelligence, Machine Learning, Artificial Neural Networks, Deep Learning, Deep Neural Networks, Feed-forward Neural Network, Rectified Linear Units, ROS/Gazebo, Supervised Learning, Visual Servoing

Citation