Vision Based Robot Control Using Machine Learning

dc.contributor.advisor   Dereje, Shiferaw (PhD)
dc.contributor.author    Sabizer, Birihanu
dc.date.accessioned      2019-11-11T06:54:54Z
dc.date.accessioned      2023-11-28T14:20:31Z
dc.date.available        2019-11-11T06:54:54Z
dc.date.available        2023-11-28T14:20:31Z
dc.date.issued           2019-06
dc.description.abstract  A new, simpler vision-based robot control system is proposed, characterized by a position-specific artificial neural network (ANN) and an end-effector-integrated camera system. The position-specific ANN avoids the difficulty of covering the whole joint space, with its changing parameters, using a single ANN, and the end-effector-integrated camera system keeps the image of an object consistent as the end-effector approaches it, so the object coordinates can be used directly as feedback. Most vision-based robot positioning techniques rely on analytical formulations of the relationship between the robot pose and the projected image coordinates of several geometric features of the observed scene. Feature-matching algorithms, camera calibration, models of the camera geometry, and object feature relationships are also necessary for pose determination. These steps are often computationally intensive and error-prone, and the complexity of the resulting formulations often limits the number of controllable degrees of freedom. This thesis presents a control scheme for a parallel robot, based on deep neural learning and position-based visual servoing, that overcomes many of these limitations. The ROS/Gazebo simulator is used to model a delta 3 parallel robot. A training data set is collected from this model, and a multilayer feed-forward deep neural network is used to learn the complex implicit relationship between the pose displacements of the delta 3 robot and its joint angles. Three networks, each with three hidden layers but a different number of neurons per hidden layer, were trained and their performance evaluated. The simulation results show that a network with more neurons per hidden layer performs better. The trained network may then be used to move the robot from an arbitrary initial position to a desired pose with respect to the observed scene with an MSE of less than 0.05. Simulation results show that the system works smoothly and converges in a limited number of steps. The algorithm simplifies the model of a vision-based robot manipulator control system and improves control accuracy and response time.
dc.identifier.uri        http://etd.aau.edu.et/handle/12345678/20078
dc.language.iso          en_US
dc.publisher             Addis Ababa University
dc.subject               Artificial Intelligence
dc.subject               Machine Learning
dc.subject               Artificial Neural Networks
dc.subject               Deep Learning
dc.subject               Deep Neural Networks
dc.subject               Feed-forward Neural Network
dc.subject               Rectified Linear Units
dc.subject               ROS/Gazebo
dc.subject               Supervised Learning
dc.subject               Visual Servoing
dc.title                 Vision Based Robot Control Using Machine Learning
dc.type                  Thesis
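
The abstract above describes a multilayer feed-forward network with three hidden layers that learns the mapping from pose displacements of the delta 3 robot to its joint angles, trained on data collected from a ROS/Gazebo model and evaluated by mean squared error. The sketch below is only an illustration of that idea, not the thesis code: it uses PyTorch, and the input/output dimensions, hidden-layer width, optimizer, learning rate, and random placeholder data are all assumptions rather than values taken from the thesis.

    import torch
    import torch.nn as nn

    HIDDEN = 64  # assumed neurons per hidden layer; the thesis compares several widths

    # Three hidden layers with ReLU activations, mapping an assumed 3-D pose
    # displacement (dx, dy, dz) to the three actuated joint angles of the robot.
    model = nn.Sequential(
        nn.Linear(3, HIDDEN), nn.ReLU(),
        nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
        nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
        nn.Linear(HIDDEN, 3),
    )

    loss_fn = nn.MSELoss()                     # the abstract reports MSE below 0.05
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Random placeholder pairs; in the thesis these would be pose/joint-angle
    # samples logged from the ROS/Gazebo simulation of the delta 3 robot.
    poses = torch.rand(1000, 3)
    angles = torch.rand(1000, 3)

    for epoch in range(500):
        optimizer.zero_grad()
        loss = loss_fn(model(poses), angles)
        loss.backward()
        optimizer.step()

In the closed loop described in the abstract, the random pairs would be replaced by simulator data, and the trained network would convert the object coordinates fed back by the end-effector camera into joint commands, moving the robot toward the desired pose in a limited number of steps.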

Files

Original bundle
Name:    Sabizer Birihanu.pdf
Size:    2.56 MB
Format:  Adobe Portable Document Format

License bundle
Name:    license.txt
Size:    1.71 KB
Format:  Plain Text