
Please use this identifier to cite or link to this item: http://ntour.ntou.edu.tw:8080/ir/handle/987654321/49456

Title: Application of 3D Coordinate and Real-time Character Recognition for 5DoF Robotic Arm on Smartphone Automatic Test System
Authors: Cheng, I-Hua (鄭宜樺)
Contributors: NTOU: Department of Communications, Navigation and Control Engineering
Keywords: Robot; Automatic control; 3D coordinate; Q matrix; Inverse kinematics; Fuzzy control
Date: 2015
Issue Date: 2018-08-22T07:01:56Z
Abstract: In this study, three webcams are applied to a 5DoF robotic arm system used for smartphone testing. One camera recognizes the digits and letters shown on the control panel through optical character recognition (OCR) and pattern matching; the recognized text acts as a command, much as a signal sent to a human brain prompts a decision, with the control computer serving as the robot's brain that receives the command and executes it. The other two cameras capture left and right images to obtain the 3D coordinates of the target object, working like human eyes that judge an object's position. The arm can therefore locate a target in 3D and carry out testing operations according to commands from visual recognition. In the first step, a calibration procedure performed with the two webcams, using LabVIEW's vision module and vision-assistant software, yields the relevant internal and external camera parameters. Values from the image plane are then transformed into the XYZ coordinates of the target's center through the Q matrix, and those coordinates are converted by inverse kinematics into the 4096-step precision values of each joint of the robotic arm. A pixel-to-centimeter conversion is also used to compute the relative positions of the phone's other keys, so that the arm can execute the smartphone functions under test. On the command side, the third webcam captures the screen of the control PC; after the image is converted from RGB to HSL color space, OCR and pattern matching extract the command text, which is then sent to the robotic arm so that it knows which task to perform. For low-quality or low-contrast characters on the target, additional image processing, including histogram equalization and color inversion, is applied to strengthen recognition. Arm movements are driven by fuzzy logic, which steers the arm to the correct point so that it presses the requested digit or letter. During motion, the real-time feedback values of every joint are monitored to limit the arm's force, speed, and position, correcting position error in real time and preventing loss of control.
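The Q-matrix step described above maps a rectified pixel position plus its stereo disparity to XYZ camera coordinates. The thesis's calibration values are not published here, so the sketch below uses made-up parameters (`f`, `cx`, `cy`, `Tx`) in the standard form of the reprojection matrix produced by stereo rectification (e.g. OpenCV's `stereoRectify`):

```python
import numpy as np

# Illustrative Q (reprojection) matrix. f = focal length in pixels,
# (cx, cy) = principal point, Tx = stereo baseline in meters.
# All numbers are placeholders, not the thesis's calibration results.
f, cx, cy, Tx = 700.0, 320.0, 240.0, 0.06  # 6 cm baseline

Q = np.array([
    [1.0, 0.0, 0.0,      -cx],
    [0.0, 1.0, 0.0,      -cy],
    [0.0, 0.0, 0.0,        f],
    [0.0, 0.0, 1.0 / Tx, 0.0],
])

def reproject(x, y, disparity):
    """Map pixel (x, y) with stereo disparity d to XYZ camera coordinates."""
    X, Y, Z, W = Q @ np.array([x, y, disparity, 1.0])
    return X / W, Y / W, Z / W

X, Y, Z = reproject(400.0, 300.0, 35.0)  # → Z = 1.2 (meters)
```

Note that depth falls out as Z = f·Tx/disparity, so a larger disparity means a closer object, which is why accurate calibration of the two webcams matters most for nearby targets.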
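The inverse-kinematics step converts the target's end-point coordinates into joint angles, which are then expressed in the arm's 4096-count joint resolution. The 5-axis arm's link parameters are not given in the abstract, so this sketch uses a closed-form two-link planar arm as a simplified stand-in, plus a hypothetical helper that quantizes an angle to 4096 encoder counts:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form IK for a planar 2-link arm (elbow-down solution).
    A simplified stand-in for the thesis's 5-axis arm, whose geometry
    is not published in the abstract."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def to_encoder_counts(theta, counts_per_rev=4096):
    """Quantize a joint angle to a 4096-step-per-revolution encoder value,
    mirroring the '4096 precision values' mentioned in the abstract."""
    return round(theta / (2.0 * math.pi) * counts_per_rev) % counts_per_rev
```

A quick sanity check is to run the forward kinematics on the returned angles and confirm the end effector lands back on the requested (x, y).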
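For the character-extraction step, the abstract converts frames from RGB to HSL before OCR, and inverts low-contrast characters. A minimal sketch of that preprocessing idea, using only Python's standard `colorsys` module (the thesis's actual pipeline runs in LabVIEW's vision tools, and the 0.5 threshold here is an assumption):

```python
import colorsys

def lightness_mask(pixels, threshold=0.5):
    """Threshold pixels on the HSL lightness channel, a rough stand-in
    for the RGB-to-HSL character extraction described in the abstract.
    `pixels` is a list of (r, g, b) tuples in 0..255."""
    mask = []
    for r, g, b in pixels:
        # colorsys uses the HLS ordering: (hue, lightness, saturation)
        _, l, _ = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
        mask.append(1 if l > threshold else 0)
    return mask

def invert(pixels):
    """Color inversion, used in the thesis to strengthen recognition
    of low-contrast characters before OCR."""
    return [(255 - r, 255 - g, 255 - b) for r, g, b in pixels]
```

Separating lightness from hue makes the binarization robust to the phone screen's color scheme, which is the usual motivation for leaving RGB before thresholding text.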
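Finally, the arm's motion is governed by fuzzy logic. The thesis's rule base is not given, so the following is only an illustrative Mamdani-style controller: triangular memberships classify the position error as small/medium/large, and a weighted average of singleton outputs (centroid defuzzification) produces a speed command. All breakpoints and output levels are invented for the example:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(error):
    """Map a position error (cm) to a motor speed command.
    Illustrative rule base, not the thesis's actual controller."""
    small = tri(error, -0.1, 0.0, 2.0)
    medium = tri(error, 1.0, 3.0, 5.0)
    large = tri(error, 4.0, 8.0, 12.1)
    # Singleton outputs per rule (arbitrary speed units):
    # small error -> slow (10), medium -> mid (40), large -> fast (80)
    num = small * 10.0 + medium * 40.0 + large * 80.0
    den = small + medium + large
    return num / den if den else 0.0
```

Feeding the arm's real-time joint feedback into such a controller is what lets it slow down near the target key, limiting force and speed as the abstract describes.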
URI: http://ethesys.lib.ntou.edu.tw/cgi-bin/gs32/gsweb.cgi?o=dstdcdr&s=G0010267011.id
http://ntour.ntou.edu.tw:8080/ir/handle/987654321/49456
Appears in Collections: [Department of Communications, Navigation and Control Engineering] Master's and Doctoral Theses

Files in This Item:

File: index.html (0 Kb, HTML)

All items in NTOUR are protected by copyright, with all rights reserved.

 

