UAV Controlled by Sign Language and Hand Gestures Using Image Processing
Abstract
Automatic recognition of sign language from hand gesture images is crucial for enhancing human-robot interaction, especially in critical scenarios such as rescue operations. In this study, we employed a DJI Tello drone equipped with machine vision capabilities to recognize and classify sign language gestures accurately. We developed an experimental setup in which the drone, integrated with radio control systems and machine vision techniques, navigated through simulated disaster environments to interact with human subjects using sign language. Data collection involved capturing a range of hand gestures under varied environmental conditions to train and validate our recognition algorithms, which include a YOLOv5 model implemented in Python with OpenCV. This setup enabled precise hand and body detection, allowing the drone to navigate and interact effectively. We assessed the system's performance by its ability to recognize gestures accurately against both controlled and complex, cluttered backgrounds. Additionally, we developed robust debris- and damage-resistant shielding mechanisms to safeguard the drone's integrity. Our drone fleet also established a resilient communication network via Wi-Fi, ensuring uninterrupted data transmission even amid connectivity disruptions. These findings underscore the potential of AI-driven drones to engage in natural conversational interactions with humans, thereby providing vital information to assist decision-making during emergencies. In conclusion, our approach promises to improve the efficacy of rescue operations by facilitating rapid and accurate communication of critical information to rescue teams.
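As a sketch of how recognized gestures could drive the drone, the snippet below maps classifier output labels to DJI Tello SDK text commands sent over UDP (the Tello accepts plain-text commands such as "takeoff" and "land" on port 8889). The gesture label names and the label-to-command mapping are illustrative assumptions, not details taken from the article.

```python
import socket
from typing import Optional

# Hypothetical mapping from gesture-classifier labels to Tello SDK
# text commands; both sides of this table are illustrative.
GESTURE_TO_COMMAND = {
    "palm_open": "takeoff",
    "fist": "land",
    "point_up": "up 20",       # ascend 20 cm
    "point_down": "down 20",   # descend 20 cm
    "thumb_left": "ccw 30",    # rotate counter-clockwise 30 degrees
    "thumb_right": "cw 30",    # rotate clockwise 30 degrees
}

# Default Tello command address when connected to the drone's Wi-Fi AP.
TELLO_ADDR = ("192.168.10.1", 8889)


def gesture_to_command(label: str) -> Optional[str]:
    """Translate a classifier label into a Tello SDK command string,
    or None if the gesture has no mapped action."""
    return GESTURE_TO_COMMAND.get(label)


def send_command(sock: socket.socket, cmd: str) -> None:
    """Send one SDK command as a UDP datagram to the drone."""
    sock.sendto(cmd.encode("ascii"), TELLO_ADDR)
```

In use, the controller would first send the literal command "command" to put the Tello into SDK mode, then forward each mapped gesture: `cmd = gesture_to_command(label)` followed by `send_command(sock, cmd)` when `cmd` is not None. Unmapped labels are silently ignored, which is a reasonable default when the classifier occasionally emits low-confidence or background classes.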
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Please find the rights and licenses in the Journal of Information Technology and Computer Engineering (JITCE).
1. License
The non-commercial use of the article will be governed by the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, as currently displayed on the Creative Commons website.
2. Author(s)’ Warranties
The author(s) warrants that the article is original, written by stated author(s), has not been published before, contains no unlawful statements, does not infringe the rights of others, is subject to copyright that is vested exclusively in the author and free of any third party rights, and that any necessary permissions to quote from other sources have been obtained by the author(s).
3. User Rights
JITCE adopts the spirit of open access and open science, disseminating published articles as freely as possible under the Creative Commons license. JITCE permits users to copy, distribute, display, and perform the work for non-commercial purposes only. Users must also attribute the authors and JITCE when distributing works from the journal.
4. Rights of Authors
Authors retain the following rights:
- Copyright, and other proprietary rights relating to the article, such as patent rights,
- the right to use the substance of the article in future own works, including lectures and books,
- the right to reproduce the article for own purposes,
- the right to self-archive the article,
- the right to enter into separate, additional contractual arrangements for the non-exclusive distribution of the article's published version (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal (Journal of Information Technology and Computer Engineering).
5. Co-Authorship
If the article was jointly prepared with other authors, then upon submitting the article the author agrees to this form and warrants that he/she has been authorized by all co-authors to act on their behalf, and agrees to inform his/her co-authors accordingly. JITCE shall be free of any disputes that may arise regarding this issue.
6. Royalties
By submitting the article, the authors agree that no fees are payable by JITCE.
7. Miscellaneous
JITCE will publish the article (or have it published) in the journal if the article’s editorial process is successfully completed and JITCE or its sublicensee has become obligated to have the article published. JITCE may adjust the article to a style of punctuation, spelling, capitalization, referencing and usage that it deems appropriate. The author acknowledges that the article may be published so that it will be publicly accessible and such access will be free of charge for the readers.