http://jitce.fti.unand.ac.id/index.php/JITCE/issue/feed
JITCE (Journal of Information Technology and Computer Engineering)
2024-09-30T11:30:05-04:00
Editor JITCE (jitce@it.unand.ac.id)
Open Journal Systems
<div class="jumbotron text-white jumbotron-image container-full-bg" style="background-image: url('/public/site/images/redaksi/JITCEY3.png'); background-size: cover;"> <h2 class="mb-4">Journal of Information Technology and Computer Engineering</h2> <hr class="my-0 border-top-6 border-dark border-2"> <h5 class="mb-4 text-sm">JITCE (Journal of Information Technology and Computer Engineering) is a scholarly, peer-reviewed, open-access journal. JITCE focuses on Internet of Things (IoT) applications and sensor networks as its main scope.</h5> <br> <h5 class="mb-4"><strong>Publish With Us</strong></h5> <a class="btn btn-primary" href="http://jitce.fti.unand.ac.id/index.php/JITCE/about/submissions">SUBMIT AN ARTICLE</a></div> <div class="row"> <div class="col-sm-6 col-md-3"><a class="thumbnail" href="http://jitce.fti.unand.ac.id/index.php/JITCE/template"> <img src="/public/site/images/redaksi/Template.png"> </a></div> <div class="col-sm-6 col-md-3"><a class="thumbnail" href="http://jitce.fti.unand.ac.id/index.php/JITCE/fees"> <img src="/public/site/images/redaksi/COST.png"> </a></div> <div class="col-sm-6 col-md-3"><a class="thumbnail" href="http://jitce.fti.unand.ac.id/index.php/JITCE/scope"> <img src="/public/site/images/redaksi/Focus1.png"> </a></div> <div class="col-sm-6 col-md-3"><a class="thumbnail" href="http://jitce.fti.unand.ac.id/index.php/JITCE/guide"> <img src="/public/site/images/redaksi/Guide1.png"> </a></div> </div> <p><strong>JITCE (Journal of Information Technology and Computer Engineering)</strong> is a journal platform for the dissemination of cutting-edge research in the fields of information technology and computer engineering.
Our journal serves as a conduit for high-quality research that drives innovation and addresses the critical challenges of our time. Our robust peer-review process ensures that each submission undergoes thorough evaluation by respected experts in the field, upholding the highest standards of academic integrity and scientific rigor. JITCE offers extensive reach within the global academic and professional communities, ensuring your research is widely accessible and influential. Benefit from the guidance and support of our distinguished editorial board, composed of leading scholars and practitioners dedicated to advancing the field of information technology and computer engineering. We invite you to contribute to JITCE, where your research will not only reach a global audience but also contribute to ongoing work in technology and engineering. Your work has the potential to influence the future of these critical fields, and we are here to help you achieve that impact.</p> <div class="card mb-3"> <div class="row g-0"> <div class="col-md-4 thumbnail"><img class="img-fluid rounded-start mx-auto" style="object-fit: cover; width: 200px;" src="/public/site/images/redaksi/backjit.png"></div> <div class="col-md-8"> <div class="card-body"> <table class="table table-hover table-dark table-striped"> <tbody> <tr> <td>Journal Title</td> <td>JITCE (Journal of Information Technology and Computer Engineering)</td> </tr> <tr> <td>e-ISSN</td> <td><a href="https://portal.issn.org/resource/ISSN-L/2599-1663" target="_blank" rel="noopener">2599-1663</a></td> </tr> <tr> <td>Editor-in-Chief</td> <td>Dr. Eng. Rian Ferdian (Universitas Andalas)</td> </tr> <tr> <td>Organized by</td> <td>Computer Engineering Department, Universitas Andalas, Indonesia</td> </tr> <tr> <td>Frequency</td> <td>Semiannual</td> </tr> <tr> <td>Indexed by</td> <td><a href="https://doaj.org/toc/2599-1663" target="_blank" rel="noopener">DOAJ</a>, <a
href="https://search.crossref.org/?q=+2599-1663&from_ui=yes" target="_blank" rel="noopener">CROSSREF</a></td> </tr> </tbody> </table> </div> </div> </div> </div>
http://jitce.fti.unand.ac.id/index.php/JITCE/article/view/253
The Influence of Physical Tuning Technology on Voice Over LTE (VoLTE)
2024-09-30T11:25:48-04:00
Zurnawita Zurnawita (zurnawita@gmail.com), Dikky Chandra (zurnawita@gmail.com), Fajru Ju Zulya (fajruzulya4@gmail.com)
<p>Long-Term Evolution (LTE) technology continues to evolve in cellular communication systems. At present, LTE is used mainly for high-speed data services, while voice calls still rely on second-generation (2G) or third-generation (3G) networks. One way to improve voice-call quality is Voice over LTE (VoLTE), which delivers Internet Protocol (IP)-based voice directly over the fourth-generation (4G) network. This study analyzes the performance of a VoLTE network. Based on the collected data, Reference Signal Received Power (RSRP) falls into the Good category at 37.73%, Signal to Interference Noise Ratio (SINR) into the Fair category at 55.32%, and Throughput into the Poor category at 66.16%. Delay scores 4 (very good), jitter scores 3 (good), and packet loss scores 4 (very good).
After optimization through physical tuning, RSRP falls into the Good category at 52.8%, SINR into the Good category at 70%, and Throughput into the Very Good category at 64.50%.</p>
2024-09-30T11:21:40-04:00
http://jitce.fti.unand.ac.id/index.php/JITCE/article/view/246
UAV With the Ability to Control with Sign Language and Hand by Image Processing
2024-09-30T11:30:05-04:00
Hediyeh Hojaji (Hojaji@RIThub.org), Alireza Delisnav (Delisnav@RIThub.org), Mohammad Hossein Ghafouri Moghaddam (Ghafouri@RIThub.org), Fariba Ghorbani (Dr.f.ghorbani@gmail.com), Shadi Shafaghi (Shafaghishadi@yahoo.com), Masoud Shafaghi (researchinnovationteams@gmail.com)
<p>Automatic recognition of sign language from hand gesture images is crucial for enhancing human-robot interaction, especially in critical scenarios such as rescue operations. In this study, we employed a DJI Tello drone equipped with advanced machine vision capabilities to recognize and classify sign language gestures accurately. We developed an experimental setup in which the drone, integrated with state-of-the-art radio control systems and machine vision techniques, navigated simulated disaster environments to interact with human subjects using sign language. Data collection involved capturing hand gestures under a variety of environmental conditions to train and validate our recognition algorithms, which implement YOLOv5 alongside Python libraries such as OpenCV. This setup enabled precise hand and body detection, allowing the drone to navigate and interact effectively. We assessed the system's performance by its ability to recognize gestures accurately against both controlled and complex, cluttered backgrounds. Additionally, we developed robust debris- and damage-resistant shielding mechanisms to safeguard the drone's integrity.
Our drone fleet also established a resilient communication network over Wi-Fi, maintaining data transmission even amid connectivity disruptions. These findings underscore the potential of AI-driven drones to engage in natural conversational interaction with humans, providing vital information to support decision-making during emergencies. In conclusion, our approach promises to improve the efficacy of rescue operations by enabling rapid and accurate communication of critical information to rescue teams.</p>
2024-09-30T11:13:43-04:00
http://jitce.fti.unand.ac.id/index.php/JITCE/article/view/202
Deep Learning-Based Dzongkha Handwritten Digit Classification
2024-03-31T02:03:25-04:00
Yonten Jamtsho (yontenjamtsho.gcit@rub.edu.bt), Pema Yangden (pemayangden.gcit@rub.edu.bt), Sonam Wangmo (sonamwangmo.gcit@rub.edu.bt), Nima Dema (ndema.gcit@rub.edu.bt)
<p>Pattern recognition is one of the important fields of artificial intelligence in computer vision applications. With advances in deep learning, many machine learning algorithms have been developed to tackle pattern recognition problems. The purpose of this research is to create the first-ever Dzongkha handwritten digit dataset and develop a model to classify the digits. The network consists of three stacked CONV → ReLU → POOL blocks, followed by a fully connected layer, a dropout layer, and a softmax function. Each class (0-9) contains 1,500 images, split into training, validation, and test sets in a 70:20:10 ratio. The model was trained at three image sizes: 28 × 28, 32 × 32, and 64 × 64. Of these, 64 × 64 gave the highest training, validation, and test accuracies of 98.66%, 98.9%, and 99.13% respectively.
In future work, the number of digit samples should be increased and transfer learning applied to train the model.</p>
2024-03-31T00:00:00-04:00
http://jitce.fti.unand.ac.id/index.php/JITCE/article/view/177
The Evaluation of LSB Steganography on Image File Using 3DES and MD5 Key
2024-03-31T02:03:26-04:00
Ilham Firman Ashari (firman.ashari@if.itera.ac.id), Eko Dwi Nugroho (firman.ashari@if.itera.ac.id), Dodi Devrian Andrianto (firman.ashari@if.itera.ac.id), M. Asyroful Nur Maulana Yusuf (firman.ashari@if.itera.ac.id), Makruf Alkarkhi (firman.ashari@if.itera.ac.id)
<p>Information security is paramount for individuals, companies, and governments. Threats to data confidentiality are increasingly complex and demand strong protection; cryptography and steganography therefore play pivotal roles. This study applies LSB (Least Significant Bit) steganography to image files using the 3DES (Triple Data Encryption Standard) algorithm, with the aim of enabling secure transmission and reception of confidential messages within digital media. The methodology involves implementing 3DES + LSB on digital images and combining 3DES with an MD5 hash for .txt files. The results and discussion cover, among other things, pseudocode, cryptographic testing, and steganography testing. Analysis and testing show that the more message data is embedded in an image, the more pixels differ in the stego image; likewise, the more colors the cover image contains, the more pixel differences appear. Only images with .png and .jpeg extensions can serve as stego objects. In fidelity testing, the average PSNR obtained is 66.365 dB, meaning the stego image quality is very good. In recovery testing, all 4 tested stego images allowed the messages to be extracted again.
In robustness testing using two attack techniques, including rotation, the message could no longer be extracted from the attacked image. Computation-time testing with 1-1000 characters shows an average processing time of about 0.798 seconds.</p>
2024-03-31T00:00:00-04:00
http://jitce.fti.unand.ac.id/index.php/JITCE/article/view/204
Design of a Drowsiness Prevention Helmet with Vibration and IoT-Based Theft Detection Alarms
2024-03-31T02:03:26-04:00
Aditya Putra Perdana Prasetyo (aditrecca@gmail.com), Harlis Richard Sitorus (aditrecca@gmail.com), Rahmat Fadli Isnanto (aditrecca@gmail.com), Adi Hermansyah (aditrecca@gmail.com)
<p>Ensuring safety while riding a motorbike is imperative. Current safety products such as helmets protect the user but offer no warning features, so a preemptive alert system was developed to notify riders in time. The experimental setup uses a MAX30100 sensor linked to a microcontroller and integrated into a helmet. The objective of this final project is to provide a timely alert to the rider by using the MAX30100 to detect the pulse and determine whether it is normal: when a rider becomes drowsy or fatigued, the pulse rate typically drops. The Blynk application displays the detected pulse on the smartphone screen, while the buzzer on the helmet responds with vibration and sound once the pulse has dropped. In testing, the average pulse rate was 78.58 BPM on quiet roads, 73.25 BPM on busy roads, and 73.5 BPM in congested traffic.
The helmet theft detector uses a Sharp GP2Y0A21 sensor, which can detect objects only at distances of up to 10 cm.</p>
2024-03-31T00:00:00-04:00
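As background to the LSB steganography evaluation above (article 177), here is a minimal, self-contained sketch of the two core ideas being measured: least-significant-bit embedding/extraction and PSNR. It is illustrative only; the authors' actual pipeline, including 3DES encryption and MD5 key handling, is not reproduced, and the helper names (`embed_lsb`, `extract_lsb`, `psnr`) are ours, not the paper's.

```python
# Illustrative sketch (not the authors' code): LSB embedding and PSNR
# measurement on a flat list of 8-bit grayscale pixel values.
import math

def embed_lsb(pixels, message):
    """Hide message bytes in the least significant bits of a copy of `pixels`."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite only the lowest bit
    return stego

def extract_lsb(stego, length):
    """Recover `length` bytes from the LSBs of the stego pixels (MSB-first)."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for bit_idx in range(8):
            byte = (byte << 1) | (stego[i * 8 + bit_idx] & 1)
        out.append(byte)
    return bytes(out)

def psnr(cover, stego, max_val=255):
    """Peak signal-to-noise ratio in dB between cover and stego pixels."""
    mse = sum((a - b) ** 2 for a, b in zip(cover, stego)) / len(cover)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

cover = [(i * 37) % 256 for i in range(4096)]  # synthetic "image"
secret = b"hidden"
stego = embed_lsb(cover, secret)
assert extract_lsb(stego, len(secret)) == secret
print(f"PSNR: {psnr(cover, stego):.1f} dB")  # LSB-only changes keep PSNR high
```

Because each embedded bit changes a pixel value by at most 1, the mean squared error stays tiny and the PSNR stays high, which is consistent with the "very good" fidelity the paper reports.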
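The VoLTE drive-test abstract above (article 253) reports KPI categories (Good/Fair/Poor) for RSRP and SINR. The following is a minimal sketch of that kind of threshold-based categorization; the threshold values below are hypothetical placeholders for illustration, not taken from the paper or any standard.

```python
# Illustrative sketch of threshold-based KPI categorization as used in
# drive-test reporting. Thresholds here are HYPOTHETICAL examples only.
def categorize(value, bands):
    """Return the label of the first (threshold, label) band the value meets."""
    for threshold, label in bands:
        if value >= threshold:
            return label
    return bands[-1][1]

# Hypothetical RSRP bands in dBm (higher is better), for illustration only.
RSRP_BANDS = [(-80, "Good"), (-95, "Fair"), (-float("inf"), "Poor")]

samples = [-75.0, -90.0, -102.0]
labels = [categorize(s, RSRP_BANDS) for s in samples]
print(labels)  # ['Good', 'Fair', 'Poor']
```

The paper's percentages (e.g. RSRP 37.73% Good before tuning, 52.8% after) would then be the share of drive-test samples landing in each band.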
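The Dzongkha digit abstract above (article 202) describes a 70:20:10 train/validation/test split of 1,500 images per class. A quick sketch of the implied arithmetic (the helper name `split_counts` is our own, not from the paper):

```python
# Compute per-class split sizes for a 70:20:10 train/validation/test ratio.
def split_counts(total, ratios=(0.70, 0.20, 0.10)):
    """Integer counts per split; any rounding remainder goes to the first split."""
    counts = [round(total * r) for r in ratios]
    counts[0] += total - sum(counts)  # keep the counts summing to `total`
    return counts

train, val, test = split_counts(1500)
print(train, val, test)  # 1050 300 150
```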