DOI: 10.20894/IJDMTA.
Periodicity: Bi-Annual
Impact Factor: SJIF 4.893 & GIF 0.787
Submission: Any Time
Publisher: IIR Groups
Language: English
Review Process: Double Blinded

News and Updates

Authors can submit their papers through the online submission system. Click here

Paper Submission -> Blind Peer Review Process -> Acceptance -> Publication.

On average, the time from submission to first decision on manuscripts is 3 to 5 days.

Double-blind review and a plagiarism report ensure originality.

IJDMTA provides an online manuscript tracking system.

Every issue of IJDMTA is available online, from Volume 1, Issue 1 to the latest published issue, with month and year of publication.

Paper Submission:
Any Time
Review process:
One to two weeks
Journal Publication:
June / December

IJDMTA special issues invite papers from national conferences, international conferences, and seminars conducted by colleges, universities, etc. Groups of papers will be accepted at a concessional rate and published on the IJDMTA website. For the complete procedure, contact us at admin@iirgroups.org

Paper Template
Copyright Form
Subscription Form
Published in:   Vol. 10 Issue 1 Date of Publication:   June 2021

Enhanced Hand Gesture Recognition In Augmented Reality Using Genetic Algorithm

M. Sivaneshwari, P. Prabhu

Page(s):   21-26 ISSN:   2278-2397
DOI:   10.20894/IJDMTA.102.0010.001.001 Publisher:   Integrated Intelligent Research (IIR)

Hand gesture recognition is a technology that interprets human gestures using various algorithms. Interpreting human hand gestures poses challenges such as image noise, visibility, and orientation. Various computer-based algorithms have been proposed in the literature to overcome these limitations, but they still need improvement. Hence, in this research work, a new hand gesture recognition system for augmented reality using a Genetic Algorithm and an Artificial Neural Network is proposed. The experiments show that the proposed gesture recognition system is robust against illumination and background changes. Experimental results show that the extracted features are effective, robust, and cover the entire feature space of the selected gestures. The method performs satisfactorily when compared with conventional methods.
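The abstract does not detail the algorithm, but one common way a genetic algorithm assists a neural-network gesture classifier is by evolving a binary mask that selects a good subset of input features. The sketch below is a minimal illustration of that idea; the feature count, the "informative" feature indices, the fitness function, and all GA parameters are hypothetical assumptions, not the authors' actual method.

```python
import random

random.seed(42)

N_FEATURES = 10
# Hypothetical ground truth: indices of features assumed to actually help
# classification (a stand-in for real gesture features such as finger
# angles or contour moments, which the paper does not enumerate).
INFORMATIVE = {0, 2, 5, 7}

def fitness(mask):
    """Reward selecting informative features, mildly penalise extras."""
    score = 0.0
    for i, bit in enumerate(mask):
        if bit:
            score += 1.0 if i in INFORMATIVE else -0.25
    return score

def tournament(pop, k=3):
    """Pick the fittest of k randomly sampled individuals."""
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    """One-point crossover of two binary feature masks."""
    point = random.randrange(1, N_FEATURES)
    return a[:point] + b[point:]

def mutate(mask, rate=0.1):
    """Flip each bit independently with the given probability."""
    return [1 - bit if random.random() < rate else bit for bit in mask]

def run_ga(pop_size=20, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = max(pop, key=fitness)        # elitism: carry best forward
        children = [elite]
        while len(children) < pop_size:
            child = mutate(crossover(tournament(pop), tournament(pop)))
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best_mask = run_ga()
```

In a full system, `fitness` would instead train or evaluate the neural-network classifier on the features the mask selects, so the GA searches for the subset that maximises recognition accuracy.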