A Scoring Approach for Improving Presentation Impact: Addressing Voice Stuttering, AR Glasses-Based Emotion Recognition, and Profiled Movement Assessment
Mar 7, 2024
Mazen Walid
Mostafa Ameen
Moamen Zaher
Ayman Atia
Abstract
This paper explores the key elements of a proficient presentation and performs a comparative examination of different approaches for identifying presentation-related issues within an Augmented Reality/Virtual Reality (AR/VR) setting. The addressed challenges include stutter detection, emotion recognition, and analysis of profiled motions. Each problem is tackled using distinct models. For voice stuttering, two approaches are explored: one employing mel-frequency cepstral coefficients (MFCC) and one utilizing a trained Convolutional Neural Network (CNN). The CNN model attains the highest accuracy at 93%. For AR emotion detection, two models are employed: a CNN and the Visual Geometry Group network (VGG16). The VGG16 model achieves superior accuracy, reaching 92%. Concerning profiled movements, the study compares newer algorithms such as the $1 recognizer and few-shot learning with relatively older counterparts such as Long Short-Term Memory (LSTM), CNN-LSTM, and Recurrent Neural Network (RNN) models for detecting and classifying movements. The findings contribute to a foundation of established AR/VR presentation techniques for future research and development in enhancing virtual interaction and communication.
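To illustrate the kind of MFCC-plus-CNN pipeline the abstract refers to for stutter detection, the sketch below extracts MFCC features from a speech clip and classifies them with a small 2D CNN. This is not the authors' implementation: the clip length, sample rate, layer sizes, class count, and file names are placeholder assumptions, and librosa/TensorFlow are assumed to be available.

```python
# Illustrative sketch only (assumptions noted above), not the paper's model.
import numpy as np
import librosa
import tensorflow as tf

def extract_mfcc(path, n_mfcc=40, duration=3.0, sr=16000):
    """Load a fixed-length audio clip and return its MFCC matrix."""
    signal, sr = librosa.load(path, sr=sr, duration=duration)
    # Pad short clips so every example has the same time dimension.
    target_len = int(sr * duration)
    if len(signal) < target_len:
        signal = np.pad(signal, (0, target_len - len(signal)))
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc[..., np.newaxis]  # add a channel axis for the CNN

def build_stutter_cnn(input_shape, n_classes=2):
    """Small 2D CNN over the MFCC 'image' (coefficients x time frames)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

# Hypothetical usage:
# x = extract_mfcc("clip_0001.wav")
# model = build_stutter_cnn(x.shape)
# model.compile(optimizer="adam",
#               loss="sparse_categorical_crossentropy",
#               metrics=["accuracy"])
```

A similar transfer-learning setup (e.g., a pretrained VGG16 backbone with a small classification head) could stand in for the emotion-recognition model mentioned above, but the details of the paper's architecture are not specified here.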
Type
Publication
2024 6th International Conference on Computing and Informatics (ICCI)