
Classification of Indian Native English Accents

Author(s): A. Aadhitya (Anna University, Chennai, India), K. N. Balasubramanian (Anna University, Chennai, India), and J. Dhalia Sweetlin (Anna University, Chennai, India)
Copyright: 2024
Pages: 15
Source title: Semantic Web Technologies and Applications in Artificial Intelligence of Things
Source Author(s)/Editor(s): Fernando Ortiz-Rodriguez (Tamaulipas Autonomous University, Mexico), Amed Leyva-Mederos (Universidad Central "Marta Abreu" de Las Villas, Cuba), Sanju Tiwari (Tamaulipas Autonomous University, Mexico), Ania R. Hernandez-Quintana (Universidad de La Habana, Cuba), and Jose L. Martinez-Rodriguez (Autonomous University of Tamaulipas, Mexico)
DOI: 10.4018/979-8-3693-1487-6.ch015


Abstract

The English accent a person speaks is generally influenced by their native mother tongue, and speakers in different geographical regions add the flavor of that language to their spoken English. To distinguish among these varieties, a comparative classification model has been built that identifies the English accents of speakers of five native Indian languages: Tamil, Malayalam, Odia, Telugu, and Bangla. First, features are extracted from five-second audio samples of each accent and converted to images, and the consolidated attributes are gathered; a pre-trained VGG16 model is fused with a support vector machine to classify the accents. Second, mel-frequency cepstral coefficients are added to these features and the model is retrained. The features obtained from VGG16 are then reduced using principal component analysis. The highest accuracy obtained was 98.46%. Further analysis could extend this work toward automated speech recognition in various settings.
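
As a rough sketch of the pipeline the abstract outlines (spectrogram images fed to a pre-trained VGG16, MFCC features appended, PCA reduction, and an SVM classifier), the following Python snippet shows one way such a system could be assembled with librosa, Keras, and scikit-learn. Every function name, parameter value, and library choice here is an assumption made for illustration; it is not the authors' implementation.

# Minimal sketch of the pipeline described in the abstract, assuming librosa,
# TensorFlow/Keras, and scikit-learn; all parameters below are illustrative
# assumptions, not the values used in the chapter.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Pre-trained VGG16 used as a fixed feature extractor (512-dim pooled embeddings).
vgg = VGG16(weights="imagenet", include_top=False, pooling="avg")

def clip_to_vgg_input(path, sr=16000, duration=5.0):
    """Render a five-second clip's mel spectrogram as a 224x224 RGB image for VGG16."""
    y, _ = librosa.load(path, sr=sr, duration=duration)
    mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))
    img = 255.0 * (mel - mel.min()) / (mel.max() - mel.min() + 1e-9)   # scale to 0-255
    img = np.stack([img] * 3, axis=-1)                                 # tile to 3 channels
    img = tf.image.resize(img, (224, 224)).numpy()
    return preprocess_input(img)

def extract_features(paths, sr=16000):
    """Concatenate the VGG16 embedding with mean MFCCs for each audio clip."""
    feats = []
    for p in paths:
        emb = vgg.predict(clip_to_vgg_input(p, sr=sr)[np.newaxis], verbose=0)[0]
        y, _ = librosa.load(p, sr=sr, duration=5.0)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
        feats.append(np.concatenate([emb, mfcc]))
    return np.array(feats)

# train_paths / train_labels would hold audio clips and their accent labels
# (Tamil, Malayalam, Odia, Telugu, Bangla). PCA reduces the VGG16+MFCC features
# before the SVM, mirroring the dimensionality-reduction step in the abstract.
# clf = make_pipeline(PCA(n_components=0.95), SVC(kernel="rbf"))
# clf.fit(extract_features(train_paths), train_labels)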
