Counterfactual Autoencoder for Unsupervised Semantic Learning
Abstract
Deep neural networks (DNNs) are the state of the art in artificial intelligence (AI) applications such as natural language processing (NLP), speech processing, and computer vision. Despite these achievements, deep learning has yet to attain the semantic understanding required to reason about data. This shortfall is partly attributable to rote memorization of patterns from millions of training samples while spatiotemporal relationships are ignored. The proposed framework takes a novel approach based on variational autoencoders (VAEs): it applies the potential outcomes model to develop counterfactual autoencoders. The framework transforms multimedia input distributions of any kind into a meaningful latent space while giving finer control over how that latent space is formed. This yields models that are better suited to answering inference-based queries, which is valuable in reasoning-oriented AI applications.
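The counterfactual autoencoder described above builds on the standard VAE machinery: an encoder maps an input to a Gaussian posterior over latent codes, a sample is drawn via the reparameterization trick, and a decoder reconstructs the input, with the evidence lower bound (ELBO) balancing reconstruction error against a KL term that shapes the latent space. The following is a minimal NumPy sketch of that baseline VAE pipeline only; the dimensions, weights, and function names are illustrative assumptions, and the chapter's counterfactual extension via the potential outcomes model is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the chapter does not specify an architecture).
x_dim, h_dim, z_dim = 8, 16, 2

# Randomly initialized weights for a one-hidden-layer encoder and a linear decoder.
W_enc = rng.normal(scale=0.1, size=(x_dim, h_dim))
W_mu = rng.normal(scale=0.1, size=(h_dim, z_dim))
W_logvar = rng.normal(scale=0.1, size=(h_dim, z_dim))
W_dec = rng.normal(scale=0.1, size=(z_dim, x_dim))

def encode(x):
    """Map inputs to the parameters (mu, log-variance) of a Gaussian q(z|x)."""
    h = np.tanh(x @ W_enc)
    return h @ W_mu, h @ W_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps so the sampling step stays differentiable."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map latent codes back to input space."""
    return np.tanh(z @ W_dec)

def elbo_terms(x):
    """Return the two ELBO terms: reconstruction error and KL(q(z|x) || N(0, I))."""
    mu, logvar = encode(x)
    x_hat = decode(reparameterize(mu, logvar))
    recon = np.mean((x - x_hat) ** 2)
    kl = -0.5 * np.mean(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon, kl
```

The KL term is what gives the designer "control over how the latent space is created": weighting or conditioning it reshapes the posterior, which is the lever the counterfactual extension manipulates.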