
Comparative Study of CAMSHIFT and RANSAC Methods for Face and Eye Tracking in Real-Time Video

Author(s): T. Raghuveera (Anna University, Department of Computer Science and Engineering, Tamil Nadu, India), S. Vidhushini (Anna University, Department of Computer Science and Engineering, Tamil Nadu, India) and M. Swathi (Anna University, Department of Computer Science and Engineering, Tamil Nadu, India)
Copyright: 2017
Volume: 13
Issue: 2
Pages: 13
Source title: International Journal of Intelligent Information Technologies (IJIIT)
Editor(s)-in-Chief: Vijayan Sugumaran (Oakland University, Rochester, USA)
DOI: 10.4018/IJIIT.2017040104


Abstract

Real-time face and eye tracking is critical in applications such as military surveillance, pervasive computing, and human-computer interaction. In this work, face and eye tracking are implemented using two well-known methods, CAMSHIFT and RANSAC. In the first approach, the Viola-Jones frontal face detector is run on each frame of the video to detect faces; the CAMSHIFT algorithm then performs real-time tracking, while Haar-like features are used to localize and track the eyes. In the second approach, the face is again detected with Viola-Jones, and RANSAC is used to match the content of subsequent frames. An adaptive bilinear filter is applied to enhance the quality of the input video, after which the Viola-Jones detector is run on each frame and both tracking algorithms are applied. Finally, a Kalman filter is applied on top of CAMSHIFT and RANSAC, and the results are compared with the preceding experiments. The comparisons are made on different real-time videos under heterogeneous environments using the proposed performance measures, to identify the best-suited method for a given scenario.
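The first approach summarized above, Viola-Jones detection seeding a CAMSHIFT tracker, can be illustrated with a short OpenCV sketch. This is a minimal illustration only, not the authors' implementation: the video path, color-mask thresholds, and termination criteria below are assumptions chosen for the example.

import cv2
import numpy as np

# Viola-Jones frontal face detector (Haar cascade shipped with OpenCV).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("input_video.mp4")  # placeholder path
ok, frame = cap.read()

# Detect a face in the first frame to initialize the track window.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
if len(faces) == 0:
    raise SystemExit("No face detected in the first frame")
x, y, w, h = faces[0]
track_window = (x, y, w, h)

# Build a hue histogram of the face region for back-projection.
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))  # assumed thresholds
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Stop after 10 iterations or when the window moves less than 1 pixel.
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CAMSHIFT adapts the window size and orientation as the face moves.
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    pts = cv2.boxPoints(rot_rect).astype(np.int32)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("CAMSHIFT face tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()

In the paper's pipeline this tracker would additionally be combined with Haar-like features for eye localization and, in later experiments, a Kalman filter smoothing the track; those stages are omitted here for brevity.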
