Human AI: Explainable and responsible models in computer vision
- Title
- Human AI: Explainable and responsible models in computer vision
- Creator
- Kumar K.P.; Thiruthuvanathan M.M.; K.K. S.; Chandra D.R.
- Description
- Artificial intelligence (AI) is being used in all areas of information, research, and technology. Allied branches of AI must be investigated to understand the associations among them. Human-centered AI and explainable AI (XAI) are two examples that can help in the development of understandable systems. Post hoc actions and operations are geared toward explainable AI, which investigates what went wrong in a black-box setting. Responsible AI, on the other hand, seeks to avoid such blunders in the first place. Ontology is defined as the study of existence and has several applications in computer science, specifically in platforms such as the Resource Description Framework and the Web Ontology Language. In this chapter, we examine both of the aforementioned branches of AI and attempt to establish a link between ontology and explainable AI, as they complement each other in creating trustworthy systems. Relevant literature is also reviewed, emphasizing the necessity of a current understanding of explainable and responsible AI. To illustrate the lineage of input and output operations in relation to ontology characteristics and AI, a scenario of AI implementation using an image-processing dataset is studied. Classroom learning is an integral element of every student's daily life. Assessing the interest levels of individual pupils would help enhance the process of teaching and learning. This work contributes to explainable AI by presenting algorithms that can extract faces from frames, recognize emotions, conduct studies on engagement levels, and provide a session-wide analysis. Detailed descriptions of these operations, as well as specific parameters, are provided to relate them to the theme of the work. We believe this collaboration between ontology and explainable AI is unique in that it acts as a springboard for future study in these domains. © 2024 Elsevier Inc. All rights reserved.
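The session-wide engagement analysis described in the abstract could, purely for illustration, be sketched as follows. This is a minimal assumption-laden sketch, not the chapter's actual algorithm: the emotion labels, engagement weights, and function names here are all hypothetical stand-ins for the per-frame emotion-recognition output and the specific parameters the chapter describes.

```python
from collections import Counter

# Hypothetical mapping from a recognized per-frame emotion to an
# engagement weight; the chapter's actual parameters are not reproduced here.
ENGAGEMENT_WEIGHT = {
    "happy": 1.0,
    "surprise": 0.9,
    "neutral": 0.6,
    "sad": 0.3,
    "angry": 0.2,
    "bored": 0.1,
}

def session_engagement(frame_emotions):
    """Average the per-frame engagement weights into a session-wide score."""
    if not frame_emotions:
        return 0.0
    total = sum(ENGAGEMENT_WEIGHT.get(e, 0.0) for e in frame_emotions)
    return total / len(frame_emotions)

def dominant_emotion(frame_emotions):
    """Most frequent emotion across the session, usable as a simple
    post hoc explanation of why the score came out as it did."""
    if not frame_emotions:
        return None
    return Counter(frame_emotions).most_common(1)[0][0]
```

In practice the `frame_emotions` sequence would come from upstream face-extraction and emotion-classification stages; aggregating it this way gives both a score and a human-readable justification, in keeping with the XAI theme.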
- Source
- Emotional AI and Human-AI Interactions in Social Networking, pp. 237-254.
- Date
- 2023-01-01
- Publisher
- Elsevier
- Subject
- Computer vision and image processing; Explainable AI; Ontology; Responsible AI; Unobtrusive student engagement analysis
- Coverage
- Kumar K.P., Department of Computer Science Engineering, School of Engineering and Technology, CHRIST University, Kengeri Campus, Bangalore, Karnataka, India; Thiruthuvanathan M.M., Department of Computer Science Engineering, School of Engineering and Technology, CHRIST University, Kengeri Campus, Bangalore, Karnataka, India; K.K. S., Department of Computer Science Engineering, School of Engineering and Technology, CHRIST University, Kengeri Campus, Bangalore, Karnataka, India; Chandra D.R., Department of Computer Science Engineering, School of Engineering and Technology, CHRIST University, Kengeri Campus, Bangalore, Karnataka, India
- Rights
- Restricted Access
- Relation
- ISBN: 978-0-443-19096-4; 978-0-443-19097-1
- Format
- Online
- Language
- English
- Type
- Book chapter
Collection
Citation
Kumar K.P.; Thiruthuvanathan M.M.; K.K. S.; Chandra D.R., “Human AI: Explainable and responsible models in computer vision,” CHRIST (Deemed To Be University) Institutional Repository, accessed February 24, 2025, https://archives.christuniversity.in/items/show/18385.