<?xml version="1.0" encoding="UTF-8"?>
<item xmlns="http://omeka.org/schemas/omeka-xml/v5" itemId="20110" public="1" featured="0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://archives.christuniversity.in/items/show/20110?output=omeka-xml" accessDate="2026-04-08T19:35:10+00:00">
  <collection collectionId="16">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="51377">
                <text>Conference Papers</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </collection>
  <itemType itemTypeId="28">
    <name>Conference Paper</name>
    <description>Faculty Publications - Conference Papers</description>
  </itemType>
  <elementSetContainer>
    <elementSet elementSetId="1">
      <name>Dublin Core</name>
      <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
      <elementContainer>
        <element elementId="50">
          <name>Title</name>
          <description>A name given to the resource</description>
          <elementTextContainer>
            <elementText elementTextId="177132">
              <text>Smart Facial Emotion Recognition with Gender and Age Factor Estimation</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="49">
          <name>Subject</name>
          <description>The topic of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="177133">
              <text>Age recognition; CNN; Emotion recognition; Gender; HCI; KNN; SVM; VGG-16</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="41">
          <name>Description</name>
          <description>An account of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="177134">
              <text>This work approaches Human-Computer Interaction (HCI) in an intelligent way, aiming to create scalable and flexible solutions. Big tech firms and businesses believe in the success of HCI, as it allows them to profit from on-demand technology and infrastructure for information-centric applications without having to use public clouds. Because of their capacity to imitate human coding abilities, facial expression recognition and software-based facial expression identification systems are crucial. This paper proposes a system for recognizing the emotional condition of humans from a facial expression, and presents two methods of predicting the age and gender factors from human faces. This research also aims at understanding the influence of gender and age on human facial expressions. The model can currently detect 7 emotions from the facial data of a person: Anger, Disgust, Happy, Fear, Sad, Surprise, and Neutral. The proposed system is divided into three segments: (a) Gender Detection, (b) Age Detection, and (c) Emotion Recognition. The initial model is created using two algorithms, KNN and SVM. We have also utilized the architectures of deep learning models such as CNN and VGG-16 pre-trained models (Transfer Learning). The evaluation metrics show the model's performance in terms of the accuracy of the recognition system. Future enhancements of this work can include the deployment of the DL and ML models onto an Android or wearable device, such as a smartphone or a watch, for a real-time use case. © 2022 Elsevier B.V. All rights reserved.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="39">
          <name>Creator</name>
          <description>An entity primarily responsible for making the resource</description>
          <elementTextContainer>
            <elementText elementTextId="177135">
              <text>Teja Chavali S.; Tej Kandavalli C.; Sugash T.M.; Subramani R.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="48">
          <name>Source</name>
          <description>A related resource from which the described resource is derived</description>
          <elementTextContainer>
            <elementText elementTextId="177136">
              <text>Procedia Computer Science, Vol. 218, pp. 113-123.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="45">
          <name>Publisher</name>
          <description>An entity responsible for making the resource available</description>
          <elementTextContainer>
            <elementText elementTextId="177137">
              <text>Elsevier B.V.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="40">
          <name>Date</name>
          <description>A point or period of time associated with an event in the lifecycle of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="177138">
              <text>2022-01-01</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="43">
          <name>Identifier</name>
          <description>An unambiguous reference to the resource within a given context</description>
          <elementTextContainer>
            <elementText elementTextId="177139">
              <text>&lt;a href="https://doi.org/10.1016/j.procs.2022.12.407" target="_blank" rel="noreferrer noopener"&gt;https://doi.org/10.1016/j.procs.2022.12.407&lt;/a&gt;
&lt;br /&gt;&lt;br /&gt;&lt;a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85163687390&amp;amp;doi=10.1016%2Fj.procs.2022.12.407&amp;amp;partnerID=40&amp;amp;md5=5dbafaeb530809dff94994d9451408cc" target="_blank" rel="noreferrer noopener"&gt;https://www.scopus.com/inward/record.uri?eid=2-s2.0-85163687390&amp;amp;doi=10.1016%2fj.procs.2022.12.407&amp;amp;partnerID=40&amp;amp;md5=5dbafaeb530809dff94994d9451408cc&lt;/a&gt;</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="47">
          <name>Rights</name>
          <description>Information about rights held in and over the resource</description>
          <elementTextContainer>
            <elementText elementTextId="177140">
              <text>All Open Access; Gold Open Access</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="46">
          <name>Relation</name>
          <description>A related resource</description>
          <elementTextContainer>
            <elementText elementTextId="177141">
              <text>ISSN: 18770509</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="42">
          <name>Format</name>
          <description>The file format, physical medium, or dimensions of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="177142">
              <text>Online</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="44">
          <name>Language</name>
          <description>A language of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="177143">
              <text>English</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="51">
          <name>Type</name>
          <description>The nature or genre of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="177144">
              <text>Conference paper</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="38">
          <name>Coverage</name>
          <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
          <elementTextContainer>
            <elementText elementTextId="177145">
              <text>Teja Chavali S., Department of Computer Science and Engineering, Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, Bengaluru, 56003, India; Tej Kandavalli C., Department of Computer Science and Engineering, Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, Bengaluru, 56003, India; Sugash T.M., Department of Mathematics, CHRIST, Deemed to Be University, Bengaluru, 560029, India; Subramani R., Department of Mathematics, CHRIST, Deemed to Be University, Bengaluru, 560029, India</text>
            </elementText>
          </elementTextContainer>
        </element>
      </elementContainer>
    </elementSet>
  </elementSetContainer>
</item>
