<?xml version="1.0" encoding="UTF-8"?>
<item xmlns="http://omeka.org/schemas/omeka-xml/v5" itemId="14904" public="1" featured="0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://archives.christuniversity.in/items/show/14904?output=omeka-xml" accessDate="2026-05-01T09:26:58+00:00">
  <collection collectionId="5">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="64">
                <text>Articles</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </collection>
  <itemType itemTypeId="19">
    <name>Article</name>
    <description>Faculty Publications - Articles</description>
  </itemType>
  <elementSetContainer>
    <elementSet elementSetId="1">
      <name>Dublin Core</name>
      <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
      <elementContainer>
        <element elementId="50">
          <name>Title</name>
          <description>A name given to the resource</description>
          <elementTextContainer>
            <elementText elementTextId="105462">
              <text>Multimodal emotional analysis through hierarchical video summarization and face tracking</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="49">
          <name>Subject</name>
          <description>The topic of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="105463">
              <text>Deep neural networks; Emotional intelligence; Entropy; Face detection; Hierarchical summarization; Video summarization; Viola Jones</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="41">
          <name>Description</name>
          <description>An account of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="105464">
              <text>The era of video data has fascinated users into creating, processing, and manipulating videos for various applications. Voluminous video data requires higher computation power and processing time. In this work, a model is developed that precisely acquires keyframes through hierarchical summarization and uses them to detect faces and assess the emotional intent of the user. The keyframes are used to detect faces with a recursive Viola-Jones algorithm, and an emotional analysis of the extracted faces is conducted using an underlying architecture based on Deep Neural Networks (DNN). This work contributes significantly to improving the accuracy of face detection and emotional analysis in non-redundant frames. The number of frames selected after summarization was less than 30% using local minima extraction. The recursive routine introduced for face detection reduced false positives across all video frames to less than 2%. The accuracy of emotional prediction on faces acquired through the summarized frames reached 90% on Indian faces. The computational requirement scaled down to 40%, as the hierarchical summarization removed redundant frames and the recursive face detection removed false localizations of faces. The proposed model emphasizes the importance of keyframe detection and its use for facial emotion recognition. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="39">
          <name>Creator</name>
          <description>An entity primarily responsible for making the resource</description>
          <elementTextContainer>
            <elementText elementTextId="105465">
              <text>Thiruthuvanathan M.M.; Krishnan B.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="48">
          <name>Source</name>
          <description>A related resource from which the described resource is derived</description>
          <elementTextContainer>
            <elementText elementTextId="105466">
              <text>Multimedia Tools and Applications, Vol. 81, No. 25, pp. 35535-35554.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="45">
          <name>Publisher</name>
          <description>An entity responsible for making the resource available</description>
          <elementTextContainer>
            <elementText elementTextId="105467">
              <text>Springer</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="40">
          <name>Date</name>
          <description>A point or period of time associated with an event in the lifecycle of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="105468">
              <text>2022-01-01</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="43">
          <name>Identifier</name>
          <description>An unambiguous reference to the resource within a given context</description>
          <elementTextContainer>
            <elementText elementTextId="105469">
              <text>&lt;a href="https://doi.org/10.1007/s11042-021-11010-y" target="_blank" rel="noreferrer noopener"&gt;https://doi.org/10.1007/s11042-021-11010-y&lt;/a&gt;
&lt;br /&gt;&lt;br /&gt;&lt;a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85106472136&amp;amp;doi=10.1007%2Fs11042-021-11010-y&amp;amp;partnerID=40&amp;amp;md5=88b39e18f88ebd344befd233d85493e9" target="_blank" rel="noreferrer noopener"&gt;https://www.scopus.com/inward/record.uri?eid=2-s2.0-85106472136&amp;amp;doi=10.1007%2fs11042-021-11010-y&amp;amp;partnerID=40&amp;amp;md5=88b39e18f88ebd344befd233d85493e9&lt;/a&gt;</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="47">
          <name>Rights</name>
          <description>Information about rights held in and over the resource</description>
          <elementTextContainer>
            <elementText elementTextId="105470">
              <text>Restricted Access</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="46">
          <name>Relation</name>
          <description>A related resource</description>
          <elementTextContainer>
            <elementText elementTextId="105471">
              <text>ISSN: 1380-7501; CODEN: MTAPF</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="42">
          <name>Format</name>
          <description>The file format, physical medium, or dimensions of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="105472">
              <text>Online</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="44">
          <name>Language</name>
          <description>A language of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="105473">
              <text>English</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="51">
          <name>Type</name>
          <description>The nature or genre of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="105474">
              <text>Article</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="38">
          <name>Coverage</name>
          <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
          <elementTextContainer>
            <elementText elementTextId="105475">
              <text>Thiruthuvanathan M.M., Department of Computer Science and Engineering, School of Engineering and Technology, CHRIST (Deemed to be University), Bangalore, India; Krishnan B., Department of Computer Science and Engineering, School of Engineering and Technology, CHRIST (Deemed to be University), Bangalore, India</text>
            </elementText>
          </elementTextContainer>
        </element>
      </elementContainer>
    </elementSet>
  </elementSetContainer>
</item>
