<?xml version="1.0" encoding="UTF-8"?>
<item xmlns="http://omeka.org/schemas/omeka-xml/v5" itemId="16421" public="1" featured="0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://archives.christuniversity.in/items/show/16421?output=omeka-xml" accessDate="2026-04-18T16:35:07+00:00">
  <collection collectionId="5">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="64">
                <text>Articles</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </collection>
  <itemType itemTypeId="19">
    <name>Article</name>
    <description>Faculty Publications - Articles</description>
  </itemType>
  <elementSetContainer>
    <elementSet elementSetId="1">
      <name>Dublin Core</name>
      <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
      <elementContainer>
        <element elementId="50">
          <name>Title</name>
          <description>A name given to the resource</description>
          <elementTextContainer>
            <elementText elementTextId="126563">
              <text>EMONET: A Cross Database Progressive Deep Network for Facial Expression Recognition</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="49">
          <name>Subject</name>
          <description>The topic of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="126564">
              <text>Cross database; Deep neural network; Emonet; Facial expression recognition; Progressive resizing</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="41">
          <name>Description</name>
          <description>An account of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="126565">
              <text>Recognizing facial features to detect emotions has long been an interesting research topic in the fields of computer vision and cognitive emotional analysis. In this research, a model to detect and classify emotions is explored using Deep Convolutional Neural Networks (DCNN). The model classifies the primary emotions (Anger, Disgust, Fear, Happy, Sad, Surprise and Neutral) using a progressive learning approach for a Facial Expression Recognition (FER) system. The proposed model (EmoNet) is built on a linear growing-shrinking filter method that extracts robust features for learning and interprets emotional classification with improved accuracy. EmoNet incorporates Progressive Resizing (PR) of images to accommodate improved learning from emotional datasets by adding more image data for training and validation, which improved the model's accuracy by 5%. Cross-validation was carried out on the model, preparing it for testing on new data. EmoNet's results show improved performance in accuracy, precision and recall due to the incorporation of a progressive learning framework, tuning of the network's hyperparameters, image augmentation, and moderation of generalization and bias in the images. These parameters are compared with existing models of emotional analysis across the various datasets prominently available for research. The methods, image data and the fine-tuned model together achieved 83.6%, 78.4%, 98.1% and 99.5% on FER2013, IMFDB, CK+ and JAFFE respectively. EmoNet was evaluated on four different datasets and achieved an overall accuracy of 90%. © 2020. All Rights Reserved.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="39">
          <name>Creator</name>
          <description>An entity primarily responsible for making the resource</description>
          <elementTextContainer>
            <elementText elementTextId="126566">
              <text>Thiruthuvanathan M.M.; Krishnan B.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="48">
          <name>Source</name>
          <description>A related resource from which the described resource is derived</description>
          <elementTextContainer>
            <elementText elementTextId="126567">
              <text>International Journal of Intelligent Engineering and Systems, Vol. 13, No. 6, pp. 31-41.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="45">
          <name>Publisher</name>
          <description>An entity responsible for making the resource available</description>
          <elementTextContainer>
            <elementText elementTextId="126568">
              <text>Intelligent Network and Systems Society</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="40">
          <name>Date</name>
          <description>A point or period of time associated with an event in the lifecycle of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="126569">
              <text>2020-01-01</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="43">
          <name>Identifier</name>
          <description>An unambiguous reference to the resource within a given context</description>
          <elementTextContainer>
            <elementText elementTextId="126570">
              <text>&lt;a href="https://doi.org/10.22266/ijies2020.1231.04" target="_blank" rel="noreferrer noopener"&gt;https://doi.org/10.22266/ijies2020.1231.04&lt;/a&gt;
&lt;br /&gt;&lt;br /&gt;&lt;a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85100673488&amp;amp;doi=10.22266%2Fijies2020.1231.04&amp;amp;partnerID=40&amp;amp;md5=5b8a1a7783439d9c0a4d637b180208bf" target="_blank" rel="noreferrer noopener"&gt;https://www.scopus.com/inward/record.uri?eid=2-s2.0-85100673488&amp;amp;doi=10.22266%2fijies2020.1231.04&amp;amp;partnerID=40&amp;amp;md5=5b8a1a7783439d9c0a4d637b180208bf&lt;/a&gt;</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="47">
          <name>Rights</name>
          <description>Information about rights held in and over the resource</description>
          <elementTextContainer>
            <elementText elementTextId="126571">
              <text>All Open Access; Bronze Open Access</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="46">
          <name>Relation</name>
          <description>A related resource</description>
          <elementTextContainer>
            <elementText elementTextId="126572">
              <text>ISSN: 2185-310X</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="42">
          <name>Format</name>
          <description>The file format, physical medium, or dimensions of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="126573">
              <text>Online</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="44">
          <name>Language</name>
          <description>A language of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="126574">
              <text>English</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="51">
          <name>Type</name>
          <description>The nature or genre of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="126575">
              <text>Article</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="38">
          <name>Coverage</name>
          <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
          <elementTextContainer>
            <elementText elementTextId="126576">
              <text>Thiruthuvanathan M.M., Department of Computer Science and Engineering, School of Engineering and Technology, CHRIST (Deemed to be University), Bangalore, India; Krishnan B., Department of Computer Science and Engineering, School of Engineering and Technology, CHRIST (Deemed to be University), Bangalore, India</text>
            </elementText>
          </elementTextContainer>
        </element>
      </elementContainer>
    </elementSet>
  </elementSetContainer>
</item>
