<?xml version="1.0" encoding="UTF-8"?>
<item xmlns="http://omeka.org/schemas/omeka-xml/v5" itemId="19366" public="1" featured="0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://archives.christuniversity.in/items/show/19366?output=omeka-xml" accessDate="2026-04-29T03:36:17+00:00">
  <collection collectionId="16">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="51377">
                <text>Conference Papers</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </collection>
  <itemType itemTypeId="28">
    <name>Conference Paper</name>
    <description>Faculty Publications - Conference Papers</description>
  </itemType>
  <elementSetContainer>
    <elementSet elementSetId="1">
      <name>Dublin Core</name>
      <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
      <elementContainer>
        <element elementId="50">
          <name>Title</name>
          <description>A name given to the resource</description>
          <elementTextContainer>
            <elementText elementTextId="166749">
              <text>Synergizing Senses: Advancing Multimodal Emotion Recognition in Human-Computer Interaction with MFF-CNN</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="49">
          <name>Subject</name>
          <description>The topic of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="166750">
              <text>Deep Learning and Emotion Classification; Facial Expressions; Human-Computer Interaction; MFF-CNN (Multimodal Fusion and Convolutional Neural Network); Multimodal Emotion Recognition; Speech Signals</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="41">
          <name>Description</name>
          <description>An account of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="166751">
              <text>Emotion detection plays a central role in making interactions between humans and computers authentic and effective. This work presents a novel approach to multimodal emotion recognition based on the MFF-CNN framework, which combines multimodal fusion with convolutional neural networks and is designed to efficiently capture and integrate information from several modalities, including speech and human facial expressions. The proposed system begins by assembling a multimodal dataset annotated with emotion labels. The MFF-CNN receives extracted facial landmarks and speech-signal spectrogram representations as input features; its convolutional layers learn hierarchical spatial and temporal structures, improving its capacity to recognize subtle emotional cues. Our experimental evaluation shows that the MFF-CNN outperforms conventional unimodal emotion recognition algorithms: fusing the speech and facial modalities yields improved precision, reliability, and adaptability across a range of emotional states. In addition, visualization techniques improve the interpretability of the model and offer insights into the learned representations. By providing a practical and interpretable method for multimodal emotion recognition, this study advances the field of human-computer interaction, and the MFF-CNN architecture demonstrates its potential for real-world applications that enable more natural and emotionally aware human-computer interaction. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="39">
          <name>Creator</name>
          <description>An entity primarily responsible for making the resource</description>
          <elementTextContainer>
            <elementText elementTextId="166752">
              <text>Upreti K.; Vats P.; Malik K.; Verma R.; Divakaran P.; Gangwar D.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="48">
          <name>Source</name>
          <description>A related resource from which the described resource is derived</description>
          <elementTextContainer>
            <elementText elementTextId="166753">
              <text>Lecture Notes in Networks and Systems, Vol. 1047 LNNS, pp. 279-288.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="45">
          <name>Publisher</name>
          <description>An entity responsible for making the resource available</description>
          <elementTextContainer>
            <elementText elementTextId="166754">
              <text>Springer Science and Business Media Deutschland GmbH</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="40">
          <name>Date</name>
          <description>A point or period of time associated with an event in the lifecycle of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="166755">
              <text>2024-01-01</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="43">
          <name>Identifier</name>
          <description>An unambiguous reference to the resource within a given context</description>
          <elementTextContainer>
            <elementText elementTextId="166756">
              <text>&lt;a href="https://doi.org/10.1007/978-3-031-64836-6_28" target="_blank" rel="noreferrer noopener"&gt;https://doi.org/10.1007/978-3-031-64836-6_28&lt;/a&gt;
&lt;br /&gt;&lt;br /&gt;&lt;a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85200662389&amp;amp;doi=10.1007%2F978-3-031-64836-6_28&amp;amp;partnerID=40&amp;amp;md5=ce682860a3f40cba52fb9676af9a17a0" target="_blank" rel="noreferrer noopener"&gt;https://www.scopus.com/inward/record.uri?eid=2-s2.0-85200662389&amp;amp;doi=10.1007%2f978-3-031-64836-6_28&amp;amp;partnerID=40&amp;amp;md5=ce682860a3f40cba52fb9676af9a17a0&lt;/a&gt;</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="47">
          <name>Rights</name>
          <description>Information about rights held in and over the resource</description>
          <elementTextContainer>
            <elementText elementTextId="166757">
              <text>Restricted Access</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="46">
          <name>Relation</name>
          <description>A related resource</description>
          <elementTextContainer>
            <elementText elementTextId="166758">
              <text>ISSN: 2367-3370; ISBN: 978-3-031-64835-9</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="42">
          <name>Format</name>
          <description>The file format, physical medium, or dimensions of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="166759">
              <text>Online</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="44">
          <name>Language</name>
          <description>A language of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="166760">
              <text>English</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="51">
          <name>Type</name>
          <description>The nature or genre of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="166761">
              <text>Conference paper</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="38">
          <name>Coverage</name>
          <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
          <elementTextContainer>
            <elementText elementTextId="166762">
              <text>Upreti K., Department of Computer Science, CHRIST (Deemed to be University), Delhi NCR, Ghaziabad, India; Vats P., Department of Computer Science and Engineering, SCSE, Manipal University Jaipur, Jaipur, Rajasthan, India; Malik K., School of Law, CHRIST (Deemed to be University), Delhi NCR, Ghaziabad, India; Verma R., School of Business and Management, CHRIST (Deemed to be University), Delhi NCR, Ghaziabad, India; Divakaran P., School of Business and Management, Himalayan University, Itanagar, Arunachal Pradesh, India; Gangwar D., Department of Management, Babu Banarasi Das Institute of Technology and Management, Lucknow, India</text>
            </elementText>
          </elementTextContainer>
        </element>
      </elementContainer>
    </elementSet>
  </elementSetContainer>
</item>
