<?xml version="1.0" encoding="UTF-8"?>
<item xmlns="http://omeka.org/schemas/omeka-xml/v5" itemId="19247" public="1" featured="0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://archives.christuniversity.in/items/show/19247?output=omeka-xml" accessDate="2026-04-28T20:41:59+00:00">
  <collection collectionId="16">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="51377">
                <text>Conference Papers</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </collection>
  <itemType itemTypeId="28">
    <name>Conference Paper</name>
    <description>Faculty Publications- Conference Papers</description>
  </itemType>
  <elementSetContainer>
    <elementSet elementSetId="1">
      <name>Dublin Core</name>
      <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
      <elementContainer>
        <element elementId="50">
          <name>Title</name>
          <description>A name given to the resource</description>
          <elementTextContainer>
            <elementText elementTextId="165087">
              <text>Machine Learning's Transformative Role in Human Activity Recognition Analysis</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="49">
          <name>Subject</name>
          <description>The topic of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="165088">
              <text>Activity Recognition; Artificial Intelligence (AI); Computer Vision; Deep Learning; Human-Computer Interaction (HCI); Machine Learning; Motion Analysis; Pattern Recognition; Video Surveillance</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="41">
          <name>Description</name>
          <description>An account of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="165089">
              <text>Human action recognition (HAR) is a burgeoning field of computer vision that seeks to automatically understand and classify the intricate movements performed by humans. From the graceful leaps of a ballerina to the decisive strides of a surgeon, HAR aims to decipher the language of motion, unlocking a plethora of potential applications. This abstract delves into the core of HAR, highlighting its key challenges and promising avenues for advancement. We begin by outlining the various modalities used for action recognition, such as RGB videos, depth sensors, and skeletal data, each offering unique perspectives on the human form. Next, we delve into the diverse set of algorithms employed for HAR, ranging from traditional machine learning techniques to the burgeoning realm of deep learning. We explore the strengths and limitations of each approach, emphasizing the crucial role of feature extraction and model selection in achieving accurate recognition. Challenges in HAR include intra-class variations, inter-class similarities, and environmental factors; ongoing efforts address these through robust feature development and contextual integration. The paper envisions HAR's future impact on healthcare, robotics, video surveillance, and augmented reality, presenting an invitation to explore the transformative world of human action recognition and its potential to enhance our interaction with technology. © 2024 IEEE.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="39">
          <name>Creator</name>
          <description>An entity primarily responsible for making the resource</description>
          <elementTextContainer>
            <elementText elementTextId="165090">
              <text>Darshan R.; Janmitha S.N.; Deekshith S.; Kulkarni P.; Rajesh T.M.; Gurudas V.R.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="48">
          <name>Source</name>
          <description>A related resource from which the described resource is derived</description>
          <elementTextContainer>
            <elementText elementTextId="165091">
              <text>Proceedings of InC4 2024 - 2024 IEEE International Conference on Contemporary Computing and Communications</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="45">
          <name>Publisher</name>
          <description>An entity responsible for making the resource available</description>
          <elementTextContainer>
            <elementText elementTextId="165092">
              <text>Institute of Electrical and Electronics Engineers Inc.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="40">
          <name>Date</name>
          <description>A point or period of time associated with an event in the lifecycle of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="165093">
              <text>2024-01-01</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="43">
          <name>Identifier</name>
          <description>An unambiguous reference to the resource within a given context</description>
          <elementTextContainer>
            <elementText elementTextId="165094">
              <text>&lt;a href="https://doi.org/10.1109/InC460750.2024.10649391" target="_blank" rel="noreferrer noopener"&gt;https://doi.org/10.1109/InC460750.2024.10649391&lt;/a&gt;
&lt;br /&gt;&lt;br /&gt;&lt;a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85203794903&amp;amp;doi=10.1109%2FInC460750.2024.10649391&amp;amp;partnerID=40&amp;amp;md5=5a973d0f2f68afdaad0624c3e9b80880" target="_blank" rel="noreferrer noopener"&gt;https://www.scopus.com/inward/record.uri?eid=2-s2.0-85203794903&amp;amp;doi=10.1109%2fInC460750.2024.10649391&amp;amp;partnerID=40&amp;amp;md5=5a973d0f2f68afdaad0624c3e9b80880&lt;/a&gt;</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="47">
          <name>Rights</name>
          <description>Information about rights held in and over the resource</description>
          <elementTextContainer>
            <elementText elementTextId="165095">
              <text>Restricted Access</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="46">
          <name>Relation</name>
          <description>A related resource</description>
          <elementTextContainer>
            <elementText elementTextId="165096">
              <text>ISBN: 979-835038365-2</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="42">
          <name>Format</name>
          <description>The file format, physical medium, or dimensions of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="165097">
              <text>Online</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="44">
          <name>Language</name>
          <description>A language of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="165098">
              <text>English</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="51">
          <name>Type</name>
          <description>The nature or genre of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="165099">
              <text>Conference paper</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="38">
          <name>Coverage</name>
          <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
          <elementTextContainer>
            <elementText elementTextId="165100">
              <text>Darshan R., Dayananda Sagar University, Department of CSE, Bengaluru, India; Janmitha S.N., Dayananda Sagar University, Department of CSE, Bengaluru, India; Deekshith S., Dayananda Sagar University, Department of CSE, Bengaluru, India; Kulkarni P., Dayananda Sagar University, Department of CSE, Bengaluru, India; Rajesh T.M., Dayananda Sagar University, Department of CSE, Bengaluru, India; Gurudas V.R., CHRIST University, Department of CSE, Bangalore, India</text>
            </elementText>
          </elementTextContainer>
        </element>
      </elementContainer>
    </elementSet>
  </elementSetContainer>
</item>
