<?xml version="1.0" encoding="UTF-8"?>
<item xmlns="http://omeka.org/schemas/omeka-xml/v5" itemId="19117" public="1" featured="0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://archives.christuniversity.in/items/show/19117?output=omeka-xml" accessDate="2026-04-29T10:33:47+00:00">
  <collection collectionId="16">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="51377">
                <text>Conference Papers</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </collection>
  <itemType itemTypeId="28">
    <name>Conference Paper</name>
    <description>Faculty Publications- Conference Papers</description>
  </itemType>
  <elementSetContainer>
    <elementSet elementSetId="1">
      <name>Dublin Core</name>
      <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
      <elementContainer>
        <element elementId="50">
          <name>Title</name>
          <description>A name given to the resource</description>
          <elementTextContainer>
            <elementText elementTextId="163270">
              <text>Enhancing Human-Computer Interaction with a Low-Cost Air Mouse and Sign Language Recognition System</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="49">
          <name>Subject</name>
          <description>The topic of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="163271">
              <text>Air Mouse; CNN; Hand Gesture; Mediapipe; Proton Assistant; Sign Language</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="41">
          <name>Description</name>
          <description>An account of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="163272">
              <text>The purpose of this study is to investigate the development of assistive technologies designed to empower people with disabilities by increasing their independence and accessibility, focusing specifically on voice assistants, air mice, and sign language recognition software. Air mice benefit users with impaired fine motor skills, since they allow devices to be controlled with hand gestures. Sign language recognition software uses machine learning algorithms to decipher signs with an accuracy rate of over 90 percent, making it easier for people who are deaf or hard of hearing to express themselves. Voice assistants such as Alexa enable hands-free device control through vocal instructions alone. While these technologies have the potential to be revolutionary, they also face obstacles in improving recognition accuracy and in being integrated into everyday devices. This study discusses the development and impact of voice assistants, sign language software, and air mice; more specifically, it highlights the potential of these technologies to help millions of people with disabilities worldwide. It also examines potential future enhancements that could further improve accessibility and inclusivity. The research integrates computer vision and machine learning to create a multimodal system that blends air mouse functionality with real-time sign language translation. Achieving 95% accuracy in gesture recognition for air mouse control and 98% accuracy in sign language letter classification using a basic webcam, the system enables accessible interaction without specialized hardware. Despite limitations in vocabulary and lighting sensitivity, future efforts aim to broaden the training data and explore mobile deployment. These advancements hold promise for enhancing natural human-computer interaction, particularly for users with disabilities, by enabling intuitive, hands-free control and communication. © 2024 IEEE.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="39">
          <name>Creator</name>
          <description>An entity primarily responsible for making the resource</description>
          <elementTextContainer>
            <elementText elementTextId="163273">
              <text>Santhan H.; Sudhakar T.; Joy H.K.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="48">
          <name>Source</name>
          <description>A related resource from which the described resource is derived</description>
          <elementTextContainer>
            <elementText elementTextId="163274">
              <text>Proceedings of International Conference on Circuit, Power and Computing Technologies, ICCPCT 2024, pp. 93-98.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="45">
          <name>Publisher</name>
          <description>An entity responsible for making the resource available</description>
          <elementTextContainer>
            <elementText elementTextId="163275">
              <text>Institute of Electrical and Electronics Engineers Inc.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="40">
          <name>Date</name>
          <description>A point or period of time associated with an event in the lifecycle of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="163276">
              <text>2024-01-01</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="43">
          <name>Identifier</name>
          <description>An unambiguous reference to the resource within a given context</description>
          <elementTextContainer>
            <elementText elementTextId="163277">
              <text>&lt;a href="https://doi.org/10.1109/ICCPCT61902.2024.10673196" target="_blank" rel="noreferrer noopener"&gt;https://doi.org/10.1109/ICCPCT61902.2024.10673196&lt;/a&gt;
&lt;br /&gt;&lt;br /&gt;&lt;a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85205577953&amp;amp;doi=10.1109%2FICCPCT61902.2024.10673196&amp;amp;partnerID=40&amp;amp;md5=9bb0cbeafa7075e3ac8caee0c543e82f" target="_blank" rel="noreferrer noopener"&gt;https://www.scopus.com/inward/record.uri?eid=2-s2.0-85205577953&amp;amp;doi=10.1109%2fICCPCT61902.2024.10673196&amp;amp;partnerID=40&amp;amp;md5=9bb0cbeafa7075e3ac8caee0c543e82f&lt;/a&gt;</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="47">
          <name>Rights</name>
          <description>Information about rights held in and over the resource</description>
          <elementTextContainer>
            <elementText elementTextId="163278">
              <text>Restricted Access</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="46">
          <name>Relation</name>
          <description>A related resource</description>
          <elementTextContainer>
            <elementText elementTextId="163279">
              <text>ISBN: 979-835037281-6</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="42">
          <name>Format</name>
          <description>The file format, physical medium, or dimensions of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="163280">
              <text>Online</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="44">
          <name>Language</name>
          <description>A language of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="163281">
              <text>English</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="51">
          <name>Type</name>
          <description>The nature or genre of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="163282">
              <text>Conference paper</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="38">
          <name>Coverage</name>
          <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
          <elementTextContainer>
            <elementText elementTextId="163283">
              <text>Santhan H., CHRIST (Deemed to Be University), Computer Science, Bangalore, India; Sudhakar T., CHRIST (Deemed to Be University), Computer Science, Bangalore, India; Joy H.K., CHRIST (Deemed to Be University), Computer Science, Bangalore, India</text>
            </elementText>
          </elementTextContainer>
        </element>
      </elementContainer>
    </elementSet>
  </elementSetContainer>
</item>
