<?xml version="1.0" encoding="UTF-8"?>
<item xmlns="http://omeka.org/schemas/omeka-xml/v5" itemId="20220" public="1" featured="0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://archives.christuniversity.in/items/show/20220?output=omeka-xml" accessDate="2026-04-29T07:02:15+00:00">
  <collection collectionId="16">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="51377">
                <text>Conference Papers</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </collection>
  <itemType itemTypeId="28">
    <name>Conference Paper</name>
    <description>Faculty Publications- Conference Papers</description>
  </itemType>
  <elementSetContainer>
    <elementSet elementSetId="1">
      <name>Dublin Core</name>
      <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
      <elementContainer>
        <element elementId="50">
          <name>Title</name>
          <description>A name given to the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178669">
              <text>Gesture based Real-Time Sign Language Recognition System</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="49">
          <name>Subject</name>
          <description>The topic of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178670">
              <text>Convolutional Neural Network; Deep Learning; Hand Gesture Detection; Sign Language Recognition; Text-to-Sound Converter</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="41">
          <name>Description</name>
          <description>An account of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178671">
              <text>Real-Time Sign Language Recognition (RTSLG) can help people express their thoughts more clearly, speak in shorter sentences, and communicate more expressively using declarative language. Hand gestures carry a wealth of information that persons with disabilities can use to communicate in a fundamental way and to complement communication for others. Because hand gesture information is based on movement sequences, accurately detecting hand gestures in real time is difficult. Hearing-impaired persons have difficulty interacting with others, resulting in a communication gap. The only way for them to communicate their ideas and feelings is through hand signals, which many people do not understand. As a result, hand gesture detection systems have recently gained prominence. This paper proposes a deep learning model, built with Python, TensorFlow, OpenCV and Histogram Equalization, that can be accessed from a web browser. The proposed RTSLG system uses image detection, computer vision, and a neural network methodology, i.e. a Convolutional Neural Network, to recognise the characteristics of the hand in video filmed by a web camera. To enhance the details of the images, an image processing technique called Histogram Equalization is applied. The accuracy obtained by the proposed system is 87.8%. Once a gesture is recognized and the text output is displayed, the proposed RTSLG system uses the gTTS (Google Text-to-Speech) library to convert the displayed text to audio, assisting communication for speech- and hearing-impaired persons. © 2022 IEEE.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="39">
          <name>Creator</name>
          <description>An entity primarily responsible for making the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178672">
              <text>Siby T.A.; Pal S.; Arlina J.; Nagaraju S.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="48">
          <name>Source</name>
          <description>A related resource from which the described resource is derived</description>
          <elementTextContainer>
            <elementText elementTextId="178673">
              <text>Proceedings of the 2022 International Conference on Connected Systems and Intelligence, CSI 2022</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="45">
          <name>Publisher</name>
          <description>An entity responsible for making the resource available</description>
          <elementTextContainer>
            <elementText elementTextId="178674">
              <text>Institute of Electrical and Electronics Engineers Inc.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="40">
          <name>Date</name>
          <description>A point or period of time associated with an event in the lifecycle of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178675">
              <text>2022-01-01</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="43">
          <name>Identifier</name>
          <description>An unambiguous reference to the resource within a given context</description>
          <elementTextContainer>
            <elementText elementTextId="178676">
              <text>&lt;a href="https://doi.org/10.1109/CSI54720.2022.9924024" target="_blank" rel="noreferrer noopener"&gt;https://doi.org/10.1109/CSI54720.2022.9924024&lt;/a&gt;
&lt;br /&gt;&lt;br /&gt;&lt;a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85142249015&amp;amp;doi=10.1109%2FCSI54720.2022.9924024&amp;amp;partnerID=40&amp;amp;md5=de7070e6d60a3c9ff451a37cbd03cb3f" target="_blank" rel="noreferrer noopener"&gt;https://www.scopus.com/inward/record.uri?eid=2-s2.0-85142249015&amp;amp;doi=10.1109%2fCSI54720.2022.9924024&amp;amp;partnerID=40&amp;amp;md5=de7070e6d60a3c9ff451a37cbd03cb3f&lt;/a&gt;</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="47">
          <name>Rights</name>
          <description>Information about rights held in and over the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178677">
              <text>Restricted Access</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="46">
          <name>Relation</name>
          <description>A related resource</description>
          <elementTextContainer>
            <elementText elementTextId="178678">
              <text>ISBN: 978-166545815-3</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="42">
          <name>Format</name>
          <description>The file format, physical medium, or dimensions of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178679">
              <text>Online</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="44">
          <name>Language</name>
          <description>A language of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178680">
              <text>English</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="51">
          <name>Type</name>
          <description>The nature or genre of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178681">
              <text>Conference paper</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="38">
          <name>Coverage</name>
          <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
          <elementTextContainer>
            <elementText elementTextId="178682">
              <text>Siby T.A., School of Engineering &amp;amp; Technology Christ (Deemed-to-be University), Department of Information Technology, Bengaluru, India; Pal S., School of Engineering &amp;amp; Technology Christ (Deemed-to-be University), Department of Information Technology, Bengaluru, India; Arlina J., School of Engineering &amp;amp; Technology Christ (Deemed-to-be University), Department of Information Technology, Bengaluru, India; Nagaraju S., School of Engineering &amp;amp; Technology Christ (Deemed-to-be University), Department of Computer Science &amp;amp; Engineering, Bengaluru, India</text>
            </elementText>
          </elementTextContainer>
        </element>
      </elementContainer>
    </elementSet>
  </elementSetContainer>
</item>
