<?xml version="1.0" encoding="UTF-8"?>
<item xmlns="http://omeka.org/schemas/omeka-xml/v5" itemId="20240" public="1" featured="0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://archives.christuniversity.in/items/show/20240?output=omeka-xml" accessDate="2026-04-28T09:33:30+00:00">
  <collection collectionId="16">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="51377">
                <text>Conference Papers</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </collection>
  <itemType itemTypeId="28">
    <name>Conference Paper</name>
    <description>Faculty Publications- Conference Papers</description>
  </itemType>
  <elementSetContainer>
    <elementSet elementSetId="1">
      <name>Dublin Core</name>
      <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
      <elementContainer>
        <element elementId="50">
          <name>Title</name>
          <description>A name given to the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178948">
              <text>Deep Learning-based Gender Recognition Using Fusion of Texture Features from Gait Silhouettes</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="49">
          <name>Subject</name>
          <description>The topic of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178949">
              <text>Behavioral biometrics; Biometrics; Convolutional neural network (CNN); Gait energy image (GEI); Gait silhouettes; Gender recognition; Histogram of oriented gradient (HOG)</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="41">
          <name>Description</name>
          <description>An account of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178950">
              <text>The gait of a person is the manner in which he or she walks. Human gait can be considered a useful behavioral biometric for identifying people, and it can also be used to identify a person's gender and age group. Recent breakthroughs in image processing and artificial intelligence have made it feasible to extract data from photographs and videos for various classification purposes. Gender can be regarded as a soft biometric that is useful in video captured by surveillance cameras, particularly in uncontrolled environments with erratic camera placements. Gender recognition in security, particularly in surveillance systems, is becoming increasingly popular. Convolutional neural networks, the deep learning algorithms most popularly used for images, have proven to be a good mechanism for gender recognition. Still, convolutional neural network approaches have drawbacks: a very complex network model, comparatively long training time, high computational cost, slow convergence, overfitting of the network, and accuracy that may need improvement. As a result, this paper proposes a texture-based deep learning gender recognition system. The gait energy image, which is created by combining silhouettes extracted from a portion of the video that portrays an entire gait cycle, is the most often utilized feature in gait-based classification. Additional texture features, such as the histogram of oriented gradients (HOG) and entropy, have been examined in the proposed work for gender identification. The accuracy of gender classification using the whole-body image, upper-body image, and lower-body image is compared in this research. The experiments show that combining texture features is more accurate than using each texture feature separately. Furthermore, full-body gait images are more accurate than partial-body gait images.
© 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="39">
          <name>Creator</name>
          <description>An entity primarily responsible for making the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178951">
              <text>Thomas K.T.; Pushpalatha K.P.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="48">
          <name>Source</name>
          <description>A related resource from which the described resource is derived</description>
          <elementTextContainer>
            <elementText elementTextId="178952">
              <text>Lecture Notes in Networks and Systems, Vol-462, pp. 153-165.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="45">
          <name>Publisher</name>
          <description>An entity responsible for making the resource available</description>
          <elementTextContainer>
            <elementText elementTextId="178953">
              <text>Springer Science and Business Media Deutschland GmbH</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="40">
          <name>Date</name>
          <description>A point or period of time associated with an event in the lifecycle of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178954">
              <text>2022-01-01</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="43">
          <name>Identifier</name>
          <description>An unambiguous reference to the resource within a given context</description>
          <elementTextContainer>
            <elementText elementTextId="178955">
              <text>&lt;a href="https://doi.org/10.1007/978-981-19-2211-4_13" target="_blank" rel="noreferrer noopener"&gt;https://doi.org/10.1007/978-981-19-2211-4_13&lt;/a&gt;
&lt;br /&gt;&lt;br /&gt;&lt;a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85135036534&amp;amp;doi=10.1007%2F978-981-19-2211-4_13&amp;amp;partnerID=40&amp;amp;md5=d33575cfde25f003a4966359d78819db" target="_blank" rel="noreferrer noopener"&gt;https://www.scopus.com/inward/record.uri?eid=2-s2.0-85135036534&amp;amp;doi=10.1007%2f978-981-19-2211-4_13&amp;amp;partnerID=40&amp;amp;md5=d33575cfde25f003a4966359d78819db&lt;/a&gt;</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="47">
          <name>Rights</name>
          <description>Information about rights held in and over the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178956">
              <text>Restricted Access</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="46">
          <name>Relation</name>
          <description>A related resource</description>
          <elementTextContainer>
            <elementText elementTextId="178957">
              <text>ISSN: 23673370; ISBN: 978-981192210-7</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="42">
          <name>Format</name>
          <description>The file format, physical medium, or dimensions of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178958">
              <text>Online</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="44">
          <name>Language</name>
          <description>A language of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178959">
              <text>English</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="51">
          <name>Type</name>
          <description>The nature or genre of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="178960">
              <text>Conference paper</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="38">
          <name>Coverage</name>
          <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
          <elementTextContainer>
            <elementText elementTextId="178961">
              <text>Thomas K.T., School of Computer Sciences, Mahatma Gandhi University, Kottayam, India, Christ University, Pune, India; Pushpalatha K.P., School of Computer Sciences, Mahatma Gandhi University, Kottayam, India</text>
            </elementText>
          </elementTextContainer>
        </element>
      </elementContainer>
    </elementSet>
  </elementSetContainer>
</item>
