<?xml version="1.0" encoding="UTF-8"?>
<item xmlns="http://omeka.org/schemas/omeka-xml/v5" itemId="14428" public="1" featured="0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://archives.christuniversity.in/items/show/14428?output=omeka-xml" accessDate="2026-04-04T20:11:02+00:00">
  <collection collectionId="5">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="64">
                <text>Articles</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </collection>
  <itemType itemTypeId="19">
    <name>Article</name>
    <description>Faculty Publications - Articles</description>
  </itemType>
  <elementSetContainer>
    <elementSet elementSetId="1">
      <name>Dublin Core</name>
      <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
      <elementContainer>
        <element elementId="50">
          <name>Title</name>
          <description>A name given to the resource</description>
          <elementTextContainer>
            <elementText elementTextId="98823">
              <text>Normalized Attention Neural Network with Adaptive Feature Recalibration for Detecting the Unusual Activities Using Video Surveillance Camera</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="49">
          <name>Subject</name>
          <description>The topic of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="98824">
              <text>adaptive feature recalibration; normalized attention network; surveillance data; unusual activities</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="41">
          <name>Description</name>
          <description>An account of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="98825">
              <text>Over the past few years, surveillance cameras have become common in many homes and businesses. Many businesses still employ a person to monitor their cameras, even though such an observer is likely to miss some anomalous occurrences in the video feeds owing to the inherent limitations of human perception. Numerous scholars have investigated surveillance data and offered several strategies for automatically identifying anomalous occurrences. It is therefore important to build a model for identifying unusual occurrences in the live stream from security cameras. Automatically recognizing potentially dangerous situations so that appropriate action may be taken is crucial and can be of great assistance to law enforcement. In this research work, the proposed architecture has a number of key components, starting with an MRCNN for feature extraction and adaptive feature recalibration (AFR) for fine-tuning. To increase the quality of the features extracted by the MRCNN, the AFR models the inter-dependencies among the features, enhancing both the low- and high-frequency features extracted. Then, a normalized attention network (NAN) is used to learn the relationships between channels, which helps identify violence and speeds up convergence when training the model. Furthermore, the dataset comprises real-time security camera feeds from a variety of subjects and situations, as opposed to the hand-crafted datasets utilized in prior efforts. We also demonstrate the method's capability of assigning the correct category to each anomaly by classifying normal and abnormal occurrences. The method divides the information gathered into three primary groups: situations in need of fire protection, those involving theft or violence, and everything else. The study applied the proposed approach to the UCF-Crime dataset, where it outperformed other models on the same dataset. © 2023 WITPress. All rights reserved.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="39">
          <name>Creator</name>
          <description>An entity primarily responsible for making the resource</description>
          <elementTextContainer>
            <elementText elementTextId="98826">
              <text>Damera V.K.; Vatambeti R.; Mekala M.S.; Pani A.K.; Manjunath C.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="48">
          <name>Source</name>
          <description>A related resource from which the described resource is derived</description>
          <elementTextContainer>
            <elementText elementTextId="98827">
              <text>International Journal of Safety and Security Engineering, Vol. 13, No. 1, pp. 51-58.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="45">
          <name>Publisher</name>
          <description>An entity responsible for making the resource available</description>
          <elementTextContainer>
            <elementText elementTextId="98828">
              <text>International Information and Engineering Technology Association</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="40">
          <name>Date</name>
          <description>A point or period of time associated with an event in the lifecycle of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="98829">
              <text>2023-01-01</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="43">
          <name>Identifier</name>
          <description>An unambiguous reference to the resource within a given context</description>
          <elementTextContainer>
            <elementText elementTextId="98830">
              <text>&lt;a href="https://doi.org/10.18280/ijsse.130106" target="_blank" rel="noreferrer noopener"&gt;https://doi.org/10.18280/ijsse.130106&lt;/a&gt;
&lt;br /&gt;&lt;br /&gt;&lt;a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85158149015&amp;amp;doi=10.18280%2Fijsse.130106&amp;amp;partnerID=40&amp;amp;md5=a13ee0d04d05971fde8a8ab512f64edd" target="_blank" rel="noreferrer noopener"&gt;https://www.scopus.com/inward/record.uri?eid=2-s2.0-85158149015&amp;amp;doi=10.18280%2fijsse.130106&amp;amp;partnerID=40&amp;amp;md5=a13ee0d04d05971fde8a8ab512f64edd&lt;/a&gt;</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="47">
          <name>Rights</name>
          <description>Information about rights held in and over the resource</description>
          <elementTextContainer>
            <elementText elementTextId="98831">
              <text>All Open Access; Bronze Open Access</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="46">
          <name>Relation</name>
          <description>A related resource</description>
          <elementTextContainer>
            <elementText elementTextId="98832">
              <text>ISSN: 2041-9031</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="42">
          <name>Format</name>
          <description>The file format, physical medium, or dimensions of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="98833">
              <text>Online</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="44">
          <name>Language</name>
          <description>A language of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="98834">
              <text>English</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="51">
          <name>Type</name>
          <description>The nature or genre of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="98835">
              <text>Article</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="38">
          <name>Coverage</name>
          <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
          <elementTextContainer>
            <elementText elementTextId="98836">
              <text>Damera V.K., Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Hyderabad, 500090, India; Vatambeti R., School of Computer Science and Engineering, VIT-AP University, Vijayawada, 522237, India; Mekala M.S., School of Communication Engineering, Yeungnam University, Gyeongsan, 38541, South Korea; Pani A.K., Department of Computer Science and Engineering, CHRIST (Deemed to be University), Bangalore, Karnataka, 560074, India; Manjunath C., Department of Computer Science and Engineering, CHRIST (Deemed to be University), Bangalore, Karnataka, 560074, India</text>
            </elementText>
          </elementTextContainer>
        </element>
      </elementContainer>
    </elementSet>
  </elementSetContainer>
</item>
