<?xml version="1.0" encoding="UTF-8"?>
<item xmlns="http://omeka.org/schemas/omeka-xml/v5" itemId="19541" public="1" featured="0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://archives.christuniversity.in/items/show/19541?output=omeka-xml" accessDate="2026-04-04T20:11:01+00:00">
  <collection collectionId="16">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="51377">
                <text>Conference Papers</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </collection>
  <itemType itemTypeId="28">
    <name>Conference Paper</name>
    <description>Faculty Publications- Conference Papers</description>
  </itemType>
  <elementSetContainer>
    <elementSet elementSetId="1">
      <name>Dublin Core</name>
      <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
      <elementContainer>
        <element elementId="50">
          <name>Title</name>
          <description>A name given to the resource</description>
          <elementTextContainer>
            <elementText elementTextId="169195">
              <text>EFMD-DCNN: Efficient Face Mask Detection Model in Street Camera Using Double CNN</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="49">
          <name>Subject</name>
          <description>The topic of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="169196">
              <text>Double CNN; Deep Learning; Face Detection; Mask Detection; Real-time Monitoring</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="41">
          <name>Description</name>
          <description>An account of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="169197">
              <text>The COVID-19 pandemic has necessitated the widespread use of masks, and in India, mask-wearing in public gatherings has become mandatory, with violators being fined. In densely populated nations like India, strict regulations must be established and enforced to mitigate the pandemic's impact. Authorities and cameras conduct real-time monitoring of individuals leaving their homes, but 24/7 surveillance by humans is not feasible. A suggested approach to resolve this problem is to connect human intelligence and Artificial Intelligence (AI) by employing two Machine Learning (ML) models to recognize people who aren't wearing masks in live-stream feeds from surveillance, street, and new IP mask recognition cameras. The effectiveness of this method has been demonstrated through its high accuracy compared to other algorithms. The first ML model uses the YOLO (You Only Look Once) model to recognize human faces in real-time video streams. The second ML model is a pre-trained classifier using 180,000 photos to categorize photos of humans into two groups: masked and unmasked. Double CNN is a model that combines face recognition and mask classification into a single model. It provides a potential solution that may be utilized with image or video-capturing equipment such as CCTV cameras to monitor security breaches, encourage mask usage, and promote a secure workplace. This study's proposed mask detection technology utilized pre-trained datasets, face detection, and various classifiers to classify faces as having a proper mask, an improper mask, or no mask. The Double CNN-based model incorporated dual convolutional neural networks and a technology-based warning system to provide real-time facial identification detection. The ML model achieved high performance and accuracy of 98.15%, with the highest precision and recall, and can be used worldwide due to its cost-effectiveness. Overall, the proposed mask detection approach can potentially be a valuable instrument for preventing the spread of infectious diseases. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="39">
          <name>Creator</name>
          <description>An entity primarily responsible for making the resource</description>
          <elementTextContainer>
            <elementText elementTextId="169198">
              <text>Thamarai Selvi R.; Arulkumar N.; Ramasamy G.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="48">
          <name>Source</name>
          <description>A related resource from which the described resource is derived</description>
          <elementTextContainer>
            <elementText elementTextId="169199">
              <text>Communications in Computer and Information Science, Vol-1973 CCIS, pp. 427-437.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="45">
          <name>Publisher</name>
          <description>An entity responsible for making the resource available</description>
          <elementTextContainer>
            <elementText elementTextId="169200">
              <text>Springer Science and Business Media Deutschland GmbH</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="40">
          <name>Date</name>
          <description>A point or period of time associated with an event in the lifecycle of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="169201">
              <text>2024-01-01</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="43">
          <name>Identifier</name>
          <description>An unambiguous reference to the resource within a given context</description>
          <elementTextContainer>
            <elementText elementTextId="169202">
              <text>&lt;a href="https://doi.org/10.1007/978-3-031-50993-3_34" target="_blank" rel="noreferrer noopener"&gt;https://doi.org/10.1007/978-3-031-50993-3_34&lt;/a&gt;
&lt;br /&gt;&lt;br /&gt;&lt;a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85185721206&amp;amp;doi=10.1007%2F978-3-031-50993-3_34&amp;amp;partnerID=40&amp;amp;md5=0565e573d014424a32ba26317edcbb9d" target="_blank" rel="noreferrer noopener"&gt;https://www.scopus.com/inward/record.uri?eid=2-s2.0-85185721206&amp;amp;doi=10.1007%2f978-3-031-50993-3_34&amp;amp;partnerID=40&amp;amp;md5=0565e573d014424a32ba26317edcbb9d&lt;/a&gt;</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="47">
          <name>Rights</name>
          <description>Information about rights held in and over the resource</description>
          <elementTextContainer>
            <elementText elementTextId="169203">
              <text>Restricted Access</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="46">
          <name>Relation</name>
          <description>A related resource</description>
          <elementTextContainer>
            <elementText elementTextId="169204">
              <text>ISSN: 18650929; ISBN: 978-303150992-6</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="42">
          <name>Format</name>
          <description>The file format, physical medium, or dimensions of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="169205">
              <text>Online</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="44">
          <name>Language</name>
          <description>A language of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="169206">
              <text>English</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="51">
          <name>Type</name>
          <description>The nature or genre of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="169207">
              <text>Conference paper</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="38">
          <name>Coverage</name>
          <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
          <elementTextContainer>
            <elementText elementTextId="169208">
              <text>Thamarai Selvi R., Master of Computer Applications, Bishop Heber College, Tiruchirappalli, India; Arulkumar N., Christ University, Bangalore, India; Ramasamy G., Christ University, Bangalore, India</text>
            </elementText>
          </elementTextContainer>
        </element>
      </elementContainer>
    </elementSet>
  </elementSetContainer>
</item>
