<?xml version="1.0" encoding="UTF-8"?>
<item xmlns="http://omeka.org/schemas/omeka-xml/v5" itemId="20012" public="1" featured="0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://omeka.org/schemas/omeka-xml/v5 http://omeka.org/schemas/omeka-xml/v5/omeka-xml-5-0.xsd" uri="https://archives.christuniversity.in/items/show/20012?output=omeka-xml" accessDate="2026-04-08T08:27:57+00:00">
  <collection collectionId="16">
    <elementSetContainer>
      <elementSet elementSetId="1">
        <name>Dublin Core</name>
        <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
        <elementContainer>
          <element elementId="50">
            <name>Title</name>
            <description>A name given to the resource</description>
            <elementTextContainer>
              <elementText elementTextId="51377">
                <text>Conference Papers</text>
              </elementText>
            </elementTextContainer>
          </element>
        </elementContainer>
      </elementSet>
    </elementSetContainer>
  </collection>
  <itemType itemTypeId="28">
    <name>Conference Paper</name>
    <description>Faculty Publications - Conference Papers</description>
  </itemType>
  <elementSetContainer>
    <elementSet elementSetId="1">
      <name>Dublin Core</name>
      <description>The Dublin Core metadata element set is common to all Omeka records, including items, files, and collections. For more information, see http://dublincore.org/documents/dces/.</description>
      <elementContainer>
        <element elementId="50">
          <name>Title</name>
          <description>A name given to the resource</description>
          <elementTextContainer>
            <elementText elementTextId="175769">
              <text>An Effective Deep Learning Classification of Diabetes Based Eye Disease Grades: An Retinal Analysis Approach</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="49">
          <name>Subject</name>
          <description>The topic of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="175770">
              <text>Classification; Deep learning; Densenet; Diabetes retinopathy; Feature extraction; Feature selection; Particle Swarm Optimization</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="41">
          <name>Description</name>
          <description>An account of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="175771">
              <text>Diabetic Retinopathy (DR) is a common complication of diabetes mellitus, which damages the retina and impairs eyesight. It can lead to vision impairment if it is not caught early. Unfortunately, DR is irreversible, and treatment can only preserve existing vision. Early detection of DR and effective treatment can significantly lower the risk of visual loss. Compared with computer-aided diagnosis systems, the manual process ophthalmologists use to diagnose DR from retinal fundus images takes considerable time, effort, and money and is prone to error. Recently, deep learning has become one of the most popular techniques, achieving strong performance in many areas, particularly medical image analysis and classification. This paper therefore presents an effective deep learning-based approach to diabetic retinopathy grading with the following stages: a) data collection from MESSIDOR, which contains 1200 images classified into 4 levels and graded from 0 to 3; b) preprocessing using grayscale normalization; c) feature extraction using the Discrete Wavelet Transform (DWT); d) feature selection using Particle Swarm Optimization (PSO); and e) classification using DenseNet-169. Experiments show that the proposed model outperforms other state-of-the-art models and effectively classifies the grades (accuracy: 0.95, sensitivity: 0.96, specificity: 0.97). © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="39">
          <name>Creator</name>
          <description>An entity primarily responsible for making the resource</description>
          <elementTextContainer>
            <elementText elementTextId="175772">
              <text>Ajesh F.; Jims A.; Alapatt B.P.; Philip F.M.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="48">
          <name>Source</name>
          <description>A related resource from which the described resource is derived</description>
          <elementTextContainer>
            <elementText elementTextId="175773">
              <text>Lecture Notes in Networks and Systems, Vol-649 LNNS, pp. 234-244.</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="45">
          <name>Publisher</name>
          <description>An entity responsible for making the resource available</description>
          <elementTextContainer>
            <elementText elementTextId="175774">
              <text>Springer Science and Business Media Deutschland GmbH</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="40">
          <name>Date</name>
          <description>A point or period of time associated with an event in the lifecycle of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="175775">
              <text>2023-01-01</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="43">
          <name>Identifier</name>
          <description>An unambiguous reference to the resource within a given context</description>
          <elementTextContainer>
            <elementText elementTextId="175776">
              <text>&lt;a href="https://doi.org/10.1007/978-3-031-27499-2_22" target="_blank" rel="noreferrer noopener"&gt;https://doi.org/10.1007/978-3-031-27499-2_22&lt;/a&gt;
&lt;br /&gt;&lt;br /&gt;&lt;a href="https://www.scopus.com/inward/record.uri?eid=2-s2.0-85152560405&amp;amp;doi=10.1007%2F978-3-031-27499-2_22&amp;amp;partnerID=40&amp;amp;md5=767deec2ebe66f7cf4d6d49b02c06cd6" target="_blank" rel="noreferrer noopener"&gt;https://www.scopus.com/inward/record.uri?eid=2-s2.0-85152560405&amp;amp;doi=10.1007%2f978-3-031-27499-2_22&amp;amp;partnerID=40&amp;amp;md5=767deec2ebe66f7cf4d6d49b02c06cd6&lt;/a&gt;</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="47">
          <name>Rights</name>
          <description>Information about rights held in and over the resource</description>
          <elementTextContainer>
            <elementText elementTextId="175777">
              <text>Restricted Access</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="46">
          <name>Relation</name>
          <description>A related resource</description>
          <elementTextContainer>
            <elementText elementTextId="175778">
              <text>ISSN: 23673370; ISBN: 978-303127498-5</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="42">
          <name>Format</name>
          <description>The file format, physical medium, or dimensions of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="175779">
              <text>Online</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="44">
          <name>Language</name>
          <description>A language of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="175780">
              <text>English</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="51">
          <name>Type</name>
          <description>The nature or genre of the resource</description>
          <elementTextContainer>
            <elementText elementTextId="175781">
              <text>Conference paper</text>
            </elementText>
          </elementTextContainer>
        </element>
        <element elementId="38">
          <name>Coverage</name>
          <description>The spatial or temporal topic of the resource, the spatial applicability of the resource, or the jurisdiction under which the resource is relevant</description>
          <elementTextContainer>
            <elementText elementTextId="175782">
              <text>Ajesh F., Department of Computer Science and Engineering, Sree Buddha College of Engineering, Kerala, Alappuzha, India; Jims A., JAIN (Deemed-to-be University), Bangalore, India; Alapatt B.P., CHRIST (Deemed to be University), Delhi-NCR, India; Philip F.M., JAIN (Deemed-to-be University), Bangalore, India</text>
            </elementText>
          </elementTextContainer>
        </element>
      </elementContainer>
    </elementSet>
  </elementSetContainer>
</item>
