Browse Items (2150 total)
Facial Recognition Model Using Custom Designed Deep Learning Architecture
Facial recognition is widely used in applications such as attendance tracking, phone unlocking, and security systems. Although methodologies and techniques for face recognition systems have been studied extensively, the task remains difficult in real-world settings. This paper reviews the preprocessing steps involved, including data collection, normalization, and feature extraction. Classification algorithms such as Support Vector Machines (SVM), Naïve Bayes, and Convolutional Neural Networks (CNN) are examined in depth, along with their implementation in different research studies. Encryption schemes and a custom-designed deep learning architecture developed specifically for face recognition are also covered. The paper further proposes a methodology involving training-data preprocessing, dimensionality reduction using Principal Component Analysis, and the training of multiple classifiers; thorough experimentation shows that a recognition accuracy of 91% is achieved. The performance of the trained models on the test dataset is evaluated using metrics such as accuracy and the confusion matrix. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
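As a rough illustration of the pipeline this abstract describes (preprocessing, PCA-based dimensionality reduction, and multiple classifiers), the sketch below uses scikit-learn; the face data here is a random placeholder and the custom deep architecture and encryption schemes are not shown.

```python
# Minimal sketch of a PCA + classifier face-recognition pipeline.
# `X` stands in for flattened, normalized face images; `y` for identity labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(200, 64 * 64)    # placeholder for preprocessed face vectors
y = np.random.randint(0, 10, 200)   # placeholder identity labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

classifiers = {
    "SVM": make_pipeline(StandardScaler(), PCA(n_components=50), SVC(kernel="rbf")),
    "NaiveBayes": make_pipeline(StandardScaler(), PCA(n_components=50), GaussianNB()),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    preds = clf.predict(X_test)
    print(name, accuracy_score(y_test, preds))   # accuracy metric
    print(confusion_matrix(y_test, preds))       # confusion matrix
```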
Improved Acceptance Model: Unblocking Potential of Blockchain in Banking Space
Over the past ten years, blockchain has emerged as the new buzzword in the banking sector. The new technology is being adopted globally in many industries, including the business sector, because of its unique uses and features. However, no adoption model is available to guide this process. This research paper examines blockchain, the technology that powers cryptocurrencies such as Bitcoin. It looks at what blockchain technology is, how it works, especially in the banking sector, and how it can change and upend the financial services sector. It outlines the features of the technology and discusses why these can have a significant effect on the financial industry as a whole, in areas such as identity services, payments, and settlements, in addition to spawning new products based on concepts such as 'smart contracts'. The adoption variables found in the literature study were used to gather, test, and evaluate the official papers currently available from regulatory organizations, practitioners, and research bodies. This study was able to classify adoption factors into three categories (supporting, impeding, and circumstantial), identify a new adoption factor, and determine the relative relevance of the factors. Consequently, an institutional adoption paradigm for blockchain technology in the banking sector is put forward. In light of this, it is advised that additional research be conducted on applying the suggested model at banks adopting the technology, in order to assess its suitability. © 2024 IEEE.
Enhancing Medical Decision Support Systems with the Two-Parameter Logistic Regression Model
The logistic regression model is an invaluable tool for predicting binary response variables, yet it faces a significant challenge when explanatory variables exhibit multicollinearity. Multicollinearity hinders the model's ability to provide accurate and reliable predictions. To address this critical issue, this study introduces innovative combinations of Ridge and Liu estimators tailored for the two-parameter logistic regression model. To evaluate the effectiveness of these combined ridge and Liu estimators under the two-parameter logistic regression model, a real-world dataset from the medical domain is utilized, and the Mean Squared Error is employed as the performance metric. The findings of our investigation reveal that the ridge estimator denoted as k4 outperforms the other Liu estimators when multicollinearity is present in the data. The significance of this research lies in its potential to enhance the reliability of predictions for binary outcome variables in the medical domain. These novel estimators offer a promising solution to the multicollinearity challenge, contributing to more accurate and trustworthy results that ultimately benefit medical practitioners and researchers alike. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
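For intuition, the sketch below applies one common form of a two-parameter (ridge-Liu) adjustment to the maximum likelihood logistic fit. The functional form, the choice of k and d, and the synthetic collinear data are assumptions for illustration; the paper's specific k4 estimator, its Liu counterparts, and the medical dataset are not reproduced.

```python
# Sketch of a two-parameter (ridge-Liu style) adjustment of the ML logistic fit.
import numpy as np
from sklearn.linear_model import LogisticRegression

def two_parameter_estimate(X, y, k, d):
    # Approximate the unpenalized ML fit with a very weak L2 penalty so the
    # snippet works across scikit-learn versions.
    mle = LogisticRegression(C=1e6, max_iter=2000).fit(X, y)
    beta = np.r_[mle.intercept_, mle.coef_.ravel()]
    Xd = np.c_[np.ones(len(X)), np.asarray(X)]      # design matrix with intercept
    p = 1.0 / (1.0 + np.exp(-Xd @ beta))            # fitted probabilities
    W = np.diag(p * (1 - p))
    G = Xd.T @ W @ Xd                               # X' W X
    I = np.eye(G.shape[0])
    # One common form: beta(k, d) = (X'WX + kI)^{-1} (X'WX + k d I) beta_MLE
    return np.linalg.solve(G + k * I, (G + k * d * I) @ beta)

# Example usage on synthetic, highly collinear predictors.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
X = np.c_[x1, x1 + rng.normal(scale=0.05, size=200)]
y = (x1 + rng.normal(size=200) > 0).astype(int)
print(two_parameter_estimate(X, y, k=0.5, d=0.3))
```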
Cybersecurity Threats Detection in Intelligent Networks using Predictive Analytics Approaches
The modern landscape of network vulnerabilities necessitates the adoption of sophisticated detection and mitigation strategies. Predictive analytics has surfaced as a powerful tool in the fight against cybercrime, offering unparalleled capabilities for automating tasks, analyzing vast amounts of data, and identifying complex patterns that might elude human analysts. This paper presents a comprehensive overview of how AI is transforming the field of cybersecurity. Machine intelligence can revolutionize cybersecurity by providing advanced defense capabilities. Addressing ethical concerns, ensuring model explainability, and fostering collaboration between researchers and developers are crucial for maximizing the positive impact of AI in this critical domain. © 2024 IEEE.
A Comprehensive Review of Linear Regression, Random Forest, XGBoost, and SVR: Integrating Machine Learning and Actuarial Science for Health Insurance Pricing
Actuarial science and data science are being studied as a fusion using Industry 4.0 technologies such as the Internet of Things, artificial intelligence, big data, and machine learning (ML) algorithms. Earlier stages of actuarial analysis could have been more accurate and faster, and when AI and ML were later integrated, the algorithms were not up to standard and actuaries experienced accuracy concerns. Companies require actuaries to be precise in their analysis in order to obtain reliable results, and given the large amount of data these companies collect, a decision made manually may turn out to be incorrect. We therefore examine alternative models in this article as part of the decision-making process. Once the best-performing model has been chosen, we use our actuarial expertise to evaluate the risk associated with specific features of the insurance charges. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
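A minimal sketch of the kind of model comparison the title names is given below; it assumes a DataFrame loaded from a hypothetical insurance.csv with a numeric charges target and that the xgboost package is installed, and it is not the paper's actual experimental setup.

```python
# Sketch comparing Linear Regression, Random Forest, XGBoost, and SVR on an
# insurance-style pricing dataset.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from xgboost import XGBRegressor

df = pd.read_csv("insurance.csv")                    # placeholder path
X = pd.get_dummies(df.drop(columns=["charges"]))     # encode categorical features
y = df["charges"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "Linear": LinearRegression(),
    "RandomForest": RandomForestRegressor(n_estimators=200, random_state=42),
    "XGBoost": XGBRegressor(n_estimators=200, random_state=42),
    "SVR": SVR(kernel="rbf", C=10.0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(name, round(r2_score(y_te, pred), 3), round(rmse, 1))
```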
Domain-Driven Summarization: Models for Diverse Content Realms
In today's information-rich landscape, automatic text summarization systems are pivotal in condensing extensive textual content into concise and informative summaries. The current study ventures into domain-agnostic summarization, delving into advanced models spanning various domains such as business, entertainment, sports, politics, and technology. The study aims to uncover domain-specific enhancements, assess resource efficiency, and explore the boundaries of applicability. It covers nine cutting-edge models, including Google Pegasus-Large, Facebook BART-Base, SSHLEIFER DistilBART-CNN-6-6, Facebook BART-Large, T5-Large, T5-Base, Facebook BART-Large-CNN, Facebook BART-Large-Xsum, and SSHLEIFER DistilBART-Xsum-12-1. Each model undergoes rigorous evaluation, revealing its efficacy within various domains. Google Pegasus-Large emerges as a standout choice for cross-domain summarization, while Facebook BART-Base demonstrates remarkable stability. Models like SSHLEIFER DistilBART-CNN-6-6, the T5 variants, and others contribute to the evolving landscape of summarization. This study endeavors to establish a robust foundation for enhancing the efficiency and effectiveness of summarization techniques within various domains, thereby contributing valuable insights to the broader literature on text summarization. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
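The checkpoints listed are available on the Hugging Face Hub, so a minimal comparison can be run with the transformers summarization pipeline; the sketch below shows two of them with placeholder input text and generation lengths chosen only for illustration.

```python
# Sketch: run two of the listed summarization checkpoints side by side.
from transformers import pipeline

article = "Text of a business, sports, or technology article goes here."  # placeholder

for checkpoint in ["google/pegasus-large", "facebook/bart-large-cnn"]:
    summarizer = pipeline("summarization", model=checkpoint)
    summary = summarizer(article, max_length=60, min_length=15, do_sample=False)
    print(checkpoint, "->", summary[0]["summary_text"])
```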
File Validation in the Data Ingestion Process Using Apache NiFi
In the industries of today, the development and maintenance of data pipelines is of paramount importance. With large volumes of data being generated continuously across industries, there is a growing need to process and store this ingested data in a fast and efficient manner. Apache NiFi is one such tool that possesses crucial capabilities for enhancing, modifying, and automating data pipelines. However, automation of the ingestion process creates certain inherent issues which, if left unresolved, tend to be detrimental to the entire ingestion process. These issues vary in nature, ranging from corrupted data to changes in the file schema, to name a few. In this paper, a solution to this problem is proposed. By exploiting Apache NiFi's custom processor development capabilities, problem-specific processors can be designed and deployed to ensure accurate validation of the ingestion process in real time. To demonstrate this, two processors were developed as a proof of concept, tackling two specific file-related validation issues in the ingestion process: the file size and the ingestion frequency. These custom-built processors are designed to be inserted into the pipeline at key points to ensure that the ingested data is validated against certain standards and requirements. Having demonstrated these capabilities, the paper presents the exploitation of Apache NiFi's custom processor capabilities as a potential way forward for resolving the plethora of ingestion issues in industry today. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
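NiFi custom processors are normally implemented in Java against NiFi's processor API; purely as a conceptual sketch, the Python snippet below shows the two validation rules such processors might enforce (an allowed file-size range and a minimum gap between ingestions of the same feed). The thresholds and function names are assumptions, not the paper's implementation or NiFi's API.

```python
# Conceptual validation rules for file size and ingestion frequency.
import time

MIN_BYTES, MAX_BYTES = 1_000, 50_000_000   # assumed size bounds
MIN_INTERVAL_SECONDS = 300                 # assumed minimum gap between ingestions
_last_seen = {}                            # feed name -> last ingestion timestamp

def validate_file_size(size_bytes):
    """Flag a flow file whose size falls outside the allowed range."""
    return MIN_BYTES <= size_bytes <= MAX_BYTES

def validate_frequency(feed_name, now=None):
    """Flag a file that arrives sooner than the expected interval for its feed."""
    now = time.time() if now is None else now
    last = _last_seen.get(feed_name)
    _last_seen[feed_name] = now
    return last is None or (now - last) >= MIN_INTERVAL_SECONDS

print(validate_file_size(2_048))           # True: within bounds
print(validate_frequency("sales_feed"))    # True: first arrival for this feed
```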
Investigating Personalized Learning Paths to Address Educational Disparities Using Advanced Artificial Intelligence Systems
This innovative study reimagines the role of Natural Language Processing (NLP) in individualized education by highlighting the critical need to incorporate cultural subtleties. While NLP offers great potential for improving classroom instruction, current research frequently fails to account for the complex issues caused by cultural variation. This research fills a significant need by providing a novel framework for detecting cultural subtleties and incorporating them into individualized learning programs. Investigation of common biases, such as gender bias in Named Entity Recognition (NER) and sentiment bias across cultural preferences, drives the development of NLP models with greater cultural sensitivity and awareness. To correct past biases and promote gender neutrality in educational content, the research makes use of an adaptive NER algorithm and a diverse training dataset. Similarly, to guarantee nuanced and fair sentiment evaluations, the study suggests regularly evaluating and retraining sentiment algorithms with datasets that represent multiple cultures. Evaluation measures including a Cultural Relevance Score of 0.9, adaptive content embedding vectors [0.3, 0.6, -0.2], and a cosine similarity of 0.85 highlight the effectiveness of the research. These measurements show encouraging gains, confirming that the research might help make schools more welcoming and sensitive to different cultures. The research has the potential to revolutionize individualized education by making it more accessible and engaging for students from all backgrounds. © 2024 IEEE.
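For reference, the cosine-similarity figure quoted above is a standard vector comparison; the sketch below computes it between the 3-element adaptive content embedding from the abstract and a hypothetical learner-profile vector introduced only for illustration.

```python
# Cosine similarity between two embedding vectors.
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

adaptive_content = np.array([0.3, 0.6, -0.2])    # embedding quoted in the abstract
learner_profile = np.array([0.25, 0.7, -0.1])    # hypothetical learner-profile vector
print(round(cosine_similarity(adaptive_content, learner_profile), 2))
```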
Unveiling the Landscape: A Comparative Study of U-Net Models for Geographical Features Segmentation
Geographical feature segmentation is a critical task in remote sensing and earth observation applications, enabling the extraction of valuable information from satellite imagery and aiding environmental analysis, urban planning, and disaster management. The U-Net model, a deep learning architecture, has proven its efficacy in image segmentation tasks, including geographical feature analysis. In this research paper, a comparative study of various U-Net models customized explicitly for geographical feature segmentation is presented. The study aimed to evaluate the performance of these U-Net variants under diverse geographical contexts and datasets. Their strengths and limitations were assessed, considering factors such as accuracy, robustness, and generalization capability. The efficacy of integrated components, such as skip connections, attention mechanisms, and multi-scale features, in enhancing the model's performance was analyzed. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
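To make the encoder-decoder-with-skip-connections idea concrete, here is a minimal Keras U-Net sketch; the depth, filter counts, single-channel input, and binary mask output are illustrative assumptions, not a reproduction of any specific variant from the study.

```python
# Minimal U-Net: contracting path, bottleneck, expanding path with skip connections.
from tensorflow.keras import Model, layers

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(128, 128, 1), n_classes=1):
    inputs = layers.Input(input_shape)
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)
    bottleneck = conv_block(p2, 128)
    u2 = layers.UpSampling2D()(bottleneck)
    c3 = conv_block(layers.concatenate([u2, c2]), 64)   # skip connection
    u1 = layers.UpSampling2D()(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 32)   # skip connection
    outputs = layers.Conv2D(n_classes, 1, activation="sigmoid")(c4)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```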
An Analysis Conducted Retrospectively on the Use: Artificial Intelligence in the Detection of Uterine Fibroid
The most frequent benign pelvic tumors in women of reproductive age are uterine fibroids, sometimes referred to as leiomyomas. Ultrasonography is presently the first imaging modality used for the clinical identification of uterine fibroids, since it has a high degree of specificity and sensitivity and is less expensive and more widely accessible than CT and MRI examinations. However, certain issues with ultrasound-based uterine fibroid diagnosis persist. The main problem is confusion with pelvic and adnexal masses, as well as with subplasmic and large fibroids. The specificity of fibroid detection is affected by the current absence of standardized image-capture views and by variations in performance among different ultrasound machines. Furthermore, the proficiency and expertise of ultrasonographers determine the accuracy of ultrasound diagnosis of uterine fibroids. In this work, we created a deep convolutional neural network (DCNN) model that automatically identifies fibroids of the uterus in ultrasound images, distinguishes between their presence and absence, and has been validated both internally and externally in order to increase the reliability of ultrasound examinations for uterine fibroids. Additionally, we investigated whether the DCNN model may help junior ultrasound practitioners perform better diagnostically by comparing it against eight ultrasound practitioners at different levels of experience. © 2024 IEEE.
Analysis of U-Net and Modified VGG16 Technique for Mitosis Identification in Histopathology Images
One of the most frequently diagnosed cancers in women is breast cancer. Mitotic cells in breast histopathology images are a very important biomarker for diagnosing breast cancer, and mitotic scores help medical professionals grade breast cancer appropriately. The procedure of identifying mitotic cells is quite time-consuming; to speed up and improve the process, automated deep learning methods can be used. The suggested study analyzes the detection of mitotic cells using the U-Net and a modified VGG16 technique. In this study, the input images are pre-processed using stain normalization and enhancement. A modified VGG16 classifier is then used to classify the segmented results after the pre-processed image has been segmented using the U-Net technique. The suggested method's robustness is evaluated using data from the MITOSIS 2012 dataset. The proposed strategy performed well, with a precision of 86%, recall of 75%, and F1-score of 80%. © 2024 IEEE.
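A "modified VGG16" classifier of the kind described is typically an ImageNet-pretrained convolutional base with a new dense head; the sketch below shows one such head for a mitotic / non-mitotic decision. The patch size, head layout, and frozen base are assumptions, not the paper's exact architecture.

```python
# Sketch of a modified VGG16 binary classifier for segmented mitosis candidates.
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import VGG16
from tensorflow.keras.metrics import Precision, Recall

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                               # freeze pretrained features

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
output = layers.Dense(1, activation="sigmoid")(x)    # mitotic vs non-mitotic

model = Model(base.input, output)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", Precision(), Recall()])
model.summary()
```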
Fine-tuning Language Models for Predicting the Impact of Events Associated to Financial News Articles
Investors and other stakeholders, such as consumers and employees, increasingly consider ESG factors when making decisions about investments or engaging with companies. Taking into account the importance of ESG today, FinNLP-KDF introduced the ML-ESG-3 shared task, which seeks to determine the duration of the impact of financial news articles in four languages: English, French, Korean, and Japanese. This paper describes our team LIPI's approach to solving the above-mentioned task. Our final systems consist of translation, paraphrasing, and fine-tuning language models such as BERT, Fin-BERT, and RoBERTa for classification. We ranked first in the impact duration prediction subtask for the French language. © 2024 ELRA Language Resource Association.
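The fine-tuning step described above generally amounts to attaching a classification head to a pretrained encoder and training on the labelled articles; the sketch below uses Hugging Face Transformers with an assumed multilingual BERT checkpoint, an assumed three impact-duration labels, and placeholder dataset wiring, none of which are the team's exact configuration.

```python
# Sketch: fine-tune a pretrained encoder for impact-duration classification.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "bert-base-multilingual-cased"          # assumed starting checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

def tokenize(batch):
    # Truncate/pad article text to a fixed length before training.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

args = TrainingArguments(output_dir="ml-esg-3", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)

# `train_dataset` / `eval_dataset` would be datasets.Dataset objects mapped with `tokenize`:
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```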
Hybrid Deep Learning Cloud Intrusion Detection
With the scalability and flexibility that cloud computing provides, organisations can readily adapt their resources to meet demand without having to make significant upfront expenditures on hardware infrastructure. Three main types of computing services are provided to people worldwide via the Internet. Increased performance and resource access are two benefits that come with using cloud computing, but there is also an increased chance of attack. This research therefore develops intrusion detection systems that can process massive numbers of data packets, analyse them, and produce reports using knowledge and behaviour analysis. A Convolutional Neural Network algorithm encrypts data as it is transmitted end-to-end and stored in the cloud, providing an extra degree of security. Data protection in the cloud is improved by intrusion detection. This study uses a model of the algorithm to show how data is encrypted and decrypted and describes the defences against attacks. When assessing the performance of the suggested system, it is critical to consider the time and memory needed to encrypt and decrypt big text files. Additionally, the security of the cloud has been investigated and contrasted with various encoding techniques now in use. © 2024 IEEE.
Role of AI in Enhancing Customer Experience in Online Shopping
AI-powered tools and applications can provide customers with a positive, effective, and customized purchasing experience. By studying client preferences and behaviours, AI systems can anticipate future customer needs, improving and personalizing the shopping experience. The main aim of this study is to examine the role of artificial intelligence (AI) in enhancing customer experience. A total of 416 responses to a structured questionnaire were analysed using t-tests, ANOVA, and regression. The results reveal a significant positive relationship between AI features, such as perceived convenience, personalization, and AI-enabled service quality, and customer experience. The findings also indicate the significant role of trust as a factor mediating the effects of the independent variables on customer experience. © 2024 IEEE.
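Purely as an illustration of the analysis types named (regression, t-test, ANOVA), the sketch below uses statsmodels and scipy on a hypothetical coded-survey file; the column names stand in for the questionnaire constructs and are not the study's actual variable coding.

```python
# Sketch of regression, t-test, and ANOVA on coded survey responses.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("survey_responses.csv")   # placeholder for the coded responses

# Regression: customer experience on the AI-related predictors plus trust.
model = smf.ols("customer_experience ~ convenience + personalization"
                " + service_quality + trust", data=df).fit()
print(model.summary())

# t-test by gender and one-way ANOVA across age groups (illustrative groupings).
male = df[df.gender == "M"]["customer_experience"]
female = df[df.gender == "F"]["customer_experience"]
print(stats.ttest_ind(male, female))
print(stats.f_oneway(*(g["customer_experience"] for _, g in df.groupby("age_group"))))
```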
Perception to Control: End-to-End Autonomous Driving Systems
End-to-end autonomous driving systems have garnered a lot of attention in recent years, and researchers have been exploring different ways to make them work. In this paper, we provide an overview of the field with a focus on the two main types of systems: those that use only RGB images and those that use a combination of multiple modalities. We review the literature in each area, highlighting the strengths and limitations of each approach. We also discuss the challenges of integrating these systems into a complete end-to-end autonomous driving pipeline, including issues related to perception, decision-making, and control. Lastly, we identify areas where more research is needed to make autonomous driving systems work better and be safer. Overall, this paper provides a comprehensive look at the current state of the art in end-to-end autonomous driving, with a focus on the technical challenges and opportunities for future research. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
Optimizing Drug Discovery for Breast Cancer in a Laboratory Environment Using Machine Learning
Breast cancer therapy can be greatly enhanced by the proposed method, which combines experimental and computational techniques. Employing a state-of-the-art in vitro system, we evaluated biopsy tissues at different cancer stages, monitoring them for 48 hours. Our investigation then applied machine learning models including naïve Bayes (NB), artificial neural networks (ANN), random forest (RF), and decision trees (DT). Surprisingly, these models reached high test accuracies: ANN 93.2%, NB 90.4%, DT 87.8%, and RF 85.9%. The dataset's impedance dynamics provide evidence for treatment efficacy. Therapeutic strategies need to be adjusted for particular patients and their stage of cancer, since the results underscore the usefulness of personalized breast cancer therapy. This study will significantly contribute to new tailored treatment options for breast cancer patients. © 2024 IEEE.
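The model comparison described can be sketched with scikit-learn as below; the synthetic features stand in for the 48-hour impedance readings and their treatment-response labels, and the hyperparameters are placeholders rather than the study's settings.

```python
# Sketch comparing the four classifiers named above via cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)  # placeholder data

models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```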
Strengthening the Security of IoT Devices Through Federated Learning: A Comprehensive Study
There is a strong need for an operative security framework that can help make IoT (Internet of Things) devices more secure and reliable and protect them from adversarial intrusions. Federated Learning, due to its decentralized architecture, has emerged as one of the ideal choices of research practitioners for protecting sensitive data from widespread IoT-based attacks such as DoS (Denial of Service) attacks, device tampering, and sensor-data manipulation. This paper discusses the significance of federated learning in addressing security concerns with IoT devices, and how those issues can be minimized with the use of Federated Learning is deliberated with the help of a comparative analysis. To perform this comparative analysis, we investigated work on FL-based IoT applications published over the last five years, i.e., 2018-2022. We defined a few inclusion/exclusion criteria, selected the desired papers on that basis, and provided a comprehensive solution for IoT-based applications using the FL approach. Federated learning offers an optimistic approach to intensifying security in IoT environments by enabling collaborative model training while preserving information privacy. In this paper a framework named Federated AI Technology Enabler (FATE) is envisaged as one of the recommended frameworks for safeguarding the security and privacy of IoT devices. © 2024 IEEE.
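To illustrate the general federated learning idea mentioned here (collaborative training where only model weights, never raw data, leave each device), the sketch below simulates a FedAvg-style round over toy clients; it is not the FATE framework's API.

```python
# Minimal FedAvg-style simulation: local training per client, then weighted averaging.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local logistic-regression training on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (preds - y) / len(y)         # gradient step
    return w

def federated_round(global_w, clients):
    """Aggregate locally updated weights, weighted by each client's data size."""
    updates = [(local_update(global_w, X, y), len(y)) for X, y in clients]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
global_w = np.zeros(4)
for _ in range(10):
    global_w = federated_round(global_w, clients)    # raw data never leaves the client
print(global_w)
```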
Interpreting the Evidence on Life Cycle to Improve Educational Outcomes of Students Based on Generalized ARC-GRU Approach
Research on the effects of teachers' fatigue on students' learning has been significantly less common than research on the effects of teachers' fatigue on teachers' own performance. Therefore, the purpose of this research is to see whether teachers' emotional weariness has any bearing on their students' performance in the classroom. Consideration is given to students' grades and their impressions of whether or not they receive assistance from teachers, as well as to their general outlook on school, confidence in their own abilities, and faith in the availability of faculty support. Data preparation, feature extraction, and model training are the first steps in the proposed approach. Outliers among the indicators of education quality are eliminated and feature scaling is applied. The k-means clustering approach, a commonly used clustering technique, is applied for feature extraction. Following feature extraction, GARCH-GRU models are trained. The proposed approach is superior to two popular alternatives, ARCH and GRU. Using the provided method, the system was able to achieve a maximum accuracy of 97.07%. © 2024 IEEE.
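As a rough sketch of the two stages named (k-means-based feature extraction after scaling, followed by a recurrent model), the snippet below uses scikit-learn and a small Keras GRU on placeholder data; the GARCH component, the cluster count, and the way distances are fed to the GRU are illustrative assumptions, not the paper's design.

```python
# Sketch: k-means cluster-distance features followed by a small GRU classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from tensorflow.keras import layers, models

X = np.random.rand(500, 12)                   # placeholder survey/grade features
y = np.random.randint(0, 2, 500)              # placeholder outcome labels

X_scaled = StandardScaler().fit_transform(X)  # feature scaling
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X_scaled)
features = kmeans.transform(X_scaled)         # distances to each cluster centre

seq = features[..., np.newaxis]               # treat distances as a short sequence
model = models.Sequential([
    layers.GRU(32, input_shape=seq.shape[1:]),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(seq, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(seq, y, verbose=0))
```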
A Hybrid Grayscale Image Scrambling Framework Using Block Minimization and Arnold Transform
Image scrambling is the process of randomly rearranging picture elements to make the content unreadable and break the correlation among neighboring elements. Pixel values often do not change while they are being scrambled. A slew of image encryption techniques has been proposed recently. Most image encryption algorithms go through two steps: confusion and diffusion. Using a scrambling technique, the pixel positions are permuted during the confusion phase, and an invertible function is used to modify the pixel values during the diffusion phase. A good scrambling method practically eliminates the high correlation between adjacent pixels in an image. In the proposed scheme, an XOR-based minimization operator is applied to blocks of the image, followed by the Arnold Transform. The suggested design is assessed using the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). A computed PSNR value of less than 10 indicates high variation between the input image and the scrambled image, and an SSIM value near 0 indicates no structural similarity between them. © 2024 IEEE.
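For concreteness, the sketch below chains a block-wise XOR step (an illustrative stand-in for the paper's minimization operator, whose exact definition is not reproduced here) with the standard Arnold cat map, and then computes the PSNR and SSIM checks mentioned above; it assumes scikit-image is available and uses a random placeholder image.

```python
# Sketch: block-wise XOR followed by the Arnold transform, with PSNR/SSIM evaluation.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def arnold_transform(img, iterations=1):
    """Apply the Arnold cat map (x, y) -> (x + y, x + 2y) mod N to a square image."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def block_xor(img, block=8, key=0b10101010):
    """Illustrative block-wise XOR of pixel values with a fixed key."""
    out = img.copy()
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            out[i:i + block, j:j + block] ^= key
    return out

gray = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # placeholder grayscale image
scrambled = arnold_transform(block_xor(gray), iterations=5)
print(peak_signal_noise_ratio(gray, scrambled, data_range=255))  # low PSNR => high variation
print(structural_similarity(gray, scrambled, data_range=255))    # SSIM near 0 => no similarity
```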
AR and Online Purchase Intention Towards Eye Glasses
Augmented reality (AR) can be a potent tool for Indian online eyewear marketers by bridging the gap between online and offline purchasing experiences and meeting the needs of social validation and sensory engagement, which are preferences of Indian consumers. The present research explores how augmented reality (AR) technology affects Indian consumers' intentions to buy glasses online. A combination of descriptive and exploratory research design was used on the sample size of 236 consumers. Data was analyzed using frequency table and Structured Equation modelling (SEM) to identify the relationship amongst the variables. The findings indicate that accessibility to product information, telepresence, and perceived ease of use are important variables impacting purchase intention. AR can bridge the gap between online and offline experiences, meet consumer preferences, and create trust and confidence. Future research should explore AR's effectiveness and personalization possibilities for Indian online eyewear retailers. Future research should explore AR's effectiveness and personalization possibilities for Indian online eyewear retailers. 2024 IEEE.