Browse Items (2150 total)
A study of Autoregressive Model Using Time Series Analysis through Python
A time-series design is a technique for collecting data from repeated observations on a single unit or individual at regular intervals, over a large number of occasions. Time-series analysis can be considered the prototype of longitudinal designs. The most widely used method is based on the class of Auto-Regressive Moving Average (ARMA) models. ARMA models can address various research questions, including basic process analysis, intervention analysis, and long-term treatment-effect analysis. The model identification process, the definitions of essential concepts, and the statistical estimation of parameters are described as technical components of ARMA models. Multiunit time-series designs, multivariate time-series analysis, the inclusion of covariates, and the study of patterns of intra-individual differences across time are all ongoing improvements to ARMA modelling techniques. [1] 2022 IEEE. -
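The core of the ARMA family is the autoregressive part, which predicts each observation from a fixed number of preceding ones. As a hedged illustration (not the paper's implementation), an AR(p) model can be fit by ordinary least squares with NumPy alone; the coefficient values and simulation settings below are invented for this sketch:

```python
import numpy as np

def fit_ar(series, p):
    """Fit an AR(p) model y_t = c + a_1*y_{t-1} + ... + a_p*y_{t-p}
    by ordinary least squares. Returns (intercept, coefficients)."""
    y = np.asarray(series, dtype=float)
    # Row t of the design matrix holds [1, y_{t-1}, ..., y_{t-p}]
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - k:len(y) - k] for k in range(1, p + 1)])
    params, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return params[0], params[1:]

# Simulate an AR(2) process with known coefficients and recover them
rng = np.random.default_rng(0)
a1, a2 = 0.6, -0.3
y = np.zeros(2000)
for t in range(2, 2000):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + rng.normal(scale=0.5)
c, coefs = fit_ar(y, p=2)  # coefs close to [0.6, -0.3]
```

With 2000 observations the least-squares estimates land very close to the true coefficients; full ARMA estimation additionally models the moving-average part of the noise.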
An Analysis Conducted Retrospectively on the Use: Artificial Intelligence in the Detection of Uterine Fibroid
Uterine fibroids, also referred to as leiomyomas, are the most frequent benign pelvic tumors in women of childbearing age. Ultrasonography is presently the first imaging modality used for the clinical identification of uterine fibroids, since it has a high degree of specificity and sensitivity and is less expensive and more widely accessible than CT and MRI examinations. However, certain issues with ultrasound-based uterine fibroid diagnosis persist. The main problem is the confusion of fibroids with pelvic and adnexal masses, as well as with subplasmic and large fibroids. The specificity of fibroid detection is also affected by the absence of standardized image-capture views and by variations in performance among different ultrasound machines. Furthermore, the accuracy of the ultrasound diagnosis of uterine fibroids depends on the proficiency and expertise of ultrasonographers. In this work, we created a deep convolutional neural network (DCNN) model that automatically identifies uterine fibroids in ultrasound images, distinguishes between their presence and absence, and has been both internally and externally validated, in order to increase the reliability of ultrasound examinations for uterine fibroids. Additionally, we investigated whether the DCNN model may help junior ultrasound practitioners perform better diagnostically by comparing it against eight ultrasound practitioners at different levels of experience. 2024 IEEE. -
Artificial intelligence: A new model for online proctoring in education
As a result of technological advancements, society is becoming increasingly computerized. Massive open online courses and other forms of remote instruction continue to grow in popularity and reach. COVID-19's global impact has boosted the demand for such courses by a factor of ten. The ability to administer remote online examinations reliably is a crucial limiting factor in this next stage of education's adaptability. Human proctoring is currently the most frequent method of evaluation, which involves either requiring test takers to visit an examination centre or watching them visually and audibly throughout tests via a webcam. However, such approaches are time-consuming and expensive. In this paper, we provide a multimedia solution for semi-automated proctoring that does not require any hardware other than the webcam and microphone of the student's computer. The system continuously monitors and analyses the user based on gaze detection, lip movement, the number of individuals in the room, and mobile-phone detection; it also captures audio in real time through the microphone and transforms it to text for assessment using speech recognition. The words gathered by speech recognition are matched against keywords from the questions being asked, using Natural Language Processing, for higher accuracy. If any inconsistencies are discovered, they are reported to the proctor, who can investigate and take appropriate action. Extensive experimental findings illustrate the correctness, resilience, and efficiency of our online exam proctoring system, as well as how it allows a single proctor to simultaneously monitor several test takers. 2023 Author(s). -
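The keyword-matching step described above can be sketched with plain Python string handling; this is a minimal stand-in for the paper's NLP pipeline, and the stopword list and scoring rule are assumptions of this example:

```python
import re

# Illustrative stopword list (assumed, not from the paper)
STOPWORDS = {"the", "a", "an", "of", "in", "to", "is", "and", "what", "which"}

def keywords(text):
    """Tokenize, lowercase, and drop stopwords."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS}

def overlap_score(question, transcript):
    """Fraction of question keywords heard in the candidate's speech.
    A high score suggests the question was read aloud to someone."""
    q = keywords(question)
    if not q:
        return 0.0
    return len(q & keywords(transcript)) / len(q)

score = overlap_score(
    "What is the time complexity of binary search?",
    "hey what's the time complexity of binary search again",
)
```

A score near 1.0 would be flagged to the proctor as a potential inconsistency; a production system would use stemming and a richer stopword list.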
A Novel Approach for Segmenting Coronary Artery from Angiogram Videos
This paper focuses on coronary artery disease, one of the major heart diseases affecting people all around the world in the recent era. This heart disease is primarily diagnosed using a medical test called an angiogram. During the angiogram procedure, the cardiologist often manually selects the frame from the angiogram video to diagnose the coronary artery disease. Due to the waning and waxing transitions in the angiogram video, it is hard for the cardiologist to identify the artery structure in a frame. Consequently, finding the keyframe that contains a complete artery structure is difficult for the cardiologist. To help the cardiologist, a method is proposed to detect the keyframe with a segmented artery from the angiogram video. 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
Analysis of benchmark image pre-processing techniques for coronary angiogram images
The coronary artery supplies oxygenated blood and nutrients to the heart muscles. It can be narrowed by plaque deposited on the artery wall. Cardiologists and radiologists diagnose the disease through visual inspection of x-ray images. Identifying the plaque in the artery from the given imagery is challenging for them. Using image processing and pattern recognition techniques, a narrowed artery can be identified. In this paper, pre-processing methods of image processing are discussed with respect to coronary angiogram images. In general, angiogram images are affected by device-generated noise and artifacts; pre-processing techniques help to reduce the noise and enhance the quality of the image so that the region of interest can be sensed. The main objective of medical image analysis is to localize the region of interest by removing the noise. It is essential to find the structure of the artery in the angiogram image, and pre-processing is useful for that. 2021 IEEE. -
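As one illustration of the noise-reduction role such pre-processing plays (a generic sketch, not necessarily one of the paper's benchmark techniques), a 3x3 median filter suppresses isolated noisy pixels while preserving edges:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter via edge-padded sliding windows;
    suppresses salt-and-pepper style acquisition noise."""
    padded = np.pad(img, 1, mode="edge")
    # Stack the 9 shifted views so each output pixel sees its neighbourhood
    windows = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                        for r in range(3) for c in range(3)], axis=0)
    return np.median(windows, axis=0)

# A flat region with one noisy pixel: the spike is removed
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0
clean = median_filter3(img)  # the 255 outlier is replaced by 10
```

Contrast enhancement (e.g. histogram equalization) would typically follow this step before artery segmentation.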
An Innovative Method for Brain Stroke Prediction based on Parallel RELM Model
A stroke occurs when the blood supply to the brain is suddenly cut off or severely impaired. Stroke victims may experience cell death as a result of oxygen and nutrient shortages. The effectiveness of various predictive data mining algorithms in illness prediction has been the subject of numerous studies. The proposed method consists of three stages: preprocessing, feature selection, and model training. Missing-value management, numeric-value conversion, imbalanced-dataset handling, and data scaling are all components of data preparation. The chi-square and RFE methods are utilized for feature selection: the former assesses feature correlation, while the latter recursively searches ever-smaller feature sets to choose features. A Parallel RELM is used for model training. This new method outperforms both ELM and RELM, achieving an average accuracy of 95.84%. 2024 IEEE. -
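The chi-square feature-selection step can be illustrated on binary features; this is a generic sketch of the statistic itself, not the paper's implementation:

```python
import numpy as np

def chi_square_score(feature, label):
    """Chi-square statistic for a binary feature against a binary label,
    computed from the 2x2 contingency table (higher = more dependent)."""
    obs = np.zeros((2, 2))
    for f, y in zip(feature, label):
        obs[f, y] += 1
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    expected = row * col / obs.sum()
    return float(((obs - expected) ** 2 / expected).sum())

# A feature identical to the label scores high; an unrelated one scores zero
label  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
strong = label.copy()
weak   = np.array([0, 1, 0, 1, 0, 1, 0, 1])
```

In the paper's pipeline, features ranked this way are intersected with the subsets RFE produces before training the Parallel RELM.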
Performance Evaluation of Area-Based Segmentation Technique on Ambient Sensor Data for Smart Home Assisted Living
Activity recognition (AR) has been a popular subject of research in the recent past. Recognition of activities performed by human beings enables the addressing of challenges posed by many real-world applications such as health monitoring, providing security, etc. Segmentation plays a vital role in AR. This paper evaluates the efficiency of Area-Based Segmentation using different performance measures. Area-Based Segmentation was proposed in our earlier research work. The evaluation of the Area-Based Segmentation technique is conducted on four real-world datasets, viz. Aruba17, Shib010, HH102, and HH113, comprising data pertaining to an individual living in the test-bed home. Machine learning classifiers SVM-R, SVM-P, NB, and KNN are adopted to validate the performance of Area-Based Segmentation. Among the four chosen classification algorithms, SVM-R performs best on all four datasets. Area-Based Segmentation recognizes the four test-bed activities with accuracies of 0.74, 0.98, 0.66, and 0.99, respectively. The results reveal that Area-Based Segmentation can efficiently segment a sensor data stream, which aids in accurate recognition of smart-home activities. 2019 Procedia Computer Science. All rights reserved. -
Ambient monitoring in smart home for independent living
Ambient monitoring is a much-discussed area in the domain of smart-home research. An ambient monitoring system supports and encourages elders to live independently. In this paper, we deliberate upon the framework of an ambient monitoring system for elders. The necessity of the smart-home system for elders, the role of activity recognition in a smart-home system, and the influence of the segmentation method on activity recognition are discussed. In this work, a new segmentation method called area-based segmentation using optimal change-point detection is proposed. This segmentation method is implemented, and the results are analysed using real sensor data collected from a smart-home test bed. A set of features is extracted from the segmented data, and the activities are classified using Naive Bayes, kNN, and SVM classifiers. This research work gives researchers an insight into the application of activity recognition in smart homes. Springer Nature Singapore Pte Ltd. 2019. -
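The change-point idea behind this kind of segmentation can be illustrated with a minimal single change-point search; the least-squares criterion used here is a common generic choice and is assumed for this sketch, not taken from the paper:

```python
import numpy as np

def change_point(x):
    """Single change-point estimate: the split index that minimizes the
    total within-segment sum of squared deviations from each segment mean."""
    x = np.asarray(x, dtype=float)
    best_k, best_cost = 1, np.inf
    for k in range(1, len(x)):
        left, right = x[:k], x[k:]
        cost = (((left - left.mean()) ** 2).sum()
                + ((right - right.mean()) ** 2).sum())
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Sensor readings jump from ~0 to ~5 at index 6
stream = [0, 0, 1, 0, 0, 1, 5, 5, 6, 5, 5]
k = change_point(stream)  # 6
```

Applied repeatedly over a sliding buffer, such splits carve a continuous sensor stream into candidate activity segments for feature extraction.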
Quantitative Structure-Activity Relationship Modeling for the Prediction of Fish Toxicity Lethal Concentration on Fathead Minnow
As there has been a rise in the usage of in silico approaches for assessing the risks of harmful chemicals to animals, more researchers focus on the utilization of Quantitative Structure-Activity Relationship models. A number of machine learning algorithms link molecular descriptors, which can infer chemical structural properties, to their corresponding biological activity. Efficient and comprehensive computational methods that can process huge sets of heterogeneous chemical data are in demand. In this context, this study establishes the usage of various machine learning algorithms in predicting the acute aquatic toxicity of diverse chemicals to the Fathead Minnow (Pimephales promelas). A sample-driven approach is employed on the training set for binning the data, so that chemicals are located in a domain space of more similar compounds instead of a dataset covering a wide range of chemicals in its entirety. For each bin, the best learning model and the subset of features minimally required for classification are identified. Several regression methods are employed to estimate the toxicity LC50 value, adopting several statistical measures, and bin-wise strategies are determined accordingly. Through experimentation, it is evident that the proposed model surpasses the other existing models, providing an R2 of 0.8473 with a comparable RMSE of 0.3035. Hence, the proposed model is competent for estimating the toxicity of new and unseen chemicals. The Author(s), under exclusive license to Springer Nature Switzerland AG 2025. -
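The two reported measures, RMSE and R2, are computed as follows; a routine sketch with toy values standing in for the paper's LC50 data:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """RMSE and coefficient of determination R^2 for a regression model,
    the two measures used to compare bin-wise LC50 estimators."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return rmse, float(1.0 - ss_res / ss_tot)

rmse, r2 = regression_metrics([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
# Perfect prediction: rmse = 0.0, r2 = 1.0
```

Computing both per bin, rather than over the full heterogeneous dataset, is what makes the bin-wise strategy comparison in the paper possible.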
Financial analytical usage of cloud and appropriateness of cloud computing for certain small and medium-sized enterprises
The term "cloud computing" refers to a novel approach of providing useful ICTs to consumers over the internet on an as-needed and pay-per-use basis. Businesses may streamline internal processes, increase contact with customers, and expand their market reach with the aid of cloud computing, which provides convenient and inexpensive access to cutting-edge information and communication technologies. Developing economies like India's present unique problems for small and medium-sized enterprises (SMEs), such as a lack of funding, an inadequate workforce, and inadequate information and communication technology (ICT) use. Because of these limitations, various advantages offered by current ICT solutions are unavailable to SMEs. If SMEs are seeking to enhance their internal operations, communication with customers and business partners, and market reach using current ICT solutions, cloud computing might be a good fit for them. Companies that lack the capital, personnel, or other resources to deploy and use appropriate ICTs may greatly benefit from cloud computing, and the public cloud in particular. 2024 Author(s). -
Formula One Race Analysis Using Machine Learning
Formula One (also known as Formula 1 or F1) is the highest class of international auto racing for single-seater formula racing cars, sanctioned by the Fédération Internationale de l'Automobile (FIA). The World Drivers' Championship, which became the FIA Formula One World Championship in 1981, has been one of the premier forms of racing around the world since its inaugural season in 1950. This article looks at cost-effective alternatives for Formula 1 racing teams interested in data-prediction software. Research was undertaken on the current state of data gathering, data analysis and prediction, and data interpretation in Formula 1 racing. It was discovered that a large portion of the league's racing firms require a cheap, effective, and automated data-interpretation solution. As the volume of data grows in Formula 1, so does the need for faster and more powerful software. Racing teams benefit from brand exposure, and the more they win, the more publicity they get. The paper's purpose is to address the problem of data prediction. It starts with an overview of Formula 1's current situation and the billion-dollar industry's history. Racing organizations that want to save money might consider incorporating Python into their data-prediction workflow to improve their chances of winning and climbing in the rankings. 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
Facial Recognition Model Using Custom Designed Deep Learning Architecture
Facial recognition is widely used in applications such as attendance tracking, phone unlocking, and security systems. An extensive body of methodologies and techniques for face recognition systems has already been suggested, but the task remains difficult in the real-world domain. The preprocessing steps covered here include data collection, normalization, and feature extraction. Different classification algorithms such as Support Vector Machines (SVM), Naïve Bayes, and Convolutional Neural Networks (CNN) are examined in depth, along with their implementation in different research studies. Moreover, encryption schemes and a custom-designed deep learning architecture, designed particularly for face recognition, are also covered. A methodology involving training-data preprocessing, dimensionality reduction using Principal Component Analysis, and training multiple classifiers is further proposed in this paper. After thorough experimentation, a recognition accuracy of 91% is achieved. The performance of the trained models on the test dataset is evaluated using metrics such as accuracy and the confusion matrix. The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024. -
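The PCA dimensionality-reduction step can be sketched via the SVD; the synthetic "face vectors" below are an assumption of this example, not the paper's dataset:

```python
import numpy as np

def pca_project(X, n_components):
    """Project rows of X onto the top principal components via SVD -
    the dimensionality-reduction step before training the classifiers."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are principal directions, ordered by explained variance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(1)
# 100 synthetic "face vectors" that really live on a 2-D subspace of R^10
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 10))
Z = pca_project(X, n_components=2)
```

Because the toy data lies exactly on a 2-D subspace, the two-component projection preserves all of its variance; real face images need many more components.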
Bioinformatics Research Challenges and Opportunities in Machine Learning
This research work studies the utilization of machine learning algorithms in bioinformatics. The primary purpose is to understand bioinformatics and the different machine learning algorithms used to analyze the biological data available to us. This study discusses different machine learning approaches, namely supervised, unsupervised, and reinforcement learning, which play an essential role in understanding and analyzing biological data. Machine learning is helping to solve a wide range of bioinformatics problems by describing a wide range of genomic sequences and analyzing vast amounts of genomic data. One of the biggest real-world examples is that machine learning helps identify cancer from a given gene expression, which is done using a support vector machine. In addition, this study discusses the classification of molecular data, which will help in finding out minor diseases. With the advancement of machine learning in healthcare and other related applications, data collection becomes a tedious process. This article also focuses on some of the open research problems in the machine learning domain. 2022 IEEE. -
A Comparative Performance Analysis of Convolution W/O OpenCL on a Standalone System
The initial aim of this paper is to provide a deep understanding of the OpenCL architecture. Secondly, it presents an implementation of matrix and image convolution in C (serial programming) and OpenCL (parallel programming), to describe the detailed OpenCL programming flow and to provide a comparative performance analysis. The implementation is carried out on an AMD A10 APU, and various algebraic scenarios are created to observe the performance improvement achieved on a single system when using parallel programming. In related works, the authors have worked on AMD APP SDK samples such as N-body and SimpleGL to understand the concept of vector data types in OpenCL and OpenCL-GL interoperability, and have also implemented a 3-D particle-bouncing concept in OpenCL and 3-D mesh rendering using OpenCL. Lastly, the authors outline their future work, in which they intend to implement a novel algorithm for mesh segmentation using OpenCL, for which they have tried to form a strong knowledge base through this work. 2015 IEEE. -
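The serial baseline that an OpenCL kernel parallelizes (one work-item per output pixel) amounts to a doubly nested loop over output positions; here is an equivalent sketch in Python/NumPy, illustrative only since the paper's implementations are in C and OpenCL:

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive serial 2D convolution (valid mode). Each iteration of the
    nested loop is independent - exactly the work an OpenCL kernel
    distributes across work-items."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

image = np.arange(16.0).reshape(4, 4)
identity = np.zeros((3, 3)); identity[1, 1] = 1.0
out = convolve2d(image, identity)  # identity kernel copies the interior
```

The independence of the per-pixel computations is what makes convolution a natural first benchmark for serial-vs-parallel comparisons.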
A Novel CNN Approach for Condition Monitoring of Hydraulic Systems
In the dynamic landscape of Industry 4.0, the ascendancy of predictive analytics methods is a pivotal paradigm shift. The persistent challenge of machine failures poses a substantial hurdle to the seamless functioning of factories, compelling the need for strategic solutions. Traditional reactive maintenance checks, though effective, fall short in the face of contemporary demands. Forward-thinking leaders recognize the significance of integrating data-driven techniques to not only minimize disruptions but also enhance overall operational productivity while mitigating redundant costs. The innovative model proposed herein harnesses the robust capabilities of Convolutional Neural Networks (CNN) for predictive analytics. Distinctively, it selectively incorporates the most influential variables linked to each of the four target conditions, optimizing the model's predictive precision. The methodology involves a meticulous process of variable extraction based on a predetermined threshold, seamlessly integrated with the CNN framework. This nuanced and refined approach epitomizes a forward-looking strategy, empowering the model to discern intricate failure patterns with a high degree of accuracy. 2024 IEEE. -
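The threshold-based variable-extraction step could look like the following correlation filter; the Pearson criterion and the threshold value are assumptions of this sketch, since the abstract does not spell them out:

```python
import numpy as np

def select_by_correlation(X, y, threshold=0.5):
    """Keep column indices whose absolute Pearson correlation with the
    target exceeds a predetermined threshold - a minimal stand-in for
    selecting the most influential variables per target condition."""
    selected = []
    for j in range(X.shape[1]):
        r = np.corrcoef(X[:, j], y)[0, 1]
        if abs(r) >= threshold:
            selected.append(j)
    return selected

rng = np.random.default_rng(2)
y = rng.normal(size=200)
X = np.column_stack([y + 0.1 * rng.normal(size=200),   # strongly related
                     rng.normal(size=200),             # pure noise
                     -y + 0.1 * rng.normal(size=200)]) # inversely related
keep = select_by_correlation(X, y, threshold=0.5)  # [0, 2]
```

Only the retained columns would then be fed to the CNN, one selection per target condition.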
Simulation of IoT-based Smart City of Darwin: Leading Cyber Attacks and Prevention Techniques
The rise of Internet of Things (IoT) technology has made the world smarter, as it is deeply embedded in several application areas such as manufacturing, homes, cities, and healthcare. In developed cities, millions of IoT devices are deployed to enhance the lifestyle of citizens. IoT devices increase efficiency and productivity with time and cost savings in smart cities; on the other hand, they also present attractive, often easy targets for cybercriminals by exposing a wide variety of vulnerabilities. Cybersecurity risks, if ignored, can come at a very high cost to citizens and management alike. In this research, a simulated IoT network of the Darwin CBD has been built with different IoT simulation tools. The treacherous effects of a vulnerable IoT environment are demonstrated in this research, followed by the implementation of security measures to avoid the illustrated threats. 2023 IEEE. -
Machine Learning Methods leveraging ADFA-LD Dataset for Anomaly Detection in Linux Host Systems
Advancements in network technology and the revolution in the global internet have transformed the overall Information Technology (IT) infrastructure and its usage. In the era of the Internet of Things (IoT) and the Internet of Everything (IoE), most everyday gadgets and electronic devices are IT-enabled and can be connected over the internet. With the advancements in IT technologies, operating systems have also evolved to leverage them. Today's operating systems are more user-friendly and feature-rich, supporting current IT requirements and providing sophisticated functionalities. On the one hand, these features enable operating systems to accomplish all current requirements; on the other hand, they have increased the attack surface of modern operating systems considerably. Intrusion detection systems play a significant role in providing security against the broad spectrum of attacks on host systems. Intrusion detection systems based on anomaly detection have become a prominent research area among the diverse areas of cyber security. The traditional approaches for anomaly detection are inadequate for discovering operating-system-level anomalies. Advancements and research in Machine Learning (ML) based anomaly detection open new opportunities to tackle this challenge. The dataset plays a significant role in the efficacy of an ML-based system. The Australian Defence Force Academy Linux Dataset (ADFA-LD) comprises thousands of system-call traces of normal and attack processes for the Linux platform. It is the benchmark dataset used for dynamic-approach-based anomaly detection. This paper provides a comprehensive and structured study of various research works based on ADFA-LD for host-based anomaly detection and presents a comparative analysis. 2022 IEEE. -
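A feature representation commonly used with ADFA-LD system-call traces in the surveyed works is a bag of n-grams; a minimal sketch (the trace values below are made up, not real ADFA-LD data):

```python
from collections import Counter

def syscall_ngrams(trace, n=3):
    """Count sliding n-grams over a system-call trace; the resulting
    bag of n-grams is a typical feature vector for anomaly detectors."""
    return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))

# Toy trace of system-call numbers
normal = [6, 6, 63, 6, 42, 120, 6, 6, 63, 6]
feats = syscall_ngrams(normal, n=3)
```

An anomaly detector then scores a new trace by how many of its n-grams were never (or rarely) seen in the normal training traces.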
Implementation of Supervised Pre-Training Methods for Univariate Time Series Forecasting
There has been a recent deep learning revolution in Computer Vision and Natural Language Processing. One of the biggest reasons for this has been the availability of large-scale datasets to pre-train on. One can argue that the time-series domain has been left out of the aforementioned revolution; the lack of large-scale pre-trained models could be one of the reasons. While there have been prior experiments using pre-trained models for time-series forecasting, the scale of the datasets has been relatively small. One of the few time-series problems with large-scale data available for pre-training is the financial domain. Therefore, this paper takes advantage of this and pre-trains a 1D CNN using a dataset of daily closing prices of 728 US stocks, 2,533,901 rows in total. We then fine-tune on and evaluate a dataset of the closing prices of the NIFTY 200 stocks, 166,379 rows in total. Our results show a 32% improvement in RMSE and a 36% improvement in convergence speed when compared to a baseline non-pre-trained model. 2023 IEEE. -
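Framing a univariate series as supervised (X, y) pairs for a 1D CNN can be sketched as a sliding window; the lookback value below is arbitrary, not the paper's configuration:

```python
import numpy as np

def make_windows(series, lookback):
    """Turn a univariate series into (X, y) supervised pairs: each row
    of X holds `lookback` past values and y is the next value - the
    framing used to train a 1D CNN forecaster."""
    series = np.asarray(series, dtype=float)
    X = np.stack([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

X, y = make_windows([10, 11, 12, 13, 14, 15], lookback=3)
# X[0] = [10, 11, 12] predicts y[0] = 13
```

Pre-training then fits the CNN on windows from the large US-stock corpus before fine-tuning on NIFTY 200 windows built the same way.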
Context Driven Software Development
Context-Driven Software Development (CDSD) is a novel software development approach with the ability to thrive on the challenges of 21st-century digital and disruptive technologies through its innovative practices and implementation prowess. CDSD is a coherent set of multidisciplinary, innovative best practices, including context-aware and self-adaptive system modelling, human-computer interaction, quality engineering, software development, testing, and continuous-deployment frameworks, open-source tools and technologies, end-to-end automation, software governance, stakeholder engagement, adaptive solutioning, design thinking, and group creativity. The implementation prowess of the CDSD approach stems from three unique characteristics: its principles, its Contextualize-Build-Validate-Evolve (CBVE) product-development element, and its iterative and lean CDSD life cycle with Profiling, Contextualizing, Modelling, Transforming, and Deploying phases, with in-process and phase-end governance and compliance. The CDSD approach helps to address issues such as complexity, software ageing, risks related to the internal and external ecosystem, user diversity, and process-related issues including cost, documentation, and delay. 2021, Springer Nature Switzerland AG. -
Quantum Convolutional Neural Network for Medical Image Classification: A Hybrid Model
This study explores the application of Quantum Convolutional Neural Networks (QCNNs) to image classification, particularly focusing on datasets with a highly reduced number of features. We investigate the potential quantum computing holds for processing and classifying image data efficiently, even with limited feature availability. The research examines QCNNs' application within a highly constrained feature environment, using chest X-ray images to distinguish between normal and pneumonia cases. Our findings demonstrate QCNNs' utility in classifying images from the dataset with drastically reduced feature dimensions, highlighting QCNNs' robustness and their promising future in machine learning and computer vision. Additionally, this study sheds light on the scalability of QCNNs and their adaptability across various training-test splits, emphasizing their potential to enhance computational efficiency in machine learning tasks. This suggests a possible paradigm shift in how we approach data-intensive challenges in the era of quantum computing. Going forward, we are looking into quantum paradigms such as the Quantum Support Vector Machine (QSVM) to explore the trade-offs in effectiveness between different classical and quantum computing techniques. 2024 IEEE.