Browse Items (11858 total)
Leveraging Deep Autoencoders for Security in Big Data Framework: An Unsupervised Cloud Computing Approach
Recognizing abnormalities in bank transaction big data is the number one issue for the stability of the financial security system. Given the rate at which digital transactions are increasing, effective detection methods are vital. Anomaly detection with deep autoencoder models should be explored, as these trained neural networks learn normal patterns from complex transaction data. This paper demonstrates the application of anomaly detection using deep autoencoders to banking big data transactions. It covers the theoretical basis, network design, training, and evaluation measures for deep autoencoders, and it also addresses problems such as high dimensionality and imbalanced datasets. The paper shows the effectiveness of deep autoencoders and how the network identifies fraudulent big data transactions, money laundering, and unauthorized access. It also covers recent developments in cloud environments and future methods using deep autoencoders, stressing that a constant search for new solutions is a must. The insights delivered contribute to the discourse of the financial security community, which includes researchers, practitioners, and policymakers involved in anomaly detection in the cloud. © 2024 IEEE. -
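For illustration, a minimal sketch of reconstruction-error-based anomaly scoring with a deep autoencoder, assuming synthetic transaction features, illustrative layer sizes, and a 95th-percentile error threshold (none of these are the paper's settings):

    # Minimal sketch: autoencoder-based anomaly scoring on transaction features.
    # Feature count, layer sizes, and the percentile threshold are illustrative assumptions.
    import numpy as np
    from tensorflow import keras

    n_features = 30                      # assumed dimensionality of transaction records
    inputs = keras.Input(shape=(n_features,))
    encoded = keras.layers.Dense(16, activation="relu")(inputs)
    encoded = keras.layers.Dense(8, activation="relu")(encoded)
    decoded = keras.layers.Dense(16, activation="relu")(encoded)
    outputs = keras.layers.Dense(n_features, activation="linear")(decoded)

    autoencoder = keras.Model(inputs, outputs)
    autoencoder.compile(optimizer="adam", loss="mse")

    X_train = np.random.rand(1000, n_features)      # stand-in for normal (non-fraud) transactions
    autoencoder.fit(X_train, X_train, epochs=10, batch_size=64, verbose=0)

    # Transactions the model reconstructs poorly are flagged as potential anomalies.
    recon = autoencoder.predict(X_train, verbose=0)
    errors = np.mean((X_train - recon) ** 2, axis=1)
    threshold = np.percentile(errors, 95)
    flags = errors > threshold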
Detecting Cyberbullying in Twitter: A Multi-Model Approach
With cyberbullying surging across social media, this study investigates the effectiveness of four prominent deep learning models - CNN, Bi-LSTM, GRU, and LSTM - in identifying cyberbullying within Twitter texts. Driven by the urgent need for robust tools, this research aims to enrich the field of cyberbullying detection by thoroughly evaluating these models' capabilities. A dataset of Twitter texts served as the training ground, rigorously preprocessed to ensure optimal model compatibility. Each model - CNN, Bi-LSTM, GRU, and LSTM - underwent independent training and evaluation, revealing distinct performance levels: CNN achieved the highest accuracy at 83.10%, followed by Bi-LSTM (81.90%), GRU (81.73%), and LSTM (16.07%). These differences highlight the unique strengths of each architecture in analysing and representing text data. The findings highlight the CNN model's superior performance, indicating its potential as a highly effective tool for Twitter-based cyberbullying detection. While the deep learning models explored here offer promising avenues for detecting cyberbullying on Twitter, their performance highlights the complexities inherent in this task. The limited length of tweets can often obscure the true intent behind words, making accurate identification a nuanced challenge. Despite this, the CNN model's robust performance suggests that carefully chosen architectures hold significant potential for combating online harassment. This research paves the way for further explorations in harnessing the power of AI to create a safer and more civil online experience where respectful communication can flourish even within the constraints of concision. © 2024 IEEE. -
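As an illustration of the strongest architecture reported above, a minimal Keras sketch of a CNN text classifier; the vocabulary size, sequence length, filter counts, and binary label setup are assumptions, not the authors' configuration, and the input is assumed to be integer-encoded tweets:

    # Illustrative CNN text classifier of the kind compared in the study.
    from tensorflow import keras

    vocab_size, max_len = 20000, 50          # tweets are short, so a small sequence length is assumed
    model = keras.Sequential([
        keras.Input(shape=(max_len,)),
        keras.layers.Embedding(vocab_size, 128),
        keras.layers.Conv1D(128, kernel_size=5, activation="relu"),
        keras.layers.GlobalMaxPooling1D(),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),     # cyberbullying vs. not
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()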
Artificial Intelligence-Based L&E-Refiner for Blind Learners
An Artificial Intelligence (AI)-based scribe known as the L&E Refiner for blind learners is a technology that uses natural language processing and machine learning techniques to automatically transcribe lectures, books, and other written materials into audio format. The system is designed to provide an accessible learning experience for blind students, allowing them to easily access and interact with educational content. The AI scribe is able to recognize and understand various forms of text, including handwriting, printed text, and digital documents, and convert them into speech output that blind learners can easily comprehend. This technology has the potential to significantly improve the accessibility and inclusiveness of education for blind individuals. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024. -
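A hedged sketch of one possible text-to-audio pipeline of the kind described, using pytesseract for OCR and pyttsx3 for offline text-to-speech; the image path is a placeholder and this is not the authors' system:

    # A possible pipeline sketch: OCR a scanned page, then read the recognized text aloud.
    # "lecture_page.png" is a placeholder file name, not part of the original work.
    from PIL import Image
    import pytesseract
    import pyttsx3

    text = pytesseract.image_to_string(Image.open("lecture_page.png"))  # printed or scanned page
    engine = pyttsx3.init()          # offline text-to-speech engine
    engine.say(text)                 # speak the recognized text for the learner
    engine.runAndWait()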
Predictive Modeling of Solar Energy Production: A Comparative Analysis of Machine Learning and Time Series Approaches
In this study, we dive into the world of renewable energy, specifically focusing on predicting solar energy output, which is a crucial part of managing renewable energy resources. We recognize that solar energy production is heavily influenced by a range of environmental factors, so to effectively manage energy usage and the power grid, it is vital to have accurate forecasting methods. Our main goal is to explore various predictive modeling techniques, encompassing both machine learning and time series analysis, and evaluate their effectiveness in forecasting solar energy production. Our study seeks to address this by developing robust models capable of capturing these complex dynamics and providing dependable forecasts. We took a comparative route in this research, putting three different models to the test: a Random Forest Regressor, a streamlined version of XGBoost, and ARIMA. Our findings revealed that the Random Forest and XGBoost models showed similar levels of performance, with XGBoost having a slight edge in terms of RMSE. By providing a comprehensive comparison of these different modeling techniques, our research makes a significant contribution to the field of renewable energy forecasting. We believe this study will be immensely helpful for professionals and researchers in picking the most suitable models for solar energy prediction, given their unique strengths and limitations. © 2024 IEEE. -
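A minimal sketch of the model comparison described above, assuming synthetic weather features and default hyperparameters rather than the study's data or settings:

    # Compare RandomForest and XGBoost on stand-in solar data; RMSE is the comparison metric.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split
    from xgboost import XGBRegressor

    X = np.random.rand(500, 3)                                        # stand-in weather features
    y = 5 * X[:, 0] + 2 * X[:, 1] + np.random.normal(0, 0.1, 500)     # stand-in energy output
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

    for name, model in [("RandomForest", RandomForestRegressor(random_state=42)),
                        ("XGBoost", XGBRegressor(n_estimators=200, random_state=42))]:
        model.fit(X_tr, y_tr)
        rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
        print(f"{name} RMSE: {rmse:.4f}")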
A Novel Ensemble based Model for Intrusion Detection System
In the present interconnected world, the increasing reliance on computer networks has made them susceptible to multiple security threats and intrusions. Intrusion Detection Systems (IDS) are essential for shielding these networks by detecting and mitigating potential threats in real time. This paper presents an in-depth study of employing the Random Forest algorithm to build an effective intrusion detection system. The proposed IDS uses the power of the Random Forest algorithm, a popular ensemble learning technique, to detect various types of intrusions in network traffic effectively. The algorithm combines multiple decision trees to produce a robust and accurate classifier capable of handling the large-scale and complex datasets typical of network traffic. The proposed system can be used in various industries and sectors to protect critical assets, ensuring the uninterrupted operation of computer networks. Evolving cyber threats have encouraged further research into ensemble analytics methods to increase the resilience of Intrusion Detection Systems in an ever-changing threat landscape. © 2024 IEEE. -
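A minimal sketch of a Random Forest intrusion classifier in scikit-learn; the traffic features and labels are synthetic placeholders, not the dataset or configuration used in the paper:

    # Random Forest over stand-in network-flow features; 0 = normal traffic, 1 = intrusion.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    X = np.random.rand(2000, 20)              # stand-in flow features (duration, bytes, flags, ...)
    y = np.random.randint(0, 2, 2000)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

    clf = RandomForestClassifier(n_estimators=100, random_state=1)
    clf.fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))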
Investigation on Preserving Privacy of Electronic Medical Record using Split Learning
Artificial Intelligence is deployed in multiple areas, including healthcare. A great deal of research is done in the AI-enabled healthcare industry because of demands such as accurate results, data security, exact prediction, and huge volumes of data. In conventional deep learning models, training happens with a dataset stored on a single device, which requires huge storage space and highly efficient machines to train the data. The use of big data demands innovative models that can be deployed and used with confined storage. Split learning is one such collaborative distributed deep learning model that allows the data to be stored in a split fashion. Split learning supports desirable features such as less storage, more privacy for raw data, and the ability to work under resource constraints, making it suitable for storing electronic medical records of patients. This paper discusses the advantages of using split learning for healthcare, the possible configurations of split learning that support data privacy in healthcare, and finally the open research challenges in implementing split learning for healthcare. © 2024 The Authors. Published by Elsevier B.V. -
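A conceptual single-process sketch of split learning, assuming a small client-side network (where raw EMR features stay) and a server-side network that completes the forward pass; layer sizes and data are illustrative only:

    # Split learning sketch: only the "smashed" activation crosses the split, not raw records.
    import torch
    import torch.nn as nn

    client_net = nn.Sequential(nn.Linear(40, 32), nn.ReLU())                     # runs at the hospital
    server_net = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))   # runs at the server

    opt = torch.optim.Adam(list(client_net.parameters()) + list(server_net.parameters()), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    X = torch.rand(64, 40)                    # stand-in EMR feature vectors
    y = torch.randint(0, 2, (64,))            # stand-in diagnosis labels

    for _ in range(5):
        smashed = client_net(X)               # activation sent across the split
        logits = server_net(smashed)
        loss = loss_fn(logits, y)
        opt.zero_grad()
        loss.backward()                       # gradients flow back across the split to the client
        opt.step()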
Hybrid Approach for Multi-Classification of News Documents Using Artificial Intelligence
In the context of news articles, text classification is essential for organizing and retrieving useful information from massive amounts of textual data. Effectively categorizing news titles has become more challenging due to the growth of online news outlets and the ongoing production of news. A multi-class text classification technique primarily targeted at news titles is presented. The proposed approach automates the classification of news titles into predetermined classes or subjects by combining deep learning approaches and natural language processing (NLP) algorithms. Data preprocessing, which includes text normalization, tokenization, and feature extraction, is the first step in the procedure; this prepares the raw news titles for the deep learning models. © 2024 IEEE. -
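A minimal sketch of the preprocessing-plus-classification idea described above (normalization, tokenization, and integer encoding feeding a small Keras model); the example titles, category labels, and layer sizes are assumptions:

    # Normalize and tokenize news titles, then train a small classifier over them.
    import numpy as np
    from tensorflow import keras

    titles = np.array(["Markets rally after rate cut",
                       "New vaccine trial shows promise",
                       "Team wins league title in final minutes",
                       "Parliament passes new budget bill"])
    labels = np.array([0, 1, 2, 3])           # assumed classes: business, health, sports, politics

    vectorize = keras.layers.TextVectorization(max_tokens=10000, output_sequence_length=12)
    vectorize.adapt(titles)                   # lowercasing, punctuation stripping, tokenization
    X = vectorize(titles)                     # integer-encoded titles

    model = keras.Sequential([
        keras.Input(shape=(12,), dtype="int64"),
        keras.layers.Embedding(10000, 64),
        keras.layers.GlobalAveragePooling1D(),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(4, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(X, labels, epochs=2, verbose=0)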
An Intelligent Portfolio Management Scheme Based On Hybrid Deep Reinforcement Learning and Cumulative Prospective Approach
Stock markets play an extensive role in the economic growth of diverse countries: they are where investors invest a certain amount to earn more profit and issuers pursue investors for project financing. However, buying and selling is considered a challenging task because of the market's volatile and complex nature. Existing portfolio optimization models are primarily focused on improving returns, whereas the selection of optimal assets receives the least attention. Hence, this article focuses on integrating stock prediction with a portfolio optimization model (SPPO). Initially, the stock prices for the next period are predicted using a hybrid deep reinforcement learning (DRL) model. Within this prediction model, a gated recurrent unit network (GRUN) is utilized to simulate the interactions of the agent with the environment. The best actions in the prediction model are determined throughout the prediction process using the quantum differential evolution algorithm (Q-DEA). After the prediction of the best assets, the optimal portfolio with the best assets is selected using the cumulative prospect theory (CPT) model. The work is implemented in Python and evaluated using the NIFTY-50 Stock Market Data (2000-2021) dataset. Minimal error rates of 0.130, 0.114, 0.148, and 0.153 are obtained by the proposed model for MSE, MAE, RMSE, and MAPE. © 2024 IEEE. -
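A sketch of the next-period price-prediction component only, using a GRU over a sliding window of synthetic prices; the window length and layer sizes are assumptions, and the Q-DEA and CPT stages described above are not reproduced here:

    # GRU next-period price prediction over a sliding window of a synthetic price series.
    import numpy as np
    from tensorflow import keras

    window = 20
    prices = np.cumsum(np.random.normal(0, 1, 500)) + 100          # stand-in closing prices
    X = np.array([prices[i:i + window] for i in range(len(prices) - window)])[..., None]
    y = prices[window:]

    model = keras.Sequential([
        keras.Input(shape=(window, 1)),
        keras.layers.GRU(32),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)
    next_price = model.predict(X[-1:], verbose=0)                  # forecast for the next period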
Blockchain Computing: Unveiling the Benefits, Overcoming Difficulties, and Exploring Applications in Decentralized Ledger Infrastructure
The blockchain protocol, which is composed of blocks, uses a decentralized distributed system of nodes (miners). Every block has three parts: the data, the hash that represents it, and the hash of the previous block. Once data has been stored, it is quite difficult to make changes to it. Miners are compensated for each encrypted function computation they carry out to verify a transaction. This paper provides a comprehensive understanding of blockchain-based technologies and how they are applied in a variety of industries, including those that deal with digital currencies, financial services, medical manufacturing, privacy, and a number of other fields. Digital money, notably the cryptocurrency Bitcoin, has been one of the most well-known network applications. As there have lately been several studies on novel uses of this sort of technology, we discuss some of these academic works as well as the challenges encountered during the development of such applications. Blockchain is a quickly growing area of database technology that has recently found use in a wide range of industries, including digital money, hospital administration, and other academic subjects; how blockchain technology works and operates is what makes these types of applications possible. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024. -
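A toy illustration of the block structure described above (data, its hash, and the previous block's hash); this is a conceptual sketch, not a production ledger:

    # Each block stores data, its own hash, and the previous block's hash.
    import hashlib
    import json

    def block_hash(block):
        payload = json.dumps({k: block[k] for k in ("index", "data", "prev_hash")}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    genesis = {"index": 0, "data": "genesis", "prev_hash": "0"}
    genesis["hash"] = block_hash(genesis)

    block1 = {"index": 1, "data": "Alice pays Bob 5", "prev_hash": genesis["hash"]}
    block1["hash"] = block_hash(block1)

    # Tampering with stored data breaks the hash link, which is why altering history is hard.
    assert block1["prev_hash"] == genesis["hash"]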
Gems of Prediction: From Clarity to Carats - Unveiling Diamond Prices with Machine Learning in Waikato Environment for Knowledge Analysis
Background: This research focuses on using Weka's toolkit to test machine learning models for predicting diamond prices. The complexity of diamond value characteristics, such as carat, cut, color, and clarity, motivates the study to find the most accurate models. The goal is to promote fairer market processes and customer education. Methods used: The research rigorously preprocesses a diamond attributes dataset using Weka for analysis. Various machine learning algorithms are examined, including simple algorithms like Decision Stump and ZeroR, sophisticated models like M5P and REP Tree, and advanced ensemble approaches like Bagging with REP Tree. Model performance is evaluated using train/test splits (80-70-60%) and cross-validation (5-fold and 10-fold) with metrics such as Correlation Coefficient, MAE, and RMSE. Results achieved: The research finds that ensemble approaches, particularly Bagging with REP Tree, outperform simple and sophisticated models in diamond price prediction. These techniques demonstrate higher accuracy and lower error rates, highlighting the need for multiple models to capture the complexity of diamond valuation. Simple models provide benchmarks and insights into dataset trends but are less precise. Concluding remarks: This study contributes to the understanding of machine learning algorithms for diamond price prediction, an important economic valuation subject. It demonstrates the effectiveness of complex data analysis methods using Weka. The research also highlights both the accessibility and the sophistication of machine learning, with Weka's cutting-edge algorithms making complicated analytical methods more accessible for practical, everyday use. This work adds to the knowledge of the dynamics of diamond prices and the role of machine learning in economic research. © 2024 IEEE. -
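The study runs Bagging with REP Tree inside Weka (a Java toolkit); purely as an analogy, the following scikit-learn sketch bags a depth-limited decision tree and scores it with 10-fold cross-validation on synthetic stand-ins for the diamond attributes:

    # Analogy only: bagged regression trees with 10-fold cross-validated MAE.
    import numpy as np
    from sklearn.ensemble import BaggingRegressor
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeRegressor

    X = np.random.rand(1000, 4)                              # stand-in carat, cut, color, clarity encodings
    y = 10000 * X[:, 0] + np.random.normal(0, 200, 1000)     # stand-in price

    bagged = BaggingRegressor(estimator=DecisionTreeRegressor(max_depth=6),
                              n_estimators=50, random_state=0)
    mae = -cross_val_score(bagged, X, y, cv=10, scoring="neg_mean_absolute_error")
    print(f"10-fold MAE: {mae.mean():.2f}")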
Identification of Student Programming Patterns through Clickstream Data
In the present educational era, teaching programming to undergraduates is challenging. For an instructor, focusing on each aspect of programming, such as the coding language, logical reasoning, debugging errors, troubleshooting code, and problem solving, is a very daunting task. Educational researchers are therefore identifying ways to easily spot students' struggles during programming so that timely assistance can be provided. Programming platforms and software generate a lot of programming data in the form of activity logs or clickstream data. Applying machine learning along with data analytics to this programming data can reveal students' programming patterns and help enable early interventions. This study focuses on identifying students' programming patterns through clustering and groups the students into three major categories, namely low performers, strugglers, and high scorers. Further, relevant features that majorly contribute towards student programming scores, such as test case success, code compile success and failure, and finish test, are identified through regression analysis. Through this research, educators can categorize students early based on their programming patterns and provide timely intervention when necessary, ensuring that no student gets left behind in the fast-paced world of programming education. © 2024 IEEE. -
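A minimal sketch of the clustering step described above, grouping students into three clusters from clickstream-derived features; the feature names and data are assumptions:

    # Cluster students into three groups from stand-in clickstream features.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # stand-in per-student features: test-case successes, compile successes, compile failures
    features = np.random.rand(300, 3)
    scaled = StandardScaler().fit_transform(features)

    kmeans = KMeans(n_clusters=3, random_state=0, n_init=10)
    groups = kmeans.fit_predict(scaled)       # e.g. low performers, strugglers, high scorers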
Implementation of Movie Recommendation System Using Hybrid Filtering Methods and Sentiment Analysis of Movie Reviews
In the present era of digitized entertainment, an immense volume of movies is produced, which creates the need for sophisticated recommendation systems. On streaming platforms, these systems empower users to discover new and relevant movies, benefiting both viewers and the entertainment industry. This paper offers a comprehensive method for incorporating movie review sentiment analysis into a hybrid recommendation system. The study covers 4890 movies, using a broad dataset containing detailed descriptions of the movies along with their reviews. To employ demographic filtering, the popularity score of the movies was calculated; then, to apply collaborative filtering, the textual movie descriptions were vectorized using the CountVectorizer method. To predict the sentiment of the movie reviews, the high-accuracy model "ControX/Sen1" was used. The hybrid recommendation system ranked the movies according to the user's preferences by employing cosine similarity, and the sorted list was further filtered using the positive-sentiment reviews. By including sentiment analysis, this research advances sophisticated movie recommendation systems, providing a comprehensive method for addressing user preferences and emotional resonance in film selection. © 2024 IEEE. -
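A minimal sketch of the description-similarity step described above (CountVectorizer plus cosine similarity); the example descriptions are placeholders, and the sentiment-filtering stage with the "ControX/Sen1" model is not reproduced:

    # Vectorize movie descriptions and rank other titles by cosine similarity.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    descriptions = [
        "A retired spy returns for one last mission.",
        "An undercover agent races to stop a heist.",
        "Two friends open a small bakery in Paris.",
    ]
    vectors = CountVectorizer(stop_words="english").fit_transform(descriptions)
    similarity = cosine_similarity(vectors)

    # Movies most similar to the first title, excluding itself.
    ranked = similarity[0].argsort()[::-1][1:]
    print(ranked)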
PE-v-SVR based Architecture to Predict and Prevent Low and Slow-Rate DDoS Attacks using Machine Learning
Distributed Denial of Service (DDoS) attacks continue to emerge, and low and slow-rate attacks pose a serious threat. These small-scale attacks often evade traditional security protections and increase the risk of long-term outages and loss of service. Our research aims to develop effective predictive models and strategic defences to detect and mitigate slow DDoS attacks. The proposed model combines power spectral entropy and v-Support Vector Regression (v-SVR). Notably, the model achieves a best-case error rate in the range of zero to one, demonstrating its effectiveness in detecting and predicting DDoS attacks. The results show the effectiveness of the proposed design using PSD (power spectral density) entropy and v-SVR, and the best mean square error obtained further confirms the suitability of v-SVR for low and slow-rate DDoS attacks in this context. © 2024 Bharati Vidyapeeth, New Delhi. -
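A sketch of the two ingredients named above, spectral entropy from Welch's power spectral density and a nu-SVR regressor (scikit-learn's NuSVR), on a synthetic traffic trace; the window size and the prediction target are assumptions:

    # Spectral entropy of traffic-rate windows, then NuSVR to predict the next window's entropy.
    import numpy as np
    from scipy.signal import welch
    from sklearn.svm import NuSVR

    def spectral_entropy(window):
        _, psd = welch(window, nperseg=min(64, len(window)))
        p = psd / psd.sum()
        return -np.sum(p * np.log2(p + 1e-12))

    traffic = np.random.poisson(20, 2000).astype(float)       # stand-in requests-per-second trace
    win = 100
    entropies = np.array([spectral_entropy(traffic[i:i + win])
                          for i in range(0, len(traffic) - win, win)])

    X = entropies[:-1].reshape(-1, 1)          # current window's entropy as the feature
    y = entropies[1:]                          # next window's entropy as the target
    model = NuSVR(nu=0.5).fit(X, y)
    print(model.predict(X[-1:]))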
An Empirical and Statistical Analysis of Fetal Health Classification Using Different Machine Learning Algorithm
The health of both the mother and the baby is affected by how well the fetus is doing during pregnancy, making it a matter of utmost importance. To achieve the best possible outcomes, it is essential to regularly monitor and intervene when needed. While there are many ways to observe the wellbeing of the fetus in the mother's womb, using artificial intelligence (AI) has the potential to enhance accuracy, efficiency, and speed when it comes to diagnosing any issues. This study focuses on developing a machine learning-driven system for accurate fetal health classification. The dataset comprises detailed information on the signs and symptoms of pregnant individuals, particularly those at risk or with emerging fetal health issues. Employing a set of ten machine learning models, namely Naive Bayes, Logistic Regression, Decision Tree, Random Forest, KNN, SVM, Gradient Boosting, Linear Discriminant Analysis, Quadratic Discriminant Analysis, and Light Gradient Boosting Machine (LGBM), along with ensemble-based processes, the Light Gradient Boosting Machine (LGBM) has been identified as a standout performer, accomplishing an accuracy of 96.9%. Furthermore, our exploration compares the overall performance of the individual models, signaling promising prospects for robust and accurate fetal health classification systems. This study highlights the power of machine learning, which could revolutionize prenatal care by identifying fetal health problems early. © 2024 IEEE. -
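A minimal sketch of the top-performing model named above (LightGBM) on synthetic stand-ins for the fetal-monitoring features; the feature count, class coding, and data are assumptions:

    # LightGBM classifier on stand-in fetal-monitoring features.
    import numpy as np
    from lightgbm import LGBMClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X = np.random.rand(1000, 21)              # stand-in fetal-monitoring features
    y = np.random.randint(0, 3, 1000)         # 0 = normal, 1 = suspect, 2 = pathological (assumed coding)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    clf = LGBMClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))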
Design & Analysis of CPE Based Fractional Filters
In this paper, a design and analysis of a constant phase element (CPE) based fractional-order filter (FOF) is presented. The paper leverages a voltage differencing transconductance amplifier (VDTA) to design a current-mode fractional-order filter capable of realizing four responses - low-pass, high-pass, band-pass, and band-reject - with just two VDTAs. The circuit utilizes both a standard integer-order capacitor and a novel fractional-order capacitor. The proposed filter is resistor-less and electronically tunable. Mathematical formulations are outlined for the transfer functions of the FOF. All the filter responses are obtained at varying values of α = 0.5, 0.6, 0.7, 0.8, and 0.9. All simulations are carried out using Cadence Virtuoso at the 45 nm CMOS technology node. © 2024 IEEE. -
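For reference, the defining relation of a constant phase element, stated as general background rather than the paper's VDTA-specific transfer functions:

    Z_{\mathrm{CPE}}(s) = \frac{1}{C_{\alpha}\, s^{\alpha}}, \qquad 0 < \alpha < 1,
    \qquad \angle Z_{\mathrm{CPE}}(j\omega) = -\alpha\,\frac{\pi}{2}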
Regression Analysis as a Metric for Sustainability Development: Validation of Indian Territory
The 2030 Development Agenda titled 'Transforming our World: The 2030 Agenda for Sustainable Development' was embraced by the member states of the UN General Assembly in 2015. Monitoring the progress of countries towards achieving these goals is pivotal for sustainable development. This paper offers an innovative approach to predicting the SDG Index of Indian states for the near future using machine learning techniques and analytical and visualization tools. The paper focuses on India's efforts towards achieving the SDGs and investigates the factors influencing the SDG performance of individual Indian states. A comprehensive dataset is collected, encompassing a wide range of socio-economic indicators, demographic data, and environmental metrics relevant to each SDG target. Historical SDG Index scores and corresponding state-specific data are collected to analyze and identify trends. The study demonstrates the potential of prediction techniques in forecasting the future SDG Index scores of Indian states. The time series graphs show varying degrees of accuracy across different SDGs, indicating the complexity and diversity of developmental challenges. © 2024 IEEE. -
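A minimal sketch of the forecasting idea described above: fitting a simple trend model to historical SDG Index scores for one state and extrapolating; the scores and years shown are illustrative placeholders, not real SDG data:

    # Fit a linear trend to a state's SDG Index history and extrapolate a future year.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    years = np.array([2018, 2019, 2020, 2021, 2022]).reshape(-1, 1)
    scores = np.array([57, 60, 61, 64, 66])            # stand-in SDG Index scores for one state

    model = LinearRegression().fit(years, scores)
    print("Forecast for 2025:", model.predict(np.array([[2025]]))[0])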
Predictive Modelling of Heart Disease: Exploring Machine Learning Classification Algorithms
In addressing the critical challenge of early and accurate heart failure diagnosis, this study explores the application of five machine learning models, namely XGBoost, Decision Tree, Random Forest, Logistic Regression, and Gaussian Naive Bayes. Employing cross-validation and grid search techniques to enhance generalization, the comparative analysis reveals XGBoost as the standout performer, achieving a remarkable accuracy of 85%. The findings emphasize the significant potential of XGBoost in advancing heart failure diagnosis, paving the way for earlier intervention and potentially improving patient prognosis. The study suggests that integrating XGBoost into diagnostic processes could represent a valuable and impactful advancement in the realm of heart failure prediction, offering promising avenues for improved healthcare outcomes. © 2024 IEEE. -
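A minimal sketch of the tuning setup described above, XGBoost with grid search and cross-validation; the parameter grid and synthetic data are assumptions, not the study's configuration:

    # XGBoost tuned with grid search and 5-fold cross-validation on stand-in clinical features.
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from xgboost import XGBClassifier

    X = np.random.rand(500, 11)               # stand-in clinical features
    y = np.random.randint(0, 2, 500)          # 0 = no heart disease, 1 = heart disease

    grid = GridSearchCV(
        XGBClassifier(eval_metric="logloss", random_state=0),
        param_grid={"max_depth": [3, 5], "n_estimators": [100, 200]},
        cv=5, scoring="accuracy",
    )
    grid.fit(X, y)
    print(grid.best_params_, grid.best_score_)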
Design and implementation of Adaptive PI control based dynamic voltage restorer for solar based grid integration
This paper introduces an innovative approach to address voltage fluctuations in solar-based grid integration by implementing an adaptive PI control-based Dynamic Voltage Restorer (DVR). The DVR is engineered to counteract voltage disruptions resulting from grid disturbances and the intermittent nature of solar energy generation. To achieve optimal performance in diverse operating conditions, the adaptive PI controller dynamically adjusts its parameters, adapting to changes in load and solar generation. The system is realized on a digital signal processor (DSP) and evaluated within a laboratory-scale solar-based grid integration setup. The findings reveal that the proposed system effectively mitigates voltage fluctuations, ensuring a stable integration of solar energy into the grid. The adaptive PI control-based DVR outperforms traditional PI control-based DVRs, particularly when dealing with variable solar energy generation. This approach holds significant potential for practical applications in solar-based grid integration systems. © 2024 IEEE. -
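A conceptual sketch of a discrete PI loop whose gains scale with the magnitude of the voltage error, in the spirit of the adaptive scheme described above; the gains, sample time, adaptation rule, and error signal are illustrative assumptions, not the authors' DSP implementation:

    # Discrete PI loop with a simple gain-scaling rule: larger error -> stronger gains.
    import numpy as np

    dt, kp0, ki0 = 1e-4, 0.5, 50.0
    integral, u = 0.0, []
    errors = 0.1 * np.sin(2 * np.pi * 50 * np.arange(0, 0.02, dt))   # stand-in voltage error signal

    for e in errors:
        scale = 1.0 + 2.0 * abs(e)            # illustrative adaptation of the PI gains
        kp, ki = kp0 * scale, ki0 * scale
        integral += e * dt
        u.append(kp * e + ki * integral)      # compensation command the DVR would inject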
Design and Development of Teaching and Learning Tool Using Sign Language Translator to Enhance the Learning Skills for Students With Hearing and Verbal Impairment
This paper presents a system designed for students with verbal and hearing impairments by enabling real-time Sign-to-Text and Text-to-Sign Language conversion, with a specific focus on Indian Sign Language (ISL). The proposed study aligns with the United Nations Sustainable Development Goal (SDG) of Quality Education. The system leverages cutting-edge technologies: MediaPipe for holistic key point extraction encompassing hand and facial movements, and a Long Short-Term Memory (LSTM) architecture powered by TensorFlow and Keras for accurate sign language interpretation. This comprehensive approach ensures that nuanced aspects of sign language, such as facial expressions and hand movements, are faithfully represented. On the receiving end, the system excels at Text-to-Sign Language conversion, allowing non-sign-language users to interact naturally with sign language users through textual input transformed into sign language animations, and at Sign-to-Text conversion, where the information from sign language users is converted to text, ensuring smooth communication. A user-friendly web application, developed using HTML, CSS, and JavaScript, enhances accessibility and intuitive usage for real-time communication. This research represents a significant advancement in assistive technology, promoting inclusivity and communication accessibility. It underlines the transformative potential of innovation in fostering a more connected and inclusive world for all, regardless of hearing ability. © 2024 IEEE. -
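A minimal sketch of the sequence-classification stage only: an LSTM over per-frame keypoint vectors such as those MediaPipe Holistic produces; the sequence length, keypoint dimension, number of signs, and data are assumptions, and keypoint extraction itself is not shown:

    # LSTM classifier over sequences of per-frame keypoint vectors.
    import numpy as np
    from tensorflow import keras

    frames, keypoints, n_signs = 30, 1662, 10   # 1662 ≈ pose+face+hand landmark values (assumed)
    X = np.random.rand(100, frames, keypoints)  # stand-in keypoint sequences
    y = np.random.randint(0, n_signs, 100)      # stand-in sign labels

    model = keras.Sequential([
        keras.Input(shape=(frames, keypoints)),
        keras.layers.LSTM(64, return_sequences=True),
        keras.layers.LSTM(32),
        keras.layers.Dense(n_signs, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=2, verbose=0)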
Combining Text Information and Sentiment Dictionary for Sentiment Analysis on Twitter During Covid
The presence of heterogeneous, huge data has led to the 'big data' era. The proliferation of techniques is rapidly increasing data volumes and driving the dynamic changes that result in the 'big data' world. The progressive transition in technologies and the adoption of social media in society have also ushered in the 'big data' epoch. The popularity of social media is attracting rising attention in the community, since the platform reduces the communication gap among people. Recently, Twitter use has increased at an unprecedented rate; social media like Twitter has broken boundaries and generates mountains of unstructured data. This has opened the research gate to great opportunities for analyzing data and mining valuable information. Sentiment analysis is a demanding, versatile line of research for understanding user viewpoints, and current trends in society can easily be observed through social network websites. These opportunities bring challenges that lead to a proliferation of tools. This research analyzes sentiments in Twitter data using Hadoop technology. The study explores Hadoop as a big data tool, explains the need for Hadoop in the present scenario and its role in storing and analyzing ample data, and discusses the Hadoop cluster, HDFS, and Hive in detail. Related research is studied in depth and presented here, and the dataset used in the experiments is explained briefly. Moreover, this paper explains the implementation work thoroughly and provides the workflow. The next section presents the experimental results and their analysis. Finally, the last section concludes the paper, states its purpose, and discusses how it can be used in upcoming research. © 2024 IEEE.
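A toy sketch of dictionary-based sentiment scoring of tweet text, the idea named in the title; the word lists and tweets are placeholders, and the Hadoop/Hive storage layer described above is not reproduced:

    # Score tweets against small positive/negative word lists (placeholders only).
    positive = {"safe", "recovered", "hope", "support"}
    negative = {"lockdown", "fear", "shortage", "death"}

    def score(tweet):
        words = tweet.lower().split()
        return sum(w in positive for w in words) - sum(w in negative for w in words)

    tweets = ["Hope and support for health workers", "Fear of shortage during lockdown"]
    for t in tweets:
        label = "positive" if score(t) > 0 else "negative" if score(t) < 0 else "neutral"
        print(label, ":", t)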
