Deploying NLP techniques in the Twitch application to comprehend online user behaviour
Sentiment analysis entails identifying and analyzing subjective information in language, such as views and attitudes, and supports richer data visualization through a variety of strategies, techniques, and tools. New media channels have significantly changed how people interact, exchange ideas, and share information. Numerous businesses have begun to mine this data, concentrating on social media because it is a popular platform for customers to voice their opinions about brands and goods, and because it gives users an audience, enhancing the visibility and potential impact of this input. As the internet expands and technology advances, new avenues have emerged with a greater ability to offer businesses pertinent feedback on their products. The goal of this study is to investigate the many forms of online behaviour by analyzing chat interactions from the well-known streaming service Twitch, where emotes are often employed in place of words, to get attention, or to communicate emotion. We propose a system that takes in the chat log of a given stream, applies a sentiment analysis algorithm to classify each message, and then displays the data so that users can analyze the results according to their polarity (positive, negative, or neutral message). The application must be versatile enough to be used with any broadcast type on the platform and to handle very large datasets. 2023 IEEE.
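A minimal sketch of the message-level polarity classification this abstract describes, assuming NLTK's VADER analyzer as the sentiment algorithm (the paper does not name one); the chat messages and thresholds are illustrative:

```python
# Classify Twitch chat messages as positive / negative / neutral.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

def classify_message(message: str) -> str:
    """Label a chat message by polarity using VADER's compound score."""
    score = analyzer.polarity_scores(message)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

chat_log = ["that play was insane, amazing stream", "this stream is so boring", "hello chat"]
for msg in chat_log:
    print(msg, "->", classify_message(msg))
```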
Automatic Weld Features Identification and Weld Quality Improvement in Laser Sensor Integrated Robotic Arc Welding
In this study, a point laser sensor has been integrated into robotic arc welding to achieve robotic positional accuracy automatically in every welding cycle. With the help of the defined focal length of the laser sensor, the weld seam positions as well as the weld gap are found automatically for any newly positioned work-piece. If there is any change in robot positioning compared to the master job, the shift along each axis is sent as a signal to the robot controller so that the robot end effector adjusts by the shift amount automatically. The welding process parameters are set at optimal values using the Taguchi approach so that maximum weld quality in terms of depth of penetration, yield strength, and ultimate strength can be achieved in every welding cycle. Overall, the proposed approach offers a smart and productive way of operating an industrial welding robot that can be implemented in any medium- to large-scale industry to obtain welding joints with minimum defects. 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
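A hypothetical sketch of the per-axis correction step: the shift between the laser-measured seam position and the master-job position is computed and forwarded to the robot controller. All names and the controller interface are assumptions, since the abstract does not specify them:

```python
# Compute the per-axis offset of a newly positioned work-piece.
import numpy as np

master_seam = np.array([412.5, 108.2, 55.0])    # seam position taught in the master job (mm)
measured_seam = np.array([413.1, 107.6, 55.4])  # laser-sensor reading for the new part (mm)

shift = measured_seam - master_seam             # correction per axis (mm)

def send_offset_to_controller(offset_mm: np.ndarray) -> None:
    # Placeholder for the vendor-specific signal to the robot controller.
    print(f"Apply end-effector offset (x, y, z): {offset_mm} mm")

send_offset_to_controller(shift)
```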
Artificial Intelligence and Deep Learning Based Brain Tumor Detection Using Image Processing
In medical science, diagnostic applications are used in the detection of brain tumors, since spotting an error in MRI scanning has become a major task for radiologists and demands much of their focus. Flaws that are prevalent during tumor detection must be addressed to avoid further complications. MRI scanning is one of the most rapidly developing technologies, and the radiologist is a key player in identifying a brain tumor: every image must be checked carefully to avoid errors, and cerebral fluid can sometimes appear as mass tissue in an MRI scan. The model proposed in this research uses a machine learning algorithm that helps improve the validity of the classification of MRI images. The study focuses on an automated system that plays an essential role in determining whether a lump is present in the brain, and it addresses the primary flaws in detection that must be resolved to evade further complications in brain MRI images. The main aim of this study is to train the algorithm on a more extensive dataset and to check patient-level validity with the help of various new datasets. 2023 IEEE.
Brain Tumor Detection using Hyper Parameter Tuning and Transfer Learning
A brain tumor is the development of abnormal cells in the brain. Brain tumors may be cancerous or noncancerous, and they are harmful because they can press against healthy brain tissue or spread into it. Early diagnosis of brain tumors is a highly challenging task for radiologists: the typical tumor doubles in size in just twenty-five days due to its rapid growth, and without proper care the patient's survival rate typically does not exceed six months. It may quickly result in death. An automatic method is therefore necessary for early brain tumor identification. In this study, an automated strategy is proposed for quickly distinguishing between cancerous and non-cancerous brain images. Most of the time, a tumor can be treated if caught in its early stages, hence the need for more and better brain tumor detection. The most crucial part here is image processing: the medical images obtained during the test have to be analysed appropriately. Methods such as MobileNet, EfficientNetB7, and EfficientNetV2 have been used and their efficiency analysed. Here we classify a dataset containing 300 images into two classes. The suggested system will offer improved clinical support for the field of medicine. 2023 IEEE.
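A minimal transfer-learning sketch in Keras for the binary tumor/no-tumor task, using MobileNet (one of the backbones the paper evaluates); the input size, frozen-base setup, and classification head are illustrative assumptions:

```python
# Transfer learning: pretrained MobileNet backbone + new binary head.
import tensorflow as tf

base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze pretrained ImageNet features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # tumor vs. no tumor
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```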
Fake News Detection and Category Classification
A new type of disinformation has emerged: fake news, or untrue stories presented as actual occurrences. With so much information published on social media these days, we can no longer easily distinguish true information from fraudulent. Artificial intelligence algorithms are helpful in resolving the fake news identification problem, which is a crucial yet difficult issue in the field of natural language processing (NLP). In this article, we discuss related tasks as well as the difficulties associated with detecting bogus news. Based on these findings, we suggest intriguing avenues for future study, such as developing more accurate, thorough, fair, and useful detection models. Mass media affects the general public's life daily, and as a result news stories are written that are only partly true or even entirely false; people deliberately promote these fake stories on online social networking sites. It is crucial to determine whether news is false owing to its potential for detrimental social and national effects. The false news identification process made use of several criteria, including the headline and body content of the news piece. The suggested method works effectively, producing results with excellent accuracy, precision, and recall. Comparing all the models employed in this study, it was discovered that the DistilBERT and multinomial naive Bayes models perform better than logistic regression and the other ML models. The credibility of a story may be evaluated using a larger dataset, for better results, and additional variables such as the author and publisher of the news. Grenze Scientific Society, 2023.
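A sketch of the multinomial naive Bayes baseline mentioned in the abstract, as a TF-IDF pipeline in scikit-learn; the toy headlines stand in for the (unspecified) dataset:

```python
# TF-IDF features feeding a multinomial naive Bayes fake-news classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "miracle pill cures all diseases overnight",    # fake
    "parliament passes annual budget bill",         # real
    "celebrity endorses alien conspiracy cure",     # fake
    "central bank raises interest rates slightly",  # real
]
labels = ["fake", "real", "fake", "real"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["shocking cure doctors do not want you to know"]))
```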
Compendium of Qubit Technologies in Quantum Computing
Quantum computing is information processing based on the principles of quantum mechanics. Qubits are at the core of quantum computing. A qubit is a quantum state in which information can be encoded, processed, and read out. Any particle, sub-particle, or quasi-particle exhibiting a quantum phenomenon is a possible qubit candidate. Mastery of algorithms and coding demands knowledge of the specificities of the underlying hardware. This paper envisages qubits from an information processing perspective and analyses the core qubit technologies. 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
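The single-qubit state the abstract alludes to can be written explicitly; this standard formula is textbook background, not something introduced by the paper:

```latex
% Information is encoded in a superposition of the computational basis states.
\[
  \lvert \psi \rangle \;=\; \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \alpha, \beta \in \mathbb{C},
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
\]
```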
A Scoping Review of Deep Reinforcement Learning Methods in Visual Navigation
Reinforcement Learning (RL) is a subset of Machine Learning that trains an agent to make a series of decisions and take actions by interacting directly with the environment. In this approach, the agent learns to attain the goal through the responses to its actions, received as rewards or punishments. Recent advances in reinforcement learning combined with deep learning methods have led to breakthrough research in solving many complex problems in the field of Artificial Intelligence. This paper presents recent literature on autonomous visual navigation of robots using Deep Reinforcement Learning (DRL) algorithms and methods. It also describes the algorithms evaluated, the environments used for implementation, and the policies applied to maximize the rewards earned by the agent. The paper concludes with a discussion of the new models created by various authors, their merits over existing methods, and a brief outline of further research. 2023 IEEE.
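A generic agent-environment interaction loop of the kind the surveyed DRL methods build on, assuming a Gymnasium-style API; the random policy and CartPole environment are placeholders for a trained navigation network and a visual environment:

```python
# Reward-driven interaction loop: observe, act, receive reward, repeat.
import gymnasium as gym

env = gym.make("CartPole-v1")     # stand-in for a visual-navigation environment
obs, info = env.reset(seed=0)
total_reward = 0.0

for _ in range(200):
    action = env.action_space.sample()              # placeholder policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward                          # signal that drives learning
    if terminated or truncated:
        obs, info = env.reset()

print("accumulated return:", total_reward)
env.close()
```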
Probing the Role of Information and Communication Technology (ICT) in Enhancing Research: An Epilogue of Accessible Research Tools
Information and Communication Technology (ICT) has revolutionized the way researchers conduct their work. It has enabled them to access a wealth of information through online databases, collaborate with colleagues across the globe, and analyze vast amounts of data quickly and accurately. This paper explores the role of ICT in enhancing research tools, highlighting the benefits it provides to researchers in terms of increased efficiency, improved accuracy, and greater access to resources. It also discusses some of the challenges associated with using ICT in research, such as data security and privacy concerns, and offers potential solutions. Overall, the paper concludes that ICT is an essential tool for researchers and will continue to play an increasingly important role in advancing scientific knowledge and innovation. The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd 2023.
User Sentiment Analysis of Blockchain-Enabled Peer-to-Peer Energy Trading
A new way for the general public to consume and trade green energy has emerged with the introduction of peer-to-peer (P2P) energy trading platforms. How a P2P energy trading platform is designed is therefore crucial to facilitating the trading experience for users. This study uses a data mining method to assess the elements affecting the P2P energy trading experience. A Natural Language Processing (NLP) approach is also used to evaluate the variables that affect the P2P energy trading experience and to examine the role of topic modeling in topic extraction using LDA. The findings show that during the trading process the general public was most interested in the new technology and in how the energy-coin payment system operated. This framing of energy as a cryptocurrency (CC) is an outlier that nonetheless fits well with the conventional literature. The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023.
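A minimal sketch of the LDA topic-extraction step, using scikit-learn; the toy user posts and the number of topics are assumptions:

```python
# LDA topic modeling over user posts about P2P energy trading.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "energy coin payment worked smoothly on the trading platform",
    "solar panels let me sell surplus power to my neighbours",
    "blockchain settlement fees were higher than expected",
    "peer to peer trading app interface is easy to use",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]  # top words per topic
    print(f"topic {k}:", ", ".join(top))
```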
Assessing Academic Performance Using Ensemble Machine Learning Models
Artificial Intelligence (AI) can play a vital role in forecasting and predicting the academic performance of students. Societal factors such as family size, education and occupation of parents, and students' health, along with details of their behavioral absenteeism, are used as independent variables for the analysis. The study uses a standardized dataset of 1044 instances, with a total of 33 unique variables constituting the feature matrix. Machine learning (ML) algorithms such as Support Vector Machine (SVM), Random Forest (RF), Multilayer Perceptron (MLP), LightGBM, and Ensemble Stacking (ES) are used to assess the dataset. Finally, an ES model is developed and used for assessment. Comparatively, the ES model outclassed the other ML models with a test accuracy of 99.3%. Apart from accuracy, other evaluation metrics are used to assess the performance of the algorithms. 2023 IEEE.
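A sketch of the ensemble-stacking setup, with SVM, RF, and MLP as base learners and logistic regression as an assumed meta-learner (LightGBM, also used in the paper, is omitted to keep the sketch dependency-free); synthetic data with the stated shape (1044 x 33) stands in for the real dataset:

```python
# Stacking: base learners' predictions feed a logistic-regression meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1044, n_features=33, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("mlp", MLPClassifier(max_iter=500)),
    ],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
print("test accuracy:", stack.score(X_te, y_te))
```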
An exploration of the impact of Feature quality versus Feature quantity on the performance of a machine learning model
About 0.62 trillion bytes of data are generated every hour globally, and these figures keep increasing as a result of digitalization and social networks. Data ecosystems capture, store, and manage this big data, the basis being the ability to analyze the information and extract its value; this is a gold mine for companies researching and using the data, and it shows how essential and valuable data is in this growing age. For any machine learning model, the selection of data is critical. In this paper, several experiments have been performed to check the importance of data quality versus data quantity for model performance, comparing the richness of the data in terms of feature quality (e.g., features in images) against the sheer amount of data available to a machine learning model. Images are classified into two sets based on features, redundant features are removed, and a machine learning model is then trained. The model trained with non-redundant data gives the highest accuracy (>80%) in all cases versus the one trained with all features, proving the importance of feature variability rather than mere feature count. 2023 IEEE.
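A sketch of the core comparison the experiments perform: the same classifier trained on all features versus a de-correlated subset. The synthetic data and the |r| > 0.9 redundancy criterion are illustrative assumptions:

```python
# Compare accuracy on all features vs. a non-redundant feature subset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                           n_redundant=20, random_state=0)

# Drop one feature from each highly correlated pair (|r| > 0.9).
corr = np.corrcoef(X, rowvar=False)
drop = {j for i in range(X.shape[1]) for j in range(i + 1, X.shape[1])
        if abs(corr[i, j]) > 0.9}
keep = [i for i in range(X.shape[1]) if i not in drop]

clf = RandomForestClassifier(random_state=0)
print("all features:  ", cross_val_score(clf, X, y).mean())
print("non-redundant: ", cross_val_score(clf, X[:, keep], y).mean())
```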
Employee Attrition, Job Involvement, and Work Life Balance Prediction Using Machine Learning Classifier Models
Employee performance is an integral part of organizational success, for which talent management is essential, and employees' motivating factors depend on their performance. Certain variables have been observed as outliers, but none of those variables had previously been operationalized or predicted. This paper aims to create predictive models for employee attrition using classifier models for attrition rate, Job Involvement, and Work Life Balance. Job Involvement is specifically linked to employees' turnover intentions, that is, to a minimal turnover rate. To obtain a justifiable solution, this paper presents novel and accurate classification models. The Ridge Classifier was the first model used, classifying IBM employee attrition with an accuracy of 92.7%. Random Forest had the highest accuracy for predicting Job Involvement, with an accuracy rate of 62.3%. Similarly, Logistic Regression was selected to predict Work Life Balance, with a 64.8% accuracy rate, making it an acceptable classification model. The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023.
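A sketch of the attrition classifier described above, assuming the layout of the public IBM HR attrition dataset; the file path and encoding choices are hypothetical:

```python
# Ridge Classifier on one-hot-encoded IBM HR attrition features.
import pandas as pd
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("ibm_hr_attrition.csv")             # hypothetical path
y = (df["Attrition"] == "Yes").astype(int)           # binary attrition target
X = pd.get_dummies(df.drop(columns=["Attrition"]))   # encode categoricals

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RidgeClassifier().fit(X_tr, y_tr)
print("attrition test accuracy:", clf.score(X_te, y_te))
```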
Bipolar Disease Data Prediction Using Adaptive Structure Convolutional Neuron Classifier Using Deep Learning
The symptoms of bipolar disorder include extreme mood swings. It is among the most common mental health disorders and is often overlooked in all age groups. Bipolar disorder is often inherited, but not all siblings in a family will have it. In recent years, bipolar disorder has been characterised by unsatisfactory clinical diagnosis and treatment, with relapse rates and misdiagnosis as persistent problems, and it has yet to be precisely characterized. To overcome this issue, the proposed work uses an Adaptive Structure Convolutional Neuron Classifier (ASCNC) method to identify bipolar disorder. Imbalanced Subclass Feature Filtering (ISF2) for visualising bipolar data is intended to extract and communicate meaningful information from complex bipolar datasets in order to predict and improve day-to-day analytics. Using Scaled Features Chi-square Testing (SFCsT), the maximum-dimensional features in the bipolar dataset are extracted and assigned weights; to select the features with the largest chi-square scores, the chi-square value of each feature is calculated against the target. Before features are extracted for training and testing, the Softmax neural activation function is evaluated to compute the average weight of the features. Diagnostic criteria for bipolar disorder are discussed as an assessment strategy that helps diagnose the disorder, appropriate treatments for children and their families are then discussed, and some conclusions about managing people with bipolar disorder are presented. 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
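A sketch of the chi-square feature-scoring step (scoring each feature against the target and keeping the highest scorers), using scikit-learn's SelectKBest as an assumed stand-in for the paper's SFCsT component; the non-negative features are synthetic:

```python
# Score features against the target with chi-square and keep the top k.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(0)
X = rng.integers(0, 10, size=(200, 30)).astype(float)  # chi2 needs non-negative features
y = rng.integers(0, 2, size=200)                       # bipolar / control label

selector = SelectKBest(score_func=chi2, k=10).fit(X, y)
print("chi-square scores (first 5):", np.round(selector.scores_[:5], 2))
print("selected feature indices:", selector.get_support(indices=True))
```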
Forecasting Bitcoin Price During Covid-19 Pandemic Using Prophet and ARIMA: An Empirical Research
Bitcoin and other cryptocurrencies are alternative, speculative digital financial assets in today's growing fintech economy. Blockchain, a decentralized technology, is essential for ensuring ownership of bitcoin. These coins display high volatility and bubble-like behavior, and their widespread acceptance poses new challenges to the corporate community and the general public. Currency market traders and fintech researchers have classified cryptocurrencies as speculative bubbles. This study identifies the bitcoin bubble and its breaks during the COVID-19 pandemic. From 1st April 2018 to 31st March 2021, we used high-frequency data to calculate the daily closing price of bitcoin. Both the Prophet and ARIMA forecasting methods were employed. We also examined the explosive bubble and found structural breaks in bitcoin using the ADF, RADF, and SADF tests, detecting five breaks in bitcoin prices between 2018 and 2021. ARIMA(1,1,0) was the best-fitting model for price prediction. Applying both ARIMA and Facebook Prophet to the forecasting task, we found the Prophet model best at forecasting prices. 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
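A sketch of the reported ARIMA(1,1,0) fit using statsmodels; a simulated random walk stands in for the 2018-2021 daily bitcoin closes:

```python
# Fit ARIMA(1,1,0) to a daily price series and forecast 30 days ahead.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
prices = pd.Series(10000 + np.cumsum(rng.normal(0, 150, 1096)),
                   index=pd.date_range("2018-04-01", periods=1096, freq="D"))

model = ARIMA(prices, order=(1, 1, 0)).fit()
forecast = model.forecast(steps=30)   # 30-day-ahead price forecast
print(forecast.head())
```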
A Data Mining approach on the Performance of Machine Learning Methods for Share Price Forecasting using the Weka Environment
It is widely agreed that share prices are too volatile to be reliably predicted. Several experts have worked to improve the likelihood of generating a profit from share investing using various approaches and methods, but in practice these methods and algorithms often have too low a success rate to be helpful, with the extreme volatility of the marketplace a significant contributor. This article demonstrates the use of data mining tools such as WEKA to study share prices. For this research we selected an HCL Tech share. Multilayer Perceptron, Gaussian Process, and Sequential Minimal Optimization were employed as the three prediction methods; these algorithms, which develop optimal rules for share market analysis, are built into Weka. Using the open, high, low, close, and adjusted-close price attributes, we forecasted the share price for the next 30 days, compared the actual and predicted values of the three models side by side, and visualized the one-step-ahead and future forecasts of each model. The evaluation metrics RMSE, MAPE, MSE, and MAE were calculated, and the outcomes achieved by the three methods were contrasted. Our experimental findings show that Sequential Minimal Optimization provided more precise results than the other methods on this dataset. 2023 IEEE.
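Weka itself is Java-based; as a hedged Python analogue, scikit-learn's SVR solves the same support-vector regression problem that Weka's SMOreg attacks with sequential minimal optimization. The OHLC-style data below is synthetic:

```python
# Support-vector regression as a stand-in for Weka's SMOreg.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(250, 5))                  # open, high, low, close, adj-close
y = X[:, 3] * 1.01 + rng.normal(0, 0.05, 250)  # toy next-day close target

model = SVR(kernel="rbf").fit(X[:-30], y[:-30])
pred = model.predict(X[-30:])                  # 30-day hold-out "forecast"
print("MAE:", mean_absolute_error(y[-30:], pred))
```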
Heart Disease Prediction: A Computational Machine Learning Model Perspective
Relying on medical instruments to predict heart disease is either expensive or inefficient, and it is important to detect cardiac diseases early to avoid complications and reduce the death rate. This research compares various machine learning models using supervised learning techniques to find the model that gives the highest accuracy for heart disease prediction, considering both standalone and ensemble models. The six standalone models are logistic regression, Naive Bayes, support vector machine, K-nearest neighbors, artificial neural network, and decision tree; the three ensemble models are random forest, AdaBoost, and XGBoost. Feature engineering is done with principal component analysis (PCA). The experiments showed random forest giving the best prediction performance, with 92% accuracy. Random forest can handle both regression and classification tasks; the predictions it generates are accurate and simple to comprehend; it is capable of effectively handling big datasets; and utilizing numerous trees inhibits overfitting. Instead of searching for the most prominent feature when splitting a node, it seeks an optimal feature among a randomly selected feature set in order to minimize variance. For all these reasons, it performed better. 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
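A sketch of the winning configuration: PCA-based feature engineering feeding a random forest. The heart.csv path and column names are assumptions modeled on the common UCI-style layout:

```python
# Pipeline: standardize -> PCA (95% variance) -> random forest.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

df = pd.read_csv("heart.csv")                        # hypothetical path
X, y = df.drop(columns=["target"]), df["target"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = make_pipeline(StandardScaler(), PCA(n_components=0.95),
                     RandomForestClassifier(n_estimators=300, random_state=0))
pipe.fit(X_tr, y_tr)
print("test accuracy:", pipe.score(X_te, y_te))
```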
Efficient Method for Tomato Leaf Disease Detection and Classification based on Hybrid Model of CNN and Extreme Learning Machine
Throughout India, most people make a living through agriculture or a related industry. Crops and other agricultural output suffer significant quality and quantity losses when plant diseases are present, so detecting these illnesses is key to preventing losses in the harvest and quantity of agricultural products. Improving classification accuracy while decreasing computational time is the primary focus of the suggested method for identifying leaf disease in tomato plants. Pests and illnesses wipe out thousands of tons of tomatoes in India's harvest every year, and tomato leaf disease endangers the agricultural industry by generating substantial losses for producers. Scientists and engineers can improve their models for detecting tomato leaf diseases if they better understand how the algorithms learn to identify them. The proposed approach is a unique five-step method for detecting diseases on tomato leaves that begins with image preprocessing and proceeds through segmentation, feature extraction, feature selection, and model classification. Preprocessing is done to improve image quality. An improved K-Means image segmentation technique is proposed as a key intermediate step. The GLCM feature extraction approach is then used to extract relevant features from the segmented image, and Relief feature selection is used to discard irrelevant features. Finally, classification techniques such as CNN and ELM are used to categorize infected leaves. The proposed hybrid approach outperforms the two standalone models, CNN and ELM. 2023 IEEE.
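A sketch of the GLCM feature-extraction step in the pipeline above, using scikit-image; the grayscale leaf image here is synthetic:

```python
# Gray-level co-occurrence matrix (GLCM) texture features for a leaf image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
leaf_gray = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image

glcm = graycomatrix(leaf_gray, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)   # feature vector fed to the downstream classifier
```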
Swarm Intelligence Decentralized Decision Making In Multi-Agent System
This research aims to understand how groups of agents can make decisions collectively without relying on a central authority. The research focuses on developing algorithms and models for distributed problem solving, such as consensus-reaching and voting methods, and for coordinating actions among agents in a decentralized manner; it also looks into the application of these methods in fields such as distributed robotics, swarm intelligence, and multi-agent systems in smart cities and transportation networks. Swarm intelligence in decentralization is an emerging field that combines the principles of swarm intelligence and decentralized systems to design highly adaptive and scalable systems. These systems consist of a large number of autonomous agents that interact with each other and the environment through local communication and adapt their behaviors based on environmental cues. The decentralized nature of these systems makes them highly resilient and efficient, with potential applications in areas such as robotics, optimization, and blockchain technology. However, designing algorithms and communication protocols that enable effective interaction among agents without relying on a centralized controller remains a key challenge. This article proposes a model for swarm intelligence in decentralization, covering agents, communication, environment, learning, decision-making, and coordination, and presents a block diagram to visualize the key components of the system. The paper concludes by highlighting the potential benefits of swarm intelligence in decentralization and the need for further research in this area. 2023 IEEE.
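A minimal sketch of decentralized consensus-reaching, one of the mechanisms discussed: each agent repeatedly averages its value with its neighbours', with no central coordinator. The ring topology and values are illustrative assumptions:

```python
# Gossip-style averaging: every agent talks only to its local neighbours.
import numpy as np

values = np.array([3.0, 7.0, 1.0, 9.0, 5.0])     # one opinion per agent
n = len(values)
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring topology

for _ in range(50):                              # local gossip rounds
    values = np.array([
        (values[i] + sum(values[j] for j in neighbours[i])) / 3.0
        for i in range(n)
    ])

# Equal-weight averaging on a ring is doubly stochastic, so all agents
# converge toward the global mean without any central authority.
print("agent values after gossip:", np.round(values, 3))
```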
Metaheuristics-based Task Offloading Framework in Fog Computing for Latency-sensitive Internet of Things Applications
Internet of Things (IoT) applications have gained tremendous popularity within a short span of time due to the wide range of services they offer. In the present scenario, IoT applications rely on cloud computing platforms for data storage and task offloading; since IoT applications are latency-sensitive, depending on a remote cloud datacenter further increases the delay and response time. Many IoT applications therefore shift from cloud to fog computing for improved performance and lower latency, as fog enhances the Quality of Service (QoS) of connected applications by providing low latency. Different task offloading schemes in fog computing have been proposed in the literature to enhance the performance of IoT-fog-cloud integration. The proposed methodology focuses on constructing a metaheuristic-based task offloading framework in the three-tiered IoT-fog-cloud network to enable efficient execution of latency-sensitive IoT applications. The proposed work utilizes two effective optimization algorithms, the Flamingo Search Algorithm (FSA) and the Honey Badger Algorithm (HBA). Initially, the FSA is executed iteratively, optimizing the objective function in every iteration; the best solutions from this stage are then fine-tuned using the HBA, and the output of the HBA is the optimized outcome of the proposed framework. Finally, evaluations are carried out separately under different scenarios to prove the performance efficacy of the proposed framework, which achieves a task offloading time of 71 s as well as a lower degree of imbalance and lower latency compared with existing techniques. 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
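The abstract does not give the FSA and HBA update rules, so the skeleton below only illustrates the two-stage pattern it describes, a population-based global search whose best solution is then locally refined, on a toy offloading-cost objective; it is not the authors' exact algorithms:

```python
# Two-stage metaheuristic pattern: global population search, then local refinement.
import numpy as np

rng = np.random.default_rng(0)

def offloading_cost(x: np.ndarray) -> float:
    return float(np.sum(x ** 2))   # toy stand-in for latency / load imbalance

# Stage 1: iterative population search (FSA-like placeholder).
pop = rng.uniform(-5, 5, size=(30, 10))
for _ in range(100):
    pop += rng.normal(0, 0.3, pop.shape)           # stochastic exploration
    best = pop[np.argmin([offloading_cost(p) for p in pop])]
    pop = best + rng.normal(0, 0.5, pop.shape)     # re-centre on the best so far

# Stage 2: fine-tuning around the best solution (HBA-like placeholder).
for _ in range(200):
    cand = best + rng.normal(0, 0.05, best.shape)  # small local moves
    if offloading_cost(cand) < offloading_cost(best):
        best = cand

print("optimized cost:", offloading_cost(best))
```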
Trust Model for Cloud Using Weighted KNN Classification for Better User Access Control
Cloud computing is, for the most part, a service-based technology that provides Internet-based technological services. Cloud computing has seen explosive growth since its debut and is now integrated into a wide variety of online services, with the primary benefit of allowing thin clients to access resources and services. While this may appear favorable, there are many potential weak points for various types of attacks and cyber threats. Access control is one of the several protection layers available as part of cloud security solutions. To improve cloud security, this research introduces a unique access control mechanism that applies the concept of trust when granting users access to various resources. A KNN model was recently proposed for predicting trust, but the existing approach to classifying options is sensitive and unstable, particularly when the data are unbalanced. Furthermore, it has been found that using the exponent of the distance as a weighting scheme improves classification performance and lowers variance. This research therefore presents the prediction of users' trust levels using weighted K-nearest neighbors. According to the findings, the suggested approach is more effective in terms of throughput, cost, and delay. 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
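A sketch of the exponent-distance weighting idea: scikit-learn's KNeighborsClassifier accepts a callable mapping neighbour distances to weights, so exp(-d) can be plugged in directly. The trust-level data here is synthetic:

```python
# Weighted KNN where closer neighbours count exponentially more.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))             # user behaviour features (synthetic)
y = rng.integers(0, 3, size=300)          # trust level: low / medium / high

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(
    n_neighbors=7,
    weights=lambda d: np.exp(-d),         # exponent-of-distance weighting
)
knn.fit(X_tr, y_tr)
print("trust-level test accuracy:", knn.score(X_te, y_te))
```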