Applications of artificial intelligence to neurological disorders: Current technologies and open problems
Neurological disorders are caused by structural, biochemical, and electrical abnormalities involving the central and peripheral nervous systems. These disorders may be congenital, developmental, or acute in onset. Some conditions respond to surgical intervention, while most require pharmacological intervention and management and are also likely to be progressive. Owing to the high global burden of the most common neurological disorders, such as dementia, stroke, epilepsy, Parkinson's disease, multiple sclerosis, migraine, and tension-type headache, there exist multiple challenges in the early diagnosis, management, and prevention domains, which are further amplified in regions with inadequate medical services. In such situations, technology ought to play an inevitable role. In this chapter, we review artificial intelligence (AI) and machine learning (ML) technologies for mitigating the challenges posed by neurological disorders. To that end, we follow three steps. First, we present a taxonomy of neurological disorders, derived from well-established findings in the medical literature. Second, we identify challenges posed by each of the common disorders in the taxonomy that can be defined as computational problems. Finally, we review AI/ML algorithms that have either stood the test of time or shown promise in solving each of these problems. We also discuss open problems that are yet to have an effective solution. This chapter covers a wide range of disorders and AI/ML techniques with the goal of exposing researchers and practitioners in neurological disorders and AI/ML to each other's fields, leading to fruitful collaborations and effective solutions. 2022 Elsevier Inc. All rights reserved. -
Feature Subset Selection Techniques with Machine Learning
Scientists and analysts in machine learning and data mining face a problem when it comes to high-dimensional data processing. Variable selection is an excellent method to address this issue: it removes unnecessary and redundant data, reduces computation time, improves learning accuracy, and makes the learning strategy or data easier to comprehend. This chapter describes various commonly used variable selection evaluation metrics before surveying supervised, unsupervised, and semi-supervised variable selection techniques that are often employed in machine learning tasks, including classification and clustering. Finally, remaining variable selection difficulties are addressed. Variable selection is an essential topic in machine learning and pattern recognition, and numerous methods have been suggested. This chapter scrutinizes the performance of various variable selection techniques using public-domain datasets. We assessed the number of removed variables and the gain in learning performance with the selected variable selection techniques and then evaluated and compared each approach based on these measures. The evaluation criteria for the filter model are critical. Meanwhile, the embedded model selects variables during the learning model's training process, and the variable selection result is automatically output when training concludes. Subject to the constraint that the sum of the absolute values of the regression coefficients is less than a constant, Lasso minimizes the sum of squares of residuals, shrinking some regression coefficients exactly to zero. The variables are then trimmed using the AIC and BIC criteria, resulting in a dimension reduction. Lasso-based variable selection strategies, such as the Lasso in the regression model and others, provide a high level of stability, but Lasso techniques are prone to high computing costs or overfitting difficulties when dealing with high-dimensional data.
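The Lasso behaviour described in this abstract, coefficients driven exactly to zero under an L1 constraint, can be illustrated with a minimal coordinate-descent sketch in NumPy. The function names, penalty value, and synthetic data below are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

def soft_threshold(rho, lam):
    """Soft-thresholding operator: shrinks rho toward zero by lam,
    returning exactly 0.0 anywhere inside [-lam, lam]."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, n_iters=200):
    """Cyclic coordinate descent for (1/2n)*||y - Xw||^2 + lam*||w||_1.
    Assumes the columns of X are standardized."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iters):
        for j in range(p):
            # Partial residual that excludes feature j's current contribution.
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            w[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
    return w
```

On data where only the first two features matter, the remaining coefficients come out exactly zero, which is what makes Lasso a variable selection method rather than mere shrinkage.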
2022, The Author(s), under exclusive license to Springer Nature Switzerland AG. -
Data Mining-Based Variant Subset Features
A subset of the accessible variant data is chosen for the learning approaches during the variant selection procedure. It includes the most important variants, with the fewest dimensions, that contribute the most to learner accuracy. The benefit of variant selection is that essential information about a particular variant isn't lost; however, if only a limited number of variants are needed and the original variants are extremely varied, there tends to be a risk of information loss, since certain variants must be ignored. Dimensionality reduction based on variant extraction, on the other hand, allows the size of the variant space to be reduced without losing information from the original variant space. Filters, wrappers, and embedded approaches are the three categories of variant selection procedures. Wrapper strategies outperform filter methods because the variant selection procedure is tailored to the classifier to be used. Wrapper techniques, on the other hand, are too expensive to use for large variant spaces due to their high computational cost: each variant set must be evaluated using the trained classifier, which slows down the variant selection process. Filter techniques have a lower computing cost and are faster than wrapper procedures, but they have worse classification reliability and are better suited to high-dimensional datasets. Hybrid techniques, which combine the benefits of both filter and wrapper approaches, are now being developed. 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG. -
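The filter category mentioned above ranks variants by a classifier-independent score. A minimal sketch, assuming a Pearson-correlation criterion (one of many possible filter scores; the helper names are invented for illustration):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def filter_select(features, target, k):
    """Filter-style selection: rank features by |correlation| with the
    target and keep the top k. `features` maps name -> list of values.
    No classifier is trained, which is what makes this cheap."""
    ranked = sorted(features,
                    key=lambda f: abs(pearson(features[f], target)),
                    reverse=True)
    return ranked[:k]
```

Because the score never consults the downstream classifier, this runs fast on high-dimensional data, at the cost of the reliability gap versus wrappers that the abstract notes.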
Research Intention Towards Incremental Clustering
Incremental clustering is the process of grouping new incoming (incremental) data into existing classes or clusters. It clusters newly arriving data into similar groups of clusters. The existing K-means and DBSCAN clustering algorithms are inefficient at handling large dynamic databases because, for every change in the incremental database, they simply rerun from scratch, taking a long time to properly cluster the newly arrived data. This is very time-consuming, and it has also been observed that rerunning an existing algorithm frequently on updated databases may be too costly, so the standard K-means clustering algorithm is not suitable for a dynamic environment. That is why incremental versions of K-means and DBSCAN have been introduced in our work to overcome these challenges. To address the aforementioned issue, incremental clustering algorithms were developed to determine new cluster centers by simply computing the distance of new data from the means of the current clusters rather than rerunning the entire clustering procedure. Both the incremental K-means and incremental DBSCAN algorithms use this approach. As a result, the work specifies the delta change in the original database at which incremental K-means or DBSCAN clustering outperforms the prior techniques. 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG. -
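The incremental K-means idea described above, assigning each new point to the nearest existing mean and updating that mean in place instead of re-running the whole algorithm, can be sketched as follows (the names and the running-average update are illustrative; the chapter's exact formulation may differ):

```python
def nearest(point, centroids):
    """Index of the closest centroid by squared Euclidean distance."""
    return min(range(len(centroids)),
               key=lambda i: sum((p - c) ** 2
                                 for p, c in zip(point, centroids[i])))

def incremental_assign(point, centroids, counts):
    """Fold one new point into an existing clustering: pick the nearest
    cluster and shift its mean by a running average -- O(k) per point,
    with no full re-clustering of the database."""
    j = nearest(point, centroids)
    counts[j] += 1
    centroids[j] = [c + (p - c) / counts[j]
                    for c, p in zip(centroids[j], point)]
    return j
```

The running-average update gives exactly the mean the batch algorithm would compute for that cluster's members, which is why the shortcut is sound as long as assignments do not need to change.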
An Overview of Augmenting AI Application in Healthcare
Artificial intelligence (AI) is driving a paradigm shift in all spheres of the world by mimicking human cognitive behavior. The application of AI in healthcare is noteworthy because of the availability of voluminous data and mushrooming analytics techniques. The various applications of AI, especially machine learning and neural networks, are used across different areas of the healthcare industry. Healthcare disruptors are leveraging this opportunity and innovating in various fields such as drug discovery, robotic surgery, medical imaging, and the like. The authors discuss the application of AI techniques in a few areas like diagnosis, prediction, personal care, and surgery. Usage of AI is noteworthy in the COVID-19 pandemic situation too, where it assists physicians in resource allocation, predicting death rates, patient tracing, and estimating the life expectancy of patients. The other side of the coin is the ethical issues faced while using this technology: questions of data transparency, bias, security, and privacy of data remain unanswered. These can be handled better if strict policy measures are imposed for the safe handling of data and the public is educated about how treatment can be improved by using this technology, which will tend to build trust in the near future. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
AI Based Technologies for Digital and Banking Fraud During Covid-19
The only viral thing today is the Covid-19 virus, which has severely disrupted economic activity around the globe; businesses are experiencing this disruption irrespective of their domain or country of origin. One major paradigm shift is contactless business, which has increased digital transactions. This in turn has given hackers and fraudsters a lot of space to perform digital scams like phishing, spurious links, malware downloads, etc. These frauds have become an undesirable part of the increased digital transactions, and they need immediate attention and eradication from the system with instant results. In this pandemic situation, where social distancing is key to restricting the spread of the virus, digital payments are the safest and most appropriate payment method, and they need to be safe and secure for both parties. Artificial intelligence can be a saviour in this situation and can help combat digital fraud. The present study focuses on the different kinds of fraud which customers are facing, and the most feasible ways artificial intelligence can be incorporated to identify and eliminate such fraud to make digital payments more secure. Findings of the study suggest that the inclusion of AI did bring a change in the business environment. AI, once used mainly for entertainment, has become an essential part of business, transfiguring process-focused businesses into platform-focused ones. The primary requirement of AI is to study the customer experience and how to respond better in order to improve satisfaction. Recently, however, AI has been used not only for customer support; it has been observed that businesses have also taken it up as a marketing strategy to increase demand and sales. 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG. -
An Efficient Comparison on Machine Learning and Deep Neural Networks in Epileptic Seizure Prediction
Electroencephalography signals have been widely used in cognitive neuroscience to identify the brain's activity and behavior. These signals retrieved from the brain are most commonly used in detecting neurological disorders. Epilepsy is a neurological impairment in which the brain's activity becomes abnormal, causing seizures or unusual behavior. Methods: The benchmark BONN dataset is used to compare and assess the models. The investigations were conducted using traditional machine learning algorithms such as KNN, naive Bayes, decision tree, and random forest, alongside deep neural networks, to exhibit the DNN model's efficiency in epileptic seizure detection. Findings: Experiments and results prove that the deep neural network model outperforms the traditional machine learning algorithms, with an accuracy of 97% and an area under the curve of 0.994. Novelty: This research focuses on the efficiency of deep neural network techniques compared with traditional machine learning algorithms, so that clinicians can make intelligent decisions when predicting whether a patient is affected by epileptic seizures. The focus of this paper thus helps the research community dive into the opportunities for innovation in deep neural networks. This research work compares machine learning and deep neural network models, supporting clinical practitioners in the diagnosis and early treatment of epileptic seizure patients. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
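The area-under-curve figure reported above can be computed with the rank-statistic form of the ROC AUC, sketched here in plain Python (a generic metric implementation, not the authors' code):

```python
def roc_auc(labels, scores):
    """ROC AUC as the probability that a randomly chosen positive example
    is scored above a randomly chosen negative one (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.994, as reported for the DNN, means a random seizure segment outranks a random non-seizure segment 99.4% of the time.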
Twitter Sentiment Analysis Based on Neural Network Techniques
Our whole world is changing every day due to the present pace of innovation. One such innovation is the Internet, which has become a vital part of our lives and is being utilized everywhere. With the increasing demand to stay connected and relevant, we can see a rapid increase in the number of different social networking sites, where people shape and voice their opinions regarding daily issues. Aggregating and analysing these opinions regarding products and services, news, and so on is vital for today's businesses. Sentiment analysis, otherwise called opinion mining, is the task of detecting the sentiment behind an opinion. Today, analysing the sentiment around topics like products, services, movies, and daily social issues has become very important for businesses as it helps them understand their users. Twitter is the most popular microblogging platform where users give voice to their opinions. Sentiment analysis of Twitter data is a field that has gained a lot of interest over the past decade; it requires breaking up tweets to detect the sentiment of the user. This paper delves into various classification techniques to analyse Twitter data and extract sentiments. Different features like unigrams and bigrams are extracted to compare the accuracies of the techniques. Additionally, features are represented in dense and sparse vector representations, where the sparse vector representation is divided into presence and frequency feature types. This paper compares the accuracies of naive Bayes, decision tree, SVM, multilayer perceptron (MLP), recurrent neural network (RNN), and convolutional neural network (CNN) classifiers, whose validation accuracies range from 67.88% to 84.06%. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
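The unigram/bigram features with presence versus frequency representations described above can be sketched as follows (a generic bag-of-n-grams featurizer; whitespace tokenization is a simplifying assumption, and real tweet pipelines normalize hashtags, mentions, and URLs first):

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, joined with spaces."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def featurize(text, use_bigrams=True, presence=False):
    """Sparse bag-of-n-grams features for one tweet. `presence=True` gives
    the binary feature type; otherwise raw term frequencies are kept."""
    tokens = text.lower().split()
    feats = Counter(tokens)
    if use_bigrams:
        feats.update(ngrams(tokens, 2))
    if presence:
        feats = Counter({g: 1 for g in feats})
    return feats
```

The presence/frequency switch is exactly the distinction the paper evaluates: whether a classifier should care that a word occurred at all, or how often it occurred.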
An Empirical Study of Signal Transformation Techniques on Epileptic Seizures Using EEG Data
Signal processing is a mathematical approach to manipulating signals for varied applications. A mathematical relation that changes a signal from one form to another is called a transformation technique in signal processing. Digital processing of electroencephalography (EEG) signals plays a significant role in multiple applications, e.g., seizure detection, prediction, and classification, and in these applications transformation techniques play an essential role. Signal transformation techniques are used to improve transmission, storage efficiency, and subjective quality, and also to emphasize or detect components of interest in a measured EEG signal. The transformed signals result in better classification. This article provides a study of some of the important techniques used for the transformation of EEG data. In this work, we have studied six signal transformation techniques (linear regression, logistic regression, discrete wavelet transform, wavelet transform, fast Fourier transform, and principal component analysis with eigenvectors) to examine their impact on the classification of epileptic seizures. Linear regression, logistic regression, and the discrete wavelet transform provide a high accuracy of 100%, and the wavelet transform produced an accuracy of 96.35%. The proposed work is an empirical study whose main aim is to discuss some typical EEG signal transformation methods, examine their performance for epileptic seizure prediction, and eventually recommend the most acceptable transformation technique based on that performance. This work also highlights the advantages and disadvantages of all six transformation techniques, providing a precise comparative analysis in conjunction with the accuracy. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
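Of the transformations surveyed, the fast Fourier transform is the easiest to illustrate: a sketch of FFT-based band-power extraction from an EEG-like signal (the sampling rate, band edges, and test signal are illustrative, not taken from the study):

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Total spectral power of `signal` in the [lo, hi] Hz band,
    computed from the real FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].sum()

# A 10 Hz sine sampled at 256 Hz: its power should concentrate
# in the alpha band (8-13 Hz), not in the beta/gamma range.
fs = 256
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 10 * t)
```

Band powers like these are typical inputs to the downstream seizure classifiers the article discusses, since raw EEG samples are far less discriminative than their spectral content.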
Optimal DG Planning and Operation for Enhancing Cost Effectiveness of Reactive Power Purchase
The demand for reactive power support from distributed generation (DG) sources has become increasingly necessary due to the growing penetration of DG in the distribution network. Photovoltaic (PV) systems, fuel cells, micro-turbines, and other inverter-based devices can generate reactive power. While maximizing profit by selling as much electricity as possible to the distribution companies (DisCos) is the main motive of DG owners, technical parameters like voltage stability, voltage profile, and distribution losses are of primary concern to the DisCos. Local voltage regulation can reduce system losses and improve voltage stability, thereby improving the efficiency and reliability of the system. Participating in reactive power compensation reduces the revenue-generating active power from DG, thereby reducing DG owners' profits. Payment for reactive power is therefore being looked at as a possibility in recent times, and the optimal power factor (pf) of operation of DG becomes significant in this scenario. The study in this paper is presented in two parts. The first part proposes a novel method for determining optimal sizes and locations of distributed generation in a radial distribution network, based on a recent optimization algorithm, Teaching-Learning-Based Optimization with Learning Enthusiasm Mechanism (LebTLBO); the effectiveness of the method is compared with existing methods in the literature. The second part deals with the determination of the optimal pf of operation of DG sources to minimize reactive power cost, reduce distribution losses, and improve voltage stability. The approach's effectiveness has been tested on the IEEE 33- and 69-bus radial distribution systems. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
Ensemble Model of Machine Learning for Integrating Risk in Software Effort Estimation
The development of software involves expending a significant quantum of time, effort, cost, and other resources, and effort estimation is an important aspect. Though there are many software estimation models, risks are not adequately considered in the estimation process, leading to a wide gap between estimated and actual efforts. The higher the accuracy of the estimated effort, the better the compliance of the software project in terms of completion within budget and schedule. This study was undertaken to integrate risk into the effort estimation process so as to minimize the gap between the estimated and the actual efforts. This is achieved by considering the risk score as an effort driver in the computation of effort estimates and formulating a machine learning model. It was identified that the risk score reveals feature importance, and the predictive model integrating the risk score into the effort estimates indicated an enhanced fit. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
Removal of Occlusion in Face Images Using PIX2PIX Technique for Face Recognition
Occlusion in face images is a serious problem encountered by researchers working in different areas. An occluded face hinders feature extraction and thereby defeats face recognition systems. The level of complexity increases with changing gestures, different poses, and expressions. Occlusion of the face is one of the seldom-touched areas. In this paper, an attempt is made to recover face images from occlusion using deep learning techniques. Pix2pix, a conditional generative adversarial network, is used for image recovery. This method performs image-to-image translation, converting an occluded image into a non-occluded one. The Webface-OCC dataset is used for experimentation, and the efficacy of the proposed method is demonstrated. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
Prediction of Users Behavior on the Social Media Using XGBRegressor
The previous decade has seen the growth and advance of social media, which has rapidly and immensely expanded to infiltrate every side of users' lives. In addition, mobile networks empower clients to access multimedia social networks (MSNs) at any time, from anywhere, and in any capacity, including work and social gatherings. Accordingly, the interaction patterns between clients and MSNs are becoming more complete and more complicated. The goal of this paper is to examine the number of followers, likes, and posts for Instagram users. The dataset yielded several fundamental features, which were used to create the model with the MSNs. Natural language processing (NLP) features were then added, and finally features derived from a machine learning technique, XGBRegressor with the TF-IDF technique, were incorporated. We use two performance indicators to compare the different models: root mean square error (RMSE) and the R2 value. We achieved an average accuracy of 82% using XGBRegressor. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
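The TF-IDF weighting mentioned above can be sketched in a few lines. This is the textbook tf * log(N/df) variant; production pipelines (e.g., scikit-learn's vectorizer) use smoothed formulas, so treat this as illustrative:

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document TF-IDF weights: tf(t, d) * log(N / df(t)),
    where df(t) is the number of documents containing term t."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc.lower().split()))
    out = []
    for doc in docs:
        tf = Counter(doc.lower().split())
        out.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return out
```

Terms that appear in every document get weight log(1) = 0, so the regressor sees only the words that distinguish one post from another.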
Intelligent Water Drops Algorithm Hand Calculation Using a Mathematical Function
The intelligent water drops (IWD) approach is based on the dynamics of the events and changes that take place in a river system. The IWD method is a solution-oriented methodology in which a group of individuals moves in discrete stages from one node to the next until a complete population of solutions is generated. The velocity and soil attributes of the natural water drops in the IWD algorithm are modified over a sequence of transitions corresponding to water drop movement. In this study, the IWD algorithm is combined with a mutation-based local search to obtain the optimal values of numerical functions. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
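One common form of the velocity and soil update rules alluded to above is sketched below. The parameter names (a_v, b_v, c_v, and so on) follow the usual IWD literature, but the exact constants and variant used in this study are not specified, so this is an assumption-laden illustration rather than the authors' formulation:

```python
def iwd_velocity_update(vel, path_soil, a_v=1.0, b_v=0.01, c_v=1.0):
    """IWD velocity update: a drop gains more speed on paths with
    little soil (i.e., well-eroded, frequently chosen paths)."""
    return vel + a_v / (b_v + c_v * path_soil ** 2)

def iwd_soil_update(path_soil, vel, hud, a_s=1.0, b_s=0.01, c_s=1.0, rho=0.9):
    """The drop erodes soil from the path it traverses. `hud` is a
    heuristic undesirability of the move (e.g., edge length); a fast
    drop spends less time on the path and erodes more soil."""
    time = hud / vel
    delta = a_s / (b_s + c_s * time ** 2)
    return (1 - rho) * path_soil - rho * delta, delta
```

Together the two rules create the positive feedback that drives the search: good paths lose soil, which makes future drops faster on them, which erodes them further.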
A Comprehensive Study on Computer-Aided Cataract Detection, Classification, and Management Using Artificial Intelligence
The popularity of computer-aided detection is increasing day by day in the medical field. Cataract is a main cause of blindness across the world. Compared with other eye diseases, computer-aided development in the area of cataract remains underexplored. Several studies have addressed the automated detection of cataract. Many study groups have proposed computer-aided systems for detecting cataract, classifying its different types, identifying its stages, and calculating lens power prior to cataract surgery. With the advancement of artificial intelligence and machine learning, future cataract-related research can achieve very useful results in the coming days. This paper studies various recent research works related to cataract detection, classification, and grading using various artificial intelligence techniques. Comparisons are made based on the methodology used, the type of dataset, and the accuracy of the various methodologies. Based on the comparative study, the research gap is identified, and a new method is proposed which can overcome the disadvantages and gaps of the studied work. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
Face and Emotion Recognition from Real-Time Facial Expressions Using Deep Learning Algorithms
Emotions are faster than words in the field of human-computer interaction. Identifying human facial expressions can be performed with a multimodal approach that includes body language, gestures, speech, and facial expressions. This paper throws light on emotion recognition via facial expressions, as the face is the basic index for expressing our emotions. Though emotions are universal, they vary slightly from one person to another. Hence, the proposed model first detects the face using a histogram of oriented gradients (HOG) combined with a linear support vector machine (LSVM), and then the person's emotion is detected through deep learning techniques to increase the accuracy. The paper also highlights the data collection and preprocessing techniques. Images were collected using a simple HAAR classifier program, resized, and preprocessed by removing noise with a mean filter. The model achieved accuracies of 97% for face recognition and 92% for emotion recognition. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
Implementation of Morphological Gradient Algorithm for Edge Detection
This paper presents the implementation of a morphological gradient on the MATLAB and Colab platforms to analyze the time consumed for different sizes of grayscale images and structuring elements. A morphological gradient is an edge detection technique derived from the difference of two morphological operations called dilation and erosion. In order to apply the morphological operations to an image, padding is carried out, which involves inserting 0 for the dilation operation and 255 for erosion. The number of padded rows or columns is based on the size of the structuring element. Dilation and erosion are then applied to the image to obtain the morphological gradient. Since central processing unit (CPU) implementation follows sequential computing, the time consumption increases significantly with image size. To analyze the time consumption and to verify the performance across platforms, the morphological gradient algorithm is implemented in both MATLAB and Colab. The results demonstrate that the Colab implementation is ten times faster than the MATLAB implementation when a constant structuring element with varying image size is used, and five times faster when a constant image size with a varying structuring element size is used. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
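The dilation/erosion difference described above can be sketched in plain Python for a square structuring element, padding with 0 for dilation and 255 for erosion as the paper describes (the helper names and grayscale range are illustrative):

```python
def pad(img, k, value):
    """Pad a 2-D grayscale image with k border rows/columns of `value`."""
    w = len(img[0]) + 2 * k
    out = [[value] * w for _ in range(k)]
    for row in img:
        out.append([value] * k + list(row) + [value] * k)
    out += [[value] * w for _ in range(k)]
    return out

def morph(img, size, op, pad_value):
    """Dilation (op=max, pad 0) or erosion (op=min, pad 255) with a
    size x size square structuring element."""
    k = size // 2
    p = pad(img, k, pad_value)
    h, w = len(img), len(img[0])
    return [[op(p[i + di][j + dj] for di in range(size) for dj in range(size))
             for j in range(w)] for i in range(h)]

def gradient(img, size=3):
    """Morphological gradient: dilation minus erosion, bright at edges."""
    d = morph(img, size, max, 0)
    e = morph(img, size, min, 255)
    return [[a - b for a, b in zip(dr, er)] for dr, er in zip(d, e)]
```

On a step image the gradient is large only where the intensity jumps, which is exactly the edge response the paper times on MATLAB versus Colab.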
Limaçon Inspired Particle Swarm Optimization for Large-Scale Optimization Problem
Large-scale optimization problems are complex problems in the NP-hard class; they are not solvable by traditional methods in a reasonable time. The single machine total weighted tardiness scheduling problem (SMTWTSP) is a complex problem in this category: a set of different jobs with varying criteria must be scheduled on one machine, with the main aim of finding the minimum possible total weighted tardiness. The particle swarm optimization (PSO) algorithm has performed admirably in the field of optimization, and several new variants have been developed since its inception to solve complex optimization problems. This work proposes an influential local search (LS) technique inspired by the limaçon curve. The new local search is hybridized with PSO and named the Limaçon-inspired PSO (LimPSO) algorithm. The efficiency and accuracy of the designed LimPSO strategy are tested on the large-scale SMTWTSP, which shows that LimPSO can be considered an effective method for solving combinatorial optimization problems. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
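The canonical PSO update that LimPSO builds on moves each particle toward its own personal best and the swarm's global best. A minimal sketch (the limaçon local search itself is not reproduced here, and all parameter values are illustrative defaults):

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO iteration: inertia w keeps some of the old
    velocity, c1 pulls toward each particle's personal best, and c2
    pulls toward the swarm's global best. Updates are in place."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    return positions, velocities
```

Hybrids like LimPSO interleave a local search after steps like this one, refining the best positions found by the swarm before the next update.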
A Review on Preprocessing Techniques for Noise Reduction in PET-CT Images for Lung Cancer
Cancer is one of the leading causes of death. According to the World Health Organization, lung cancer was the most common cause of cancer death in 2020, with over 1.8 million deaths. Lung cancer mortality can, however, be reduced with early detection and treatment. Early detection requires screening and accurate detection of the tumor for staging and treatment planning. Owing to advances in medicine, nuclear medicine has moved to the forefront of precise lung cancer diagnosis; currently, PET/CT is the most preferred diagnostic modality for lung cancer detection. However, variable results and noise in the imaging modalities, and the complexity of the lung as an organ, have made it challenging to identify lung tumors in clinical images. In addition, factors such as respiration can blur the images and introduce other artifacts. Although nuclear medicine is at the forefront of diagnosing, evaluating, and treating various diseases, it is highly dependent on image quality, which has led to many approaches, such as the fusion of modalities, to evaluate disease. The fusion of diagnostic modalities can be accurate only when well-processed images are acquired, which is challenging due to differences between diagnostic machines and the external and internal factors associated with lung cancer patients. Current works focus on single imaging modalities for lung cancer detection, and no specific techniques have been identified individually for PET and CT images for attaining effective, noise-free hybrid imaging for lung cancer detection. Based on the survey, it has been identified that several image preprocessing filters are used for different noise types. However, for successful preprocessing, it is essential to identify the types of noise present in PET and CT images and the appropriate techniques that perform well for these modalities.
Therefore, the primary aim of the review is to identify efficient preprocessing techniques for noise and artifact removal in the PET/CT images that can preserve the critical features of the tumor for accurate lung cancer diagnosis. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
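As one concrete example of the preprocessing filters the review surveys, a median filter is a standard remedy for impulse (salt-and-pepper) noise in grayscale slices. A plain-Python sketch (border handling by replication is an illustrative choice; which filter suits PET versus CT depends on the noise model, as the review argues):

```python
def median_filter(img, k=3):
    """k x k median filter over a 2-D grayscale image. Impulse outliers
    are replaced by the local median, which preserves edges far better
    than mean filtering. Border pixels replicate the nearest edge."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [img[min(max(i + di, 0), h - 1)][min(max(j + dj, 0), w - 1)]
                      for di in range(-r, r + 1) for dj in range(-r, r + 1)]
            window.sort()
            out[i][j] = window[len(window) // 2]
    return out
```

A single corrupted pixel in an otherwise uniform region is fully removed, since the median of the window ignores the outlier, whereas a mean filter would smear it into its neighbours.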
Artificial Ecosystem-Based Optimization for Optimal Location and Sizing of Solar Photovoltaic Distribution Generation in Agriculture Feeders
In this paper, an efficient nature-inspired meta-heuristic algorithm called artificial ecosystem-based optimization (AEO) is proposed for solving the optimal location and sizing problem of solar photovoltaic (SPV) systems in a radial distribution system (RDS), towards minimizing grid dependency and greenhouse gas (GHG) emissions. With loss minimization as the main objective function, the locations and sizes of the SPV systems are optimized using the AEO algorithm. The results on a practical Indian 22-bus agriculture feeder and a 28-bus rural feeder highlight the need for optimally distributed SPV systems to maintain minimal grid dependency and reduce GHG emissions from conventional energy (CE) sources. Moreover, the results of AEO have been compared with different heuristic approaches, highlighting its superiority in terms of convergence characteristics and robustness in solving complex, nonlinear, multi-variable optimization problems in real time. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.