Browse Items (11810 total)
Supervised Learning-Based Data Classification and Incremental Clustering
Using supervised learning-based data classification and incremental clustering, an unknown example can be classified by the most common class among its K nearest examples. The KNN classifier's motto is, "Tell me who your neighbors are, and I will tell you who you are." The supervised learning-based data classification and incremental clustering technique is a simple yet powerful approach with applications in computer vision, pattern recognition, optical character recognition, facial recognition, genetic pattern recognition, and other fields. It is also known as a lazy learner because it does not build a model to classify a given test tuple until the very last minute. When we say yes or no, there may be an element of chance involved. However, the fact that a diner can recognise an unfamiliar food using his senses of taste, flavour, and smell is highly fascinating. At first, there can be a brief data collection phase: what are the most noticeable spices, aromas, and textures? Is the flavour of the food savoury or sweet? This information can then be used by the diner to compare the bite to other items he or she has had in the past. Earthy flavours may conjure up images of mushroom-based dishes, while briny flavours may conjure up images of fish. We view the discovery process through the lens of a slightly modified adage: if it smells like a duck and tastes like a chicken, you're probably eating chicken. This is a case of supervised learning in action, a core concept in machine learning (ML). 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
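The nearest-neighbor idea described above can be sketched in a few lines of plain Python. The toy data, the choice of k = 3, and the Euclidean distance metric are illustrative assumptions for this sketch, not details taken from the chapter.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training examples.

    `train` is a list of (features, label) pairs; distances are Euclidean.
    """
    dists = sorted((math.dist(x, query), label) for x, label in train)
    top_k = [label for _, label in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]

# Toy data: two well-separated groups labelled "a" and "b" (invented values).
train = [((1.0, 1.0), "a"), ((1.2, 0.8), "a"), ((0.9, 1.1), "a"),
         ((5.0, 5.0), "b"), ((5.2, 4.9), "b"), ((4.8, 5.1), "b")]

print(knn_predict(train, (1.1, 1.0)))  # near the "a" group -> a
print(knn_predict(train, (5.1, 5.0)))  # near the "b" group -> b
```

Note that no model is built in advance: all work happens at query time, which is exactly why KNN is called a lazy learner.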
Real-Time Application with Data Mining and Machine Learning
Data mining and machine learning form one of the most expressive research and application domains; nearly all real-time applications depend on them, directly or indirectly. Relevant fields include data analysis in finance, retail, and telecommunications; analysis of biological data; other scientific uses; and intrusion detection. Because it captures a great deal of data from sales, customer purchase histories, product transportation, consumption, and services, data mining (DM) has many applications in the retail industry. It is only logical that the amount of data collected will continue to climb as the Internet's accessibility, affordability, and popularity increase. In the retail industry, DM assists in the detection of customer buying behaviors and trends, resulting in improved customer service and increased customer retention and satisfaction. 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
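The detection of buying patterns mentioned above is commonly grounded in frequent-itemset mining. A minimal sketch over toy shopping baskets (invented data, and an arbitrary support threshold of 2) might look like the following; real systems use optimized algorithms such as Apriori or FP-growth over millions of transactions.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(baskets, min_support=2):
    """Count co-occurring item pairs across baskets and keep those that
    appear in at least `min_support` baskets."""
    counts = Counter()
    for basket in baskets:
        # Canonicalize pair order so ("bread","milk") == ("milk","bread").
        for pair in combinations(sorted(set(basket)), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

baskets = [["bread", "milk"], ["bread", "milk", "eggs"],
           ["milk", "eggs"], ["bread", "milk"]]
print(frequent_pairs(baskets))
```

Pairs such as bread-and-milk that recur across baskets survive the support filter, which is the raw material for association rules like "customers who buy bread also buy milk".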
A Brief Concept on Machine Learning
Machine learning is a subset of AI. It is a field of study aimed at building computer programs capable of performing intelligent actions based on prior facts or experiences. Most of us use machine learning techniques every day through Netflix, YouTube, and Spotify recommendation algorithms, Google and Yahoo search engines, and voice assistants like Google Home and Amazon Alexa. In supervised learning, all of the data is labeled, and algorithms learn to anticipate the output from the input. In unsupervised learning, the algorithms learn from the underlying structure of unlabelled data. In semi-supervised learning, because some data is labeled but not all, a combination of supervised and unsupervised techniques can be used. 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
Play and Play Spaces for Global Health, Happiness, and Well-Being
Play has a significant role in an individual's learning and holistic development. Play and recreation are a need and a right. Research on play indicates that its significance is neglected among the current generation. Play spaces are shrinking, and physical play is becoming extinct in most communities. This scenario may have led to poor physical and mental health outcomes. The proposed book chapter aims to present the role of play and play spaces in physical and mental health. The literature on play theories in child development shows the role of play in socioemotional, physical, and cognitive development. The current paper brings together literature on play across the lifespan, highlighting how play and recreation impact the physical and mental health of children, youth, adults, and older adults. The change in lifestyle patterns has contributed to the neglect of play and recreation. The paper throws light on the need for attention from professionals and policymakers for interventions and advocacy at both local and global levels in promoting play and preserving natural play spaces. The Editor(s) (if applicable) and The Author(s), under exclusive license to Taylor and Francis Pte Ltd. 2022.
Review on Segmentation of Facial Bone Surface from Craniofacial CT Images
Three-dimensional (3D) representation of the facial bone surface is needed in virtual surgical planning for orthognathic surgery. Segmentation of the facial bone surface from computed tomography (CT) images is the first step in developing the 3D model. With advances in computer vision techniques, various automatic and semi-automatic segmentation algorithms have been developed in recent years for segmenting the facial bone surface from craniofacial CT images. In this paper, the various segmentation techniques available in the literature for extracting the bone surface from 3D CT images for corrective jaw surgery are discussed. A review of these methods shows that each has its own merits and demerits. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
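Many of the automatic pipelines surveyed in such reviews begin with simple intensity thresholding, since bone is bright on CT. The sketch below applies a threshold to a tiny 2D grid of Hounsfield-unit values; the 300 HU cutoff and the data are illustrative assumptions, not clinical constants or methods from the paper.

```python
def threshold_segment(slice_hu, bone_threshold=300):
    """Return a binary mask marking pixels whose CT value (in HU) is at or
    above a bone-like threshold. `slice_hu` is a 2D list of HU values."""
    return [[1 if v >= bone_threshold else 0 for v in row] for row in slice_hu]

# A toy 3x3 CT slice: air (~-1000 HU), soft tissue (~20-40 HU), bone (>300 HU).
ct_slice = [[-1000, 40, 400],
            [30, 700, 1200],
            [-500, 20, 350]]
print(threshold_segment(ct_slice))  # -> [[0, 0, 1], [0, 1, 1], [0, 0, 1]]
```

Real segmentation methods refine such a crude mask with morphology, region growing, or learned models, which is where the merits and demerits discussed in the review come in.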
Data Classification and Incremental Clustering Using Unsupervised Learning
Data modelling, based on mathematics, statistics, and numerical analysis, is used to study clustering. In machine learning, clusters allude to hidden patterns; unsupervised learning is used to find clusters, and the resulting system is a data concept. Clustering is thus the unsupervised discovery of a hidden data concept. The computing needs of cluster analysis are increased because data mining deals with massive databases. These challenges have given rise to data mining clustering algorithms that are both powerful and widely applicable. Clustering is also known as data segmentation in some applications because it splits large datasets into categories based on their similarities. Outliers (values that are far away from any cluster) can be more interesting than typical examples; hence outlier detection can be done using clustering. Outlier detection applications include the identification of credit card fraud and the monitoring of unlawful activities in Internet commerce. With the K-means method, multiple runs with alternative initial cluster-center placements must be scheduled to identify near-optimal solutions. The global K-means algorithm addresses this problem: it is a deterministic global optimization approach that uses the K-means algorithm as a local search strategy and does not require any initial parameter values. Instead of selecting initial values for all cluster centers at random, as most global clustering algorithms do, the technique operates in stages, adding one new cluster center at a time. 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
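The sensitivity to initial centers that motivates the global variant is easy to see in plain K-means (Lloyd's algorithm), sketched below. The seeding rule (first k points) and the toy data are assumptions for the example, not the chapter's method.

```python
import math

def kmeans(points, k, iters=20):
    """Lloyd's K-means: seed with the first k points, then alternate
    nearest-center assignment and mean recomputation."""
    centers = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[nearest].append(p)
        # Recompute each center as the mean of its cluster (keep old center
        # if a cluster went empty).
        centers = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Two well-separated toy clusters, interleaved so the naive seeding happens
# to start with one point from each (invented data).
points = [(1.0, 1.0), (8.0, 8.0), (1.1, 0.9), (8.2, 7.9), (0.9, 1.2), (7.9, 8.1)]
centers, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # -> [3, 3]
```

Had both seeds landed in the same group, the result could be far from optimal, which is precisely why the global K-means strategy adds one carefully chosen center at a time instead of seeding all centers at random.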
Corporate Social Responsibility and Sustainability Development Mapping: Practical Application of the Beer and Nigel Roome Model
Toyota is one of the most highly appreciated companies for CSR performance in India. The central theme of this research paper is to apply Beer's and Nigel Roome's models of sustainability to the Japanese automobile manufacturer operating in India. In the first part of the analysis, we discuss the relevance of Beer's model to Toyota's CSR activity. In the second part, we make an effort to take forward the work of Basavaraj et al. (2018), who framed the questionnaire mapping the Nigel Roome model to the Toyota CSR case. The contribution of this case is to bring a new dimension to the data presentation and analysis of Roome's theoretical model on weak versus strong sustainability. The case analysis provides new insights for CSR managers to map their performance pictorially using Tableau software. Finally, we conclude that Toyota follows significant components of the cybernetic model. Due to Toyota's conscious effort, India has positioned itself substantially well in the CSR ranking. 2022, Springer Nature Singapore Pte Ltd.
Introduction to Data Mining and Knowledge Discovery
Data mining is a process of discovering necessary hidden patterns from large chunks of data that may be stored in multiple heterogeneous resources. It is enormously useful to business executives for making strategic decisions after analyzing the hidden truths in data. Data mining is one of the steps in the knowledge-creation process. A data mining system consists of a data warehouse, a database server, a data mining engine, a pattern analysis module, and a graphical user interface. Data mining techniques include mining frequent patterns and association rules, along with sequence analysis. Data mining can be applied on top of various kinds of intelligent data storage systems, such as data warehouses, and provides analysis processes for making useful strategic decisions. A data mining system faces various issues and challenges in large databases, and the field offers a great place to work for data researchers and developers. Classification in data mining is executed based on the examination of training data (i.e., objects whose class labels are predefined). With the help of a training set of objects with known class labels, it can find a model that predicts the class of an object with an unknown label. Classification models fall into a variety of categories, including the Bayesian model, decision trees, random forests, neural networks, and support vector machines (SVMs). By analyzing the most common class among the k closest samples, the K-Nearest Neighbor (KNN) technique aids in predicting the class of an object with an unknown label. It is an easy-to-use strategy that yields a solid classification result for almost any distribution. Naive Bayes theory also helps to perform classification; it is one of the fastest classification algorithms, capable of efficiently handling real-world discrete data. 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
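The Naive Bayes classifier mentioned above multiplies a class prior by per-feature likelihoods under an independence assumption. A minimal categorical sketch follows; the toy weather data, the add-one smoothing, and the smoothing denominator are illustrative choices, not details from the chapter.

```python
import math
from collections import Counter, defaultdict

def nb_train(rows, labels):
    """Fit a categorical Naive Bayes model: class counts plus
    per-(feature index, class) value counts."""
    prior = Counter(labels)
    cond = defaultdict(Counter)  # (feature index, class) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            cond[(i, y)][v] += 1
    return prior, cond

def nb_predict(model, row):
    """Pick the class maximizing log prior plus log likelihoods, with
    add-one smoothing so unseen feature values never zero out a class."""
    prior, cond = model
    total = sum(prior.values())
    best, best_lp = None, -math.inf
    for y, n in prior.items():
        lp = math.log(n / total)
        for i, v in enumerate(row):
            counts = cond[(i, y)]
            lp += math.log((counts[v] + 1) / (n + len(counts) + 1))
        if lp > best_lp:
            best, best_lp = y, lp
    return best

# Toy "play outside?" data (invented, not from the chapter).
rows = [("sunny", "hot"), ("sunny", "mild"), ("rainy", "mild"), ("rainy", "cool")]
labels = ["no", "no", "yes", "yes"]
model = nb_train(rows, labels)
print(nb_predict(model, ("sunny", "hot")))   # -> no
print(nb_predict(model, ("rainy", "cool")))  # -> yes
```

Training is a single counting pass over the data, which is why Naive Bayes is among the fastest classifiers for discrete data.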
Applications of artificial intelligence to neurological disorders: Current technologies and open problems
Neurological disorders are caused by structural, biochemical, and electrical abnormalities involving the central and peripheral nervous system. These disorders may be congenital, developmental, or acute onset in nature. Some of the conditions respond to surgical interventions, while most require pharmacological intervention and management, and are also likely to be progressive in nature. Owing to the high global burden of the most common neurological disorders, such as dementia, stroke, epilepsy, Parkinson's disease, multiple sclerosis, migraine, and tension-type headache, there exist multiple challenges in the early diagnosis, management, and prevention domains, which are further amplified in regions with inadequate medical services. In such situations, technology ought to play an inevitable role. In this chapter, we review artificial intelligence (AI) and machine learning (ML) technologies for mitigating the challenges posed by neurological disorders. To that end, we follow three steps. First, we present a taxonomy of neurological disorders, derived from well-established findings in the medical literature. Second, we identify challenges posed by each of the common disorders in the taxonomy that can be defined as computational problems. Finally, we review AI/ML algorithms that have either stood the test of time or shown promise in solving each of these problems. We also discuss open problems that do not yet have an effective solution. This chapter covers a wide range of disorders and AI/ML techniques with the goal of exposing researchers and practitioners in neurological disorders and AI/ML to each other's fields, leading to fruitful collaborations and effective solutions. 2022 Elsevier Inc. All rights reserved.
Feature Subset Selection Techniques with Machine Learning
Scientists and analysts in machine learning and data mining face a problem when it comes to high-dimensional data processing. Variable selection is an excellent method to address this issue: it removes unnecessary and repetitive data, reduces computation time, improves learning accuracy, and makes the learning strategy or data easier to comprehend. This chapter describes various commonly used variable selection evaluation metrics before surveying supervised, unsupervised, and semi-supervised variable selection techniques that are often employed in machine learning tasks, including classification and clustering. Finally, open difficulties in variable selection are addressed. Variable selection is an essential topic in machine learning and pattern recognition, and numerous methods have been suggested. This chapter scrutinizes the performance of various variable selection techniques on public-domain datasets. We assessed the number of variables removed and the gain in learning performance for each selected technique, and then evaluated and compared each approach based on these measures. The evaluation criteria for the filter model are critical. Meanwhile, the embedded model selects variables during the training of the learning model, and the variable selection result is output automatically when training concludes. Lasso minimizes the sum of squared residuals subject to the sum of the absolute values of the regression coefficients being less than a constant, which shrinks the regression coefficients. The variables are then trimmed using the AIC and BIC criteria, resulting in a dimension reduction. Lasso-based variable selection strategies, such as the Lasso in the regression model, provide a high level of stability, but Lasso techniques are prone to high computing costs or overfitting difficulties when dealing with high-dimensional data. 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
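A filter-model variable selection, as discussed above, scores each feature independently of any learner. The sketch below ranks features by absolute Pearson correlation with the target; the scoring metric and the toy columns are assumptions for illustration, not the chapter's benchmark setup.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def filter_rank(columns, target):
    """Filter-style selection: score each feature column by |correlation|
    with the target and return feature indices, best first."""
    scores = [abs(pearson(col, target)) for col in columns]
    return sorted(range(len(columns)), key=lambda i: -scores[i])

target = [1.0, 2.0, 3.0, 4.0]
columns = [
    [2.0, 4.1, 5.9, 8.0],    # strongly correlated with the target
    [1.0, -1.0, 1.0, -1.0],  # weakly related noise
]
print(filter_rank(columns, target))  # -> [0, 1]
```

Because each score is computed without training a model, the cost is low, matching the filter model's speed advantage over wrappers noted in the chapter.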
Data Mining-Based Variant Subset Features
A subset of the accessible variant data is chosen for the learning approach during the variant selection procedure. It includes the most important variants with the fewest dimensions and contributes the most to learner accuracy. The benefit of variant selection is that essential information about a particular variant is not lost; however, if only a limited number of variants are needed and the original variants are extremely varied, there is a risk of information loss, since certain variants must be ignored. Dimensionality reduction based on variant extraction, on the other hand, allows the size of the variant space to be reduced without losing information from the original variant space. Filters, wrappers, and embedded approaches are the three categories of variant selection procedures. Wrapper strategies outperform filter methods because the variant selection procedure is tailored to the classifier to be used. Wrapper techniques, however, are too expensive to use for large variant spaces due to their high computational cost: each variant set must be evaluated using the trained classifier, which slows down the variant selection process. Filter techniques have a lower computing cost and are faster than wrapper procedures, but they have worse classification reliability and are better suited to high-dimensional datasets. Hybrid techniques, which combine the benefits of both filter and wrapper approaches, are now being organized. 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
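The filter/wrapper trade-off above can be made concrete with a tiny wrapper: greedy forward selection that evaluates each candidate feature set by retraining and scoring the actual classifier. The 1-NN evaluator, leave-one-out scoring, and toy data are all illustrative assumptions.

```python
import math

def loo_accuracy(rows, labels, feats):
    """Leave-one-out accuracy of a 1-NN classifier restricted to the
    feature indices in `feats` -- the wrapper's inner evaluation."""
    correct = 0
    for i, row in enumerate(rows):
        j = min((j for j in range(len(rows)) if j != i),
                key=lambda j: math.dist([rows[j][f] for f in feats],
                                        [row[f] for f in feats]))
        correct += labels[j] == labels[i]
    return correct / len(rows)

def forward_select(rows, labels, n_feats):
    """Greedy wrapper: repeatedly add the feature whose inclusion gives the
    classifier its best evaluated accuracy."""
    selected, remaining = [], list(range(len(rows[0])))
    while remaining and len(selected) < n_feats:
        best = max(remaining,
                   key=lambda f: loo_accuracy(rows, labels, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Feature 0 separates the classes; feature 1 is noise (invented data).
rows = [(0.0, 5.0), (0.1, -3.0), (0.2, 4.0), (5.0, -2.0), (5.1, 3.0), (5.2, -4.0)]
labels = ["a", "a", "a", "b", "b", "b"]
print(forward_select(rows, labels, n_feats=1))  # -> [0]
```

Every candidate set triggers a full classifier evaluation, which is the computational cost that makes wrappers impractical for large variant spaces.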
Research Intention Towards Incremental Clustering
Incremental clustering is the process of grouping new incoming, or incremental, data into classes or clusters. It mainly clusters randomly arriving new data into similar groups of clusters. The existing K-means and DBSCAN clustering algorithms are inefficient at handling large dynamic databases because, for every change in the incremental database, they simply rerun from scratch, taking a great deal of time to properly cluster the newly arriving data. It has also been realized that frequently reapplying the existing algorithms to updated databases may be too costly, so the existing K-means clustering algorithm is not suitable for a dynamic environment. That is why incremental versions of K-means and DBSCAN have been introduced in our work to overcome these challenges. To address this issue, incremental clustering algorithms measure new cluster centers by simply computing the distance of new data from the means of the current clusters rather than rerunning the entire clustering procedure. Both the K-means and DBSCAN algorithms use a similar approach. As a result, the work specifies the delta change in the original database at which incremental K-means or DBSCAN clustering outperforms prior techniques. 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
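The incremental idea — comparing a new point against existing cluster means instead of reclustering everything — can be sketched as a running-mean update. The starting centers, their point counts, and the toy stream are assumptions for the example.

```python
import math

class IncrementalKMeans:
    """Assign each new point to its nearest existing cluster and shift that
    cluster's mean incrementally, instead of re-running full K-means."""

    def __init__(self, centers, counts):
        # Centers and per-cluster point counts from a previous clustering
        # run (assumed known).
        self.centers = [list(c) for c in centers]
        self.counts = list(counts)

    def add(self, point):
        i = min(range(len(self.centers)),
                key=lambda j: math.dist(point, self.centers[j]))
        self.counts[i] += 1
        n = self.counts[i]
        # Running-mean update: new_mean = old_mean + (x - old_mean) / n
        self.centers[i] = [c + (x - c) / n
                           for c, x in zip(self.centers[i], point)]
        return i

# Centers summarizing 3 points each, from an earlier run (invented values).
km = IncrementalKMeans(centers=[(1.0, 1.0), (8.0, 8.0)], counts=[3, 3])
print(km.add((1.2, 0.8)))  # joins cluster 0
print(km.centers[0])       # cluster 0's mean shifts slightly toward the point
```

Each insertion costs one distance computation per cluster, versus a full pass over the entire database for a from-scratch rerun — the delta-change trade-off the chapter quantifies.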
An Overview of Augmenting AI Application in Healthcare
Artificial intelligence (AI) is driving a paradigm shift in all spheres of the world by mimicking human cognitive behavior. The application of AI in healthcare is noteworthy because of the availability of voluminous data and mushrooming analytics techniques. Various applications of AI, especially machine learning and neural networks, are used across different areas of the healthcare industry. Healthcare disruptors are leveraging this opportunity and are innovating in fields such as drug discovery, robotic surgery, and medical imaging. The authors discuss the application of AI techniques in areas like diagnosis, prediction, personal care, and surgery. AI usage is noteworthy in the COVID-19 pandemic as well, where it assists physicians in resource allocation, death rate prediction, patient tracing, and estimating patients' life expectancy. The other side of the coin is the ethical issues faced while using this technology: questions of data transparency, bias, security, and privacy remain unanswered. These can be handled better if strict policy measures are imposed for the safe handling of data and the public is educated about how treatment can be improved by using this technology, which will tend to build trust in the near future. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
AI-Based Technologies for Digital and Banking Fraud During Covid-19
The only viral thing today is the Covid-19 virus, which has severely disrupted economic activity around the globe; businesses are suffering irrespective of their domain or country of origin. One major paradigm shift is contactless business, which has increased digital transactions. This in turn has given hackers and fraudsters considerable room to perform digital scams like phishing, spurious links, and malware downloads. These frauds have become an undesirable part of increased digital transactions and need immediate attention and eradication from the system, with instant results. In this pandemic situation, where social distancing is key to restricting the spread of the virus, digital payments are the safest and most appropriate payment method, and they need to be safe and secure for both parties. Artificial intelligence can be a saviour in this situation, helping to combat digital fraud. The present study focuses on the different kinds of fraud that customers are facing and the most plausible ways artificial intelligence can be incorporated to identify and eliminate such fraud, making digital payments more secure. Findings of the study suggest that the inclusion of AI did bring a change in the business environment: AI once used for entertainment has become an essential part of business, transfiguring process-focused businesses into platform-focused ones. The primary requirement of AI is to study the customer experience and respond better to improve satisfaction. Recently, however, AI has been used not only for customer support; businesses have also adopted it as a marketing strategy to increase demand and sales. 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
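A very simple building block behind such fraud screens is statistical anomaly detection on transaction amounts. The sketch below flags amounts far from the historical mean in z-score terms; the data and the threshold of 2.0 are illustrative assumptions, and production systems use far richer features and models.

```python
import statistics

def flag_anomalies(amounts, z_thresh=2.0):
    """Return indices of transaction amounts whose z-score against the
    batch mean exceeds `z_thresh` (an illustrative cutoff)."""
    mu = statistics.mean(amounts)
    sd = statistics.pstdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sd and abs(a - mu) / sd > z_thresh]

# Mostly routine payments with one outsized transfer (invented data).
amounts = [20.0, 22.5, 19.0, 21.0, 18.5, 20.5, 5000.0]
print(flag_anomalies(amounts))  # -> [6]
```

A flagged transaction would then be routed for stronger checks (step-up authentication, manual review) rather than blocked outright, keeping digital payments both smooth and secure.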
An Efficient Comparison on Machine Learning and Deep Neural Networks in Epileptic Seizure Prediction
Electroencephalography (EEG) signals have been widely used in cognitive neuroscience to identify the brain's activity and behavior. These signals retrieved from the brain are most commonly used in detecting neurological disorders. Epilepsy is a neurological impairment in which the brain's activity becomes abnormal, causing seizures or unusual behavior. Methods: The benchmark BONN dataset is used to compare and assess the models. The investigations were conducted using traditional machine learning algorithms such as KNN, naive Bayes, decision tree, and random forest, alongside deep neural networks, to exhibit the DNN model's efficiency in epileptic seizure detection. Findings: Experiments and results prove that the deep neural network model outperforms traditional machine learning algorithms, with an accuracy of 97% and an area-under-curve value of 0.994. Novelty: This research focuses on the efficiency of deep neural network techniques compared with traditional machine learning algorithms, enabling clinicians to make intelligent decisions about whether a patient is affected by epileptic seizures. The focus of this paper thus helps the research community dive into opportunities for innovation in deep neural networks. This work compares machine learning and deep neural network models, supporting clinical practitioners in the diagnosis and early treatment of epileptic seizure patients. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Twitter Sentiment Analysis Based on Neural Network Techniques
Our whole world is changing every day due to the present pace of innovation. One such innovation is the Internet, which has become a vital part of our lives and is used everywhere. With the increasing demand to stay connected and relevant, we can see a rapid increase in the number of social networking sites, where people shape and voice their opinions on daily issues. Aggregating and analysing these opinions regarding products and services, news, and so on is vital for today's businesses. Sentiment analysis, otherwise called opinion mining, is the task of detecting the sentiment behind an opinion. Today, analysing the sentiment of different topics like products, services, movies, and daily social issues has become very important for businesses, as it helps them understand their users. Twitter is the most popular microblogging platform where users give voice to their opinions. Sentiment analysis of Twitter data is a field that has gained a lot of interest over the past decade. This requires breaking up tweets to detect the sentiment of the user. This paper delves into various classification techniques to analyse Twitter data and extract sentiments. Different features like unigrams and bigrams are also extracted to compare the accuracies of the techniques. Additionally, the features are represented in dense and sparse vector representations, where the sparse vector representation is divided into presence and frequency feature types, which are also used for the same comparison. This paper compares the accuracies of Naive Bayes, decision tree, SVM, multilayer perceptron (MLP), recurrent neural network (RNN), and convolutional neural network (CNN), with validation accuracies ranging from 67.88% to 84.06% across the different classification and neural network techniques. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
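The unigram/bigram and presence-versus-frequency distinctions above can be sketched as a small feature extractor for one tokenized tweet. The tokenizer (plain whitespace split) and the example tweet are assumptions for illustration.

```python
from collections import Counter

def extract_features(tokens, use_bigrams=True, presence=False):
    """Build unigram (and optionally bigram) features for one tweet.

    With `presence=True`, every observed feature is clamped to 1
    (presence features); otherwise raw counts are kept (frequency features).
    """
    feats = Counter(tokens)                  # unigram counts
    if use_bigrams:
        feats.update(zip(tokens, tokens[1:]))  # adjacent-token bigrams
    if presence:
        feats = Counter({f: 1 for f in feats})
    return dict(feats)

tokens = "not good not bad".split()
print(extract_features(tokens))                 # frequency features
print(extract_features(tokens, presence=True))  # presence features
```

Bigrams such as ("not", "good") are what let a classifier distinguish negated phrases that unigrams alone would confuse, which is why the comparison in the paper treats them as separate feature sets.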
An Empirical Study of Signal Transformation Techniques on Epileptic Seizures Using EEG Data
Signal processing is a mathematical approach to manipulating signals for varied applications. A mathematical relation that changes a signal from one form to another is called a transformation technique in signal processing. Digital processing of electroencephalography (EEG) signals plays a significant role in multiple applications, e.g., seizure detection, prediction, and classification, and in these applications transformation techniques play an essential role. Signal transformation techniques are used to improve transmission, storage efficiency, and subjective quality, and also to emphasize or detect components of interest in a measured EEG signal. The transformed signals lead to better classification. This article provides a study of some of the important techniques used for the transformation of EEG data. In this work, we have studied six signal transformation techniques — linear regression, logistic regression, discrete wavelet transform, wavelet transform, fast Fourier transform, and principal component analysis with eigenvectors — to examine their impact on the classification of epileptic seizures. Linear regression, logistic regression, and the discrete wavelet transform provide a high accuracy of 100%, and the wavelet transform produced an accuracy of 96.35%. The proposed work is an empirical study whose main aim is to discuss some typical EEG signal transformation methods, examine their performance for epileptic seizure prediction, and ultimately recommend the most acceptable transformation technique based on performance. This work also highlights the advantages and disadvantages of all of these transformation techniques, providing a precise comparative analysis in conjunction with the accuracy. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
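Of the transforms listed above, the Fourier transform is the easiest to sketch from scratch: it re-expresses a time-domain signal as magnitudes per frequency bin. The direct O(n^2) summation below computes the same spectrum an FFT would; the 8-sample sine wave is an invented stand-in for an EEG segment.

```python
import cmath
import math

def dft_magnitudes(signal):
    """Magnitude spectrum via the discrete Fourier transform, computed by
    direct summation; an FFT returns the same values faster."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(signal)))
            for k in range(n)]

# A pure tone completing one cycle over 8 samples: energy concentrates in
# bin 1 (and its mirror, bin 7), each with magnitude n/2 = 4.
signal = [math.sin(2 * math.pi * t / 8) for t in range(8)]
mags = dft_magnitudes(signal)
print([round(m, 3) for m in mags])  # -> [0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.0]
```

For seizure work, band powers computed from such a spectrum (or from wavelet coefficients, which add time localization) become the features fed to the classifiers compared in the study.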
Optimal DG Planning and Operation for Enhancing Cost Effectiveness of Reactive Power Purchase
The demand for reactive power support from distributed generation (DG) sources has become increasingly necessary due to the growing penetration of DG in the distribution network. Photovoltaic (PV) systems, fuel cells, micro-turbines, and other inverter-based devices can generate reactive power. While maximizing profits by selling as much electricity as possible to the distribution companies (DisCos) is the main motive for DG owners, technical parameters like voltage stability, voltage profile, and distribution losses are of primary concern to the DisCos. Local voltage regulation can reduce system losses and improve voltage stability, thereby improving the efficiency and reliability of the system. Participating in reactive power compensation reduces the revenue-generating active power from DG, thereby reducing DG owners' profits. Payment for reactive power is therefore being looked at as a possibility in recent times, and the optimal power factor (pf) of DG operation becomes significant in this scenario. The study in this paper is presented in two parts. The first part proposes a novel method for determining optimal sizes and locations of distributed generation in a radial distribution network. The proposed method is based on a recent optimization algorithm, Teaching-Learning-Based Optimization with Learning Enthusiasm Mechanism (LebTLBO). The effectiveness of the method has been compared with existing methods in the literature. The second part deals with the determination of the optimal pf of DG operation to minimize reactive power cost, reduce distribution losses, and improve voltage stability. The approach's effectiveness has been tested on the IEEE 33- and 69-bus radial distribution systems. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Ensemble Model of Machine Learning for Integrating Risk in Software Effort Estimation
The development of software involves expending a significant quantum of time, effort, cost, and other resources, and effort estimation is an important aspect. Though there are many software estimation models, risks are not adequately considered in the estimation process, leading to a wide gap between estimated and actual effort. The higher the accuracy of the estimated effort, the better the compliance of the software project in terms of completion within budget and schedule. This study was undertaken to integrate risk into the effort estimation process so as to minimize the gap between estimated and actual effort. This is achieved by treating the risk score as an effort driver in the computation of effort estimates and formulating a machine learning model. It was identified that the risk score reveals feature importance, and the predictive model integrating the risk score into the effort estimates indicated an enhanced fit. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
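The idea of a risk score acting as an effort driver can be sketched with a COCOMO-style multiplier. The basic-COCOMO organic-mode constants (a = 2.4, b = 1.05) are standard textbook values, but the 10%-per-risk-point scaling is a made-up assumption for illustration, not the chapter's fitted model.

```python
def estimate_effort(size_kloc, risk_score, a=2.4, b=1.05):
    """Nominal effort (person-months) from size, scaled by a risk-based
    effort multiplier. The risk scaling is an illustrative assumption."""
    nominal = a * size_kloc ** b              # effort from size alone
    risk_multiplier = 1.0 + 0.1 * risk_score  # each risk point adds 10%
    return nominal * risk_multiplier

print(round(estimate_effort(10, risk_score=0), 1))  # -> 26.9 (nominal)
print(round(estimate_effort(10, risk_score=3), 1))  # -> 35.0 (risk-adjusted)
```

In the chapter's setting the risk score enters as one more feature in a learned model rather than a fixed multiplier, but the effect is the same: riskier projects receive larger, more realistic estimates, narrowing the estimated-versus-actual gap.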
Removal of Occlusion in Face Images Using PIX2PIX Technique for Face Recognition
Occlusion of face images is a serious problem encountered by researchers working in different areas. An occluded face creates a hindrance in extracting features, thereby defeating face recognition systems. The level of complexity increases with changing gestures, poses, and expressions. Occlusion of the face is one of the seldom-touched areas. In this paper, an attempt is made to recover face images from occlusion using deep learning techniques. Pix2pix, a conditional generative adversarial network, is used for image recovery: it translates one image into another, converting an occluded image into a non-occluded image. The Webface-OCC dataset is used for experimentation, and the efficacy of the proposed method is demonstrated. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.