Browse Items (9795 total)
An Analysis of Levenshtein Distance Using Dynamic Programming Method
The edit distance (or Levenshtein distance) between two words is the minimum number of substitutions, insertions and deletions of symbols required to transform one word into the other. This research addresses the problem of computing the edit distance of a regular language, that is, the set of words recognised by a finite automaton. The Levenshtein distance is a straightforward metric for measuring the distance between two words using string approximation. After observing its efficiency, the approach was refined by grouping certain comparable letters and minimising the weighted difference between members of the same group. The findings showed a considerable improvement over the plain Levenshtein distance method. 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
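As a rough illustration of the metric this abstract builds on (the classic algorithm, not the authors' refined variant), a minimal dynamic-programming sketch of Levenshtein distance in Python:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of substitutions, insertions and deletions
    needed to turn string a into string b (classic DP, O(len(a)*len(b)))."""
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, start=1):
        curr = [i]                  # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[-1]
```

The paper's refinement (grouping comparable letters and weighting substitutions between members of the same group) would amount to replacing the fixed `cost = 1` with a per-pair weight table.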
Application of data analytics principles in healthcare
Information technology has transformed the healthcare field worldwide. Data analytics tools are now commonly deployed across many areas of the healthcare industry. Applying data analytics principles appropriately in the medical sciences transforms the mere storage of medical records into the discovery of drugs. Data science and analytics are essential tools because they can help make better decisions when it comes to spending and reducing inefficiencies in healthcare. The proposed model of healthcare data analytics provides a framework to accelerate the adoption and implementation of predictive analytics in healthcare. Healthcare data analytics can be applied to prove formulated hypotheses, test them using standard analytics models, and predict patient health conditions. It can be used to classify patients at risk of developing diseases such as diabetes, asthma, and other life-long illnesses. In spite of the challenges faced while applying data science predictive analytics in the healthcare environment, there is an enormous opportunity for its usage in providing quality healthcare for patients. BEIESP.
Service request scheduling based on quantification principle using conjoint analysis and Z-score in cloud
Service request scheduling has a major impact on the performance of the service processing design in a large-scale distributed computing environment like cloud systems. It is desirable to have a service request scheduling principle that evenly distributes the workload among the servers, according to their capacities. The capacities of the servers are termed high or low relative to one another; therefore, there is a need to quantify the server capacity to overcome this subjective assessment. Subsequently, a method to split and distribute the service requests based on this quantified server capacity is also needed. The novelty of this research paper lies in addressing these requirements by devising a service request scheduling principle for a heterogeneous distributed system using appropriate statistical methods, namely conjoint analysis and Z-score. Suitable experiments were conducted, and the experimental results show considerable improvement in the performance of the designed service request scheduling principle compared with several existing principles. Areas of further improvement have also been identified and presented. Copyright 2018 Institute of Advanced Engineering and Science. All rights reserved.
Capacity Aware Active Monitoring Load Balancing Principle for Private Cloud
Virtual machines (VMs) are the basic compute elements in cloud computing. Load balancing principles associated with a job scheduler assign incoming requests to these computing elements. Deploying an effective load balancing principle delivers better performance and, ultimately, a high level of user satisfaction. Assigning a request load proportional to the capacity of each VM is a fair principle that can be the objective of any load balancing scheme. The active monitoring load balancing principle assigns requests to a server based on a pre-computed threshold limit. This paper presents a technique for assessing the capacity of the VMs based on a common attribute. The work measures each VM's processing ability as a percentage using the statistical method called Z-score. A threshold is quantified and the requests are proportioned based on this threshold value; each server is then assigned its proportion of the requests. Suitable experiments were conducted using the Requests Assignment Simulator (RAS), a customized cloud simulator. The results show that the proposed principle performs comparatively better than several existing load balancing principles. Areas of future extension of this work were also identified. 2021, The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
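To illustrate the general idea of Z-score-based capacity quantification described above (a sketch under assumptions, not the paper's actual method: the shift into a positive range and the proportioning rule are illustrative choices):

```python
from statistics import mean, pstdev

def capacity_shares(capacities):
    """Convert raw VM capacities (e.g., MIPS ratings) into Z-scores,
    then into positive weights summing to 1, used to proportion requests."""
    mu, sigma = mean(capacities), pstdev(capacities)
    z = [(c - mu) / sigma if sigma else 0.0 for c in capacities]
    shifted = [v - min(z) + 1.0 for v in z]  # make all scores positive
    total = sum(shifted)
    return [s / total for s in shifted]

def assign_requests(n_requests, capacities):
    """Split n_requests across servers in proportion to quantified capacity."""
    return [round(n_requests * s) for s in capacity_shares(capacities)]
```

With three VMs rated 1000, 2000 and 3000, `assign_requests(100, ...)` hands the most capable VM roughly half of the requests.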
Evaluation and applying feature extraction techniques for face detection and recognition
Face detection and recognition have become important in computer vision for recognising and analysing images, reconstructing them in 3D, and labelling them. Feature extraction is usually the first stage of detection and recognition in image processing and computer vision: it converts the image into quantitative data, which can then be used for labelling, classifying and recognising with a model. In this paper, the performance of feature extraction techniques, viz. Local Binary Pattern (LBP), Histogram of Oriented Gradients (HOG) and the Convolutional Neural Network (CNN), is evaluated for face detection and recognition. The experiments were conducted on a data set addressing issues such as pose variation, facial expression and light intensity. The efficiency of the algorithms was evaluated based on computational time and accuracy rate. 2018 Institute of Advanced Engineering and Science. All rights reserved.
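As a minimal sketch of one of the techniques compared above, a basic 3x3 Local Binary Pattern in NumPy (the textbook formulation; the paper's exact LBP variant and parameters are not specified in the abstract):

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: each interior pixel is encoded by comparing its
    8 neighbours against it, yielding an 8-bit code per pixel."""
    g = np.asarray(gray, dtype=np.int32)
    c = g[1:-1, 1:-1]
    # neighbour offsets in a fixed clockwise order starting top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(gray, bins=256):
    """Normalised histogram of LBP codes: the texture feature vector."""
    hist, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, 256))
    return hist / hist.sum()
```

The resulting 256-bin histogram is what would typically be fed to a classifier for face recognition.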
Celebrity endorsements: the interplay between intellectual property law and the Consumer Protection Act, 2019
The ambit of copyright law has expanded over time, leading to the development of newer concepts such as personality rights. These rights are vested in individuals who have acquired an identifiable persona in the eyes of the public. There are two important facets to personality rights: the Right to Publicity and the Right to Privacy. When such identifiable personalities use their acquired celebrity status to promote the goods and services of a company to attract more consumers, this is understood as celebrity endorsement. It is the most common form of marketing used by major companies to increase sales and garner goodwill and reputation. However, this channel for communicating necessary information to the public becomes dangerous when celebrities promote false or misleading advertisements. To counter such issues, the Consumer Protection Act, 2019 introduced provisions to hold celebrities endorsing such products or services liable for injury suffered by consumers. The law mandates that, to ensure such misleading advertisements aren't promoted, celebrities must conduct due diligence on the products before endorsing them. However, the question remains: to what extent can celebrities, who are not directly involved in production or manufacturing, be held liable for exploiting their personality rights? This paper addresses the newly created legal interlink between personality rights, via celebrity endorsements, and the protection of consumer interests. 2020, National Institute of Science Communication and Information Resources. All rights reserved.
Data Reduction Techniques in Wireless Sensor Networks with AI
Research on wireless sensor networks is ongoing, owing to their numerous uses in practically every part of life and their associated problems, such as energy saving, longer life cycles, and better resource usage. Their extensive use stores and processes a considerable volume of sensor data. Since the sensor nodes are frequently placed in challenging locations where inexpensive resources are required for data collection and processing, this presents a new difficulty. One method for minimizing the quantity of sensor data is data reduction. This publication provides a review of data reduction methods. The different data reduction approaches put forth over the years are examined, along with their advantages and disadvantages, the ways in which they can be helpful, and whether using them in resource-constrained contexts is worthwhile. 2022 IEEE.
Corrosion behaviour in friction stir processed and welded materials
This chapter presents a comprehensive study on the influence of friction stir processing/welding (FSW/FSP) on corrosion behaviour. It briefly discusses the different aspects of corrosion, including corrosion types, measurement techniques and data analysis. The corrosion behaviour of a wide range of friction stir processed materials, including lightweight metals such as magnesium and aluminum alloys, as well as high-strength metals such as steel, is discussed in detail. The influence of FSP parameters on the microstructural evolution, comprising grain-size and precipitate refinement, along with its correlation with the corrosion properties, is described for different materials. 2014 The authors and contributors. All rights reserved.
Visible light active bismuth chromate/curcuma longa heterostructure for enhancing photocatalytic activity
Bismuth chromate nanostructures were fabricated via a hydrolysis technique using Curcuma longa to enhance photocatalytic activity. The samples are labelled Bi2CrO6-C when prepared without Curcuma longa and Bi2CrO6-G when prepared using Curcuma longa extract (Bi2CrO6/Curcuma longa). The as-fabricated catalysts were confirmed via characterization techniques including X-ray diffraction, transmission electron microscopy (TEM), field emission scanning electron microscopy (FESEM) and UV-Vis DRS. The photocatalytic efficiency of the as-synthesised samples was evaluated via photodegradation of an organic pollutant, methyl orange (MO). The findings show that incorporating the green Curcuma longa extract reduces the particle size and increases the surface area of the material; moreover, it forms a heterostructure with bismuth chromate that inhibits recombination of photogenerated charges, enabling efficient degradation of the organic pollutant. Bi2CrO6-G demonstrates enhanced photocatalytic activity compared with Bi2CrO6-C. Akadémiai Kiadó, Budapest, Hungary 2024.
Diabetes mellitus prediction using machine learning within the scope of a generic framework
Artificial intelligence (AI) based automated disease prediction has recently taken a significant place in the field of health informatics. However, due to the unavailability of real-time, large-scale medical data, dynamic learning of prediction models remains largely constrained. This paper therefore proposes a dynamic predictive modelling framework for chronic disease prediction in real time. The framework premise suggests the creation of a centralized, patient-indexed medical database to dynamically train machine learning (ML) models and predict risk levels of chronic diseases in real time. In this study, comprehensive empirical evaluations of seven state-of-the-art ML models for diabetes risk prediction are performed in the context of phase 2 of the suggested framework. The selected optimal model can then be dynamically applied to predict diabetes in phase 3 of the framework. Various metrics such as accuracy, precision, recall, F1-score and the receiver operating characteristic (ROC) curve are employed for evaluating the performance of the trained models. Parameter tuning using different types of kernels and different numbers of neighbors and estimators is rigorously performed in order to create a suggestive literature for the healthcare prediction ecosystem. Comparative analysis indicates high prediction accuracies on diabetes test data records for the neural network and support vector machine (SVM) models as compared to the other applied models. 2023 Institute of Advanced Engineering and Science. All rights reserved.
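The evaluation metrics named in this abstract (accuracy, precision, recall, F1-score) follow directly from the confusion-matrix counts; a small self-contained sketch of those definitions, independent of any particular ML library:

```python
def confusion_counts(y_true, y_pred):
    """Counts for the binary confusion matrix (positive class = 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def prf1(y_true, y_pred):
    """Accuracy, precision, recall and F1 from the confusion counts."""
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```

F1 is the harmonic mean of precision and recall, which is why it is preferred over raw accuracy on imbalanced medical data sets.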
An enhanced predictive modelling framework for highly accurate non-alcoholic fatty liver disease forecasting
Non-alcoholic fatty liver disease (NAFLD) is a chronic medical ailment characterized by the accumulation of excessive fat in the liver of non-alcoholic patients. In the absence of early visible indications, machine learning based predictive techniques for early prediction of NAFLD are quite beneficial. The objective of this paper is to present a complete framework for the guided development of varied predictive machine learning models and to predict NAFLD with high accuracy. The framework applies step-by-step data quality enhancement to medical data, such as cleaning, normalization, data upscaling using SMOTE (for handling class imbalances) and correlation-analysis-based feature selection, to predict NAFLD with high accuracy using only clinically recorded identifiers. A comprehensive comparative analysis of the prediction results of seven machine learning predictive models is done using unprocessed as well as quality-enhanced data. As per the observed results, the XGBoost, random forest and neural network models reported significantly higher accuracies, with improved AUC and ROC values, using preprocessed data in contrast to unprocessed data. The prediction results are also assessed on various quality metrics such as accuracy, F1-score, precision, and recall; the observations significantly support the need for the presented methodologies for qualitative NAFLD prediction modelling. 2025 Institute of Advanced Engineering and Science. All rights reserved.
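To make the SMOTE step mentioned above concrete, a minimal sketch of the core idea (interpolating between a minority sample and one of its nearest minority neighbours); this is the textbook algorithm, not the paper's exact preprocessing pipeline:

```python
import numpy as np

def smote_upsample(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE: for each synthetic sample, pick a minority point,
    pick one of its k nearest minority neighbours, and interpolate a
    random fraction of the way between them."""
    if rng is None:
        rng = np.random.default_rng(0)
    X_min = np.asarray(X_min, dtype=float)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]    # k nearest neighbours per point
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(min(k, len(X_min) - 1))]
        gap = rng.random()
        synth.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(synth)
```

Because each synthetic point lies on a segment between two real minority samples, the new samples stay inside the minority class's region of feature space instead of merely duplicating records.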
Beyond boundaries: Role of artificial intelligence and ChatGPT in transforming higher education
The goal of the proposed chapter is to give readers a thorough understanding of the complex effects of ChatGPT on higher education. It will cover the short- and long-term benefits that ChatGPT offers, as well as limitations that may affect both educators and learners. The chapter will also highlight a wide range of ethical issues and challenges that arise while using ChatGPT. Research in this area is very limited, and the literature review reveals both benefits and limitations of using ChatGPT in the domain of higher education. Nevertheless, its use is set to grow further, making it urgent for policymakers and stakeholders to explore and understand how ChatGPT should be integrated into higher education to deliver more value to educators and learners. The proposed chapter will cover the evolution of ChatGPT, its growing popularity and impact, the benefits it offers, the associated disadvantages and the road ahead. 2024, IGI Global.
Predicting a Rise in Employee Attrition Rates Through the Utilization of People Analytics
Modern organizations have a multitude of technological tools at their disposal to augment decision-making processes, with artificial intelligence (AI) standing out as a pivotal and extensively embraced technology. Its application spans various domains, including business strategy, organizational management, and human resources. There is a growing emphasis on the significance of talent capital within companies, and the rapid evolution of AI has significantly reshaped the business landscape. The integration of AI into HR functions has notably streamlined the analysis, prediction, and diagnosis of organizational issues, enabling more informed decision-making concerning employees. This study primarily aims to explore the factors influencing employee attrition. It seeks to pinpoint the key contributors to an employee's decision to quit an organization and to develop a data-driven model to forecast the possibility of an employee leaving the organization. The study involves training a model using an employee turnover dataset from IBM analytics, including a total of thirty-five features and approximately one thousand five hundred samples. Post-training, the model's performance is assessed using classical metrics. The Gaussian Naïve Bayes classifier emerged as the algorithm delivering the most accurate results for the specified dataset. It notably achieved the best recall (0.54), indicating its ability to correctly identify positive observations, and maintained a false-negative rate of merely 4.5%. 2023 IEEE.
Artificial Intelligence in Disaster Management: A Survey
This paper provides a literature review of cutting-edge artificial intelligence-based methods for disaster management. Most governments are concerned about disasters, which are, in general, unpredictable events. Researchers have deployed numerous artificial intelligence (AI)-based approaches to support disaster management at its different stages. Machine learning (ML) and deep learning (DL) algorithms can manage the large and complex datasets that arise intrinsically in disaster management circumstances and are incredibly well suited for crucial tasks such as identifying essential features and classification. The study of existing literature in this paper relates to disaster management; further, it collects recent developments in nature-inspired algorithms (NIA) and their applications in disaster management. 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
HR Analytics: An Indispensable Tool for Effective Talent Management
Business organizations have changed tremendously in the way they visualize the human capital of the organization, and they make every effort to create a workforce that is productively engaged and ready to embrace the challenges posed by uncertainty and turbulence in the business environment. This calls for a decision-making approach based on observed people behaviours rather than on intuition and gut feel. These observed behaviours are reactions or consequences to stimuli, and therefore the science of Human Resource Management can be better understood as predicting these dependent variables from a set of independent variables. This chapter attempts to present a complete framework for HR analytics in terms of concept and need, and shows how it can add immense value to effective talent management in organizations. The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
Discrimination between scheduled and non-scheduled groups in access to basic services in urban India
Access to basic services such as water, sanitation, and electricity is a key determinant of an individual's well-being. Nevertheless, access to these services is unequally distributed among different social groups in many countries. India is no exception, with the scheduled castes (SC) and scheduled tribes (ST) being among the country's most marginalised and disadvantaged groups. This paper analyses the disparities in access to basic services between scheduled and non-scheduled households, investigates the factors contributing to the unequal access, and suggests policy recommendations. Using data from the National Sample Survey 76th Round, we analyse access to basic services such as durable housing, improved water and sanitation, and electricity. The paper's objectives are (a) to investigate the factors impacting the quality of basic service delivery in urban India, separately for scheduled and non-scheduled households, and (b) to quantify the discrimination between scheduled and non-scheduled households in urban India concerning access to quality basic services, by computing a comprehensive index and by using the Fairlie decomposition approach. The analysis corroborates the finding that systemic discrimination exists between scheduled and non-scheduled households in urban India regarding access to good quality basic services, up to an extent of 24%. 2024 The Authors.
Sampling and Categorization of Households for Research in Urban India
Conventional sampling methodologies for citizens/households in urban research in India are constrained by the lack of readily available, reliable sampling frames. Voter lists, for example, are riddled with errors and, as such, may not provide a robust sampling frame from which a representative sample can be drawn. The Jana-Brown Citizenship Index project consortium (Janaagraha, India; Brown University, USA) has conceptualized a unique research design that provides an alternative way to identify, categorize and sample households (and citizens within them) in a city in a representative and meaningful way. The consortium consists of the Janaagraha Centre for Citizenship and Democracy, based in India, and the Brown Center for Contemporary South Asia, part of Brown University, USA. The methodology was designed to enable systematic data collection from citizens and households on aspects of citizenship, infrastructure and service delivery across different demographic sections of society. The article describes how (a) data on communities that are in the minority, such as Muslims, scheduled castes (SC) and scheduled tribes (ST), were used to categorize Polling Parts to allow for stratified random sampling using these strata, (b) geospatial tools such as QGIS and Google Earth were used to create base maps aligned to the established Polling Part unit, (c) the resulting maps were used to create listings of buildings, (d) housing type categorizations (based on the structure, construction material, amenities, etc.) were created as part of the building listing process, and (e) the listings were used for sampling and to create population weights where necessary. This article describes these methodological approaches in the context of the project while highlighting advantages and challenges in applying them to urban research in India more generally. 2022 Lokniti, Centre For The Study Of Developing Societies.
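The stratified sampling and population-weighting logic described in steps (a) and (e) can be sketched in a few lines; the data layout and weight definition here are illustrative assumptions, not the consortium's actual instruments:

```python
import random

def stratified_sample(households, strata_key, per_stratum, seed=0):
    """Draw a fixed number of households from each stratum (e.g., Polling
    Parts grouped by minority-share category) and attach a design weight
    equal to (stratum population / stratum sample size)."""
    rng = random.Random(seed)
    strata = {}
    for h in households:
        strata.setdefault(strata_key(h), []).append(h)
    sample = []
    for key, members in strata.items():
        n = min(per_stratum, len(members))
        weight = len(members) / n  # each sampled unit represents this many
        for h in rng.sample(members, n):
            sample.append({**h, "stratum": key, "weight": weight})
    return sample
```

Summing the attached weights over the sample recovers the population total, which is the property that makes weighted estimates from such a design representative.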
Revolutionizing Arrhythmia Classification: Unleashing the Power of Machine Learning and Data Amplification for Precision Healthcare
This paper presents a comprehensive exploration of arrhythmia classification using machine learning techniques applied to electrocardiogram (ECG) signals. The study delves into the development and evaluation of diverse models, including K-Nearest Neighbors, Logistic Regression, Decision Tree Classifier, Linear and Kernelized Support Vector Machines, and Random Forest. The models undergo rigorous analysis, emphasizing precision and recall due to the categorical nature of the dependent variable. To enhance model robustness and address class imbalances, Principal Component Analysis (PCA) and Random Oversampling are employed. The results highlight the effectiveness of the Kernelized SVM with PCA, achieving a remarkable accuracy of 99.52%. Additionally, the paper discusses the positive impact of feature reduction and oversampling on model performance. The study concludes with insights into the significance of PCA and Random Oversampling in refining arrhythmia classification models, offering potential avenues for future research in healthcare analytics. 2024 IEEE.
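A minimal NumPy sketch of the two preprocessing steps this abstract combines, PCA for feature reduction and random oversampling for class balance (standard formulations, not the paper's code; function names are illustrative):

```python
import numpy as np

def pca_fit_transform(X, n_components):
    """PCA via SVD of the mean-centred data; returns projected features."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # rows of Vt are principal axes

def random_oversample(X, y, rng=None):
    """Duplicate random samples of each minority class until every
    class matches the majority class size."""
    if rng is None:
        rng = np.random.default_rng(0)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xs, ys = [X], [y]
    for cls, cnt in zip(classes, counts):
        if cnt < target:
            idx = np.flatnonzero(y == cls)
            extra = rng.choice(idx, size=target - cnt, replace=True)
            Xs.append(X[extra]); ys.append(y[extra])
    return np.vstack(Xs), np.concatenate(ys)
```

Oversampling is applied to the training split only, so that duplicated records never leak into the test set used for the reported accuracy.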
Transforming Industry 5.0: Real Time Monitoring and Decision Making with IIOT
This chapter explores the transformative potential of Industry 5.0 by leveraging real-time monitoring and decision-making capabilities through the use of IIoT dashboards. It examines how IIoT dashboards enable organizations to gain real-time insights into their operations, facilitating data-driven decision-making and improving overall efficiency. By embracing IIoT dashboards, businesses can effectively transform Industry 5.0, unlocking new levels of productivity, agility, and competitiveness. In this chapter, important challenges such as data integration, data security, scalability, and user experience are identified. It highlights key considerations for implementing IIoT dashboards and offers practical methods for the successful adoption of this technology. Remarkable achievements in implementing this technology include applications such as crude oil production with IIoT and edge computing, as well as IIoT-enabled smart agriculture dashboards. Adopting IIoT dashboards may involve initial costs, but they offer long-term benefits and cost-effectiveness, particularly in the era of Industry 5.0 transformation. 2024 selection and editorial matter, C Kishor Kumar Reddy, P R Anisha, Samiya Khan, Marlia Mohd Hanafiah, Lavanya Pamulaparty and R Madana Mohana.
Review and Design of Integrated Dashboard Model for Performance Measurements
This article presents a new approach for performance measurement in organizations, integrating the analytic hierarchy process (AHP) and the objective matrix (OM) with the balanced scorecard (BSC) dashboard model. This comprehensive framework prioritizes strategic objectives, establishes performance measures, and provides visual representations of progress over time. A case study illustrates the method's effectiveness, offering a holistic view of organizational performance. The article contributes significantly to performance measurement and management, providing a practical and comprehensive assessment framework. Additionally, the project focuses on creating an intuitive dashboard for Fursa Foods Ltd. Using IoT technology, it delivers real-time insights into environmental variables affecting rice processing. The dashboard allows data storage, graphical representations, and other visualizations using Python, enhancing production oversight for the company. The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
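To illustrate the AHP and objective-matrix components named above, a small sketch using the common column-normalisation approximation for AHP priorities and a 0-10 objective-matrix scaling (the article's actual measures, scales and weights are not given in the abstract, so the numbers here are purely illustrative):

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive priority weights from an AHP pairwise-comparison matrix
    using the column-normalisation / row-average approximation."""
    A = np.asarray(pairwise, dtype=float)
    col_norm = A / A.sum(axis=0)  # normalise each column to sum to 1
    return col_norm.mean(axis=1)  # average across each row -> weight

def objective_matrix_score(values, worst, best, weights):
    """Objective-matrix style score: scale each measure to 0..10 between
    its worst and best levels, then combine with the AHP weights."""
    v, w0, w1 = (np.asarray(x, float) for x in (values, worst, best))
    scaled = 10 * (v - w0) / (w1 - w0)
    return float(np.dot(np.clip(scaled, 0, 10), weights))
```

For a dashboard, each KPI's scaled score and its weighted contribution would be plotted over time, which is what gives the BSC view its visual trend lines.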