Predictive Analysis of the Recovery Rate from Coronavirus (COVID-19)
Estimating the recovery rate of COVID-19-positive persons is important for measuring the severity of the disease for mankind. In this work, the recovery rate is predicted using machine learning techniques. A standard Kaggle data set is used for experimental purposes, covering COVID-19 cases in Italy, China, and India. Based on that data set and the present scenario, the proposed technique predicts the recovery rate. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
Predictive Analytics for Network Traffic Management
This study examines how machine learning can be applied to monitoring network traffic and carrying out predictive analysis to improve the functionality and effectiveness of network management. It uses historical network traffic data and machine learning techniques such as Long Short-Term Memory (LSTM)-based models and ensemble methods to predict future traffic patterns. The work covers data gathering, data pre-processing, feature selection, model choice, model training, model validation, and the architectural setup of the machine learning solution in a real-time stream processing pipeline using Apache Kafka and Apache Flink. The results show that the proposed models yield highly accurate predictions, with the ensemble method slightly outperforming LSTM on the chosen metrics. Real-time predictions closely followed actual traffic levels, allowing real-time adjustments in network usage. This underscores the importance of reliable data preprocessing, feature engineering, and model optimization. The study also notes data quality and scalability challenges in prediction, given that current and future networks are dynamic and highly complex, and calls for more effective solutions for intelligent and proactive networking. © 2024 IEEE. -
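The ensemble idea in this entry can be illustrated with a minimal sketch: two simple baseline forecasters (persistence and moving average) are combined with equal weights to predict the next traffic sample. The series, window size, and weighting are invented for the example; the paper's actual models (LSTM and tuned ensembles over Kafka/Flink streams) are far more elaborate.

```python
# Combine two simple forecasters into an equal-weight ensemble prediction
# for the next traffic sample (a stand-in for the paper's LSTM/ensemble).

def persistence_forecast(series):
    """Predict the next value as the last observed value."""
    return series[-1]

def moving_average_forecast(series, window=3):
    """Predict the next value as the mean of the last `window` samples."""
    tail = series[-window:]
    return sum(tail) / len(tail)

def ensemble_forecast(series, window=3):
    """Average the two base forecasters with equal weights."""
    return 0.5 * persistence_forecast(series) + 0.5 * moving_average_forecast(series, window)

traffic = [100, 110, 105, 120, 115]  # synthetic per-interval byte counts
prediction = ensemble_forecast(traffic)
```

In a streaming deployment, the same combination step would run per window of the incoming flow, with the base predictions produced by the trained models.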
Predictive Analytics for Stock Market Trends using Machine Learning
Navigating the intricacies of stock market trends demands a novel approach capable of deciphering the web of financial data and market sentiment. This research embarks on a transformative journey into the realm of machine learning, where we harness the power of data to forecast stock market trends with increased precision and accuracy. Commencing with an exploration of stock market dynamics and the inherent limitations of traditional forecasting techniques, this paper takes a bold step into the future by embracing the potential of machine learning. The study begins with an in-depth analysis of data preprocessing, unraveling the complexity of feature selection and engineering, setting the stage for a data-driven odyssey. As our exploration progresses, we dive into the deployment of diverse machine learning algorithms, including linear regression, decision trees, random forests, and the formidable deep learning models such as recurrent neural networks (RNNs) and long short-term memory networks (LSTMs). These algorithms act as our guiding lights, revealing intricate patterns concealed within historical stock price data. Our journey reaches new heights as we recognize the significance of augmenting predictive models with external data sources. Incorporating elements like news sentiment analysis and macroeconomic indicators enriches our understanding of the market landscape, enhancing the predictive capabilities of our models. We also delve into the crucial aspects of model evaluation, guarding against overfitting, and selecting appropriate performance metrics to ensure robust and reliable predictions. The research reaches its zenith with a meticulous analysis of real-world case studies, providing a comparative perspective between machine learning models and traditional forecasting methods. The results underscore the remarkable potential of machine learning in predicting stock market trends more accurately. © 2023 IEEE. -
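The linear-regression step named in this abstract can be sketched with ordinary least squares on a closing-price series, extrapolated one step ahead. The price data is a perfectly linear toy series invented for the illustration; the paper's full pipeline (RNNs, LSTMs, sentiment features) is beyond its scope.

```python
# Fit a straight-line trend to prices by ordinary least squares and
# extrapolate one step past the end of the series.

def fit_line(ys):
    """Return (slope, intercept) of the least-squares line through (i, ys[i])."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def forecast_next(ys):
    """Extrapolate the fitted trend one step beyond the series."""
    slope, intercept = fit_line(ys)
    return slope * len(ys) + intercept

prices = [10.0, 10.5, 11.0, 11.5, 12.0]  # synthetic closing prices
```

Real price series are far noisier than this toy; that gap is precisely what motivates the nonlinear models the abstract goes on to discuss.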
Predictive Machine Learning Approaches for Estimating Residential Rental Rates in India
As urban areas like Chennai and Bangalore witness a continuous surge in land and housing prices, accurately estimating the market value of houses has become increasingly crucial. This presents a formidable challenge, prompting a growing demand for an accessible and efficient method to predict house rental prices, ensuring dependable forecasts for future generations. In response to this need, this study delves into the core factors influencing rental prices, with a keen focus on location and area. Leveraging a dataset comprising ten essential features tailored for predicting rental prices in metropolitan cities, the research meticulously preprocesses the data using a Python library to ensure data cleanliness, laying a robust foundation for constructing the predictive model. Employing a diverse range of machine learning algorithms, including Random Forest, Linear Regression, Decision Tree Regression, and Gradient Boosting, the study evaluates their efficacy in forecasting rental prices. Notably, feature extraction underscores the significance of area and property type in shaping rental prices. In comparison with existing methodologies, this research adopts gradient boosting as its preferred approach, achieving the most satisfactory predictive outcomes. Evaluation metrics are meticulously analyzed to validate the model's performance. Through this comprehensive analysis, the study not only offers valuable insights into rental price prediction but also ensures a rigorous comparison with existing approaches, maintaining originality and relevance in addressing the pressing challenges of housing market dynamics. © 2024 IEEE. -
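Gradient boosting, the approach this study prefers, can be sketched from scratch as an additive model of shrunken decision stumps fit to residuals. The single feature (area), the toy rents, and the hyperparameters are invented for the example; a production model would use a library such as the ones the paper benchmarks.

```python
# From-scratch gradient boosting for regression with one-feature stumps.

def fit_stump(xs, residuals):
    """Best threshold split of a single feature minimising squared error."""
    best = None
    for thr in sorted(set(xs))[:-1]:  # exclude max so both sides are non-empty
        left = [r for x, r in zip(xs, residuals) if x <= thr]
        right = [r for x, r in zip(xs, residuals) if x > thr]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, thr, lm, rm)
    return best[1], best[2], best[3]

def boost(xs, ys, rounds=30, lr=0.3):
    """Additive model: global mean plus `rounds` learning-rate-shrunken stumps."""
    base = sum(ys) / len(ys)
    preds = [base] * len(ys)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        thr, lv, rv = fit_stump(xs, residuals)
        stumps.append((thr, lv, rv))
        preds = [p + lr * (lv if x <= thr else rv) for x, p in zip(xs, preds)]
    return base, lr, stumps

def predict(model, x):
    base, lr, stumps = model
    return base + sum(lr * (lv if x <= thr else rv) for thr, lv, rv in stumps)
```

For squared-error loss, the residuals are exactly the negative gradients, which is why each round fits a stump to them; the learning rate trades rounds for generalisation.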
Predictive Modeling for Uber Ride Cancellation and Price Estimation: An Integrated Approach
In the realm of ridesharing services, exemplified by Uber, two formidable challenges have surfaced: ride cancellations and precise fare estimation. This research introduces an innovative, integrated approach that leverages predictive modeling to address both issues. By analyzing historical ride data, we identify the intricate factors influencing cancellations, and through machine learning techniques, we develop predictive models to forecast cancellation likelihood. Additionally, we pioneer a dynamic approach to fare estimation by considering historical data alongside real-time variables. By unifying these strategies, we aim to enhance user satisfaction, optimize driver allocation, and promote trust and transparency within the ridesharing ecosystem. © 2024 IEEE. -
Predictive Modeling of Solar Energy Production: A Comparative Analysis of Machine Learning and Time Series Approaches
In this study, we dive into the world of renewable energy, specifically focusing on predicting solar energy output, which is a crucial part of managing renewable energy resources. We recognize that solar energy production is heavily influenced by a range of environmental factors. To effectively manage energy usage and the power grid, it's vital to have accurate forecasting methods. Our main goal here is to delve into various predictive modeling techniques, encompassing both machine learning and time series analysis, and evaluate their effectiveness in forecasting solar energy production. Our study seeks to address this by developing robust models capable of capturing these complex dynamics and providing dependable forecasts. We took a comparative route in this research, putting three different models to the test: Random Forest Regressor, a streamlined version of XGBoost, and ARIMA. Our findings revealed that both the Random Forest and XGBoost models showed similar levels of performance, with XGBoost having a slight edge in terms of RMSE. By providing a comprehensive comparison of these different modeling techniques, our research makes a significant contribution to the field of renewable energy forecasting. We believe this study will be immensely helpful for professionals and researchers in picking the most suitable models for solar energy prediction, given their unique strengths and limitations. © 2024 IEEE. -
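The evaluation step behind this comparison is RMSE on held-out data. A minimal sketch, with an invented irradiance-like series and placeholder predictions standing in for the fitted models:

```python
# Compare two candidate forecasters on a held-out series by RMSE.
import math

def rmse(actual, predicted):
    """Root-mean-square error between paired series."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual  = [5.0, 6.0, 7.0, 6.5]   # toy held-out solar output values
model_a = [5.2, 5.9, 7.1, 6.4]   # stand-in for e.g. XGBoost predictions
model_b = [4.5, 6.5, 7.5, 6.0]   # stand-in for e.g. ARIMA predictions
better = "A" if rmse(actual, model_a) < rmse(actual, model_b) else "B"
```

Since RMSE squares the errors, it penalises large misses more heavily than mean absolute error, which matters when occasional big forecast errors are costly for grid management.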
Predictive Modelling of Heart Disease: Exploring Machine Learning Classification Algorithms
In addressing the critical challenge of early and accurate heart failure diagnosis, this study explores the application of five machine learning models, including XGBoost, Decision Tree, Random Forest, Logistic Regression, and Gaussian Naive Bayes. Employing cross-validation and grid search techniques to enhance generalization, the comparative analysis reveals XGBoost as the standout performer, achieving a remarkable accuracy of 85%. The findings emphasize the significant potential of XGBoost in advancing heart failure diagnosis, paving the way for earlier intervention, and potentially improving patient prognosis. The study suggests that integrating XGBoost into diagnostic processes could represent a valuable and impactful advancement in the realm of heart failure prediction, offering promising avenues for improved healthcare outcomes. © 2024 IEEE. -
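The cross-validation machinery this comparison relies on can be sketched in a few lines. The "model" below is a trivial majority-class baseline so the example stays self-contained; the fold logic is the part being illustrated, not the paper's XGBoost setup.

```python
# k-fold cross-validation of a majority-class baseline classifier.

def k_folds(n, k):
    """Yield (train_idx, test_idx) index lists for k contiguous folds."""
    fold = n // k
    for i in range(k):
        test = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        train = [j for j in range(n) if j not in test]
        yield train, test

def majority_class(labels):
    """The most frequent label in the training split."""
    return max(set(labels), key=labels.count)

def cv_accuracy(labels, k=5):
    """Mean accuracy of the majority-class baseline across k folds."""
    scores = []
    for train, test in k_folds(len(labels), k):
        pred = majority_class([labels[i] for i in train])
        scores.append(sum(labels[i] == pred for i in test) / len(test))
    return sum(scores) / len(scores)
```

A grid search simply wraps this loop in another loop over hyperparameter candidates, keeping the setting with the best mean fold score.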
Prefabricated Houses - A Model to Sustainable Housing Market
Effective spatial planning in an urban center is the need of the hour, especially for offering affordable and sustainable houses. Spatial planning may help in achieving the Sustainable Development Goals (SDGs) (9) Industry, Innovation and Infrastructure, (11) Sustainable Cities and Communities, and (15) Life on Land, and the prefabricated house model can be used as a strategy for achieving these SDGs. It is important to study the prefabricated housing market for a country like India, considering its growing population and the necessity of access to affordable and sustainable houses. The main objective of this study is to identify the determinant factors of prefabricated houses and their impact on preference among urban consumers. The study is quantitative in nature and adopts a survey method; an SEM model is used to analyze the data. A structured questionnaire was developed based on the objectives of the study, focusing mainly on perceptions of sustainability, affordability, durability, barriers, opportunities, and quality. © The Electrochemical Society -
Preparation and Characterization of Tungsten Carbide/Epoxy Composites for γ-Ray Shielding
Polymer composites have attracted considerable attention as potential lightweight and cost-effective shielding materials for applications in nuclear reactors, nuclear waste transportation, protective clothing/aprons for personnel in hospitals, and shielding instruments on board satellites from space radiation. In this context, we have developed diglycidyl ether of bisphenol A (DGEBA)-based epoxy resin composites loaded with tungsten carbide (WC) for γ-ray shielding. Epoxy composites containing different loadings (0, 10, 30 and 50 wt%) of WC were synthesized by a room-temperature solution casting technique. Structural and morphological studies of the composites were performed using X-ray diffraction (XRD) and scanning electron microscopy (SEM). Thermal and tensile properties of the epoxy were enhanced in the presence of WC fillers. Thermogravimetric analysis revealed the major degradation temperature occurring between 430 °C and 580 °C for all epoxy/WC composites. The tensile strength and Young's modulus of the composites increased with loading, owing to a greater intermolecular reinforcing effect, uniform stress distribution, and enhanced energy-absorbing capacity. γ-ray attenuation studies performed in the energy region of 0.356–1.332 MeV using a NaI(Tl) detector spectrometer showed the 50 wt% tungsten carbide/epoxy composites to have the highest radiation attenuation at all energies. The overall enhancement in thermal, mechanical, and radiation shielding characteristics of the composites may be attributed to the uniform distribution of the fillers in the epoxy matrix. These nontoxic tungsten carbide/epoxy composites may be suitable as shielding materials in radiation environments. © 2022 American Institute of Physics Inc. All rights reserved. -
Preprocessing Big Data using Partitioning Method for Efficient Analysis
Big data collection is the process of gathering unprocessed and unstructured data from disparate sources. Amid the data deluge, the large volume of data collected and integrated contains missing values, outliers, and redundant records. This makes the big data set unsuitable for processing and mining knowledge, and it unnecessarily consumes large amounts of valuable storage for redundant and meaningless data. Results obtained after applying mining techniques to such data lead to wrong inferences. It is therefore essential to preprocess data in order to store and process big data sets effectively and draw correct inferences. When data is preprocessed before analytics, storage consumption is lower and computation and communication complexity are reduced; the analytics results are of higher quality, and the processing time is considerably shortened. Preprocessing is a prerequisite for applying any analytics algorithm to obtain valuable patterns, since the quality of knowledge mined from a large volume of big data depends on the quality of the input data. The major steps in big data preprocessing include data integration from disparate sources, missing value imputation, outlier detection and treatment, and handling redundant data. Integration comprises extraction, transformation, and loading: extraction gathers the data needed for analytics, transformation organizes the collected data in a structured format suitable for analytics, and loading stores the transformed data in secure storage so that it can be retrieved and processed effectively in the future. This work provides preprocessing techniques for big data that deal with missing values and outliers and yield quality data partitions. © 2023 IEEE. -
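The preprocessing steps this entry enumerates can be sketched compactly: mean-imputing missing values, dropping outliers by an IQR rule, removing exact duplicates, and partitioning the cleaned records for parallel analysis. The thresholds, quartile approximation, and toy data are invented for the illustration.

```python
# Sketch of a preprocessing pipeline: impute, de-outlier, dedupe, partition.

def impute_mean(values):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def drop_outliers(values, factor=1.5):
    """Keep values within [Q1 - factor*IQR, Q3 + factor*IQR] (crude quartiles)."""
    s = sorted(values)
    q1, q3 = s[len(s) // 4], s[(3 * len(s)) // 4]
    iqr = q3 - q1
    lo, hi = q1 - factor * iqr, q3 + factor * iqr
    return [v for v in values if lo <= v <= hi]

def dedupe(records):
    """Drop exact duplicate records, preserving first-seen order."""
    seen, out = set(), []
    for r in records:
        if r not in seen:
            seen.add(r)
            out.append(r)
    return out

def partition(values, size):
    """Split the cleaned list into fixed-size chunks for parallel analysis."""
    return [values[i:i + size] for i in range(0, len(values), size)]
```

In a real deployment each stage would stream over distributed partitions rather than in-memory lists, but the per-record logic is the same.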
Preventing Data Leakage and Traffic Optimization in Software-Defined Programmable Networks
The first widely used communication infrastructure was the telephone network, also known as a connection-oriented or circuit-switched network. When a phone call is made, such networks first set up a connection and then tear it down after the call has ended; the connection made during the call is not reused. Connectionless, or packet-switched, networks were therefore introduced, with the aim of sending voice signals as data packets. Compared with conventional network architecture, SDN's separation of the data plane and control plane of networking devices makes the management of these devices directly programmable via a centralised controller. The proposed framework uses a MAS-based distributed architecture, called the Traffic Classification Module, to categorise network flows. High-priority application traffic from each host or server is isolated via Deep Packet Inspection (DPI). The time a packet takes to travel from one endpoint to another is the average packet delay, and the controller's reaction time is twice the average packet delay. Few existing works have utilised routing strategies to decrease the average packet delay in SDN. To reduce the controller's response time, Software-Defined Networks (SDNs) need a routing algorithm that reduces the average packet delay. Each of the proposed modules, and the combined SDN-MASTE framework as a whole, was evaluated through a series of experiments and emulation-based tests to assess performance. © 2023 IEEE. -
Prevention and Mitigation of Intrusion Using an Efficient Ensemble Classification in Fog Computing
Cloud services in a fog network form a platform that brings software services into the network to handle cloud-specific problems. Intrusion detection systems (IDSs) are a significant component of the security paradigm that supports service quality. This work develops an optimization environment to mitigate intrusion using an RSLO classifier on cloud-based fog networks. A three-layer approach comprising the cloud, end-point, and fog layers carries out all of the processing. In the cloud layer, handling the data set involves three stages: data transformation, feature selection, and classification. Data is log-transformed, and a KS correlation-based filter is used to choose features. Classification uses an ensemble of RideNN classifiers, tuned with Rider Sea Lion Optimization (RSLO), a newly created optimizer. Physical work is carried out at the end-point layer, and the trained ensemble classifier is used for intrusion detection in the fog layer. The suggested RSLO-based ensemble strategy achieved greater precision, recall, and F-measure, with an accuracy of approximately 95%. © 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
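The ensemble-classification idea at the core of this entry can be sketched as several base classifiers voting on whether a flow is an intrusion. The base rules below are invented toy threshold checks over made-up flow features, not the paper's RSLO-tuned RideNN members.

```python
# Majority-vote ensemble over toy base classifiers for flow labelling.

def vote(classifiers, sample):
    """Majority vote over the base classifiers' 0/1 decisions."""
    votes = [clf(sample) for clf in classifiers]
    return 1 if sum(votes) > len(votes) / 2 else 0

# Toy base rules over a (packet_rate, error_rate, payload_entropy) tuple;
# the thresholds are invented for the illustration.
base = [
    lambda s: 1 if s[0] > 1000 else 0,   # unusually high packet rate
    lambda s: 1 if s[1] > 0.2 else 0,    # many malformed packets
    lambda s: 1 if s[2] > 7.5 else 0,    # near-random payload bytes
]
```

In the paper's setting the vote would be weighted and the members trained, but the deployment shape is the same: the fused decision, not any single member's, labels the flow in the fog layer.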
Prevention of Data Breach by Machine Learning Techniques
In today's data communication environment, network and system security is vital. Hackers and intruders can gain unauthorized access to networks and online services, and some attempts to knock down networks and web services succeed. As security systems progress, new threats and countermeasures to these attacks emerge. Intrusion Detection Systems (IDSs) are one of these options. An Intrusion Detection System's primary goal is to protect resources from attacks: it analyses and anticipates user behaviour before determining whether it is an attack or a common occurrence. We use Rough Set Theory (RST) and gradient boosting (using the boost library) to identify network breaches. When packets are intercepted from the network, RST is used to pre-process the data and reduce its dimensionality. A gradient boosting model is then used to learn and evaluate the features chosen by RST. The RST-gradient boost model provides the best results and accuracy when compared to other scale-down strategies such as a standard scaler. © 2022 IEEE. -
Primary mirror active control system simulation of Prototype Segmented Mirror Telescope
The upcoming large astronomical telescopes are trending towards the Segmented Mirror Telescope (SMT) technology, initially developed at the W. M. Keck Observatory in Hawaii, where the two largest SMTs in the world are in use. An SMT uses a large number of smaller hexagonal mirror segments, aligned and positioned by means of three position/force actuators and six intersegment edge sensors per segment. This positioning must be done within the nanometer range to make the segments act like a monolithic primary mirror in the presence of disturbances such as wind, vibration, and thermal effects. The primary mirror active control system of an SMT performs this task at two levels: first, on a global scale, by continuously measuring edge sensor information and commanding the actuators to correct any departure from the reference surface; and second, at the local actuator level, where all actuators hold their positions to the reference control inputs. This paper describes our novel approach to primary mirror active control simulation of the Prototype Segmented Mirror Telescope (PSMT), under design and development at the Indian Institute of Astrophysics (IIA), Bangalore. The PSMT is a 1.5 m segmented mirror telescope with seven hexagonal segments, 24 inductive edge sensors, and 21 soft actuators. A state-space model of the soft actuator with Multiple-Input Multiple-Output (MIMO) capability is developed to incorporate dynamic wind disturbances. A segment model is then built from three such actuators; it accepts actuator position commands from the global controller and the telescope control system and yields the tip-tilt-piston (TTP) of a single segment. A dynamic wind disturbance model is developed and used to disturb the mirror in a realistic way. A feed-forward PID controller is implemented, and its gains are appropriately tuned to obtain good wind rejection.
In the last phase, a global controller based on an SVD algorithm is implemented to command all the actuators of the seven segments so that they act together as a single monolithic mirror telescope. We present the progress of the PSMT active control system simulation along with the simulation results. © 2017 IEEE. -
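The local actuator loop described above can be illustrated with a discrete PID update driving a toy first-order plant toward a position setpoint. The gains, timestep, and plant gain below are invented for the example; the paper's tuned feed-forward PID and SVD-based global controller are considerably more elaborate.

```python
# Discrete PID update for a single actuator position loop (toy plant).

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        """Return the control output for one sample period."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def simulate(pid, setpoint, steps=200):
    """Drive an invented first-order plant toward the setpoint."""
    pos = 0.0
    for _ in range(steps):
        u = pid.step(setpoint, pos)
        pos += 0.1 * u * pid.dt   # toy plant: position integrates the command
    return pos
```

The integral term is what removes the steady-state offset a constant wind load would otherwise leave; the global controller then coordinates 21 such loops so the segments move as one surface.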
Prior Cardiovascular Disease Detection using Machine Learning Algorithms in Fog Computing
The term latent disease refers to an infection that does not show symptoms but remains indefinitely. This paper proposes a novel methodology for addressing latent diseases with machine learning by integrating fog computing techniques. There is a link between HIV and heart disease: when a person progresses to a later stage of HIV, plaque infection can develop, causing cholesterol deposits to form. Plaque development narrows the inside of the arteries over time, which may stimulate the release of numerous heat shock proteins and immune complexes into the bloodstream, potentially leading to heart disease. Heart disease has long been considered a significant life-threatening illness in humans, driven by a range of factors including unhealthy eating, lack of physical exercise, being overweight, tobacco use, and other hazardous lifestyle choices. Classifiers including Support Vector Machine, K-Nearest Neighbour, Decision Tree, and Random Forest are used to measure precision; after classification, the proposed model splits the disease into groups based on risk factors. This will be beneficial in assisting doctors in analyzing the risk factors associated with their patients. © 2023 IEEE. -
Prioritizing Factors Affecting Customers Satisfaction in the Internet Banking Using Artificial Intelligence
Internet banking has revolutionised the way customers interact with their banks, providing convenient access to a wide range of financial services from the comfort of their homes or mobile devices. Customer satisfaction is a vital component of internet banking service provision, as it directly impacts customer retention and loyalty. This research explores the application of artificial intelligence (AI) techniques, specifically random forest and convolutional neural networks (CNNs), to prioritise the factors that affect customer satisfaction in internet banking. The study begins with data collection from a diverse sample of internet banking customers, including demographic information, transaction history, and customer feedback. Relevant factors include ease of navigation, the response time of the platform, and the level of trust in the bank's security measures. Furthermore, CNNs are utilised to analyse unstructured data such as customer feedback and reviews: by applying natural language processing techniques, they extract sentiment and topic information from customer comments. This approach can ultimately lead to improved customer retention and loyalty, ensuring the long-term success and competitiveness of internet banking platforms. In conclusion, this study showcases the power of AI, specifically random forest and CNNs, in prioritising the factors affecting customer satisfaction in internet banking. It highlights the significance of combining quantitative and qualitative investigation to attain a comprehensive understanding of customer sentiments and preferences in the digital banking landscape. © 2024 IEEE. -
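The sentiment-extraction step can be illustrated with a lexicon-based scorer as a simple stand-in for the paper's CNN pipeline. The word lists are invented for the example; a real system would learn these signals from labelled reviews rather than hand-pick them.

```python
# Lexicon-based sentiment scoring of customer feedback (illustrative only;
# a stand-in for the CNN-based analysis described in the abstract).

POSITIVE = {"easy", "fast", "secure", "convenient", "helpful"}
NEGATIVE = {"slow", "confusing", "unreliable", "error", "down"}

def sentiment(comment):
    """Return 'positive', 'negative', or 'neutral' by keyword counts."""
    words = comment.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

The per-comment labels produced this way could then feed the quantitative prioritisation step alongside the structured features the study collects.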
Priority-driven Unbalanced Transportation Problem (PUTP) to obtain better Initial Feasible Solution
In this paper, we tackle the Priority-driven Unbalanced Transportation Problem (PUTP), a scenario where total demand exceeds total supply. An innovative algorithm, the Penalty-driven Priority-driven Unbalanced Transportation Problem (PPUTP) is introduced to solve this challenge. PPUTP allocates supplies to high-priority demands by computing penalties and sequentially addressing the most penalized demands, thereby ensuring priority demands are met efficiently. A comparative analysis with Vogel's Approximation Method (VAM) across various problem sets ranging from 5×5 to 50×50 dimensions demonstrates the efficiency of our algorithms. PPUTP consistently shows lower percentage increments from the optimal solution, indicating its robustness in providing near-optimal solutions. This study highlights the importance of algorithm selection based on problem set dimensions and complexity in the Priority-driven Unbalanced Transportation Problem, with PPUTP emerging as a versatile and robust solution across various scenarios. © 2024 IEEE. -
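A penalty-driven allocation in the spirit of VAM and this entry's PPUTP can be sketched as follows: with demand exceeding supply, demand columns are served in order of priority and penalty (the gap between their two cheapest routes), each from its cheapest remaining source. The cost matrix, priority flags, and tie-breaking are invented here; the paper's PPUTP algorithm is more refined than this greedy sketch.

```python
# Greedy penalty-driven allocation for an unbalanced transportation problem.

def penalty(col_costs):
    """Gap between the two smallest costs in a demand column."""
    s = sorted(col_costs)
    return s[1] - s[0] if len(s) > 1 else s[0]

def allocate(costs, supply, demand, priority):
    """Serve the highest (priority, penalty) demand first, from the
    cheapest source that still has supply; unmet demand may remain."""
    supply = supply[:]
    remaining = demand[:]
    alloc = [[0] * len(demand) for _ in supply]
    cols = sorted(range(len(demand)),
                  key=lambda j: (priority[j], penalty([row[j] for row in costs])),
                  reverse=True)
    for j in cols:
        while remaining[j] > 0 and any(supply):
            i = min((i for i in range(len(supply)) if supply[i] > 0),
                    key=lambda i: costs[i][j])
            qty = min(supply[i], remaining[j])
            alloc[i][j] += qty
            supply[i] -= qty
            remaining[j] -= qty
    return alloc
```

Because total supply falls short, the low-priority columns absorb the shortfall, which is exactly the behaviour a priority-driven variant is designed to guarantee.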
Privacy Optimization in Sensors Based Networks With Industrial Processes Management
This research article discusses how the Internet of Things (IoT) has the potential to revolutionize industries. Advancements in technology have made devices affordable, efficient, and reliable, and different sectors have already started to incorporate these devices into their operations to boost productivity, minimize failure and downtime, and optimize resource utilization, which is also an important factor. However, the use of these devices also brings security challenges that need to be handled. This paper proposes a security model specifically designed for process management in industry. The goal of this model is to find vulnerabilities and to minimize risks and threats, while ensuring the integrity, confidentiality, and availability of processes. The paper presents evidence from the model's implementation and trials alongside its description. During the implementation phase, sensitive data achieved a 100% encryption rate for protection, and integrity checks were conducted on 99.8% of the data to guarantee data integrity. © 2023 IEEE. -
Privacy-preserving federated learning in healthcare: Fundamentals, state of the art and prospective research directions
Recent collaborations in medical diagnostic systems are based on data-private collaborative learning using Federated Learning (FL). In this approach, multiple organizations train a machine-learning model simultaneously, eventually leading to a global model. This paper reviews the fundamentals of FL and its evolution path in healthcare. The objective of this review is to scope a wide variety of healthcare applications of FL and to identify the directions in which the research is moving, so that research communities can guide their future course. This review uniquely focuses on examining numerous FL-based healthcare implementations, detailing their core methodologies and performance metrics, which, to our knowledge, have not previously been available. Privacy-preserving collaborative distributed learning through federated learning in healthcare enhances research collaborations, thereby resulting in better-performing models. This comprehensive review will act as a valuable reference for researchers exploring new FL applications in the healthcare domain. © 2024 IEEE. -
Probing the Role of Information and Communication Technology (ICT) in Enhancing Research: An Epilogue of Accessible Research Tools
Information and Communication Technology (ICT) has revolutionized the way researchers conduct their work. It has enabled them to access a wealth of information through online databases, collaborate with colleagues across the globe, and analyze vast amounts of data quickly and accurately. This paper explores the role of ICT in enhancing research tools, highlighting the benefits it provides to researchers in terms of increased efficiency, improved accuracy, and greater access to resources. It also discusses some of the challenges associated with using ICT in research, such as data security and privacy concerns, and offers potential solutions. Overall, the paper concludes that ICT is an essential tool for researchers and will continue to play an increasingly important role in advancing scientific knowledge and innovation. © 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.