Towards Computation Offloading Approaches in IoT-Fog-Cloud Environment: Survey on Concepts, Architectures, Tools and Methodologies
The Internet of Things (IoT) provides communication and processing power to the entities connected to it, thereby redefining the way objects interact with one another. IoT has evolved into a promising platform within a short time owing to its low complexity and wide applicability. IoT applications generally rely on the cloud for extended storage, processing, and analytics. Cloud computing increased the acceptance of IoT applications by providing enhanced storage and processing. However, the integration does not support latency-sensitive IoT applications well. Such applications have benefited greatly from the introduction of a fog/edge layer into the existing IoT-Cloud architecture. The fog layer lies close to the edge of the network, improving response time and reducing delay considerably. The resulting three-tier architecture is still in its early phase and needs further research. This paper addresses offloading in the IoT-Fog-Cloud architecture, which helps to distribute the incoming workload evenly across the available fog nodes. Offloading algorithms have to be chosen carefully to improve application performance. This paper surveys the algorithms available in the literature, the methodologies and simulation environments used for their implementation, the benefits of each approach, and future research trends for offloading. The survey shows that offloading algorithms are an active research area where more exploration is needed. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
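For context, a minimal sketch of one common class of offloading heuristic surveyed in such work: each incoming task is sent to the fog node (or the cloud) with the lowest estimated completion time. The node capacities, latencies, and task sizes below are illustrative assumptions, not values from any surveyed paper.

```python
# Minimal sketch of a latency-aware offloading heuristic for an IoT-Fog-Cloud setup.
# All numbers are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_rate: float          # instructions per second the node can process
    link_latency: float      # one-way network latency to the device (seconds)
    queued_work: float = 0.0 # instructions already waiting on this node

    def estimated_finish(self, task_size: float) -> float:
        """Rough completion time: queueing + execution + round-trip network delay."""
        return (self.queued_work + task_size) / self.cpu_rate + 2 * self.link_latency

def offload(task_size: float, fog_nodes: list, cloud: Node) -> Node:
    """Pick the node (fog or cloud) with the lowest estimated completion time."""
    best = min(fog_nodes + [cloud], key=lambda n: n.estimated_finish(task_size))
    best.queued_work += task_size
    return best

if __name__ == "__main__":
    fogs = [Node("fog-1", 2e9, 0.005), Node("fog-2", 1e9, 0.002)]
    cloud = Node("cloud", 2e10, 0.080)
    for size in [5e8, 5e8, 4e9, 1e8]:
        chosen = offload(size, fogs, cloud)
        print(f"task of {size:.0e} instructions -> {chosen.name}")
```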
A Hybrid Machine Learning Model (NB-SVM) for Cardiovascular Disease Prediction
Heart disease is one of the leading causes of death, and the prediction of cardiovascular disease remains a significant challenge in clinical data analysis. Although predicting cardiac disease with a high degree of accuracy is very challenging, it is possible with Machine Learning (ML) approaches. An effective ML system can minimize the need for additional medical testing, reduce human intervention, and predict cardiovascular disease with high accuracy. Such an assessment can reduce the disease's severity and mortality rate. Only a few studies show how machine learning techniques might forecast cardiac disease. This study presents a method for improving cardiovascular disease prediction accuracy using ML technologies. Various feature combinations and several well-known classification techniques are used to develop different cardiovascular disease prediction models. The proposed hybrid ML prediction model for heart disease delivers a higher degree of performance and accuracy. © 2023 IEEE.
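The abstract does not spell out how the Naive Bayes and SVM components are fused, so the sketch below shows one plausible reading: a soft-voting ensemble of the two classifiers. The synthetic 13-feature data stands in for a real heart-disease dataset.

```python
# Minimal sketch of an NB-SVM hybrid as a soft-voting ensemble (an assumption,
# not the paper's confirmed fusion strategy). Data is synthetic.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 13))             # 13 features, as in common heart datasets
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

hybrid = VotingClassifier(
    estimators=[
        ("nb", GaussianNB()),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True, C=1.0))),
    ],
    voting="soft",   # average the predicted probabilities of both models
)
hybrid.fit(X_tr, y_tr)
print("hybrid NB-SVM accuracy:", accuracy_score(y_te, hybrid.predict(X_te)))
```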
Machine Learning Techniques for Resource-Constrained Devices in IoT Applications with CP-ABE Scheme
Ciphertext-policy attribute-based encryption (CP-ABE) is a promising scheme that provides security and fine-grained access control for outsourced data. The emergence of cloud computing allows many organizations to store their data, even sensitive data, in cloud storage, which raises concerns about the security and access control of data stored with a third-party service provider. CP-ABE can be used to solve this problem. CP-ABE is applicable not only to cloud computing but also to other areas such as machine learning (ML) and the Internet of Things (IoT). The main focus of this paper is the use of the CP-ABE scheme in different areas, mainly ML and IoT. In ML, trained datasets can be used for decision-making within the CP-ABE scheme in several scenarios. IoT devices are mostly resource-constrained yet have to process huge amounts of data, so such devices cannot directly use the CP-ABE scheme. Some solutions to these problems are discussed in this paper, and two security schemes used in resource-constrained devices are examined. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
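To make the access-control idea concrete, the following sketch models only the policy logic of CP-ABE: a ciphertext is labeled with an attribute policy, a user's key carries an attribute set, and decryption is permitted only when the attributes satisfy the policy. This is not the underlying pairing-based cryptography, and the policy and attributes are illustrative.

```python
# Conceptual sketch of CP-ABE access control (policy evaluation only, no crypto).
POLICY = ("AND", "doctor", ("OR", "cardiology", "radiology"))  # illustrative policy

def satisfies(policy, attributes: set) -> bool:
    """Evaluate an (op, operand, operand) policy tree over a user's attribute set."""
    if isinstance(policy, str):
        return policy in attributes
    op, left, right = policy
    if op == "AND":
        return satisfies(left, attributes) and satisfies(right, attributes)
    return satisfies(left, attributes) or satisfies(right, attributes)

print(satisfies(POLICY, {"doctor", "cardiology"}))  # True  -> this key may decrypt
print(satisfies(POLICY, {"nurse", "cardiology"}))   # False -> this key may not
```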
A Deep Learning Method for Classification in Brain-Computer Interface
Neural activity is the control signal that enables a BCI to communicate directly with a computer, and an array of EEG signals aids in selecting that neural signal. Feature extractors and classifiers are tailor-made for the EEG control pattern of a given BCI protocol and are limited to that specific signal. Although deep neural networks used in EEG-based brain-computer interfaces apply a single protocol, and such architectures are widely used for feature extraction and classification in speech recognition and computer vision, it is unclear how well they generalize to other areas and prototypes. Transfer learning is the deep learning approach that transfers knowledge acquired from source tasks to target tasks. Deep neural networks have surpassed conventional machine learning algorithms in solving real-world problems; however, the best networks have been identified using knowledge of the problem domain, and a significant amount of time and computational resources must be spent to validate this approach. This work presents a deep learning architecture based on the Visual Geometry Group Network (VGGNet), Residual Network (ResNet), and Inception network methods. Experimental results show that the proposed method achieves better performance than other methods. © 2023 IEEE.
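A minimal transfer-learning sketch of the kind of reuse discussed above: an ImageNet-pretrained ResNet-18 is kept as a frozen feature extractor and only its final layer is retrained for an EEG-derived image task (e.g., spectrogram inputs). The class count and input shape are assumptions, and torchvision >= 0.13 is assumed for the `weights=` argument.

```python
# Transfer learning sketch in PyTorch: frozen pretrained backbone, new trainable head.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # hypothetical number of BCI target classes

model = models.resnet18(weights="IMAGENET1K_V1")
for param in model.parameters():          # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable classification head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy spectrogram-like images (3 x 224 x 224).
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```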
CNN-Bidirectional LSTM based Approach for Financial Fraud Detection and Prevention System
Detecting fraudulent activity has become a pressing issue in the ever-expanding realm of financial services and is vital to ensuring a healthy ecosystem for everyone involved. Traditional approaches to fraud detection typically rely on rule-based algorithms or on manually picking a subset of attributes for prediction. Yet users have complex interactions and generate a wealth of information when using financial services; these data form a sizable multi-view network that is underutilized by standard approaches. The proposed method addresses this problem by first cleaning and normalizing the data, then using Kernel Principal Component Analysis to extract features, and finally using these features to train CNN-BiLSTM, a neural network architecture that combines the strengths of the Bidirectional Long Short-Term Memory (BiLSTM) network and the Convolutional Neural Network (CNN). BiLSTM makes better use of temporal structure by considering both the historical context and the context of what came after. © 2023 IEEE.
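A minimal sketch of the described pipeline: Kernel PCA feature extraction followed by a CNN-BiLSTM classifier. The synthetic transaction data, component count, and layer sizes are illustrative assumptions rather than the paper's exact configuration.

```python
# Kernel PCA features feeding a small CNN-BiLSTM classifier (sketch with synthetic data).
import numpy as np
import tensorflow as tf
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))             # raw transaction features (synthetic)
y = (rng.random(1000) < 0.1).astype(int)    # roughly 10% labeled fraudulent

X_kpca = KernelPCA(n_components=16, kernel="rbf").fit_transform(
    StandardScaler().fit_transform(X))
X_seq = X_kpca[..., np.newaxis]             # shape (samples, 16, 1) for Conv1D/LSTM layers

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_seq, y, epochs=2, batch_size=64, verbose=0)
print(model.evaluate(X_seq, y, verbose=0))   # [loss, accuracy] on the training data
```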
Applications of Classification and Recommendation Techniques to Analyze Soil Data and Water Using IoT
As we move towards a computerized and scientific world, data becomes an intrinsic part of our lives, yet the agriculture sector is still unorganized with regard to automation and data analytics. This task is accomplished through sensors, data mining, and analysis. In this paper, we propose real-time sensors to detect soil features and predict suitable crops using a trained dataset. This helps farmers decide which type of cultivation to undertake based on the soil's characteristics: the farmer can understand which crops the soil will support, the water level of the soil can be predicted easily, and the cost of cultivation can be reduced. Users of the next generation can rely on the same sensors to select different crops for different soils. The paper presents a system built around a pH sensor and a water-level sensor. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
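A minimal sketch of the crop-recommendation idea: a classifier trained on soil readings suggests a crop for a new sensor reading. The tiny hand-made dataset, feature set, and crop labels are purely illustrative stand-ins for real sensor data.

```python
# Crop recommendation from soil readings (illustrative data, not real measurements).
from sklearn.tree import DecisionTreeClassifier

# columns: pH, moisture (%), nitrogen, phosphorus, potassium
X_train = [
    [6.5, 60, 90, 40, 40],
    [5.5, 80, 80, 45, 40],
    [7.5, 30, 20, 60, 20],
    [6.0, 70, 85, 50, 45],
    [7.8, 25, 25, 55, 25],
]
y_train = ["rice", "rice", "chickpea", "rice", "chickpea"]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

live_reading = [[6.8, 65, 88, 42, 41]]   # values as they might arrive from the soil sensors
print("suggested crop:", model.predict(live_reading)[0])
```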
Structured text programming to visualize the distribution of packages on a conveyor
Automation is a process of increasing production and reducing the downtime of any industry. With the integration of sensor data into the cloud using the OPC-UA communication protocol, automation becomes more prominent and interesting. However, many existing industrial controllers do not support open platform communications unified architecture (OPC-UA) and need an IIoT device to connect to the cloud. The existing programmable logic controller in such an industry has to be connected to an IIoT device through Ethernet, and sensors connected to the controller transmit their data to the IIoT device; the transmission can also be bidirectional. In this paper, a conveyor that distributes packages is simulated in Codesys and visualized on a human-machine interface (HMI) screen built into the software. A hardware set-up with an industrial controller is used to execute the same application. A methodology to send data from the controller to the cloud using OPC-UA is proposed. © 2023 IEEE.
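As a small companion to the described data path, the sketch below reads a conveyor value over OPC UA with the python-opcua library, as an IIoT gateway might do. The endpoint URL and node identifier are placeholders; the real values would come from the controller or gateway configuration.

```python
# Reading a conveyor tag from an OPC UA server (placeholder endpoint and node id).
from opcua import Client

ENDPOINT = "opc.tcp://192.168.0.10:4840"              # hypothetical gateway/PLC endpoint
PACKAGE_COUNT_NODE = "ns=2;s=Conveyor.PackageCount"   # hypothetical node identifier

client = Client(ENDPOINT)
try:
    client.connect()
    node = client.get_node(PACKAGE_COUNT_NODE)
    print("packages distributed:", node.get_value())  # read the current value from the controller
finally:
    client.disconnect()
```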
Leaf Disease Detection in Crops based on Single-Hidden Layer Feed-Forward Neural Network and Hierarchical Temporal Memory
Insects, mites, and fungi are common causes of plant disease, which can significantly reduce yields if not addressed promptly, and farmers lose money as a result of crop illnesses. As the area under cultivation increases, it becomes more of a burden for farmers to keep an eye on everything. In this study, a median filter is used as a preprocessing step, and the input image is converted to a grayscale representation via the YCbCr color space. Otsu's segmentation is used to separate images that contain bright objects on a dark background, and features are extracted using the Grey Level Co-occurrence Matrix (GLCM). The proposed technique, known as ELM-HTM, combines a simple yet adaptable Extreme Learning Machine (ELM) with a Hierarchical Temporal Memory (HTM). This approach outperforms the standalone ELM and HTM models with an accuracy of about 98.8%. © 2023 IEEE.
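A minimal sketch of the preprocessing and feature steps named above: median filtering, YCbCr conversion, Otsu segmentation, and GLCM texture features. The file name is a placeholder, and scikit-image >= 0.19 is assumed for graycomatrix/graycoprops (older releases spell them greycomatrix/greycoprops).

```python
# Leaf-image preprocessing and GLCM features (sketch; "leaf.jpg" is a placeholder path).
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = cv2.imread("leaf.jpg")                          # hypothetical input image
img = cv2.medianBlur(img, 5)                          # denoise with a median filter
ycbcr = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)        # OpenCV orders the channels as YCrCb
gray = ycbcr[:, :, 0]                                 # luminance channel as grayscale

# Otsu's method picks the threshold separating bright lesions from the dark background.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# GLCM texture features computed on the grayscale image.
glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
features = {prop: float(graycoprops(glcm, prop)[0, 0])
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```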
Enhancing IoT Security Through Deep Learning-Based Intrusion Detection
The Internet of Things (IoT) has revolutionized the way we interact with technology by connecting everyday devices to the internet. However, this increased connectivity also poses new security challenges, as IoT devices are often vulnerable to intrusion and malicious attacks. In this paper, we propose a deep learning-based intrusion detection system for enhancing IoT security. The proposed work is evaluated on the IoT-23 dataset taken from Zenodo and is tested with 10 machine learning classifiers and two deep learning models, both without and with feature selection. The results show that the proposed work performs best with feature selection and with the deep learning model based on Gated Recurrent Units (GRU); the GRU is tested with various optimizers, namely Follow-the-Regularized-Leader (Ftrl), Adaptive Delta (Adadelta), Adaptive Gradient Algorithm (Adagrad), Root Mean Squared Propagation (RMSProp), Stochastic Gradient Descent (SGD), Nesterov-Accelerated Adaptive Moment Estimation (Nadam), and Adaptive Moment Estimation (Adam). Each evaluation considers the highest performance metric achieved at a low running time. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
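A minimal sketch of the optimizer comparison described above: one GRU model is retrained with each Keras optimizer and the runs are compared by validation accuracy. The synthetic data stands in for the preprocessed, feature-selected IoT-23 flows.

```python
# GRU intrusion-detection model compared across Keras optimizers (synthetic stand-in data).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10, 8)).astype("float32")   # (flows, timesteps, features)
y = rng.integers(0, 2, size=2000)                      # benign vs. malicious labels

def build_gru():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(10, 8)),
        tf.keras.layers.GRU(32),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

for opt_name in ["ftrl", "adadelta", "adagrad", "rmsprop", "sgd", "nadam", "adam"]:
    model = build_gru()
    model.compile(optimizer=opt_name, loss="binary_crossentropy", metrics=["accuracy"])
    hist = model.fit(X, y, validation_split=0.2, epochs=2, batch_size=128, verbose=0)
    print(f"{opt_name:>8}: val_acc={hist.history['val_accuracy'][-1]:.3f}")
```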
Hyperspectral Image Classification Using Denoised Stacked Auto Encoder-Based Restricted Boltzmann Machine Classifier
This paper proposes a novel solution using an improved Stacked Auto Encoder (SAE) to deal with the parametric instability associated with classifying hyperspectral images from an extensive training set. The improved SAE reduces classification errors and discrepancies within the individual classes. A data augmentation process resolves such constraints: several images are produced during training by adding noise at various levels to an input HSI image, which also increases the separation between the classes of the training set. The improved SAE classifies HSI images using the principle of denoising via a Restricted Boltzmann Machine (RBM). The model operates on bands selected through various band selection methods; this pre-processing enables the classifier to eliminate noise from these bands and produce more accurate results. The simulation is conducted in PyTorch to validate the proposed deep DSAE-RBM under different noisy environments with various noise levels. The results show that the proposed deep DSAE-RBM achieves a maximal classification rate of 92.62% without noise and 77.47% in the presence of noise. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
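A minimal PyTorch sketch of the denoising idea at the core of such a stacked autoencoder: Gaussian noise at a chosen level is added to each spectral vector and the network is trained to reconstruct the clean input. The band count, noise level, and layer sizes are illustrative, and the RBM stage of the full DSAE-RBM is omitted.

```python
# Denoising autoencoder training loop (noise-injection augmentation + reconstruction loss).
import torch
import torch.nn as nn

BANDS, HIDDEN, NOISE_STD = 100, 32, 0.1   # assumed spectral bands and noise level

class DenoisingAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(BANDS, HIDDEN), nn.ReLU())
        self.decoder = nn.Linear(HIDDEN, BANDS)
    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.rand(256, BANDS)                            # stand-in for HSI pixel spectra
for epoch in range(5):
    noisy = clean + NOISE_STD * torch.randn_like(clean)   # augmentation by noise injection
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)                   # reconstruct the clean spectrum
    loss.backward()
    optimizer.step()
print("final reconstruction loss:", loss.item())
```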
Front-End Security Analysis for Cloud-Based Data Backup Application Using Cybersecurity Tools
In this challenging, demanding, and competitive business world, cybercrime is rising rapidly. With the proliferation of cloud computing techniques, particularly in industrial arenas, business information and important client data are stored and managed on cloud platforms, and application programs are developed to handle these valuable information assets. Cloud backups are provided for this client data, where security is the most pressing concern. There are many vulnerabilities in the current scenario through which intruders can cause havoc: exploiting them can destroy the product and put the company and the product in jeopardy, creating a bad impression of the organization among customers, competitors, and the public. This paper presents the work of a cybersecurity team whose main objective is to run vulnerability analysis and mitigate threats on an application that backs up client data to the cloud. Cybersecurity is important in all types of businesses because it protects all categories of data, such as fragile data, private information, and intellectual property, as well as governmental and industrial information systems, from theft and damage that would result in huge financial loss and loss of client data. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Optimal Sizing and Placement of Distributed Generation in Eastern Grid of Bhutan Using Genetic Algorithm
A power system has to be stable and reliable for its users; nevertheless, due to aging and neglect, it tends to become unreliable and unstable. Distributed Generation (DG) is small-scale energy production that is usually connected close to the load. It helps reduce power losses and improve the voltage profile in a distribution network. However, if a DG is not optimally placed and sized, it can instead increase power losses and deteriorate the voltage profile. This paper demonstrates the importance of DG placement and sizing in a distribution network using a Genetic Algorithm (GA). Apart from the optimal DG placement and sizing, different scenarios with varying numbers of DGs are also analyzed. A detailed performance analysis of the eastern grid of Bhutan is carried out on the MATLAB platform to demonstrate and study the effectiveness and reliability of the proposed methodology. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
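A minimal sketch of the GA formulation: each chromosome encodes a candidate (bus location, DG size) and the fitness is the resulting power loss. The quadratic loss surrogate below is a placeholder for a real load-flow evaluation (the paper evaluates candidates on the eastern grid of Bhutan in MATLAB), and the bus count and size limit are assumptions.

```python
# Genetic algorithm for DG placement and sizing (surrogate fitness, illustrative parameters).
import random

N_BUSES, MAX_SIZE_MW = 30, 5.0

def power_loss(bus: int, size_mw: float) -> float:
    """Surrogate loss model: purely illustrative, not a load-flow calculation."""
    return (bus - 17) ** 2 * 0.01 + (size_mw - 2.5) ** 2

def evolve(pop_size=40, generations=60, mutation_rate=0.2):
    pop = [(random.randrange(1, N_BUSES + 1), random.uniform(0, MAX_SIZE_MW))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: power_loss(*c))          # rank by fitness (lower loss is better)
        parents = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            (b1, _s1), (_b2, s2) = random.sample(parents, 2)
            child = [b1, s2]                            # simple crossover of location and size
            if random.random() < mutation_rate:         # mutate the location gene
                child[0] = random.randrange(1, N_BUSES + 1)
            if random.random() < mutation_rate:         # mutate the size gene
                child[1] = random.uniform(0, MAX_SIZE_MW)
            children.append(tuple(child))
        pop = parents + children
    return min(pop, key=lambda c: power_loss(*c))

best_bus, best_size = evolve()
print(f"best placement: bus {best_bus}, size {best_size:.2f} MW")
```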
Blockchain Scalability: Solutions, Challenges and Future Possibilities
In recent years, blockchain has received a lot of interest and has been widely adopted. Yet blockchain scalability is proving to be a difficult problem: creating a new node on platforms like Bitcoin can take a few days. A few solutions have been proposed for this scalability problem. In this paper, the present approaches to blockchain scalability are divided into two groups: first-layer and second-layer techniques. Second-layer solutions propose procedures deployed outside of the blockchain, while first-layer methods propose adjustments to the blockchain itself (i.e., altering the blockchain design, such as the block size). We concentrate on sharding as a viable first-layer solution to the scalability problem. The idea behind sharding is to split the blockchain network into numerous groups, each processing a different set of transactions. Furthermore, we compare a few of the available sharding-based blockchain solutions and present a performance-based comparative analysis in the form of the benefits and drawbacks of the existing solutions. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
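A minimal sketch of the core sharding idea: the network is split into k shards and each transaction is routed to a shard by hashing its sender address, so shards process disjoint transaction sets in parallel. Real sharded chains add cross-shard communication and shard-level consensus, which are omitted here; the shard count and transactions are illustrative.

```python
# Hash-based assignment of transactions to shards (conceptual sketch only).
import hashlib

NUM_SHARDS = 4

def shard_for(sender: str) -> int:
    """Deterministically map a sender address to one of the shards."""
    digest = hashlib.sha256(sender.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

transactions = [("alice", "bob", 5), ("carol", "dave", 2), ("erin", "frank", 9)]
shards = {i: [] for i in range(NUM_SHARDS)}
for tx in transactions:
    shards[shard_for(tx[0])].append(tx)   # route each transaction to its shard

for shard_id, txs in shards.items():
    print(f"shard {shard_id}: {txs}")
```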
Evolution, Trends, and Future Developments of Business Intelligence
A decision-making process backed by the integration and evaluation of an organization's data resources is referred to as business intelligence. Since information is recognized as a business's most valuable asset, it is a crucial resource for growth and plays an increasingly important role in a variety of organizations. This article examines the history of business intelligence technologies, their relevance today, and the future developments that seem possible. In the twenty-first century, organizations are adopting information- and network-based approaches in response to a chaotic and ambiguous environment marked by hazy organizational boundaries and rapid change. In such situations, knowledge-based assets emerge as the core of long-term strategic advantage and the cornerstone of success. The primary characteristics of business intelligence are data analysis, processing, and visualization. Business intelligence technologies use relational tables to store and display large volumes of structured and unstructured data, and they employ specialized tools and mathematics to produce intricate visual reports. This research is motivated by the upcoming strategic revolution in the market driven by numerous cutting-edge business intelligence technologies. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Hybrid Model Using Interacted-ARIMA and ANN Models for Efficient Forecasting
When two models applied to the same dataset produce two different sets of forecasts, it is good practice to combine the forecasts rather than using the better one and discarding the other. Alternatively, the models themselves can be combined into a hybrid model that yields better forecasts than either individual model. In this paper, an efficient hybrid model combining interacted ARIMA (INTARIMA) and ANN models is proposed for forecasting. Whenever interactions among the lagged variables exist, the INTARIMA model performs better than the traditional ARIMA model; this is validated through simulation studies. The proposed hybrid model combines the forecasts obtained by fitting INTARIMA to the dataset with those obtained by fitting an ANN to the INTARIMA residuals, and it produces better forecasts than the individual models. Forecast quality is evaluated using three error metrics: Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). Empirical results from applying the proposed model to the real lynx dataset suggest that the hybrid model gives superior forecasts to either individual model applied separately. The methodology is replicable on any dataset having interactions among the lagged variables. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
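A minimal sketch of this hybrid scheme: a linear time-series model captures the series, an ANN is trained on its residuals, and the two one-step forecasts are summed. A plain ARIMA stands in for the INTARIMA variant (which adds interaction terms among lagged variables); the data, model orders, and lag count are illustrative.

```python
# Hybrid linear-plus-ANN forecast: ARIMA on the series, MLP on the ARIMA residuals.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(200)
series = 10 + 5 * np.sin(t / 8) + rng.normal(0, 0.5, 200)   # synthetic stand-in series

# Stage 1: linear model and its residuals.
arima_fit = ARIMA(series, order=(2, 0, 1)).fit()
resid = arima_fit.resid

# Stage 2: ANN learns any structure left in the residuals (here from 3 lagged values).
LAGS = 3
X = np.column_stack([resid[i:len(resid) - LAGS + i] for i in range(LAGS)])
y = resid[LAGS:]
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

# Stage 3: combine the one-step-ahead forecasts from both stages.
linear_part = arima_fit.forecast(steps=1)[0]
nonlinear_part = ann.predict(resid[-LAGS:].reshape(1, -1))[0]
print("hybrid forecast:", linear_part + nonlinear_part)
```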
Earlier Stage Identification of Bone Cancer with Regularized ELM
A major focus of current research in image processing is its application to medical imaging. When dealing with biological issues such as fractures, cancers, and ulcers, image processing has helped pinpoint the precise cause and tailor a remedy, and medical imaging has set a new standard in tumor identification by overcoming a number of challenges. Medical imaging is the practice of generating images of the human body for diagnostic or exploratory purposes; because of its high image quality, MRI is the method of choice for detecting tumors. This research proposes the integration of a regularized ELM (RELM) to detect tumors and presents an automatic bone cancer detection system to assist oncologists in the early diagnosis of bone malignancies, which in turn allows patients to receive treatment as soon as possible. The work detects bone tumors using a combination of RELM-based M3 filtering, Canny edge segmentation, and the enhanced Harris corner approach. Compared with models such as CNN, ELM, and RNN, the suggested technique achieves an accuracy of around 97.55%. © 2023 IEEE.
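A minimal OpenCV sketch of two of the steps named above: Canny edge segmentation and Harris corner detection on an MRI slice. The file name is a placeholder, and the M3 filtering and RELM classification stages are not shown.

```python
# Canny edges and Harris corners on a grayscale MRI slice ("bone_mri.png" is a placeholder).
import cv2
import numpy as np

img = cv2.imread("bone_mri.png", cv2.IMREAD_GRAYSCALE)   # hypothetical MRI slice
img = cv2.GaussianBlur(img, (5, 5), 0)                    # mild smoothing before edge detection

edges = cv2.Canny(img, 50, 150)                           # edge map (low/high thresholds 50/150)

# Harris response: block size 2, Sobel aperture 3, Harris parameter k = 0.04.
corners = cv2.cornerHarris(np.float32(img), 2, 3, 0.04)
keypoints = np.argwhere(corners > 0.01 * corners.max())   # strong corner locations

print("edge pixels:", int((edges > 0).sum()), "| corner candidates:", len(keypoints))
```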
On Automatic Target Recognition (ATR) using Inverse Synthetic Aperture Radar Images
Inverse Synthetic Aperture Radar (ISAR) is used to image sea-surface targets with day/night and all-weather capability for applications such as coastal surveillance, ship self-defense, and suppression of drug trafficking. Hence, automating the classification of ships by means of machine learning has become increasingly significant. Typical classification approaches consist of pre-processing, feature extraction, and classification. Image processing techniques are applied for pre-processing ISAR images, transformation-invariant features are then extracted, and classifiers such as SVMs and Neural Networks (NNs) are applied to them. The performance of these algorithms depends on the manually chosen features, and the classifiers are trained to perform well for different target profiles. The target image (the profile of the target) varies with the target type, the aspect angle, and the motion introduced by different sea states. In addition, deep learning methods are also being explored for ship classification. The challenge is to classify ships across different sea conditions and image acquisition parameters with a limited database and limited processing resources. © 2023 IEEE.
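A minimal sketch of the classical pipeline described above: a compact feature representation followed by an SVM classifier. Synthetic image chips with placeholder labels stand in for real ISAR ship profiles, and PCA features stand in for whichever transformation-invariant features are actually chosen.

```python
# Feature extraction + SVM classification of ISAR-like image chips (synthetic placeholder data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((300, 32 * 32))                     # 300 flattened 32x32 chips
y = np.digitize(X[:, :50].mean(axis=1), [0.45, 0.55])   # three stand-in ship classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("accuracy on held-out chips:", accuracy_score(y_te, clf.predict(X_te)))
```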
Characteristic Mode Analysis of Metallic Automobile Logo Geometry
This paper presents a characteristic mode analysis of a few popular automobile logo geometries. The analysis gives insight into the physical behavior of those geometries when employed as radiating elements, such as antennas, and helps in designing multi-band and multi-mode antennas suitable for 5G sub-6 GHz bands. The resonant behavior, bandwidth capability, and modal current distribution are presented for various modes of the different logo geometries, demonstrating that the Audi, Suzuki, and Volkswagen logos show multi-band performance. Moreover, owing to its symmetric modes, the BMW logo was found to be suitable for designing a circularly polarized antenna. © 2023 IEEE.
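For reference, characteristic modes are the eigencurrents of the standard Harrington-Mautz generalized eigenvalue problem (not restated in the abstract), where Z = R + jX is the Method-of-Moments impedance operator of the logo geometry; a mode is resonant when its eigenvalue is zero, i.e. when its modal significance equals one.

```latex
% Standard characteristic-mode relations, added for context.
\begin{align}
  X(\vec{J}_n) &= \lambda_n\, R(\vec{J}_n), \\
  \mathrm{MS}_n &= \left| \frac{1}{1 + j\lambda_n} \right|.
\end{align}
```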
Innovative Method for Alzheimer Disease Prediction using GP-ELM-RNN
Brain illnesses are notoriously challenging because of the brain's fragility, the complexity of surgery, and high treatment costs; moreover, surgery is not always warranted, as its outcomes may fall short of expectations. Adult-onset Alzheimer's disease, which causes memory loss and loss of information to varying degrees depending on the person's current health, is one of the most common brain diseases. This highlights the need to use CT brain scans to classify the extent of memory loss and determine a patient's risk of Alzheimer's disease. The four main stages of the proposed Alzheimer's detection are preprocessing the data, extracting features, selecting features, and training the model with GP-ELM-RNN. The Replicator Neural Network has been used earlier for AD detection; this study offers an improved version of the network, modified with ELM learning and the Garson algorithm. The study finds the proposed method to be both efficient and precise. The GP-ELM-RNN network is applied to four groups of images representing different stages of Alzheimer's disease: very mildly demented, mildly demented, moderately demented, and non-demented. The class of very mildly demented patients was found to have the highest accuracy (99.1%) and specificity (0.984). Compared with the ELM and RNN models, this technique achieves superior accuracy (around 99.23%). © 2023 IEEE.
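A minimal NumPy sketch of the Extreme Learning Machine component of such a pipeline: random hidden-layer weights are fixed and only the output weights are solved in closed form (here with a small ridge term). The Garson-algorithm and replicator/RNN parts of GP-ELM-RNN are not shown, and the data is synthetic.

```python
# Extreme Learning Machine in NumPy: random hidden layer, closed-form output weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 64))                 # stand-in for flattened scan features
y = rng.integers(0, 4, size=400)               # four dementia-stage labels
Y = np.eye(4)[y]                               # one-hot targets

HIDDEN, RIDGE = 128, 1e-2
W = rng.normal(size=(64, HIDDEN))              # random input weights (never trained)
b = rng.normal(size=HIDDEN)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))         # sigmoid hidden activations

# Closed-form, regularized least-squares solve for the output weights.
beta = np.linalg.solve(H.T @ H + RIDGE * np.eye(HIDDEN), H.T @ Y)

pred = np.argmax(H @ beta, axis=1)
print("training accuracy:", (pred == y).mean())
```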
Enhancing red wine quality prediction through Machine Learning approaches with Hyperparameters optimization technique
In light of the intricacy of the winemaking process and the wide variety of elements that can affect the taste and quality of the finished product, predicting red wine quality is difficult. ML methods have been widely used in recent years to forecast red wine quality from its chemical characteristics. This paper compares classification and regression methods for predicting the quality of red wine and performs initial and exploratory data analysis on the data. Different classifiers and regressors were trained and tested, and a comparative analysis of the accuracies of eight models with hyperparameter tuning was carried out, covering Logistic Regression, Gradient Boosting, Extra Trees, AdaBoost, Random Forest, Support Vector Classifier, Decision Tree, and KNN, with the classification report measured using F1, accuracy, and recall scores. SMOTE was used to handle the imbalanced data, and cross-validated grid search was performed to find the best hyperparameters. The findings demonstrate that the Gradient Boosting technique predicted red wine quality accurately. This research shows the promising results of Gradient Boosting for predicting red wine quality and adds important context to the usage of machine learning classifiers for this task. © 2023 IEEE.
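A minimal sketch of the best-performing setup described above: SMOTE to rebalance the classes, then a grid-searched Gradient Boosting classifier. The file path, quality-binarization threshold, and parameter grid are assumptions made for illustration.

```python
# SMOTE + grid-searched Gradient Boosting on the red wine quality data
# ("winequality-red.csv" is a placeholder for a local copy of the dataset).
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("winequality-red.csv", sep=";")
X, y = df.drop(columns="quality"), (df["quality"] >= 7).astype(int)  # "good" vs "not good"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)        # oversample the minority class

grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "learning_rate": [0.05, 0.1], "max_depth": [2, 3]},
    cv=5, scoring="f1",
)
grid.fit(X_bal, y_bal)
print("best params:", grid.best_params_)
print(classification_report(y_te, grid.predict(X_te)))
```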