Browse Items (11855 total)
Performance Evaluation of OTFS Under Different Channel Conditions for LEO Satellite Downlink
The Orthogonal Time Frequency Space (OTFS) modulation scheme is being actively pursued as a viable alternative to Orthogonal Frequency Division Multiplexing (OFDM) in future wireless standards due to its inherent ability to mitigate Doppler effects in high-mobility scenarios. The inclusion of Non-Terrestrial Networks (NTN) in Release 17 of the 3GPP (3rd Generation Partnership Project) New Radio (NR) standard signifies the vital role of satellite communications in achieving coverage extension, capability augmentation and seamless global connectivity. In this context, it becomes important to study the suitability of OTFS modulation for satellite channel scenarios. In this paper, we consider the downlink channel scenarios defined by 3GPP NR NTN for Low Earth Orbit (LEO) satellites at sub-6 GHz and millimetre-wave frequencies to evaluate the performance of OTFS modulation. Simulation results using LMMSE (Linear Minimum Mean Square Error) and MRC (Maximum Ratio Combining) detection algorithms confirm that OTFS modulation is highly robust against Doppler effects and performs consistently across all channel conditions. From the simulation results, it is observed that iterative MRC detection outperforms LMMSE for 16QAM and 64QAM, achieving gains of around 5 dB and 10 dB at Bit Error Rate (BER) values of 0.01 and 0.1, respectively. © 2023 IEEE.
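For readers unfamiliar with OTFS, the following is a minimal numpy sketch of the transmit/receive chain with rectangular pulses over an ideal channel, not the paper's simulator: the grid dimensions M and N, the QPSK alphabet, and the normalization convention are illustrative assumptions, and the LEO NTN channel model and the LMMSE/MRC detectors evaluated in the paper are not reproduced here.

```python
import numpy as np

M, N = 64, 16          # delay bins (subcarriers) x Doppler bins (symbols) -- illustrative sizes
qam = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)   # QPSK for brevity

rng = np.random.default_rng(0)
X_dd = rng.choice(qam, size=(M, N))        # information symbols on the delay-Doppler grid

# ISFFT: delay-Doppler grid -> time-frequency grid
X_tf = np.fft.fft(np.fft.ifft(X_dd, axis=1), axis=0) * np.sqrt(N / M)

# Heisenberg transform with rectangular pulses (per-column IFFT, i.e. an OFDM modulator)
s = (np.fft.ifft(X_tf, axis=0) * np.sqrt(M)).flatten(order="F")

# Ideal channel; a real study would apply a doubly dispersive (delay-Doppler) channel here
r = s

# Wigner transform (per-column FFT) followed by the SFFT back to the delay-Doppler domain
R_tf = np.fft.fft(r.reshape(M, N, order="F"), axis=0) / np.sqrt(M)
Y_dd = np.fft.fft(np.fft.ifft(R_tf, axis=0), axis=1) * np.sqrt(M / N)

print(np.allclose(Y_dd, X_dd))   # True: the chain is self-inverse for the ideal channel
```

With a doubly dispersive channel inserted between `s` and `r`, the received delay-Doppler grid `Y_dd` becomes a 2-D convolution of `X_dd` with the channel response, which is where LMMSE or iterative MRC detection would operate.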
Lung Cancer Detecting using Radiomics Features and Machine Learning Algorithm
Lung cancer is the second most common cancer worldwide, with about 2,206,771 new cases in 2020, estimated to rise to about 3,503,378 by 2040 across both sexes and all ages, and accounting for 11.4% of all cancers as per GLOBOCAN 2020 [1]. It is the leading cause of cancer death. Lung cancer [2] in broad terms encompasses cancers of the trachea, bronchus and lungs. Purpose: The study aims to understand a radiomics-based approach for the identification and classification of CT images with lung cancer when machine learning (ML) algorithms are applied. Method: CT images from the LIDC-IDRI [4] dataset were chosen. The CT image dataset was balanced and image features were extracted with the PyRadiomics library. Various ML classification algorithms were used to build models, and standard metrics were adopted to judge their accuracy. The models' discriminative capacity was assessed by receiver operating characteristic (ROC) analysis. Result: The accuracy scores and ROC-AUC values obtained were as follows: for AdaBoost, an accuracy score of 0.9993 and ROC-AUC of 0.9993, followed by GBM with an accuracy score of 0.9993 and ROC-AUC of 0.9992. Conclusion: Extracting texture parameters from CT images and linking the radiomics method with ML can categorize lung cancer commendably. © 2023 IEEE.
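A minimal sketch of the kind of pipeline the abstract describes (not the authors' code): PyRadiomics feature extraction on one CT image/mask pair, followed by an AdaBoost classifier on a table of pre-extracted features. The file paths, the CSV of features, and the "label" column are illustrative placeholders.

```python
import pandas as pd
from radiomics import featureextractor
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

# 1) Radiomics feature extraction for a single nodule (CT image + segmentation mask)
extractor = featureextractor.RadiomicsFeatureExtractor()
result = extractor.execute("ct_scan.nrrd", "nodule_mask.nrrd")          # placeholder paths
radiomic_features = {k: v for k, v in result.items() if k.startswith("original_")}

# 2) Classification on a table of features extracted for every nodule
df = pd.read_csv("lidc_radiomics_features.csv")                         # placeholder file
X, y = df.drop(columns=["label"]), df["label"]                          # label: malignant / benign
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

clf = AdaBoostClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("ROC-AUC :", roc_auc_score(y_te, proba))
```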
Predicting the Cerebral Blood Flow Change Condition during Brain Strokes using Feature Fusion of FMRI Images and Clinical Features
By fusing clinical information with functional magnetic resonance imaging (fMRI) images, this study describes a novel method for predicting changes in cerebral blood flow during brain strokes. The fMRI data and patient-specific variables, such as age, gender, and medical history, are combined via feature fusion in the proposed technique. As a result, the developed model can accurately forecast changes in cerebral blood flow that occur during brain strokes. The efficiency of the suggested strategy is shown by experimental findings: the model's performance is greatly enhanced when fMRI data and clinical characteristics are combined, as opposed to using just one data source, and the proposed method achieves a high level of accuracy in predicting changes in cerebral blood flow after brain strokes. This emphasizes the value of including pertinent clinical information in the diagnosis and management of stroke, with important ramifications for increasing the accuracy of stroke diagnosis and treatment and, ultimately, for improving patient outcomes. © 2023 IEEE.
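A minimal sketch of the feature-fusion idea under stated assumptions (not the paper's pipeline): imaging-derived feature vectors are concatenated with tabular clinical features into a single design matrix and fed to one regressor. The arrays below are synthetic stand-ins for the fMRI features, clinical variables, and cerebral-blood-flow target.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 200
fmri_features = rng.normal(size=(n, 512))           # e.g. flattened activation / perfusion maps
clinical = np.column_stack([
    rng.integers(40, 90, n),                         # age
    rng.integers(0, 2, n),                           # gender (encoded)
    rng.integers(0, 2, n),                           # medical history flag (encoded)
])
y = rng.normal(size=n)                               # cerebral blood flow change (synthetic target)

# Feature fusion: concatenate the two modalities into one design matrix
X = np.hstack([fmri_features, clinical])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
model = RandomForestRegressor(n_estimators=300, random_state=42).fit(X_tr, y_tr)
print("R2 on held-out data:", r2_score(y_te, model.predict(X_te)))
```

Dropping either `fmri_features` or `clinical` from the `np.hstack` call reproduces the single-modality baselines the abstract compares against.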
Implementing Ensemble Machine Learning Techniques for Fraud Detection in Blockchain Ecosystem
A new era of digital innovation, notably in the area of financial transactions, has been ushered in by the rise of blockchain technology. Although blockchain has been praised for its capacity to offer a safe and transparent platform for financial transactions, its decentralized nature leaves it prone to fraud. Fraudulent transactions may compromise the integrity of the entire blockchain network and damage user and stakeholder trust. This study aims to assess machine learning's efficacy in detecting fraudulent transactions within blockchain networks and to identify the most effective model. To achieve these objectives, the study combined data collection, data preprocessing, and machine learning techniques. The data used was a dataset of blockchain transactions, pre-processed using techniques such as feature engineering and normalization, and then used to train and evaluate several machine learning models, including Logistic Regression (LR), Naive Bayes (NB), SVM, XGBoost, LightGBM, Random Forest (RF), and a stacking ensemble, in order to determine their effectiveness in detecting fraudulent transactions. XGBoost demonstrated the highest accuracy of 0.944 within the stacking model, establishing it as the top-performing model, closely followed by LightGBM. The study's findings offer significant practical implications for advancing fraud detection methods in blockchain networks. By pinpointing the most efficient machine learning model and the features most predictive of fraud, this research provides vital insights for refining detection algorithms, enhancing blockchain network security, and broadening reliability across various applications. © 2023 IEEE.
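A stacking sketch under stated assumptions, not the authors' code: the transaction table "blockchain_tx.csv", its binary 0/1 "is_fraud" column, and the choice of base learners and hyperparameters are illustrative, and features are assumed to be already engineered and normalized as the abstract describes. The xgboost and lightgbm packages must be installed.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

df = pd.read_csv("blockchain_tx.csv")                        # placeholder dataset
X, y = df.drop(columns=["is_fraud"]), df["is_fraud"]         # assumed 0/1 fraud labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
    ("xgb", XGBClassifier(n_estimators=300, eval_metric="logloss", random_state=42)),
    ("lgbm", LGBMClassifier(n_estimators=300, random_state=42)),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
print("stacking accuracy:", accuracy_score(y_te, stack.predict(X_te)))
```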
Evaluating Energy Consumption Patterns in a Smart Grid with Data Analytics Models
With the rapid pace of technological advancement, it is a well-established fact that in today's era, economic and industrial development go hand in hand with growth in technology. Today, massive amounts of data are generated every day and are growing exponentially. The collected data, whether structured or unstructured, can be very beneficial for improving operational efficiency: analyzing it and extracting valuable information reveals patterns that optimize asset utilization and improve asset intelligence. Big data analytics can therefore contribute substantially to the evolution of the digital electrical power industry. The objective of this paper is to explore how smart grid technology can be enhanced by leveraging big data analytics. Different predictive models are used for the purpose. Among them, the decision tree model outperformed the others, recording training and testing accuracies of 94.4% and 92.7% respectively, with the lowest execution latency of only 4.3 seconds. © 2023 IEEE.
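A short sketch of the kind of predictive-model comparison step described above, not the paper's code: a decision tree on smart-meter readings, timed to mirror the latency comparison in the abstract. The CSV name and the "high_demand" label column are illustrative assumptions.

```python
import time
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("smart_grid_readings.csv")                  # placeholder dataset of meter readings
X, y = df.drop(columns=["high_demand"]), df["high_demand"]   # placeholder target label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

start = time.perf_counter()
tree = DecisionTreeClassifier(max_depth=10, random_state=42).fit(X_tr, y_tr)
train_acc = accuracy_score(y_tr, tree.predict(X_tr))
test_acc = accuracy_score(y_te, tree.predict(X_te))
elapsed = time.perf_counter() - start

print(f"train acc={train_acc:.3f}  test acc={test_acc:.3f}  latency={elapsed:.2f}s")
```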
Brain Tumor Detection and Classification Using a Hyperparameter Tuned Convolutional Neural Network
Brain tumor detection using MRI scans, when integrated with a deep learning approach, can be immensely useful for identifying tumors at early stages with minimal assistance from medical professionals. This paper aims to develop an advanced predictive model that accurately classifies brain tumors as benign or malignant using MRI scans. A novel convolutional neural network (CNN) model is proposed to automate tumor detection and improve diagnostic accuracy. The model uses a dataset of around 7,000 brain MRI records classified into four labels: glioma, meningioma, pituitary, and no tumor. Data wrangling and pre-processing are applied to unify the images into a single format and remove any inconsistencies, and the records are then split into training and test samples with a 70-30 split. The proposed model recorded an accuracy of 94.82%, precision of 94.2%, recall of 93.7% and F-score of 93.9%. In conclusion, the proposed model can be applied to enhance the precision of both brain tumor diagnosis and prognosis. © 2023 IEEE.
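A compact Keras sketch of a four-class CNN with a 70-30 split, offered only as an illustration of the setup described above: the folder "brain_mri/" (one sub-folder per class), the 150x150 input size, and all layer sizes are assumptions, not the paper's tuned hyperparameters.

```python
import tensorflow as tf

IMG = (150, 150)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "brain_mri", validation_split=0.3, subset="training", seed=42,
    image_size=IMG, batch_size=32)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "brain_mri", validation_split=0.3, subset="validation", seed=42,
    image_size=IMG, batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(4, activation="softmax"),   # glioma / meningioma / pituitary / no tumor
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=test_ds, epochs=15)
```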
Comparative Performance Analysis of Machine Learning and Deep Learning Techniques in Pneumonia Detection: A Study
Pneumonia is a bacterial or viral infection that inflames the air sacs in one or both lungs. It is a severe, life-threatening disease, making it increasingly necessary to develop accurate and reliable artificial intelligence diagnosis models that enable early action. This paper evaluates and compares various machine learning and deep learning models for pneumonia detection using chest X-rays. Six machine learning models - Logistic Regression, KNN, Decision Tree, Random Forest, Naive Bayes, and Support Vector Machines - and three deep learning models - CNN, VGG16, and ResNet - were built and compared with each other. The results show how the choice of model alone can significantly affect the quality and accuracy of the final diagnostic tool. © 2023 IEEE.
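A minimal sketch of the classical-ML side of such a comparison (assumption: chest X-rays resized, flattened into a feature matrix X with binary labels y, and saved as numpy arrays; the loading step and the deep models, which would be trained separately on the raw images, are not reproduced here).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = np.load("xray_features.npy")          # placeholder: flattened grayscale X-ray images
y = np.load("xray_labels.npy")            # placeholder: 0 = normal, 1 = pneumonia
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

models = {
    "LogReg": LogisticRegression(max_iter=2000),
    "KNN": KNeighborsClassifier(),
    "DecisionTree": DecisionTreeClassifier(random_state=42),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=42),
    "NaiveBayes": GaussianNB(),
    "SVM": SVC(),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(f"{name:12s} accuracy = {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```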
Weather Forecasting Accuracy Enhancement Using Random Forests Algorithm
In today's world, weather forecasting is essential for decision-making in a variety of fields, including agriculture, transportation, and disaster preparedness, yet making weather predictions is far from simple. Both in business and academia, data analytics is growing in importance as a tool for decision-making, and the adoption of data-driven concepts enhances graduates' marketability. Data analytics is the science of analysing gathered raw data in order to draw conclusions from that information. Many sectors have recently adopted data analytics; in hospitality, for example, the industry can collect data, find out where a problem lies, and manage to fix it. Nominal, ordinal, interval, and ratio levels are the four types of data measurement. Applications of data analytics can be found in many industries, including shipping and logistics, manufacturing, security, education, healthcare, and web development, and any business that wants to succeed in the modern digital economy should make analytics a core focus. To make such data meaningful, a transformation engine is used to integrate data of different types from several sources. Ironically, this has made analytics harder for businesses: as they employ more platforms and applications, the amount of data available has grown tremendously. This article focuses on different applications of data analytics in the modern world, with weather forecasting as the central case. Weather forecasting is a highly intricate and multifaceted process that draws upon data from various sources and relies on a combination of scientific studies and sophisticated models - here, the Random Forests algorithm - to decipher the vast amount of information available. © 2023 IEEE.
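A minimal sketch of a Random Forest forecasting step of the kind the title refers to, not the paper's code: the CSV of historical observations, its column names, and the next-day-temperature target are illustrative assumptions.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("weather_history.csv")                            # placeholder file
X = df[["humidity", "pressure", "wind_speed", "temp_yesterday"]]   # placeholder features
y = df["temp_tomorrow"]                                            # placeholder target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
rf = RandomForestRegressor(n_estimators=500, random_state=42).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, rf.predict(X_te)))
```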
Machine Learning Based Crime Identification System using Data Analytics
Poverty is known to be the mother of all crimes, and a vast percentage of people in India live below the poverty line. In India, the crime rate is rapidly rising, and police officers must spend a significant amount of time and personnel to identify suspects and criminals using current crime-investigation practice. This research presents a method for designing and implementing crime identification and criminal recognition systems for Indian metropolitan areas using data mining techniques. The method consists of data extraction, data pre-processing, clustering, Google Maps delegation and classification. The first module, data extraction, retrieves unformatted or unrecorded crime datasets from several online criminal sources covering 2000 to 2012. In the second module, data pre-processing cleans, integrates, and reduces the obtained criminal data into 5,038 organized crime occurrences, each represented by 35 predefined crime attributes. Access to the crime database is protected by safeguards against unauthorized use. The remaining modules handle crime detection, criminal identification and prediction, and crime verification, in that order. Crime detection is investigated with K-Means clustering, which iteratively builds two crime batches based on congruent criminal features, and Google Maps is used to enhance the k-means visualization. K-Nearest Neighbor classification is used for criminal identification and forecasting, and the results are then verified. The technique benefits society by helping investigative authorities in crime solving and criminal recognition, resulting in lower crime rates. © 2023 IEEE.
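A minimal sketch of the two-stage K-Means/K-NN idea under stated assumptions, not the authors' system: the file of pre-processed crime records, its numeric attribute columns, and the "crime_type" label are placeholders, and the Google Maps visualization step is omitted.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

df = pd.read_csv("crime_records_2000_2012.csv")          # placeholder: pre-processed crime occurrences
X = StandardScaler().fit_transform(df.drop(columns=["crime_type"]))
y = df["crime_type"]

# Stage 1: K-Means iteratively builds two crime batches from congruent criminal features
clusters = KMeans(n_clusters=2, n_init=10, random_state=42).fit_predict(X)
df["cluster"] = clusters                                  # cluster labels could be plotted on a map

# Stage 2: K-NN for criminal identification / forecasting, verified on held-out records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("k-NN accuracy:", accuracy_score(y_te, knn.predict(X_te)))
```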
Resource Aware Weighted Least Connection Load Balancing Technique in Cloud Computing
Cloud computing has become pivotal for most real-time applications. In cloud computing, the customer demands services with the best performance even when the application is expanding rapidly. It is therefore essential to manage resources effectively, because the number of users and services grows proportionately. The main aim of a load balancing technique is to allocate customers' requests to a large pool of resources efficiently. The problem is how to distribute the request load evenly among the compute nodes according to their capacity. There is thus a need for an effective load balancing technique for smooth continuity of operations in a distributed environment with heterogeneous server configurations. This paper presents a novel load balancing technique, namely Resource-Aware Weighted Least Connection load balancing, which addresses this problem efficiently. The essence of this work is to assign requests across multiple servers based on the requested resource and the number of connections presently served by each server. The standard score (z-score) technique is used to compute the weight of each node. Experiments were conducted using Cloud Analyst, a well-known cloud simulator built on CloudSim, and appropriate performance parameters were analysed to measure the effectiveness of the proposed technique. Future directions for extending the implemented technique are also identified. © 2023 IEEE.
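A simplified sketch of one plausible reading of this selection rule, not the authors' implementation: each server's weight is taken as the standard score (z-score) of its capacity, shifted to be positive, and each request goes to the server with the lowest ratio of active connections to weight. The capacity values and the shift constant are illustrative.

```python
from dataclasses import dataclass
import statistics

@dataclass
class Server:
    name: str
    capacity: float          # e.g. a normalized CPU+RAM score (placeholder metric)
    active_connections: int = 0

def assign_weights(servers):
    caps = [s.capacity for s in servers]
    mu, sigma = statistics.mean(caps), statistics.pstdev(caps) or 1.0
    # shift the z-scores so every weight is strictly positive
    return {s.name: (s.capacity - mu) / sigma + 3.0 for s in servers}

def pick_server(servers, weights):
    # weighted least connection: minimize active connections per unit of weight
    return min(servers, key=lambda s: s.active_connections / weights[s.name])

servers = [Server("s1", 8.0), Server("s2", 4.0), Server("s3", 2.0)]
weights = assign_weights(servers)
for _ in range(10):                       # simulate ten incoming requests
    target = pick_server(servers, weights)
    target.active_connections += 1
print([(s.name, s.active_connections) for s in servers])
```

Higher-capacity servers receive proportionally more of the simulated requests, which is the behaviour the weighting is meant to produce in a heterogeneous cluster.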
Martian Habitats: A Review
Establishing colonies in lunar and Martian environments is a primary step toward becoming a multi-planetary civilization. The Space Exploration Initiative (SEI), announced by President George H.W. Bush in 1989, was the first spark that ignited humanity's vision of establishing space settlements beyond low Earth orbit (LEO) (Marc M. Cohen, 2015). At present, private space companies (such as SpaceX and Blue Origin) are competing to be the first to colonise space. From the late 1980s to the present space race, many space habitat designs have been proposed for both lunar and Martian settlements to suit human factors, ensure protection from space radiation, and support day-to-day activities. This paper focuses only on Martian settlements, for the following reasons. While the Moon is closer to Earth than Mars, Mars has several advantages that make it an equal, if not better, candidate for colonisation. These include the presence of an atmosphere on Mars, its resource-rich nature, and a rotation period closer to Earth's (Mars has a 24.5-hour day, while a lunar day lasts roughly 28 Earth days) (Kamrun Narher Tithi, 2017). Another advantage is Mars's proximity to the main-belt asteroids, which increases the potential for space mining in the future. This paper therefore reviews the various Martian habitat designs proposed over the last one and a half decades in terms of their designs, construction and challenges. To do so, it is assumed that every step associated with delivering the habitats to the Martian environment is achievable, including propulsion systems for long-duration spaceflight and launch vehicles capable of lifting the habitats and fitting the habitat modules within them (Marc M. Cohen, 2015). Copyright 2023 by the International Astronautical Federation (IAF). All rights reserved.
Streamlined Deployment and Monitoring of Cloud-Native Applications on AWS with Kubernetes Prometheus Grafana
As organizations increasingly move their applications to the cloud, it becomes essential to have an efficient and cost-effective method for deploying and managing those applications. Manual deployment can be time-consuming, error-prone, and expensive, and managing logs and monitoring resources for each deployment can lead to even greater costs. To address these challenges, we propose implementing an automation strategy for deployment in the cloud. With automation, the deployment process can be streamlined and standardized across different cloud providers, reducing the potential for errors and saving time and resources. Furthermore, a central log system can be implemented to manage logs from different deployments in one location, providing a unified view of all logs and allowing easier troubleshooting and analysis. Automation can also be used to set up monitoring resources, such as alerts and dashboards, across different deployments. Overall, implementing an automation strategy for deployment in the cloud can help organizations save time and resources while improving their ability to manage and monitor their applications, and a centralized log management system further enhances these benefits by providing a unified view of logs from all deployments. © 2023 IEEE.
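A small automation sketch using the official Kubernetes Python client, offered as one illustrative way to script the deployment step rather than as the paper's tooling: it creates a Deployment on an existing cluster (e.g. EKS on AWS). The application name, image, replica count, and namespace are placeholders, and the Prometheus/Grafana monitoring stack would typically be installed separately (for example via its Helm chart).

```python
from kubernetes import client, config

config.load_kube_config()                      # uses the local kubeconfig for the target cluster
apps = client.AppsV1Api()

container = client.V1Container(
    name="web-app",
    image="registry.example.com/web-app:1.0",  # placeholder container image
    ports=[client.V1ContainerPort(container_port=8080)],
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "web-app"}),
    spec=client.V1PodSpec(containers=[container]),
)
spec = client.V1DeploymentSpec(
    replicas=3,
    selector=client.V1LabelSelector(match_labels={"app": "web-app"}),
    template=template,
)
deployment = client.V1Deployment(
    api_version="apps/v1", kind="Deployment",
    metadata=client.V1ObjectMeta(name="web-app"), spec=spec,
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
print("Deployment created; Prometheus can scrape it once service discovery is configured.")
```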
An Empirical and Statistical Analysis of Regression Algorithms Used for Mental Fitness Prediction
In today's focus on mental well-being, technology's capability to predict and comprehend mental fitness holds substantial significance. This study delves into the relationship between mental health indicators and mental fitness levels through diverse machine learning algorithms. Drawing from a vast dataset spanning countries and years, the research unveils concealed patterns shaping mental well-being. Precise analysis of key mental health conditions reveals their prevalence and interactions across demographics, and the inclusion of Disability-Adjusted Life Years (DALYs) offers a comprehensive view of mental health's broader impact. Through rigorous comparative analysis, algorithms including Linear Regression, Random Forest, Support Vector Regression, Gradient Boosting, K-Nearest Neighbors and Theil-Sen Regression are assessed for predictive accuracy, using mean squared error (MSE), root mean squared error (RMSE), and R-squared (R2) scores. Results show that MSE ranged from 0.030 to 1.277, RMSE from 0.236 to 1.130, and R2 scores from 0.734 to 0.993, with the Random Forest regressor achieving the highest accuracy. This study offers precise prognostications regarding mental fitness and lays the groundwork for effective tracking tools. As society endeavours to tackle the intricate issues surrounding mental health, this research facilitates well-informed interventions and individualized strategies, underscoring the noteworthy contribution of technology in shaping a more invigorating trajectory for the future. © 2023 IEEE.
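A minimal comparison sketch mirroring the metrics named above; the feature matrix and target are synthetic stand-ins generated with make_regression rather than the actual country-level mental-health dataset, and the model settings are illustrative.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, TheilSenRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

X, y = make_regression(n_samples=1000, n_features=8, noise=5.0, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "Linear": LinearRegression(),
    "RandomForest": RandomForestRegressor(n_estimators=300, random_state=42),
    "SVR": SVR(),
    "GradientBoosting": GradientBoostingRegressor(random_state=42),
    "KNN": KNeighborsRegressor(),
    "TheilSen": TheilSenRegressor(random_state=42),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    print(f"{name:16s} MSE={mse:.3f}  RMSE={np.sqrt(mse):.3f}  R2={r2_score(y_te, pred):.3f}")
```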
Object Detection with Augmented Reality
This study describes an artificial intelligence (AI)-based object identification system for detecting real-world items and superimposing digital information in Augmented Reality (AR) settings. The system evaluates the camera stream from an AR device for real-time recognition using deep learning algorithms trained on a collection of real-world items and their related digital information. Object recognition applications in AR include gaming, education, and marketing, which provide immersive experiences, interactive learning, and better product presentations, respectively. However, challenges remain, such as acquiring larger and more diverse datasets, developing deep learning algorithms robust to varying conditions, and optimizing performance on resource-constrained devices. The AI-based object recognition system demonstrates the potential to transform AR experiences across domains, while emphasizing the need for ongoing research and development to fully realize its capabilities. © 2023 IEEE.
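A minimal sketch of the recognition side only (an assumption-laden illustration, not the paper's system): a pretrained Faster R-CNN from torchvision is run on a single camera frame, here a placeholder image file, and the resulting boxes and labels are what an AR layer would use to anchor digital overlays.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# Placeholder frame; in a real pipeline this would come from the AR device's camera stream
frame = convert_image_dtype(read_image("camera_frame.jpg"), torch.float)
with torch.no_grad():
    detections = model([frame])[0]

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.7:                      # confidence threshold for stable AR anchors
        print(label.item(), round(score.item(), 3), box.tolist())
```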
Deep Learning for Arrhythmia Classification: A Comparative Study on Different Deep Learning Models
Arrhythmias, or irregular heart rhythms, are a major global health concern. Since arrhythmias can cause fatal conditions such as cardiac failure and stroke, they must be rapidly identified and treated. Traditional arrhythmia diagnostic techniques rely on manual electrocardiogram (ECG) image interpretation, which is time-consuming and frequently requires expertise. This research automates and improves the identification of heart problems, with a focus on arrhythmias, by utilizing the capabilities of deep learning, an advanced machine learning technique that excels at recognizing patterns in data. Specifically, we implement and compare a custom CNN, VGG19, and Inception V3 models, which classify ECG images into six categories, including normal heart rhythms and various types of arrhythmias. The VGG19 model excelled, achieving a training accuracy of 95.7% and a testing accuracy of 93.8%, showing the effectiveness of deep learning in the comprehensive diagnosis of heart disease. © 2023 IEEE.
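A transfer-learning sketch along the lines described, not the paper's tuned models: it assumes ECG images organised in class sub-folders under "ecg_images/", six rhythm classes, 224x224 inputs, and illustrative head layers and hyperparameters.

```python
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "ecg_images", validation_split=0.2, subset="training", seed=42,
    image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "ecg_images", validation_split=0.2, subset="validation", seed=42,
    image_size=(224, 224), batch_size=32)

base = tf.keras.applications.VGG19(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False                                   # freeze the pretrained convolutional base

model = tf.keras.Sequential([
    tf.keras.layers.Lambda(tf.keras.applications.vgg19.preprocess_input),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.4),
    tf.keras.layers.Dense(6, activation="softmax"),      # six ECG rhythm classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```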
An Early-Stage Diabetes Symptoms Detection Prototype using Ensemble Learning
Diabetes is one of the fastest-growing health issues that the whole world is facing. Recent research has shown that diabetes is spreading quickly in India: with more than 77 million sufferers, India is regarded as the diabetes capital of the world. The lifestyle and eating patterns of people who move from rural to urban settings change, which raises the prevalence of diabetes. Diabetes has been linked to consequences such as vision loss, renal failure, nerve damage, cardiovascular disease, foot ulcers, and digestive issues, and it can harm the blood vessels and nerves of a variety of organs. FPG (Fasting Plasma Glucose) is a popular test used to find out whether a person is diabetic. However, not all people consistently take medication or monitor their blood sugar levels regularly, and early detection of the disease is something people usually neglect. Technology has advanced considerably in healthcare, and many prototypes have already been built for the detection of diabetes. The prototype discussed in this paper is an ensemble learning approach for detecting diabetes at a very early stage. Ensemble learning, which combines predictions from multiple models, is used to make the outcome more robust and trustworthy. The overall accuracy achieved by the model is 96.54%, and XGBoost records a minimal execution time of only 2.77 seconds. © 2023 IEEE.
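An ensemble sketch under stated assumptions, not the prototype itself: a symptoms table with a binary "class" column (for example the UCI early-stage diabetes risk dataset) whose file name is a placeholder, with a soft-voting ensemble standing in for the multi-model prediction idea; the xgboost package must be installed.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

df = pd.read_csv("early_stage_diabetes_symptoms.csv")        # placeholder dataset
X = pd.get_dummies(df.drop(columns=["class"]))               # one-hot encode symptom columns
y = LabelEncoder().fit_transform(df["class"])                # e.g. Positive/Negative -> 1/0
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=300, random_state=42)),
        ("xgb", XGBClassifier(n_estimators=300, eval_metric="logloss", random_state=42)),
    ],
    voting="soft",                    # combine predicted probabilities from all models
)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
```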
Sustainable Technologies for Recycling Process of Batteries in Electric Vehicles
The effective management of batteries has always been a key concern because of the challenges that battery waste poses to the environment. This paper explores strategic perspectives on the sustainable management of batteries, incorporating modern techniques and scientific methodologies that give batteries a second-life application. It argues for a paradigm shift towards responsible battery use through the introduction of a circular economy for battery materials, while simultaneously keeping the ecological footprint of this fundamental innovation area in check. © 2023 IEEE.
An Improved Image Up-Scaling Technique using Optimize Filter and Iterative Gradient Method
In numerous real-time applications, image upscaling often relies on polynomial interpolation techniques to reduce computational complexity. However, in high-resolution (HR) images, such polynomial interpolation can lead to blurring artifacts due to edge degradation, and various edge-directed and learning-based systems cause similar blurring in high-frequency images. To mitigate these issues, directional filtering is employed after corner-averaging interpolation, involving two passes to complete the corner-average process. The initial step in low-resolution (LR) picture interpolation involves corner pixel refinement after averaging interpolation, and a directional filter is then applied to preserve the edges of the interpolated image. This process yields two distinct outputs: a base image and a detail image. An additional cuckoo-optimized filter is applied to the base image, focusing on texture features and boundary edges to recover neighbouring boundary edges, while a Laplacian filter enhances intra-region information within the detail image. To minimize reconstruction errors, an iterative gradient approach combines the optimally filtered image with the sharpened detail image, generating an enhanced HR image. Empirical results support the effectiveness of the proposed algorithm, indicating superior performance compared to state-of-the-art methods in terms of both visual appeal and measured parameters. The proposed method's superiority is demonstrated experimentally across multiple image datasets, with higher PSNR, SSIM, and FSIM values indicating better degradation reduction, improved edge preservation, and superior restoration capabilities, particularly when upscaling high-frequency regions of images. © 2023 IEEE.
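A simplified sketch of the general shape of such a pipeline (base/detail decomposition, Laplacian detail sharpening, and an iterative back-projection-style gradient refinement): it is not the paper's corner-averaging or cuckoo-optimized filter, and the input file, scale factor, and filter parameters are illustrative placeholders.

```python
import cv2
import numpy as np

scale = 2
lr = cv2.imread("input_lr.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0   # placeholder image
h, w = lr.shape

# Initial polynomial (bicubic) upscale and base/detail decomposition
hr = cv2.resize(lr, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)
base = cv2.bilateralFilter(hr, d=5, sigmaColor=0.1, sigmaSpace=5)   # edge-preserving base layer
detail = hr - base

# Sharpen the detail layer with a Laplacian (enhances intra-region information)
detail_sharp = detail - 0.5 * cv2.Laplacian(detail, cv2.CV_32F)
hr = base + detail_sharp

# Iterative gradient (back-projection) steps to reduce reconstruction error against the LR input
for _ in range(10):
    down = cv2.resize(hr, (w, h), interpolation=cv2.INTER_AREA)
    err_up = cv2.resize(lr - down, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)
    hr = hr + 0.5 * err_up

cv2.imwrite("output_hr.png", np.clip(hr * 255.0, 0, 255).astype(np.uint8))
```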
Students Perception of Chat GPT
ChatGPT, an artificial intelligence-based chatbot, was launched by OpenAI in November 2022. In the field of education, ChatGPT offers several benefits as well as challenges. It can be considered an advanced and powerful tool to enhance the learning experience, but it adds value to the education system only when it is used wisely, and it is important that the challenges be addressed. It may act as a good source for collating information, but researchers consistently advise that one's own perspective must be added when drawing inferences from the output generated by ChatGPT. Our study supports the finding that ChatGPT can be used for the generation of ideas or to learn a new language. It also becomes imperative for faculty members to motivate students to use ChatGPT and to add their own inferences as well. AI models like ChatGPT can provide assistance, answer questions and provide explanations on various topics, making learning more accessible and tailored to individual needs. With this paper, we aim to provide a more informed discussion around the usage of ChatGPT in education. © 2023 IEEE.
Machine Learning Based Spam E-Mail Detection Using Logistic Regression Algorithm
The rise of spam mail, or junk mail, has emerged as a significant nuisance in the modern digital landscape. This surge not only inundates users' email inboxes but also exposes them to security threats, including malicious content and phishing attempts. To tackle this escalating problem, we propose a machine learning-based strategy that employs Logistic Regression for accurate spam mail prediction. The goal is an effective and precise spam classification model that discerns between legitimate and spam emails. To achieve this, we harness a meticulously labeled dataset of emails, each classified as either spam or non-spam, and apply preprocessing techniques to extract pertinent features from the email content, encompassing word frequencies, email header data, and other textual attributes. The choice of Logistic Regression as the foundational classification algorithm is rooted in its simplicity, ease of interpretation, and suitability for binary classification tasks. The model is trained on the annotated dataset, and its hyperparameters are refined to optimize performance; feature engineering and dimensionality reduction methodologies further bolster the model's capacity to generalize to unseen data. Our evaluation encompasses rigorous experiments and comprehensive performance comparisons with other well-regarded machine learning algorithms tailored for spam classification, using accuracy, precision, recall, and the F1 score for a holistic appraisal of the model's efficacy. We further scrutinize the model's resilience against diverse forms of spam email and its capacity to generalize to new data instances. The findings demonstrate that the Logistic Regression-driven spam mail prediction model achieves competitive performance compared with cutting-edge methodologies. The model adeptly identifies and filters out spam emails, cultivating a more trustworthy and secure email environment for users, and its interpretability lends valuable insights into the pivotal features contributing to spam detection, aiding the identification of emerging spam patterns. © 2023 IEEE.
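A compact sketch of the described pipeline under stated assumptions: the CSV file and its "text" and "label" columns are placeholders, header-based features are omitted, and TF-IDF word frequencies stand in for the paper's full feature engineering.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("emails.csv")                       # placeholder: columns "text", "label" (spam/ham)
X_tr, X_te, y_tr, y_te = train_test_split(df["text"], df["label"],
                                          test_size=0.2, stratify=df["label"], random_state=42)

vec = TfidfVectorizer(stop_words="english", max_features=5000)   # word-frequency features
X_tr_vec, X_te_vec = vec.fit_transform(X_tr), vec.transform(X_te)

clf = LogisticRegression(max_iter=1000, C=1.0).fit(X_tr_vec, y_tr)
print(classification_report(y_te, clf.predict(X_te_vec)))        # accuracy, precision, recall, F1
```

Because the model is linear, inspecting `clf.coef_` alongside `vec.get_feature_names_out()` shows which terms push a message toward the spam class, which is the interpretability benefit the abstract highlights.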