Browse Items (2150 total)
An Enhanced Data-Driven Weather Forecasting using Deep Learning Model
Predicting the present climate and the evolution of the ecosystem is more crucial than ever because of the huge climatic shift occurring in nature. Weather forecasts are normally made by compiling numerical data on the current state of the atmosphere and applying scientific knowledge of atmospheric processes to project how the atmosphere will evolve. Rainfall forecasting is among the most popular research subjects today because of the complexity of the data processing involved and its applications in weather monitoring. Temperature data from four different states were collected, and deep learning methods were applied to predict temperature levels in the forthcoming months. The results show accuracies ranging from 92.5% to 97.2% across the different states' data. 2023 IEEE. -
Lane Detection using Kalman Filtering
Autonomous vehicles are the future of transportation. Modern high-tech vehicles use an array of cameras and sensors to assess their environment and aid the driver by generating various alerts. Noticing lane lines on the road is always a challenging task for drivers, and it becomes even more difficult at night. This research proposes a novel way to recognize lanes in a variety of environments, including day and night. First, various pre-processing techniques are used to enhance the video frames and filter out noise. Then, a sequence of lane-detection procedures is performed. Stable lane detection is achieved with a Kalman filter, which removes offset errors and predicts future lane lines. 2023 Elsevier B.V. All rights reserved. -
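The predict/update cycle behind such Kalman-filter lane smoothing can be sketched as follows; the constant-velocity lane model, noise covariances, and measurement sequence are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# State x = [lane offset (px), drift rate]; constant-velocity model.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # state transition (dt = 1 frame)
H = np.array([[1.0, 0.0]])       # only the offset is measured
Q = np.eye(2) * 1e-3             # process noise (assumed)
R = np.array([[4.0]])            # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle for a detected lane-offset measurement z."""
    x = F @ x                              # predict the next state
    P = F @ P @ F.T + Q
    y = z - H @ x                          # innovation (offset error)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y                          # corrected state estimate
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Smooth a noisy sequence of per-frame lane-offset detections.
detections = [100.0, 101.5, 99.0, 102.0, 100.5]
x = np.array([[detections[0]], [0.0]])     # initialize at first detection
P = np.eye(2)
for z in detections[1:]:
    x, P = kalman_step(x, P, np.array([[z]]))
```

The predicted state `F @ x` also gives the expected lane position one frame ahead, which is what lets such a filter bridge frames where detection fails.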
The Troubling Emergence of Hallucination in Large Language Models - An Extensive Definition, Quantification, and Prescriptive Remediations
The recent advancements in Large Language Models (LLMs) have garnered widespread acclaim for their remarkable emergent capabilities. However, the issue of hallucination has emerged in parallel as a by-product, posing significant concerns. While some recent endeavors have been made to identify and mitigate different types of hallucination, there has been limited emphasis on the nuanced categorization of hallucination and the associated mitigation methods. To address this gap, we offer a fine-grained discourse on profiling hallucination based on its degree, orientation, and category, along with strategies for alleviation. As such, we define two overarching orientations of hallucination: (i) factual mirage (FM) and (ii) silver lining (SL). To provide a more comprehensive understanding, both orientations are further sub-categorized into intrinsic and extrinsic, with three degrees of severity: (i) mild, (ii) moderate, and (iii) alarming. We also meticulously categorize hallucination into six types: (i) acronym ambiguity, (ii) numeric nuisance, (iii) generated golem, (iv) virtual voice, (v) geographic erratum, and (vi) time wrap. Furthermore, we curate HallucInation eLiciTation (HILT), a publicly available dataset comprising 75,000 samples generated using 15 contemporary LLMs, along with human annotations for the aforementioned categories. Finally, to establish a method for quantifying hallucination and to offer a comparative spectrum that allows us to evaluate and rank LLMs based on their vulnerability to producing hallucinations, we propose the Hallucination Vulnerability Index (HVI). Amidst the extensive deliberations on policy-making for regulating AI development, it is of utmost importance to assess which LLMs are more vulnerable to hallucination. We firmly believe that HVI holds significant value as a tool for the wider NLP community, with the potential to serve as a rubric in AI-related policy-making. In conclusion, we propose two solution strategies for mitigating hallucinations. 2023 Association for Computational Linguistics. -
Utilizing Machine Learning for Sport Data Analytics in Cricket: Score Prediction and Player Categorization
Cricket is a popular sport with complex gameplay and numerous variables that contribute to team performance. In recent years, sports analytics has gained significant attention, aiming to extract valuable insights from large volumes of cricket data. Cricket has many fans in India, and with such a strong following, many try to use their cricket intuition to predict the outcome of a match. A set of rules and a points system govern the game. The venue and the performance of each player greatly affect the outcome of a match, and the game is difficult to predict accurately because its various components are closely related. The Current Run Rate (CRR) approach is used to predict the final score of the first innings of a cricket match: the projected total is calculated by multiplying the average number of runs scored per over by the total number of overs. For ODI cricket, this method is unreliable, as the game can change very quickly regardless of the current run rate and may be decided in one or two overs. For more accurate score predictions, a system is needed that can better predict the outcome of an innings. This research paper explores the application of machine learning techniques to predict scores and classify players based on their roles in the squad. The study utilizes a comprehensive dataset comprising various attributes of cricket matches, including player statistics, match conditions, and historical performance. Linear Regression, Logistic Regression, Naive Bayes, Support Vector Machine (SVM), Decision Tree, and Random Forest regression models are employed to predict scores. Additionally, player categorization is performed using a classification approach. The results demonstrate the effectiveness of machine learning techniques in enhancing performance analysis and decision-making in the game of cricket. 2023 IEEE. -
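The CRR baseline described above fits in a few lines; the match situation below is illustrative.

```python
# Sketch of the Current Run Rate (CRR) projection the paper uses as a
# baseline for first-innings score prediction.

def projected_score(runs_so_far: int, overs_bowled: float, total_overs: int = 50) -> int:
    """Project the innings total by extrapolating the current run rate."""
    crr = runs_so_far / overs_bowled      # runs per over so far
    return round(crr * total_overs)

# Example: 120 runs after 20 overs of a 50-over ODI innings.
score = projected_score(120, 20.0)        # CRR = 6.0 -> 6.0 * 50 = 300
```

As the abstract notes, this extrapolation ignores wickets in hand and momentum swings, which is why the paper turns to regression models instead.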
A Dynamic Anomaly Detection Approach for Fault Detection on Fire Alarm System Based on Fuzzy-PSO-CNN Approach
Early detection is crucial due to the catastrophic threats to life and property posed by fires. Sensory systems used in fire alarms are prone to false alerts and breakdowns, endangering lives and property, so it is essential to check the functionality of smoke detectors often. Traditional maintenance plans for such systems have been periodic; however, because they do not account for the condition of the fire alarm sensors, they are often carried out not when necessary but on a predefined, conservative timetable. This paper describes a data-driven online anomaly detection approach for smoke detectors, which analyzes the behavior of these devices over time and looks for aberrant patterns that may indicate a failure, to aid the development of a predictive maintenance strategy. The proposed procedure consists of three steps: preprocessing, segmentation, and model training. The pre-processing unit enhances data quality by compensating for sensor drifts, sample-to-sample volatility, and disturbances (noise), and normalizes the data. Segmentation differentiates the smoke source from the background. Following segmentation, a Fuzzy-PSO-CNN is trained. The proposed method outperforms both CNN and PSO, two of the most widely used alternatives. 2023 IEEE. -
Enhancing Customer Experience and Sales Performance in a Retail Store Using Association Rule Mining and Market Basket Analysis
The retail business grows steadily year after year and employs a vast number of people globally, especially with the soaring popularity of online shopping. The competitive character of this fast-paced sector has become increasingly evident in recent years. Customers want to blend the advantages of old purchasing habits with the ease of use of new technology, so retailers must guarantee that product quality is maintained when satisfying customer demands and requirements. This research paper demonstrates the potential value of advanced data analytics techniques in improving customer experience and sales performance in a retail store. The Apriori, FP-Growth, and Eclat algorithms are applied to real-time transactional data to discover associations and patterns. Support, confidence, and lift parameters are used: the Apriori algorithm generates candidate itemsets of increasing length and prunes those that fail to meet the required support threshold. We found that lift values are highest for frozen meat, milk, and yogurt; if a customer buys any two of these items together, there is a strong chance the customer will buy the third item from the group. High confidence scores were found for item pairs such as semi-finished bread and milk, suggesting these products should be sold together, followed by packaged food and rolls. As retailers continue to face increasing competition and pressure to improve their operations, the aforementioned techniques may provide a useful tool for understanding consumer buying habits and tastes and for using that knowledge to make data-driven decisions that optimize product placement, enhance customer satisfaction, and increase sales. 2023 IEEE. -
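The support, confidence, and lift measures these algorithms rank rules by can be sketched as follows; the toy transactions are illustrative, not the paper's retail data.

```python
# Minimal sketch of the rule-quality measures behind Apriori-style mining.
transactions = [
    {"milk", "yogurt", "frozen meat"},
    {"milk", "yogurt"},
    {"milk", "bread"},
    {"bread", "rolls"},
    {"milk", "yogurt", "frozen meat", "bread"},
]
N = len(transactions)

def support(itemset):
    """Fraction of transactions containing every item in the set."""
    return sum(itemset <= t for t in transactions) / N

def confidence(antecedent, consequent):
    """P(consequent | antecedent): rule reliability."""
    return support(antecedent | consequent) / support(antecedent)

def lift(antecedent, consequent):
    """Confidence normalized by the consequent's baseline support (>1 = positive association)."""
    return confidence(antecedent, consequent) / support(consequent)

rule_conf = confidence({"milk", "yogurt"}, {"frozen meat"})
rule_lift = lift({"milk", "yogurt"}, {"frozen meat"})
```

Apriori's pruning step relies on support being monotone: no superset of an itemset below the support threshold can itself clear the threshold, so whole branches of candidates are dropped at once.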
Prior Cardiovascular Disease Detection using Machine Learning Algorithms in Fog Computing
The term latent disease refers to an infection that does not show symptoms but remains in the body indefinitely. This paper proposes a novel methodology for addressing latent diseases in machine learning by integrating fog computing techniques. There is a link between HIV and heart disease: when a person progresses to a later stage of HIV, plaque builds up, causing cholesterol deposits to form. Plaque development narrows the inside of the arteries over time, which may stimulate the release of numerous heat shock proteins and immune complexes into the bloodstream, potentially leading to heart disease. Heart disease has long been considered a significant life-threatening illness in humans, driven by a range of factors including unhealthy eating, lack of physical exercise, excess weight, tobacco use, and other hazardous lifestyle choices. Several classifiers are used to evaluate precision, including Support Vector Machine, K-Nearest Neighbor, Decision Tree, and Random Forest. After classification, the proposed model splits patients into disease groups based on their risk factors, assisting doctors in analyzing the risks associated with their patients. 2023 IEEE. -
A Comparative Analysis of LSB & DCT Based Steganographic Techniques: Confidentiality, Contemporary State, and Future Challenges
In order to maintain anonymity and security, steganography conceals confidential data within what seems like harmless digital material. Several steganographic methods have been devised over time, but those centered on the least significant bit (LSB) and the discrete cosine transform (DCT) have drawn the most consideration. In this study, these two common steganographic methods are compared and contrasted, with an emphasis on the secrecy they can provide, their contemporary usage, and potential future difficulties. The LSB-based method hides data directly in the least significant bits of the cover pixels; as an alternative, the DCT-based method uses the frequency-domain properties of the cover media to obfuscate hidden information. Since it spreads the concealed information across several frequency coefficients, it provides greater security than LSB-based techniques. The resilience and imperceptibility of the concealed data are improved by a variety of DCT-based algorithms, such as modified quantization and matrix encoding approaches, which we explore in detail. We also give a general summary of both approaches' current state in terms of their application, constraints, and areas in which they may be used. We evaluate the benefits and drawbacks of each approach, considering elements such as payload size, computational difficulty, and detection resistance. 2023 IEEE. -
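A minimal sketch of the LSB embedding idea, assuming a grayscale cover represented as a flat list of byte values:

```python
# LSB steganography: hide one secret bit in the least significant bit
# of each cover byte (e.g. a pixel intensity).

def embed(cover: list[int], bits: list[int]) -> list[int]:
    """Overwrite the LSB of each cover byte with a secret bit."""
    return [(byte & ~1) | bit for byte, bit in zip(cover, bits)]

def extract(stego: list[int], n: int) -> list[int]:
    """Read back the n embedded bits."""
    return [byte & 1 for byte in stego[:n]]

cover = [200, 131, 54, 97]       # pixel intensities (0-255), illustrative
secret = [1, 0, 1, 1]
stego = embed(cover, secret)     # each byte changes by at most 1
```

The at-most-one change per byte is what makes LSB embedding imperceptible, and also what makes it fragile: any lossy re-encoding of the cover destroys the payload, which is the weakness DCT-domain methods address.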
Unraveling Campus Placement Success Integrating Exploratory Insights with Predictive Machine Learning Models
The dynamics of campus placements have garnered considerable attention in recent years, with educational institutions, students, and employers all keenly invested in understanding the factors that drive successful recruitment. This surge in interest stems from the potential implications for academic curricula, student preparation, and hiring strategies. In this study, we aimed to unravel the myriad factors that influence a student's placement success, drawing from a comprehensive dataset detailing a range of academic and demographic attributes. Our methodology combined thorough exploratory data analysis with advanced predictive modeling. The exploratory phase unveiled notable patterns, particularly highlighting the roles of gender, academic performance, degree specialization, and MBA specialization in placement outcomes. In the predictive modeling phase, the spotlight was on state-of-the-art machine learning models, with particular emphasis on their capacity to forecast placement success. Notably, algorithms like Logistic Regression and Support Vector Machines not only confirmed the insights from our exploratory analysis but also showed remarkable predictive prowess, with accuracy scores nearing perfection. These findings demonstrate the capabilities of machine learning in the academic and recruitment spheres and emphasize the enduring importance of core academic achievement in influencing placement outcomes. As a prospective direction, future research might benefit from examining how placement trends evolve over time and integrating qualitative insights to provide a holistic view of the campus recruitment process. 2023 IEEE. -
An Efficient and Robust Explainable Artificial Intelligence for Securing Smart Healthcare System
The advent of IoT technologies has had a tremendous impact on the healthcare sector, enabling efficient monitoring of patients and the use of the collected data for better analytics. Since every activity related to a patient's health is monitored, the focus of smart healthcare applications has significantly shifted from service provision to security. As most healthcare applications are automated, security plays a vital role, and machine learning has been widely used in securing smart healthcare systems. The major challenge is that these applications require high-quality labeled images, which are difficult to acquire from real-time security applications; acquiring them is also a highly time-consuming and costly process. To address these constraints, in this paper we define an efficient and robust explainable artificial intelligence technique that takes a small quantity of labeled data to train and deploy the security countermeasure for targeted healthcare applications. The proposed approach enhances security through the detection of drifting samples with explainability. It is observed that the proposed approach improves accuracy, fidelity, and explanation measures, and proves considerably resistant to numerous security threats. 2023 IEEE. -
Area and Energy Efficient Method Using AI for Noise Cancellation in Ear Phones
Adaptive filters are suitable for most Digital Signal Processing (DSP) applications such as channel equalization, noise cancellation, echo cancellation, channel estimation, and system identification. Nowadays, due to advancements in semiconductor technology, the need for Active Noise Cancellation (ANC) headphones in compact devices has increased. The main idea behind the proposed work is to design an area- and energy-efficient novel adaptive filter suitable for in-ear headphones by combining Normalized Least Mean Square (NLMS) and Block LMS (BLMS). The proposed filter is designed and simulated using Xilinx ISE 13.2. The simulation results show that the proposed design mitigates unwanted noise in various frequency bands. 2023 IEEE. -
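The NLMS update at the core of such an adaptive noise canceller can be sketched in software as follows (the paper targets an FPGA design, but the arithmetic is the same); the noise path `h` and both signals are synthetic assumptions.

```python
import numpy as np

def nlms(x, d, taps=4, mu=0.5, eps=1e-6):
    """Adapt a FIR filter so its output tracks the desired signal d."""
    w = np.zeros(taps)
    errors = []
    for n in range(taps, len(x)):
        xv = x[n - taps + 1:n + 1][::-1]       # newest sample first
        y = w @ xv                             # filter output
        e = d[n] - y                           # residual error
        w += (mu / (eps + xv @ xv)) * e * xv   # normalized LMS step
        errors.append(e)
    return w, np.array(errors)

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)            # reference noise at the outer mic
h = np.array([0.6, -0.3, 0.1, 0.05])     # unknown acoustic noise path (assumed)
d = np.convolve(x, h)[:len(x)]           # noise as heard inside the ear
w, errors = nlms(x, d)                   # w converges toward h
```

Normalizing the step size by the input power `xv @ xv` is what distinguishes NLMS from plain LMS and keeps convergence stable across signal levels; BLMS instead applies one update per block of samples, trading adaptation speed for lower hardware cost.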
Enhancement of Accuracy Level in Parking Space Identification by using Machine Learning Algorithms
Parking space identification is a crucial component in the development of intelligent transportation systems and smart cities. Accurate detection of parking spaces in urban areas can significantly improve traffic management, reduce congestion, and enhance overall parking efficiency. The proposed model focuses on enhancing the accuracy of parking space identification through the use of Support Vector Machine (SVM) algorithms. The methodology involves the following steps. First, a dataset comprising labelled parking space images is collected and pre-processed to ensure optimal quality and consistency. Next, feature extraction techniques are applied to capture relevant spatial and textural information from the images, enabling the creation of informative feature vectors. These feature vectors are then used to train an SVM model, which is well known for its capability to handle complex classification tasks. To measure the effectiveness of the SVM-based approach, a comprehensive set of experiments is carried out using real-world parking data, with performance metrics used to analyze the accuracy of parking space identification. A comparative analysis of the proposed SVM approach against other popular machine learning algorithms demonstrates its superiority: the SVM-based model achieves a significantly higher accuracy level in parking space identification than the other existing algorithms. 2023 IEEE. -
Evaluating the Effectiveness of a Facial Recognition-Based Attendance Management System in a Real-World Setting
Face recognition technology has been extensively used in multiple verticals of security, surveillance, and human-computer interaction. Traditional attendance systems have relied on conventional techniques such as manual sign-ins, identity cards, or biometric verification. Thanks to developments in computer vision and machine learning, however, face recognition systems have become a popular way to track attendance. The construction of an attendance registration application is the main topic of this research study, which also offers a thorough overview of facial recognition attendance systems. The study seeks to shed light on the benefits, drawbacks, and potential applications of these fast-developing technologies. Face recognition technology may be integrated into attendance systems to increase productivity, accuracy, and user comfort; however, issues such as privacy concerns and technological constraints must be resolved. With predicted improvements in machine learning algorithms and hardware capabilities, face recognition attendance systems have a bright future. By examining these aspects, this research article contributes to a deeper understanding and successful application of facial recognition technology in attendance systems. 2023 IEEE. -
An Intelligent Approach for Fault Detection in Solar Photovoltaic Systems based on BERT-BiGRU Network
Problem identification and diagnosis in large-scale photovoltaic (PV) plants is expected to grow more difficult as more plants of increasing capacity come online. To keep large-scale PV installations safe, reliable, and productive, automatic identification and localization of any mal-operation among thousands of PV modules is necessary. To identify problems in PV plants, the suggested method compares the 'residuals' (fault indicator signals) generated by each string against a predetermined threshold. The method relies on three distinct processes: data preparation, feature extraction, and model training. Preprocessing employs Transform Invariant Low-rank Textures (TILT). In feature extraction, the most useful and efficient measurements are kept, while less important ones are discarded using the Reduced Kernel PCA technique. Model training is then performed with BERT-BiGRU. The proposed method is clearly superior to the two leading alternatives, BERT and GRU, achieving a 97.36% success rate. 2023 IEEE. -
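The residual-versus-threshold check the method builds on can be sketched as follows; the per-string powers and the threshold value are illustrative assumptions.

```python
# Flag PV strings whose measured output falls too far below a model's
# expected output: the residual is the fault indicator signal.

def flag_faulty_strings(measured, expected, threshold=0.15):
    """Return indices of strings whose relative residual exceeds the threshold."""
    faults = []
    for i, (m, e) in enumerate(zip(measured, expected)):
        residual = abs(e - m) / e        # relative power shortfall
        if residual > threshold:
            faults.append(i)
    return faults

measured = [4.8, 4.9, 3.1, 5.0]          # kW per string (illustrative)
expected = [5.0, 5.0, 5.0, 5.0]          # model-predicted output
faults = flag_faulty_strings(measured, expected)   # string 2 under-produces
```

Comparing each string against its own expected output, rather than a plant-wide average, is what localizes the fault to a specific string among thousands of modules.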
Predicting Work Environment and Job Environment Among Employees using Transfer Learning Approach
Today's enterprises face numerous challenges as a result of the world's rapid evolution. Maintaining a content workforce is crucial to a company's success and survival in today's fast-paced business environment. The efficacy, productivity, efficiency, and dedication of a company's staff are directly associated with the company's capacity to meet the needs of its employees in the workplace. The focus of this system is to identify the factors that contribute to a satisfying work environment for the participants. The suggested methodology comprises three steps: preprocessing, feature selection, and model training. Data is normalized as a preliminary processing step. The multiple elements assessing company culture and worker satisfaction were consolidated using Principal Components Analysis (PCA) in the feature selection phase. Once features have been selected, KNN-SVM is used for model training. Compared with the two most popular alternatives, SVM and KNN, the proposed technique performs better. 2023 IEEE. -
A Predictive Modelling of Factors Influencing Job Satisfaction Through a CNN-BiGRU Algorithm
The term 'job satisfaction' originated in the fields of humanities, psychology, and sociology. According to psychology, it is a condition in which a worker experiences his circumstances emotionally and responds with either pleasure or suffering. It is regarded as a variable in various sociological categories pertaining to how each employee assesses and thinks about his work. Because a satisfied employee contributes to and builds upon an organization's success, job satisfaction is intimately tied to an employee's performance and the quality of their work; as a result, job satisfaction directly correlates with an organization's success. The proposed strategy incorporates data preprocessing, feature selection, and model training. Handling missing values is a common part of data preparation. Feature selection is performed using the ANOVA F-Test filter, the Chi-Square filter, and the full Data Set Construction procedure. The model's efficacy is evaluated with a CNN-BiGRU. The proposed technique is compared with two other models, BiGRU and CNN, and is shown to outperform both. 2023 IEEE. -
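The Chi-Square filter mentioned above scores each categorical feature by how far its contingency table with the label deviates from independence; a minimal sketch on illustrative data:

```python
from collections import Counter

def chi_square(feature, label):
    """Chi-square statistic of the feature/label contingency table."""
    n = len(feature)
    f_counts = Counter(feature)
    l_counts = Counter(label)
    obs = Counter(zip(feature, label))
    stat = 0.0
    for f, fc in f_counts.items():
        for l, lc in l_counts.items():
            expected = fc * lc / n                     # count under independence
            stat += (obs[(f, l)] - expected) ** 2 / expected
    return stat

# A feature perfectly aligned with the label scores high ...
dependent = chi_square(["a", "a", "b", "b"], [1, 1, 0, 0])
# ... an unrelated feature scores zero.
independent = chi_square(["a", "b", "a", "b"], [1, 1, 0, 0])
```

Ranking features by this statistic and keeping the top scorers is the filter step; the ANOVA F-test plays the same role for numeric features.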
Smell Technology: Advancements and Prospects in Digital Scent Technology and Fragrance Algorithms
Smell technology, a rapidly expanding sector of the scent business, aims to digitally replicate and transmit aromas. Its applications include virtual reality, e-commerce, and healthcare. Recent advances in the field include the creation of fragrance algorithms and the use of artificial intelligence to create more realistic fragrances. Fragrance algorithms are mathematical models that predict the scent of a fragrance based on its chemical composition. They might be used in the perfume industry to streamline the production of perfumes and do away with the need for expensive trial-and-error methods. Artificial intelligence is also being used to create digital representations of fragrances that closely resemble the real thing by analysing the chemical composition of actual odours. A possible benefit of this technology in the healthcare sector is that synthetic odours may mimic the scents of diseases and aid physicians in making more precise diagnoses. Additionally, some companies are developing small devices that can be connected to computers or mobile devices to emit odours on demand, providing users of virtual reality, gaming, and online shopping with a more realistic experience. In spite of these advancements, it is still exceedingly challenging to recreate the complexity of natural scents, which can include hundreds of different components. With more research and development, fragrance technology still holds a great deal of promise for the future. 2023 IEEE. -
XGBoost Classification of XAI based LIME and SHAP for Detecting Dementia in Young Adults
As technology progresses at a fast pace, it is imperative that it be used in the field of medicine for the early detection and diagnosis of dementia. Dementia affects humans by deteriorating cognitive functions, and although many algorithms have been used to detect it, these algorithms remain a black box to the medical fraternity, which is still dubious about the nature and credibility of their predictions. To ease this issue, the use of explainable artificial intelligence has been proposed and implemented in this paper, making it easy to understand why and how a model gives a particular output. The XGBoost classification algorithm is used, giving an accuracy of 93.33%, and to understand its predictions, two separate algorithms, Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), are applied. These algorithms are compared based on the type of explanation they provide for the same input; in this way, weaknesses of the LIME algorithm were identified at certain intervals with respect to the clinically important features of the dataset. Both algorithms make it easy for medical practitioners to understand the dominating factors behind a predicted output, thereby helping to eliminate the black-box nature of dementia detection. 2023 IEEE. -
A Stacked BiLSTM based Approach for Bus Passenger Demand Forecasting using Smart Card Data
Demand forecasting is crucial in the business sector. Despite the inherent uncertainty of the future, it is essential for any firm to be able to accurately predict the market for both short- and long-term planning in order to place itself in a profitable position. The proposed approach focuses on the passenger transport sector because it is particularly vulnerable to fluctuations in consumer demand, its seats being perishable commodities. At every stage of the planning process, from initial network design to final pricing of the inventory for each vehicle on a route, an accurate prediction of demand is essential. Forecasting passenger demand is crucial since passenger transportation is responsible for a substantial share of global commerce. The suggested method relies on three distinct techniques: data preparation, feature selection, and model training. Data transformation, cleansing, and reduction are the three sub-processes that make up preprocessing. For feature selection, partition-based clustering algorithms such as k-means are the norm. The models are then trained with a stacked BiLSTM. The proposed method is demonstrably superior to both LSTM and BiLSTM, the two most common competing approaches, achieving a success rate of 98.45 percent. 2023 IEEE. -
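The partition-based clustering step can be sketched with a plain k-means loop; the boarding-count data and the deterministic initialization are illustrative assumptions, not the paper's smart-card records.

```python
import numpy as np

def kmeans(points, k=2, iters=20):
    """Plain Lloyd's algorithm: assign to nearest centroid, then re-average."""
    centroids = points[:k].copy()              # deterministic init (assumed)
    for _ in range(iters):
        # Distance of every point to every centroid.
        d = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        centroids = np.array([points[labels == j].mean(axis=0)
                              for j in range(k)])
    return labels, centroids

# Two obvious groups of (weekday, weekend) boarding counts, in thousands.
pts = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],
                [8.0, 8.0], [8.2, 7.9], [7.9, 8.1]])
labels, centroids = kmeans(pts)
```

Grouping stops or time slots with similar demand profiles this way yields compact cluster features that can be fed to the downstream sequence model.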
Enhancing Software Cost Estimation using COCOMO Cost Driver Features with Battle Royale Optimization and Quantum Ensemble Meta-Regression Technique
This research suggests a unique method for improving software cost estimates by combining Battle Royale Optimization (BRO) and the Quantum Ensemble Meta-Regression Technique (QEMRT) with COCOMO cost driver features. The strengths of these three techniques are combined to increase the accuracy of software cost estimation. The COCOMO model is a popular software cost-estimation methodology that considers several cost factors. BRO is a metaheuristic algorithm, inspired by the battle royale video game genre, that mimics the natural selection of the fittest individuals. QEMRT is a machine learning approach that combines the benefits of quantum computing and ensemble learning. Using a correlation-based feature selection technique, we first identified the most important COCOMO cost drivers. We then used BRO to optimize the weights of these cost drivers to obtain the best-fit model, and QEMRT was applied to meta-regress the optimized model to further increase estimation accuracy. The suggested method was tested on two publicly available software cost estimation datasets, and the outcomes were compared with other cutting-edge approaches. The experimental findings demonstrate that our strategy outperformed the other approaches in accuracy, robustness, and stability. In conclusion, the suggested method offers a viable strategy for improving the accuracy of software cost estimation, which can help software development organizations improve project planning and resource allocation. 2023 IEEE.
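The COCOMO relationship those cost-driver features feed into is the intermediate-model effort equation; a minimal sketch with standard organic-mode constants (the driver multipliers shown are standard COCOMO table values, not the paper's BRO-optimized weights):

```python
import math

def cocomo_effort(kloc: float, drivers: dict[str, float],
                  a: float = 3.2, b: float = 1.05) -> float:
    """Effort in person-months: a * KLOC^b * EAF (organic-mode constants)."""
    eaf = math.prod(drivers.values())     # effort adjustment factor
    return a * kloc ** b * eaf

drivers = {           # standard multipliers (nominal rating = 1.00)
    "RELY": 1.15,     # high required reliability
    "CPLX": 1.15,     # high product complexity
    "ACAP": 0.86,     # high analyst capability
}
effort = cocomo_effort(32, drivers)       # a 32-KLOC project
```

In the paper's setup, the per-driver multipliers are exactly the weights BRO searches over, with QEMRT regressing the remaining error of the optimized model.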