Browse Items (11810 total)
Dark matter, dark energy, and alternate models: A review
The nature of dark matter (DM) and dark energy (DE), which together are believed to constitute about 95% of the energy density of the universe, is still a mystery. There is no shortage of ideas regarding the nature of both. While some candidates for DM are clearly ruled out, there is still a plethora of viable particles that fit the bill. In the context of DE, while current observations favour a cosmological constant picture, there are other competing models that are equally likely. This paper reviews the different possible candidates for DM, including exotic candidates, and their possible detection. This review also covers the different models for DE and the possibility of unified models for DM and DE. Keeping in mind the negative results in some of the ongoing DM detection experiments, we also review the possible alternatives to both DM and DE (such as MOND and modifications of general relativity) and possible means of observationally distinguishing between the alternatives. 2017 COSPAR -
Data acquisition using NI LabVIEW for test automation
In a fighter aircraft, the pilot's safety is of utmost importance, and pressure sensing in the pilot's mask is essential for ensuring it. This innovative solution ensures swift and accurate measurement of pressure, minimizing the risk of potential hazards and enhancing military aviation safety. Additionally, it provides a robust and reliable solution that can withstand the harsh and challenging conditions often encountered in the field. This chapter explores the advanced capabilities and benefits of utilizing the National Instruments USB-6363, programmed with LabVIEW, in military aviation, highlighting its potential for revolutionizing pressure measurement processes in this critical field. It describes a research study on developing a pressure-sensing system for pilot masks using the NI USB-6363 and LabVIEW. 2023, IGI Global. -
Data Analysis and Machine Learning Observation on Production Losses in the Food Processing Industry
Food wastage, and capturing lineage from production to consumption, is a major concern. Yield, storage and transportation have evolved considerably alongside manufacturing and automation, leading to technical advancements in the food processing industry. Even so, losses are generally observed in crop production; sometimes these are minimal and ignored, but in other cases they are huge and are becoming a threat to both producers and consumers. Here we considered data related to dairy products and analysed the production losses, especially during processing in the treating unit. Literature on the parameters and the associated data analysis, presented graphically, is provided in the appropriate sections of the paper. Linear regression and correlation were employed with a view to incorporating machine learning techniques for understanding production losses. Karl Pearson's correlation provides an observation of the association between parameters, which are desired to be loosely coupled when employing the proposed newer methodology. 2023 IEEE. -
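As an illustration of the correlation analysis the abstract describes, here is a minimal sketch of Karl Pearson's coefficient in plain Python; the intake/loss figures are hypothetical toy values, not data from the paper:

```python
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Karl Pearson's correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical processing-unit readings: intake volume vs. recorded loss
intake = [100, 120, 140, 160, 180]
loss = [2.1, 2.5, 3.0, 3.4, 3.9]
r = pearson_r(intake, loss)  # close to +1: strongly coupled parameters
```

A value of r near +1 or -1 signals tightly coupled parameters; the abstract's methodology prefers parameters that are loosely coupled (r near 0).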
Data analysis in road accidents using ann and decision tree
Road accidents have become one of the leading causes of death globally. Reports indicate that, apart from wars and diseases, road accidents are a major cause of the high death rate. The World Health Organization's (WHO) Global Status Report on Road Safety 2015 states that over 1.24 million people die every year due to road accidents worldwide, and predicts that by 2020 this number could increase by 20-50%. This can affect the GDP of a country, and for developing countries the effect can be adverse. This paper shows the use of data analytics techniques to build a prediction model for road accidents, so that such models can be used in real-time scenarios to frame policies and avoid accidents. The paper identifies the attributes which have a high impact on the accident-severity class label. IAEME Publication. -
Data Analysis on Hypothyroid Profiles using Machine Learning Algorithms
Machine learning algorithms enable computers to learn from data and continuously enhance performance without explicit programming, and they have significantly improved the accuracy and efficacy of thyroid diagnosis. This study identified and analysed the usefulness of several machine learning algorithms in predicting hypothyroid profiles. The main goal was to see how adequately the algorithms assessed whether a patient had hypothyroidism. Age, sex, health, pregnancy, and other factors are among the many factors considered. Extreme Gradient Boosting Classifier, Logistic Regression, Random Forest, Long Short-Term Memory (LSTM), and K-Nearest Neighbors are some of the machine learning methods used. Two datasets were used and analysed; the data on hypothyroidism was gathered via DataHub and Kaggle. The algorithms were evaluated on the collected data using metrics such as Precision, Accuracy, F1 score and Recall. The findings showed that the Extreme Gradient Boosting classifier outperformed the others regarding F1 score, accuracy, precision, and recall. The research demonstrated how machine learning algorithms might predict thyroid profiles and identify thyroid-related illnesses. 2023 IEEE. -
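The evaluation metrics named in the abstract reduce to simple ratios over the binary confusion counts; a minimal sketch with hypothetical counts (not the paper's results):

```python
def prf(tp, fp, fn):
    """Precision, recall and F1 score from binary confusion counts."""
    precision = tp / (tp + fp)          # of predicted positives, how many are right
    recall = tp / (tp + fn)             # of true positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Hypothetical confusion counts for a hypothyroid classifier
p, r, f = prf(tp=90, fp=10, fn=20)  # p = 0.9, r = 9/11, f = 6/7
```

Comparing classifiers on F1 rather than accuracy alone matters here because hypothyroid datasets are typically imbalanced (far more negative cases than positive ones).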
Data Analytics and ML for Optimized Performance in Industry 4.0
Industry 4.0, the fourth industrial revolution, has revolutionized manufacturing and production systems by integrating Data Analytics (DA) and Machine Learning (ML) techniques. Predictive maintenance, which predicts equipment malfunctions and schedules maintenance in advance, is a crucial application of DA and ML within Industry 4.0. It reduces downtime, improves productivity, and lowers costs. Demand forecasting, which uses historical data and ML algorithms to predict future product demand, and anomaly detection, which identifies abnormal patterns or events within large datasets, are also critical applications of DA and ML in Industry 4.0. They enhance operational efficiency and reduce costs. However, the adoption of DA and ML presents several challenges for organizations, including infrastructure, personnel, ethical, and privacy concerns. To realize the benefits of DA and ML, companies must invest in appropriate hardware and software and develop the necessary expertise. They must also handle data responsibly and transparently to ensure privacy and ethical standards. Despite these challenges, the integration of DA and ML in Industry 4.0 is critical for optimized performance, improved productivity, and cost savings. 2024 selection and editorial matter, Nidhi Sindhwani, Rohit Anand, A. Shaji George and Digvijay Pandey; individual chapters, the contributors. -
Data Analytics for Social Microblogging Platforms
Data Analytics for Social Microblogging Platforms explores the nature of microblog datasets, also covering the larger field which focuses on information, data and knowledge in the context of natural language processing. The book investigates a range of significant computational techniques which enable data and computer scientists to recognize patterns in these vast datasets, including machine learning, data mining algorithms, rough set and fuzzy set theory, evolutionary computations, combinatorial pattern matching, clustering, summarization and classification. Chapters focus on basic online microblogging data-analysis research methodologies, community detection, summarization application development, performance evaluation and their applications in big data. 2023 Elsevier Inc. All rights reserved. -
Data and Its Dimensions
In current times, data is the biggest economic opportunity. Studies observe that the world generates about 2.5 quintillion bytes of data every day, with the average person contributing about 1.7 MB of data per second. Every individual has a good appetite for data, as it gives immense insight to explore and expand business. With the invention of smart devices and innovation in the field of connectivity, such as 4G-5G mobile networks and Wi-Fi, the generation and consumption of data are steadily increasing. These smart devices continuously generate data, leading to a bigger pool for better decision-making. This chapter presents data, its various forms and sources, and the concept of Data Science; it discusses how the ownership and value of data are decided; and it also highlights the use, abuse, and overuse of data, along with data theft, and a case study representing a data breach. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
Data Augmentation for Handwritten Character Recognition of MODI Script Using Deep Learning Method
Deep learning-based methods such as convolutional neural networks are extensively used for various pattern recognition tasks. To successfully carry out these tasks, a large amount of training data is required. The scarcity of a large number of handwritten images is a major problem in handwritten character recognition; this problem can be tackled using data augmentation techniques. In this paper, we have proposed a convolutional neural network-based character recognition method for MODI script in which the data set is subjected to augmentation. The MODI script was an official script used to write Marathi until the 1950s; it is no longer in official use. The preparation of a large number of handwritten characters is a tedious and time-consuming task, and data augmentation is very useful in such situations. Our study uses different types of augmentation techniques, such as on-the-fly (real-time) augmentation and the off-line method (data set expansion, or traditional, method). A performance comparison between these methods is also performed. 2021, The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
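The two augmentation styles the abstract contrasts can be sketched in a few lines. This is a generic illustration using toy transforms on a nested-list "image", not the paper's pipeline:

```python
import random

def hflip(img):
    """Horizontal flip of an image stored as a list of pixel rows."""
    return [row[::-1] for row in img]

def shift_right(img, pad=0):
    """Shift each row one pixel right, padding the left edge."""
    return [[pad] + row[:-1] for row in img]

def offline_augment(dataset):
    """Off-line (data set expansion) method: store every variant up front."""
    out = []
    for img in dataset:
        out.extend([img, hflip(img), shift_right(img)])
    return out

def online_batches(dataset, rng=random.Random(0)):
    """On-the-fly (real-time) method: transform lazily, one sample at a time,
    so the expanded data set is never materialised on disk."""
    for img in dataset:
        yield rng.choice([img, hflip(img), shift_right(img)])

glyph = [[0, 1, 2],
         [3, 4, 5]]
expanded = offline_augment([glyph])  # 3 variants per original image
```

The trade-off the paper's comparison targets is visible even here: the off-line method multiplies storage, while the on-the-fly method pays the transform cost at training time.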
Data Classification and Incremental Clustering Using Unsupervised Learning
Data modelling, which is based on mathematics, statistics, and numerical analysis, is used to look at clustering. In machine learning, clusters allude to hidden patterns; unsupervised learning is used to find clusters, and the resulting system is a data concept. As a result, clustering is the unsupervised discovery of a hidden data concept. The computing needs of clustering analysis are increased because data mining deals with massive databases. These challenges have given rise to data mining clustering algorithms that are both powerful and widely applicable. Clustering is also known as data segmentation in some applications because it splits large datasets into categories based on their similarities. Outliers (values that are far away from any cluster) can be more interesting than typical examples, hence outlier detection can be done using clustering. Outlier detection applications include the identification of credit card fraud and the monitoring of unlawful activities in Internet commerce. With the K-means method, multiple runs with alternative initial cluster-center placements must be scheduled to identify near-optimal solutions. A global K-means algorithm is used to solve this problem: a deterministic global optimization approach that uses the K-means algorithm as a local search strategy and does not require any initial parameter values. Instead of selecting initial values for all cluster centers at random, as most global clustering algorithms do, the proposed technique operates in stages, preferably adding one new cluster center at a time. 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG. -
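The incremental seeding strategy described above can be sketched as follows. This is a generic rendition of global K-means under the stated idea (add one center at a time, trying every data point as the new center's seed), not the authors' implementation:

```python
def kmeans(points, centers, iters=20):
    """Standard K-means local search from the given initial centers."""
    centers = [list(c) for c in centers]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:  # assign each point to its nearest center
            i = min(range(len(centers)),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            groups[i].append(p)
        for i, g in enumerate(groups):  # move each center to its group mean
            if g:
                centers[i] = [sum(c) / len(g) for c in zip(*g)]
    error = sum(min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
                for p in points)
    return centers, error

def global_kmeans(points, k_max):
    """Global K-means: deterministic, no random initial parameters.
    Grow from k=1, seeding the new center with every data point in turn
    and keeping the local-search result with the lowest error."""
    centers = [[sum(c) / len(points) for c in zip(*points)]]  # k = 1: the mean
    for _ in range(2, k_max + 1):
        best = None
        for seed in points:
            cand, err = kmeans(points, centers + [seed])
            if best is None or err < best[1]:
                best = (cand, err)
        centers = best[0]
    return centers

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
cs = global_kmeans(pts, 2)  # recovers the two well-separated groups
```

Because every stage is deterministic, repeated runs give the same result, which is exactly what the multiple-random-restart scheme for plain K-means tries to approximate.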
Data Economy: Data and Money
The article explores the concept of data economy, which is based on the sharing of data across platforms and ecosystems. Data has evolved from factual information to a new asset for companies worldwide, and the article discusses its evolution from brittle paper records to complex databases and algorithms like blockchain. With a prediction of a data explosion of about 175 zettabytes by 2025, data is used extensively in various domains, from agriculture to healthcare. The article also discusses how the data economy is not domain-specific but is a universal shift as all companies transition to become technology-driven companies. The data network effect is a cycle that uses data to acquire service users and generate more data. This has become a B2B service model that has added profits to various tech giants' balance sheets. The article concludes by exploring the current need for data sharing across organizations and the future scope of the data economy. The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024. -
Data Encryption Algorithm for Local Area Network (LAN)
The volume of traffic moving over the Internet is expanding exponentially every day due to the increase in communication through email, branch offices connecting remotely to their corporate networks, and commercial transactions. Hence, protection of networks and their services from unauthorized modification and destruction is very much needed. TCP/IP is the most commonly used communication protocol in the Internet domain. IP packets are exchanged between the end hosts as plain text (without any encryption). As the Internet uses a PSDN (Packet Switching Data Network), anybody who has access to the PSDN can access or modify the data; hence securing data over the network is difficult. The goal of network security is to provide authenticity, confidentiality and integrity. Confidentiality is making sure that nobody other than the receiver will be able to read the data. Integrity is making sure that the data did not get modified by an intruder or by some other means while being transmitted. Authenticity is making sure that the data is coming from the right sender. In this paper we propose a new data encryption algorithm based on the private-key (symmetric-key) cryptography method. Keys are shared between the two end hosts using a simple algorithm. The Cipher Block Chaining (CBC) method is used while encrypting/decrypting the data. Large prime numbers are generated well in advance and kept for further key refreshments. The keys are refreshed periodically, giving hackers very minimal time to attack the system. As simple operations are used, fast and secure data encryption/decryption can be achieved with this method. The behavior of the proposed approach is verified through various tests. -
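The chaining step the abstract refers to (Cipher Block Chaining) can be illustrated with a toy block cipher. The XOR "cipher" below is a stand-in for the paper's unspecified block transform and is not secure; what the sketch shows is only the chaining itself, which makes identical plaintext blocks encrypt differently:

```python
import os

BLOCK = 8  # block size in bytes

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def pad(data):
    """PKCS#7-style padding up to a whole number of blocks."""
    n = BLOCK - len(data) % BLOCK
    return data + bytes([n]) * n

def cbc_encrypt(plaintext, key, iv):
    """CBC mode: XOR each plaintext block with the previous ciphertext
    block before 'encrypting' it (here, a toy XOR with the key)."""
    out, prev = b"", iv
    data = pad(plaintext)
    for i in range(0, len(data), BLOCK):
        ct = xor_bytes(xor_bytes(data[i:i + BLOCK], prev), key)
        out += ct
        prev = ct  # the chain: next block depends on this ciphertext
    return out

def cbc_decrypt(ciphertext, key, iv):
    out, prev = b"", iv
    for i in range(0, len(ciphertext), BLOCK):
        ct = ciphertext[i:i + BLOCK]
        out += xor_bytes(xor_bytes(ct, key), prev)
        prev = ct
    return out[:-out[-1]]  # strip padding

key, iv = os.urandom(BLOCK), os.urandom(BLOCK)
msg = b"IP payload as plain text"
assert cbc_decrypt(cbc_encrypt(msg, key, iv), key, iv) == msg
```

A real deployment would replace the XOR step with a proper block cipher such as AES; the chaining and padding logic stay the same.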
Data Encryption and Decryption Techniques Using Line Graphs
Secure data transfer has become a critical aspect of research in cryptography. Highly effective encryption techniques can be designed using graphs to ensure secure transmission of data. The algorithm proposed in this paper uses line graphs, along with the adjacency matrix and matrix properties, to encrypt and decrypt data securely, arriving at a ciphertext using a shared key. 2021, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
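For reference, the line-graph construction the abstract relies on can be sketched as below; the adjacency matrix of L(G) is the object such a scheme would feed into its matrix-based encryption (the key-mixing step itself is not specified in the abstract and is omitted):

```python
from itertools import combinations

def line_graph_adjacency(edges):
    """Adjacency matrix of the line graph L(G): each edge of G becomes a
    vertex of L(G), and two such vertices are adjacent exactly when the
    original edges share an endpoint."""
    n = len(edges)
    A = [[0] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        if set(edges[i]) & set(edges[j]):  # shared endpoint?
            A[i][j] = A[j][i] = 1
    return A

# Path graph 1-2-3-4: its line graph is the path on three vertices
edges = [(1, 2), (2, 3), (3, 4)]
A = line_graph_adjacency(edges)  # [[0,1,0],[1,0,1],[0,1,0]]
```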
Data encryption and decryption using graph plotting
Cryptography plays a vital role in today's networks. Security of data is one of the biggest concerns while exchanging data over the network, as the data needs to be highly confidential. Cryptography is the art of hiding or manipulating data in such a way that no third party can understand the original data during transmission from source to destination. In this paper, a modified affine cipher algorithm is used to encrypt the data. The encrypted data is plotted onto a graph, and the graph is then converted into an image. This system allows the sender to select his/her own keys to encrypt the original data before plotting the graph; the receiver then uses the same keys to decrypt the data. This system provides better security while storing data in the cloud, in the form of a secret message embedded in a graphical image file in a network environment. IAEME Publication. -
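A minimal sketch of the classic affine cipher, plus the letter-to-point mapping that graph plotting implies. The key values and the coordinate scheme are illustrative assumptions, not the paper's modified variant:

```python
A_KEY, B_KEY = 5, 8  # sender-chosen keys; gcd(A_KEY, 26) must be 1

def affine_encrypt(text):
    """Classic affine cipher E(x) = (a*x + b) mod 26 on uppercase letters."""
    return "".join(chr((A_KEY * (ord(c) - 65) + B_KEY) % 26 + 65) for c in text)

def affine_decrypt(cipher):
    """D(y) = a^-1 * (y - b) mod 26, using the modular inverse of a."""
    a_inv = pow(A_KEY, -1, 26)
    return "".join(chr(a_inv * (ord(c) - 65 - B_KEY) % 26 + 65) for c in cipher)

# Map the ciphertext to (position, letter-index) points, ready for plotting
points = [(i, ord(c) - 65) for i, c in enumerate(affine_encrypt("HELLO"))]
```

The resulting point list is what would be rendered to an image in the described system; the receiver reverses the plot back to letters and applies `affine_decrypt` with the same keys.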
Data encryption in public cloud using multi-phase encryption model
Cloud computing, one of the most used terms in the world of Information Technology, is creating huge differences in the IT industry. Nowadays a huge amount of data is being generated, and researchers are finding new ways of managing it. Basically, the word cloud refers to a virtual database that stores huge amounts of data from various clients. There are three types of cloud: public, private and hybrid. The public cloud is for general users, who can use cloud services free or by paying; the private cloud is for a particular organization; and the hybrid cloud is a combination of both. The cloud offers various kinds of services, such as IaaS, PaaS and SaaS, which provide a platform for running any application, access to huge storage, and the use of any application running under the cloud. The cloud also has a disadvantage regarding the security of its data storage facility: the public cloud is prone to data modification and data hacking, and thus the integrity and confidentiality of the data are compromised. In our work the concern is to protect the data that will be stored in the public cloud by using multi-phase encryption. The algorithm that we have proposed is a combination of the Rail Fence cipher and the Playfair cipher. 2018 Snata Choudhury, Dr. Kirubanand V.B. -
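The first phase of the proposed combination, the Rail Fence transposition, can be sketched as follows (the second phase, Playfair digram substitution over this output, is omitted here for brevity):

```python
def rail_fence_encrypt(text, rails):
    """Rail Fence transposition: write characters in a zig-zag over the
    given number of rails, then read the rails off row by row."""
    rows = [[] for _ in range(rails)]
    row, step = 0, 1
    for ch in text:
        rows[row].append(ch)
        if row == 0:            # bounce off the top rail
            step = 1
        elif row == rails - 1:  # bounce off the bottom rail
            step = -1
        row += step
    return "".join("".join(r) for r in rows)

# Phase 1 of the multi-phase scheme; Playfair would encrypt this output
stage1 = rail_fence_encrypt("ATTACKATDAWN", 3)  # "ACDTAKTANTAW"
```

Chaining a transposition cipher with a substitution cipher in this way means an attacker must undo both the reordering and the letter replacement, which is the point of the multi-phase design.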
Data Engineering and Data Science: Concepts and Applications
DATA ENGINEERING and DATA SCIENCE Written and edited by one of the most prolific and well-known experts in the field and his team, this exciting new volume is the one-stop shop for the concepts and applications of data science and engineering for data scientists across many industries. The field of data science is incredibly broad, encompassing everything from cleaning data to deploying predictive models. However, it is rare for any single data scientist to be working across the spectrum day to day. Data scientists usually focus on a few areas and are complemented by a team of other scientists and analysts. Data engineering is also a broad field, but any individual data engineer doesn't need to know the whole spectrum of skills. Data engineering is the aspect of data science that focuses on practical applications of data collection and analysis. For all the work that data scientists do to answer questions using large sets of information, there have to be mechanisms for collecting and validating that information. In this exciting new volume, the team of editors and contributors sketch the broad outlines of data engineering, then walk through more specific descriptions that illustrate specific data engineering roles. Data-driven discovery is revolutionizing the modeling, prediction, and control of complex systems. This book brings together machine learning, engineering mathematics, and mathematical physics to integrate modeling and control of dynamical systems with modern methods in data science. It highlights many of the recent advances in scientific computing that enable data-driven methods to be applied to a diverse range of complex systems, such as turbulence, the brain, climate, epidemiology, finance, robotics, and autonomy. Whether for the veteran engineer or scientist working in the field or laboratory, or the student or academic, this is a must-have for any library. 2023 Scrivener Publishing LLC. -
Data Ethics
Ethics is all about living an ethical life. As rational beings, humans have always pursued an ethical life in spite of contrary temptations. Society, and engagement in society, help a person to be ethical. However, it is the choices one makes in critical situations that define the ethical nature of a person. Swarmed by a vast pool of data, ethical decision-making is getting far more complex for human beings with an autonomous cognitive faculty. One needs to be conscious and focused to face any dilemma in one's life. Dealing prudently with private and public data, and understanding the science of data, would help Homo sapiens prove their relevance in this data-driven world. 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. -
Data Ingestion - Cloud based Ingestion Analysis using NiFi
Data Ingestion has been an integral part of Data Analysis. Bringing the data from various heterogeneous sources to one common place and ensuring the data is captured in the appropriate format is the key for performing any Big data task. Data ingestion is performed using multiple frameworks across the industry and they all have their own set of benefits and drawbacks. Apache NiFi is one popular ingestion framework which is used widely and does Ingestion effectively. Ingestion is performed on various sources and the data is generally stored in clusters or cloud storage. In this paper, we have done the File Data Ingestion using the NiFi framework on a local machine and then on two cloud-based platforms, namely Google Cloud Platform (GCP) and Amazon Web Services (AWS). The objective is to understand the latency and performance of the NiFi tool on Cloud-based Ingestion and provide a comparative study against the typical Data Ingestion. The entire setup was done on a local machine and two corresponding cloud platforms namely GCP and AWS. The findings from the comparative analysis have been compiled in a tabular format and graphs are created for easy reference. The paper places emphasis on the significance of NiFi's data ingestion performance on Cloud Platform and attempts to present it as a major activity on the data ingestion platform for Cloud Ingestion Solution. 2023 IEEE. -
Data journalists' perception and practice of transparency and interactivity in Indian newsrooms
Data journalism research recorded exponential growth during the last decade. However, the extant literature lacks comparative perspectives from the Asian region, as it has been focused on select geographies (mainly Europe and the US). Against this backdrop, the present study examined data journalism practices in the Indian media industry by conducting intensive interviews with 11 data journalists to investigate their perception of transparency and interactivity, two of the core aspects of data journalism practice. Further, a content analysis of data stories published by two Indian news organizations over two years was conducted to assess the status of transparency and interactivity options in these stories. The findings showed that Indian data journalists acknowledge the importance of transparency and interactivity, but exhibit a cautious approach in using them. There is general apathy towards practicing transparency among journalists in legacy organizations, drawing a stark contrast with their counterparts in digitally native organizations. 2022 Asian Media Information and Communication Centre. -
Data linearity using Kernel PCA with Performance Evaluation of Random Forest for training data: A machine learning approach
In this study, Kernel Principal Component Analysis is applied to understand and visualize non-linear variation patterns by inverse mapping the projected data from a high-dimensional feature space back to the original input space. Performance Evaluation of Random Forest on various data sets has been compared to understand accuracy and various statistical measures of interest. 2016 IEEE.
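A minimal numpy sketch of RBF-kernel PCA, the core technique named above. The gamma value, sample sizes, and random data are illustrative assumptions, and the paper's inverse mapping back to input space (the pre-image step) is a further refinement not shown here:

```python
import numpy as np

def rbf_kernel_pca(X, gamma, n_components):
    """Kernel PCA with an RBF kernel: project the data onto the leading
    eigenvectors of the centered kernel matrix, exposing non-linear
    variation patterns that linear PCA would miss."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T      # pairwise squared distances
    K = np.exp(-gamma * d2)                           # RBF kernel matrix
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one        # center in feature space
    vals, vecs = np.linalg.eigh(Kc)                   # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]       # keep the largest ones
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))  # scaled projections

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                 # toy stand-in for a real data set
Z = rbf_kernel_pca(X, gamma=0.5, n_components=2)
```

The projections `Z` are what a downstream classifier such as Random Forest would be trained on when comparing accuracy across data sets, as the study describes.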