Browse Items (11855 total)
Bibliometric analysis of the impact of blockchain technology on the tourism industry
The tourism sector is one of the world's fastest-expanding industries. Because of the benefits it provides to individuals and organizations, the tourism sector has attracted a lot of attention over the years. But because of its poor and obsolete data management techniques, this industry is in desperate need of reform. Blockchain technology is one method for managing and exploring data relevant to the tourism industry. This study used bibliometric methods to analyze the impact of blockchain technology on the tourism sector from 2017 to 2022. The publications were extracted from the Dimensions database, and the VOSviewer software was used to visualize research patterns. The findings provided valuable information on the publication year, authors, authors' countries, authors' organizational affiliations, publishing journals, etc. Based on the findings of this analysis, researchers may be able to design their studies better and add more insights to their empirical studies. 2024 Srinesh Thakur, Anvita Electronics, 16-11-762, Vijetha Golden Empire, Hyderabad.
Bibliometric Analysis: A Trends and Advancement in Clustering Techniques on VANET
In recent years, traffic management and road safety have become major concerns for all countries around the globe. Many techniques and applications based on Intelligent Transportation Systems have come into existence for road safety, traffic management, and infotainment. To support the Intelligent Transport System, VANET has been implemented. With the highly dynamic nature of VANET and its frequently changing network topology due to the high mobility of vehicles or nodes, dissemination of messages becomes a challenge. Clustering is one technique that enhances network performance by maintaining communication link stability, sharing network resources, disseminating information in a timely manner, and making the network more reliable through efficient use of network bandwidth. This study uses bibliometric analysis to understand the impact of clustering techniques on VANET from 2017 to 2022. The objective of the study was to understand the trends and advancements in clustering in VANET through bibliometric analysis. The publications were extracted from the Dimensions database, and VOSviewer was used to visualize the research patterns. The findings provided valuable information on the publication authors, authors' countries, year, authors' organizational affiliations, publication journals, citations, etc. Based on the findings of this analysis, other researchers may be able to design their studies better and add more perception or understanding to their empirical studies. The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
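The bibliometric mapping described in the two entries above (as visualized in VOSviewer) rests on simple co-occurrence counts. The hypothetical sketch below counts keyword co-occurrences across a small set of publication records; the records are illustrative and are not drawn from the Dimensions database.

```python
# Count unordered keyword co-occurrence pairs across publications --
# the raw data behind a VOSviewer-style co-occurrence map.
from collections import Counter
from itertools import combinations

papers = [
    {"keywords": {"VANET", "clustering", "road safety"}},
    {"keywords": {"VANET", "clustering", "mobility"}},
    {"keywords": {"VANET", "road safety"}},
]

cooccurrence = Counter()
for paper in papers:
    # Each unordered keyword pair within one paper counts once.
    for pair in combinations(sorted(paper["keywords"]), 2):
        cooccurrence[pair] += 1

strongest = cooccurrence.most_common(1)[0]
```

Links with high counts become the thick edges in the resulting network visualization.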
Bifunctional Amorphous Transition-Metal Phospho-Boride Electrocatalysts for Selective Alkaline Seawater Splitting at a Current Density of 2 A cm⁻²
Hydrogen production by direct seawater electrolysis is an alternative technology to conventional freshwater electrolysis, mainly owing to the vast abundance of seawater reserves on earth. However, the lack of robust, active, and selective electrocatalysts that can withstand the harsh and corrosive saline conditions of seawater greatly hinders its industrial viability. Herein, a series of amorphous transition-metal phospho-borides, namely Co-P-B, Ni-P-B, and Fe-P-B, are prepared by a simple chemical reduction method and screened for overall alkaline seawater electrolysis. Co-P-B is found to be the best of the lot, requiring low overpotentials of ~270 mV for the hydrogen evolution reaction (HER) and ~410 mV for the oxygen evolution reaction (OER), and an overall voltage of 2.50 V to reach a current density of 2 A cm⁻² in highly alkaline natural seawater. Furthermore, the optimized electrocatalyst shows formidable stability after 10,000 cycles and 30 h of chronoamperometric measurements in alkaline natural seawater without any chlorine evolution, even at higher current densities. A detailed understanding of not only HER and OER but also the chlorine evolution reaction (ClER) on the Co-P-B surface is obtained by computational analysis, which also sheds light on the selectivity and stability of the catalyst at high current densities. 2024 The Authors. Small Methods published by Wiley-VCH GmbH.
Big data analytics lifecycle
Big data analysis is the process of looking through and gleaning important insights from enormous, intricate datasets that are too diverse and massive to be processed via conventional data processing techniques. Finding patterns, trends, correlations, and other important information entails gathering, storing, managing, and analyzing massive amounts of data. Datasets that exhibit the three Vs (volume, velocity, and variety) are referred to as "big data." Volume refers to the vast amount of data produced from numerous sources, including social media, sensors, devices, transactions, and more. Velocity refers to the rate at which data is generated and must be processed in real time or very close to real time. Variety refers to data that differs in type and format, such as structured, semi-structured, and unstructured data. 2024, IGI Global. All rights reserved.
Big Data Analytics Tools and Applications for Modern Business World
In the modern world, data is an unavoidable word. The digital environment in almost every part of our day-to-day life is linked with digital data, and effective data management is one of the important tasks. With the gradual growth of technology in recent years, the generation of data has increased exponentially. Everything, from sending a mail to simply browsing the internet, generates data, and this data is collected and stored. This data has countless uses in fields such as medicine, business, agriculture, and marketing, but most of the time it goes unused. Business intelligence is a key factor in the current business world, and business growth depends heavily on technology: technology is used not only in manufacturing but also in reaching the customer. Data analytics is still in its early stages and has a long way to go before it yields favourable results, so it is as good a time as any to start working in this domain to utilize its prowess. This article discusses the opportunities and growth of data analytics in the research domain, and the challenges it may face as it reaches its advanced stages. Big data involves handling larger amounts of data in conventional and non-conventional ways, and technology plays a vital role in handling larger data from the database. This article discusses data analytics applications in modern industry. From the technical perspective, big data MapReduce is an advanced tool, and the R tool is used for the simulation part. 2020 IEEE.
Big Data Analytics: A Trading Strategy of NSE Stocks Using Bollinger Bands Analysis
The availability of huge distributed computing power using frameworks like Hadoop and Spark has facilitated algorithmic trading employing technical analysis of Big Data. We used conventional Bollinger Bands set at two standard deviations around a 20-period moving average of minute-by-minute price values. The Nifty 50, a portfolio of blue-chip companies, is a stock index of the National Stock Exchange (NSE) of India reflecting the overall market sentiment. In this work, we analyze an intraday trading strategy employing the concept of Bollinger Bands to identify the stocks that generate maximum profit. We have also examined the profits generated over one trading year. The tick-by-tick stock market data was sourced from the NSE and purchased by Amrita School of Business. The tick-by-tick data, being typically Big Data, was converted to minute-level data on a distributed Spark platform prior to the analysis. 2019, Springer Nature Singapore Pte Ltd.
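The band construction this entry describes can be sketched in a few lines: a 20-period moving average of minute prices, with upper and lower bands at plus and minus two standard deviations. This is a minimal single-machine illustration, not the paper's Spark pipeline, and the price series is synthetic.

```python
# Bollinger Bands: 20-period moving average +/- 2 population std devs.
import statistics

def bollinger_bands(prices, window=20, k=2.0):
    """Return (middle, upper, lower) lists; None until the window fills."""
    middle, upper, lower = [], [], []
    for i in range(len(prices)):
        if i + 1 < window:
            middle.append(None); upper.append(None); lower.append(None)
            continue
        win = prices[i + 1 - window : i + 1]
        ma = sum(win) / window
        sd = statistics.pstdev(win)
        middle.append(ma)
        upper.append(ma + k * sd)
        lower.append(ma - k * sd)
    return middle, upper, lower

# Synthetic minute prices; a common rule buys below the lower band
# and sells above the upper band.
prices = [100 + (i % 7) - 3 for i in range(40)]
mid, up, lo = bollinger_bands(prices)
signals = ["BUY" if p < l else "SELL" if p > u else "HOLD"
           for p, u, l in zip(prices[19:], up[19:], lo[19:])]
```

The exact entry/exit rule applied to these bands is a strategy choice; the abstract does not specify the authors' variant.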
Big data and artificial intelligence: Creative tools for destination competitiveness
With the advancement of ICT, the tourism industry has undergone a digital transformation in which management, marketing, and communication largely use web-based applications. The automation of processes like ticketing and reservation, online hotel booking, e-visa processing, etc., indicates the sector's significant reliance on technology and the World Wide Web for its services. Big data and artificial intelligence offer a fairly new and innovative approach to managing and analyzing huge datasets collected from multiple sources. This chapter focuses on understanding the role and importance of big data and artificial intelligence in the tourism industry and their impact on improving the overall image and attractiveness of destinations. Copyright 2023, IGI Global.
Big Data and Competition Law: A New Challenge for Competition Authorities
Big data has become a key player in almost all kinds of markets, especially in the digital economy. It is a raw material as well as a by-product of any process, and it is comprehensive enough to cover all aspects of the market, with both direct and indirect market effects. These effects are inclined towards consumerism and market transparency, but big data carries inherent dangers that are somewhat overlooked by competition authorities. Competition law has dealt with the traditional brick-and-mortar economy very efficiently; however, this is not the case with the digital economy. Traditional notions of the market, abuse of dominant position, anticompetitive practices, and regulation of combinations cannot be applied to the digital economy in the same manner. Big data analytics enables large corporations to establish dominance in their relevant markets. Google, Amazon, Facebook, and Apple have been dominating almost the entire digital economy; hence their strategies are being scrutinized under the lens of competition law once again. This paper deals with the interplay between big data and competition law and explores the impact of this unavoidable aspect of big data on a highly competitive digital economy. 2024 Taylor & Francis.
Big Data De-duplication using modified SHA algorithm in cloud servers for optimal capacity utilization and reduced transmission bandwidth; [Big Data Deduplicación utilizando algoritmo SHA modificado en servidores en la nube para una utilización óptima de la capacidad y un ancho de banda de transmisión reducido]
Data de-duplication in cloud storage is crucial for optimizing resource utilization and reducing transmission overhead. By eliminating redundant copies of data, it enhances storage efficiency, lowers costs, and minimizes network bandwidth requirements, thereby improving the overall performance and scalability of cloud-based systems. The research investigates the critical intersection of data de-duplication (DD) and privacy concerns within cloud storage services. DD, a technique widely employed in these services, aims to enhance capacity utilization and reduce transmission bandwidth. However, it poses challenges to information privacy, typically addressed through encoding mechanisms. One significant approach to mitigating this conflict is hierarchical approved de-duplication, which empowers cloud users to conduct privilege-based duplicate checks before data upload. This hierarchical structure allows cloud servers to profile users based on their privileges, enabling more nuanced control over data management. In this research, we introduce the SHA method for de-duplication within cloud servers, supplemented by a secure pre-processing assessment. The proposed method accommodates dynamic privilege modifications, providing flexibility and adaptability to evolving user needs and access levels. Extensive theoretical analysis and simulated investigations validate the efficacy and security of the proposed system. By leveraging the SHA algorithm and incorporating robust pre-processing techniques, our approach not only enhances efficiency in data de-duplication but also addresses crucial privacy concerns inherent in cloud storage environments. This research contributes to advancing the understanding and implementation of efficient and secure data management practices within cloud infrastructures, with implications for a wide range of applications and industries. 2024; The authors.
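The general hash-based de-duplication technique this entry builds on can be sketched as follows: data is split into fixed-size chunks, each chunk is fingerprinted with a cryptographic hash, and only chunks with unseen fingerprints are stored. This sketch uses standard SHA-256 for illustration; it does not reproduce the paper's modified SHA algorithm or its privilege-based duplicate checks.

```python
# Chunk-level de-duplication keyed by SHA-256 fingerprints.
import hashlib

class ChunkStore:
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}  # fingerprint -> chunk bytes (stored once)

    def put(self, data: bytes) -> list:
        """Store data; return the list of chunk fingerprints (the 'recipe')."""
        recipe = []
        for off in range(0, len(data), self.chunk_size):
            chunk = data[off : off + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)  # duplicate chunks are skipped
            recipe.append(fp)
        return recipe

    def get(self, recipe: list) -> bytes:
        """Reassemble the original data from its recipe."""
        return b"".join(self.chunks[fp] for fp in recipe)

store = ChunkStore()
recipe_a = store.put(b"A" * 8192)  # two identical 4 KiB chunks -> stored once
recipe_b = store.put(b"A" * 4096)  # duplicate of an already-stored chunk
```

In a client-server setting the client would send only fingerprints first and upload a chunk only when the server reports it as unseen, which is where the bandwidth saving comes from.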
Big Data for Intelligence and Security
Big Data for Security and Intelligence refers to a method of analysis that focuses on huge data volumes (ranging from petabytes to zettabytes) drawn from all sources (such as log files, IP addresses, and emails). Various companies use big data technology for security and intelligence in order to identify suspicious activity, threats, and security incidents, and they are able to use this information to combat cyber-attacks. One limitation of big data security is the inability to cover both current and past data in order to uncover identified threats, anomalies, and fraud and keep the network safe from attacks. A number of organizations are addressing rising problems like APTs, attacks, and fraud by focusing on them: the more data that is available, the easier such activity is to detect. Nevertheless, organizations that utilize big data techniques make sure that privacy and security issues have been resolved before putting their data to use. Because there are so many different types of data stored in so many different systems, the infrastructure needed to analyze big data should be able to handle and support more advanced analytics like statistics and data mining. One side of the coin is the collection and storage of large amounts of information; the other side is protecting massive amounts of information from unauthorized access, which is very difficult. Big data is commonly used in the improvement of security and the facilitation of law enforcement. Big data analytics are used by the US National Security Agency (NSA) to foil terrorist plots, while other agencies use big data to identify and handle cyber-attacks. Credit card companies use big data analytics tools to detect fraudulent transactions, while police departments use big data methods to track down criminals and forecast illegal activity. Big data is being used in amazing ways in today's information world, but security and privacy are the primary concerns when it comes to protecting massive amounts of data.
Real-time data collection, standardization, and analysis used to analyze and enhance a company's overall security is referred to as Security Intelligence. Security intelligence entails the organization of software assets and personnel with the goal of uncovering actionable and useful insights that help the organization mitigate threats and reduce risks. To identify security incidents and the behaviors of attackers, today's analysts use machine learning and big data analysis. They also use this cutting-edge technology to automate the identification and analysis of security events and to extract security intelligence from event logs generated on a network. This chapter will discuss how big data analytics can help in the world of security intelligence, what the appropriate infrastructure needs to be in order to make it useful, how it is more efficient than traditional approaches, and what it would look like if we built an analytic engine specifically for security intelligence. 2024 selection and editorial matter, S. Vijayalakshmi, P. Durgadevi, Lija Jacob, Balamurugan Balusamy, and Parma Nand; individual chapters, the contributors.
Big Data Paradigm in Cybercrime Investigation
Big Data is a field that provides a wide range of ways for analyzing and retrieving data, as well as hidden patterns, from complex and large data collections. As cybercrime and the danger of data theft increase, there is a greater demand for more robust algorithms for cyber security. Big Data concepts and monitoring are extremely useful in discovering patterns of illegal activity on the internet and informing the appropriate authorities. This chapter investigates privacy and security in the context of Big Data, proposing a paradigm for Big Data privacy and security, and examines a classification of Big Data-driven privacy and security algorithms. We first define Big Data in the contexts of police work, criminology, and criminal psychology. The chapter will look at how it might be used to carefully analyze the concerns that these paradigms confront. We provide a conceptual approach for assisting criminal investigations, as well as a variety of application situations in which Big Data may bring fresh insights into detecting facts regarding illegal incidents. Finally, this chapter will explore the implications, limits, and effects of Big Data monitoring in cybercrime investigations. 2024 selection and editorial matter, S. Vijayalakshmi, P. Durgadevi, Lija Jacob, Balamurugan Balusamy, and Parma Nand; individual chapters, the contributors.
Big data performance enhancement using machine learning spark-ML pipeline auto parameter tuning /
Patent Number: 202041057025, Applicant: Santosh Kumar J.
Big data is not only complex and huge but also varied, which makes it very difficult to analyze and process efficiently using traditional systems. To analyze and process big data efficiently, several frameworks such as Hadoop, Spark, and Flink have emerged in recent years. Languages and tools used to process big data include Java, Scala, Pig, NoSQL, MongoDB, Hive, and HBase.
Big data performance evaluation of MapReduce, Pig and Hive
Big data is unstructured and structured data that cannot be processed by traditional systems; it is characterized not only by the volume of data but also by its velocity and variety. Processing means storing and analyzing the data to extract knowledge for decision-making. Every living and non-living thing and every device generates a tremendous amount of data every fraction of a second. Hadoop is a software framework for processing big data to extract knowledge from stored data, enhance business, and solve societal problems. Hadoop has two main components: HDFS for storage and MapReduce for processing. HDFS consists of a name node and data nodes for storage, while MapReduce comprises the Job Tracker and Task Tracker frameworks. Whenever a client asks Hadoop to store data, the name node responds with data nodes that have free memory available; the client then writes data to the respective data nodes, and Hadoop's replication factor copies the blocks of data to other data nodes to provide fault tolerance. The name node stores the metadata of the data nodes. Replication serves as back-up, since Hadoop HDFS uses commodity hardware for storage; the name node also has a back-up secondary name node, as it is the single point of failure in Hadoop. Whenever a client wants to process data, it contacts the name node's Job Tracker, which communicates with the Task Trackers to get the task done. All of the above Hadoop components are frameworks on top of the OS that efficiently utilize and manage system resources for big data processing. Big data processing performance is measured with benchmark programs. In our research work we compared the processing (i.e., execution) time of the word-count benchmark program implemented as Hadoop MapReduce Python code, a Pig script, and a Hive query, each run on the same input file big.txt. Hive was much faster than Pig and the MapReduce Python code: the MapReduce execution time was 1 m 29 s, the Pig execution time 57 s, and the Hive execution time 31 s. BEIESP.
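The word-count benchmark compared above can be sketched in plain Python in MapReduce style: a map phase emits (word, 1) pairs, and a shuffle-plus-reduce phase groups them by key and sums the counts. This mirrors what the Hadoop, Pig, and Hive versions compute, without the cluster machinery.

```python
# MapReduce-style word count on a single machine.
from collections import defaultdict

def map_phase(text):
    # Map: emit a (word, 1) pair for every word in the input.
    for word in text.split():
        yield word.lower(), 1

def reduce_phase(pairs):
    # Shuffle + reduce folded together: group by word and sum counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

counts = reduce_phase(map_phase("to be or not to be"))
# counts == {"to": 2, "be": 2, "or": 1, "not": 1}
```

On a real cluster the map and reduce phases run in parallel across data nodes, and the shuffle moves intermediate pairs between them; the framework overhead of that machinery is exactly what the benchmark timings above compare.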
Big Data Preprocessing for Modern World: Opportunities and Challenges
Big data is an often misunderstood business term in the modern world. With multiple devices connected to the internet and a democratization of available technologies, data is generated at an almost exponential rate. This data is produced in large quantities, at high speed, and belongs to myriad categories. Coupled with advances in storage and processing hardware, insights can be derived from these larger volumes of data, but only if the data is handled effectively. The data must be transformed into understandable and usable insights by algorithms and models, and the data-mining steps require data that is cleaned and structured to a large extent. This is achieved using various algorithms, processes, and applications known as data pre-processing techniques. This article reviews the various data pre-processing techniques from a big data point of view. 2019, Springer Nature Switzerland AG.
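A minimal example of the kind of pre-processing steps such a review surveys: removing duplicate records, imputing missing values, and normalizing a numeric field so downstream mining algorithms see clean, structured input. The records and the `age` field are purely illustrative.

```python
# Three common pre-processing steps: de-duplicate, impute, normalize.
def preprocess(records):
    # 1. Remove exact-duplicate records while preserving order.
    seen, unique = set(), []
    for r in records:
        key = tuple(sorted(r.items()))
        if key not in seen:
            seen.add(key)
            unique.append(dict(r))
    # 2. Impute missing 'age' values with the mean of the known ones.
    ages = [r["age"] for r in unique if r.get("age") is not None]
    mean_age = sum(ages) / len(ages) if ages else 0
    for r in unique:
        if r.get("age") is None:
            r["age"] = mean_age
    # 3. Min-max normalize 'age' into [0, 1] as 'age_norm'.
    lo, hi = min(r["age"] for r in unique), max(r["age"] for r in unique)
    span = (hi - lo) or 1
    for r in unique:
        r["age_norm"] = (r["age"] - lo) / span
    return unique

rows = [{"age": 20}, {"age": 20}, {"age": None}, {"age": 40}]
clean = preprocess(rows)
```

At big data scale the same logic runs as distributed transformations (e.g. on Spark), but the operations themselves are the ones shown here.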
Big data-Industry 4.0 readiness factors for sustainable supply chain management: Towards circularity
Big data-Industry 4.0 interaction is expected to revolutionize existing supply chains in the coming years. While increased operational efficiency and enhanced decision-making are the primary advantages studied widely, the sustainability aspects of the digital supply chain in the circular economy era have received limited attention, and the previous literature rarely explores industry readiness for a digital supply chain. Thus, the present study aims to explore Big data-Industry 4.0 readiness factors for sustainable supply chain management. A detailed literature analysis was performed to identify a total of seventeen readiness factors for sustainable supply chain management. A team of six experts was consulted to perform the pairwise comparisons for the identified potential readiness factors. This study adopts a fuzzy best-worst method to prioritize the readiness factors according to their degree of influence. The results of the study reflect that readiness in terms of information system infrastructure, Internet stability for developing I4.0 infrastructure, and circular process and awareness are the most significant readiness factors. The potential recommendations of this study include increased attention from sustainable supply chain stakeholders to developing infrastructure, including knowledge-building exercises and training processes focused on the circular economy. The findings of the study will assist sustainable supply chain stakeholders in framing strategies and action plans during the digitalization of supply chains. 2023 Elsevier Ltd
Big data, artificial intelligence, and machine learning support for e-learning frameworks
Today's e-rendering frameworks are essential in various fields such as computer graphics, virtual reality, and augmented reality, providing effective and impressive education to modern society. The integration of big data, artificial intelligence (AI), and machine learning (ML) techniques into e-rendering frameworks holds significant potential for enhancing rendering efficiency, optimizing resource allocation, and improving the quality of rendered outputs. With the advent of big data, massive amounts of rendering-related data can be collected and analyzed. This data includes rendering parameters, scene descriptions, user preferences, and performance metrics. By applying data analytics, important information can be derived, allowing for more informed decision-making in rendering processes. Additionally, AI techniques, such as neural networks and deep learning, can be employed to learn from the collected data and generate more accurate rendering models and algorithms. 2024, IGI Global. All rights reserved.
Bio-Decolorization and Degradation of Reactive Blue 222 by a Novel Isolate Kucoria marina CU2005
In this study, a novel bacterial strain, Kucoria marina CU2005, capable of degrading Reactive Blue 222 (RB222) dye, was isolated from an industrial wastewater sludge sample and identified using 16S rRNA gene sequencing. Batch-mode biostimulation studies were performed with minimal salt media to optimize key physiological parameters for effective decolorization of RB222. When cultured at 35 °C and pH 7 under static conditions, this bacterium decolorized 82 percent of the dye after 24 hours. Decolorization was monitored using UV-vis spectrophotometry. The isolate's ability to decolorize the complex dye was attributed to its degradation potential rather than passive surface adsorption. FTIR, HPLC, and GC-MS studies were used to confirm microbial dye metabolism. The results indicated breakdown of the dye upon decolorization, as some peaks were shifted, and the generation of aromatic amines with a monosubstituted benzene ring as intermediates of dye degradation in the decolorized solutions. This study has shown the potential of Kucoria marina CU2005 to decolorize RB222 dye at a better pace and efficiency than previously reported bacterial strains. Thus, we propose that our isolated strain can be utilized as a potential dye decolorizer in environmental biotechnology for effluent treatment aimed at decolorization of RB222. 2023, Association of Biotechnology and Pharmacy. All rights reserved.
Bio-demineralization of Indian bituminous coal by Aspergillus niger and characterization of the products
The effect of demineralization on an Indian bituminous coal by the filamentous fungus Aspergillus niger has been investigated. The X-ray diffraction profile reveals the presence of inorganic components in the sample. Biosolubilization using Aspergillus niger significantly reduced the ash content of the coal sample (from 10.23 wt% to 5.21 wt%). The leaching process removed silicate and pyrite minerals, whereas aluminates were decreased considerably. The carbon content showed an increase of 19.94%, whereas the oxygen content decreased by 52.3%. During biosolubilization the fungus produced acids such as gluconic acid, oxalic acid, and citric acid, along with oxalates, which are responsible for the demineralization of coal through the formation of mineral salts. The broad diffraction peak at 2θ ≈ 25.5° is due to the crystalline carbon in the sample, mainly attributable to the typical (002) plane reflection of graphite.
Bio-derived fuels as diesel fuel and gasoline blend components /
Patent Number: 202241032675, Applicant: Kiran K.
To maximize the effectiveness of a refinery's diesel output, a model for the planning of refinery diesel streams is being created. Nonlinear blending models are used to determine blending parameters with greater accuracy than is possible with typical linear models. Because so many equations and variables are involved, the model may yield an infeasible solution if the starting points provided are not good enough.