Big Data De-duplication using modified SHA algorithm in cloud servers for optimal capacity utilization and reduced transmission bandwidth; [Big Data Deduplicación utilizando algoritmo SHA modificado en servidores en la nube para una utilización óptima de la capacidad y un ancho de banda de transmisión reducido]
- Title
- Big Data De-duplication using modified SHA algorithm in cloud servers for optimal capacity utilization and reduced transmission bandwidth; [Big Data Deduplicación utilizando algoritmo SHA modificado en servidores en la nube para una utilización óptima de la capacidad y un ancho de banda de transmisión reducido]
- Creator
- Bhojan R.; Rajagopal M.; Ramesh R.
- Description
- Data de-duplication in cloud storage is crucial for optimizing resource utilization and reducing transmission overhead. By eliminating redundant copies of data, it enhances storage efficiency, lowers costs, and minimizes network bandwidth requirements, thereby improving the overall performance and scalability of cloud-based systems. The research investigates the critical intersection of data de-duplication (DD) and privacy concerns within cloud storage services. Data de-duplication, a technique widely employed in these services, aims to enhance capacity utilization and reduce transmission bandwidth. However, it poses challenges to information privacy, typically addressed through encoding mechanisms. One significant approach to mitigating this conflict is hierarchical authorized de-duplication, which empowers cloud users to conduct privilege-based duplicate checks before data upload. This hierarchical structure allows cloud servers to profile users based on their privileges, enabling more nuanced control over data management. In this research, we introduce the SHA method for de-duplication within cloud servers, supplemented by a secure pre-processing assessment. The proposed method accommodates dynamic privilege modifications, providing flexibility and adaptability to evolving user needs and access levels. Extensive theoretical analysis and simulated investigations validate the efficacy and security of the proposed system. By leveraging the SHA algorithm and incorporating robust pre-processing techniques, our approach not only enhances efficiency in data de-duplication but also addresses crucial privacy concerns inherent in cloud storage environments. This research contributes to advancing the understanding and implementation of efficient and secure data management practices within cloud infrastructures, with implications for a wide range of applications and industries. 2024; The authors.
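The abstract describes content-addressed de-duplication with a privilege-based duplicate check: the client hashes a file, the server reports whether that digest already exists for the user's privilege level, and only non-duplicate data is uploaded. A minimal sketch of that flow follows, using standard SHA-256 via `hashlib`; the paper's modified SHA variant and its encrypted pre-processing step are not specified here, so the `DedupStore` class and its methods are illustrative assumptions, not the authors' implementation.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: one physical copy per unique digest."""

    def __init__(self):
        self._blocks = {}      # digest -> stored data (single copy)
        self._privileges = {}  # digest -> set of privilege labels allowed to match it

    def duplicate_check(self, data: bytes, privilege: str) -> bool:
        """Privilege-based duplicate check performed before upload:
        True only if the content exists AND this privilege may see it."""
        digest = hashlib.sha256(data).hexdigest()
        return digest in self._blocks and privilege in self._privileges[digest]

    def upload(self, data: bytes, privilege: str) -> str:
        """Store the data only if its digest is new; always record the privilege."""
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self._blocks:
            self._blocks[digest] = data  # first copy is the only copy kept
        self._privileges.setdefault(digest, set()).add(privilege)
        return digest

store = DedupStore()
d1 = store.upload(b"report contents", "admin")
d2 = store.upload(b"report contents", "admin")  # duplicate: no new block stored
assert d1 == d2 and len(store._blocks) == 1
```

Real hierarchical schemes run the duplicate check over encrypted data or hash tokens so the server never sees plaintext; that encryption layer is omitted from this sketch.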
- Source
- Data and Metadata, Vol-3
- Date
- 2024-01-01
- Publisher
- Editorial Salud, Ciencia y Tecnologia
- Subject
- Cloud Servers; De-duplication; Preprocessing; SHA; Target
- Coverage
- Bhojan R., Department of Mathematics and Computer Science, The Papua New Guinea University of Technology, Papua New Guinea; Rajagopal M., Lean Operations and Systems, School of Business and Management, CHRIST (Deemed to be University), Bangalore, India; Ramesh R., Department of Computer Science, KPR College of Arts Science and Research, Tamil Nadu, India
- Rights
- All Open Access; Hybrid Gold Open Access
- Relation
- ISSN: 2953-4917
- Format
- Online
- Language
- English
- Type
- Article
- Collection
- Citation
- Bhojan R.; Rajagopal M.; Ramesh R., “Big Data De-duplication using modified SHA algorithm in cloud servers for optimal capacity utilization and reduced transmission bandwidth; [Big Data Deduplicación utilizando algoritmo SHA modificado en servidores en la nube para una utilización óptima de la capacidad y un ancho de banda de transmisión reducido],” CHRIST (Deemed To Be University) Institutional Repository, accessed February 25, 2025, https://archives.christuniversity.in/items/show/13704.