Quantum optimization for machine learning
- Title
- Quantum optimization for machine learning
- Creator
- Mani A.; Bhattacharyya S.; Chatterjee A.
- Description
- Machine learning is a branch of artificial intelligence that seeks to make machines learn from data, and it is being applied to solve real-world problems involving large amounts of data. Although machine learning is gaining wide acceptance, execution time remains one of the major concerns in practical implementations of machine learning techniques. Machine learning largely comprises techniques that train a model by reducing the error between the desired (actual) outcome and an estimated (predicted) outcome, a quantity commonly called the loss function. Training therefore often requires solving a difficult optimization problem, which is the most expensive step in the entire model-building process and its applications. One possible solution in the near future for reducing the execution time of training is to implement machine learning techniques on quantum computers instead of classical computers. It is conjectured that quantum computers may be exponentially faster than classical computers for problems that involve matrix operations. Some machine learning techniques, such as support vector machines, make extensive use of matrices and could therefore be accelerated by implementing them on quantum computers. However, their efficient implementation is non-trivial and requires the existence of quantum memories. Another possible near-term solution is therefore a hybrid classical-quantum approach, in which the machine learning model is implemented on a classical computer but the optimization of the loss function during training is performed on a quantum computer. Several quantum optimization algorithms have been proposed in recent years, which can be classified as gradient-based and gradient-free techniques. Gradient-based techniques require the optimization problem to be convex, continuous, and differentiable; if the problem is non-convex, they can find only local optima, whereas gradient-free techniques work well even for non-continuous, non-linear, and non-convex optimization problems. This chapter discusses a global optimization technique based on Adiabatic Quantum Computation (AQC) for minimizing the loss function without any restriction on its structure or on the underlying model being learned. It is further shown that, in the proposed framework, the AQC-based approach is superior to the circuit-based approach for solving global optimization problems. © 2020 Walter de Gruyter GmbH, Berlin/Boston. All rights reserved.
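- Note: the hybrid workflow described in the abstract (a model defined classically, with its loss minimized by a gradient-free, annealing-type optimizer) can be illustrated with a minimal sketch. The Python toy below is not from the chapter; all names (toy_loss, simulated_annealing) are hypothetical, and the quantum annealer is replaced by classical simulated annealing purely to show the structure of a global, gradient-free loss-minimization loop.

  ```python
  # Minimal sketch, assuming a 1-parameter linear model and a classical
  # simulated-annealing stand-in for the quantum (AQC) optimization step.
  import math
  import random

  def toy_loss(w, data):
      """Mean squared error of the model y = w * x over the dataset."""
      return sum((y - w * x) ** 2 for x, y in data) / len(data)

  def simulated_annealing(loss, data, w0=0.0, steps=5000, temp0=1.0):
      """Gradient-free global search over the single parameter w."""
      w, current = w0, loss(w0, data)
      best_w, best = w, current
      for t in range(1, steps + 1):
          temp = temp0 / t                        # simple cooling schedule
          cand = w + random.gauss(0.0, 0.5)       # propose a nearby parameter
          cand_loss = loss(cand, data)
          # accept downhill moves always, uphill moves with Boltzmann probability
          if cand_loss < current or random.random() < math.exp(
              (current - cand_loss) / max(temp, 1e-12)
          ):
              w, current = cand, cand_loss
              if current < best:
                  best_w, best = w, current
      return best_w, best

  if __name__ == "__main__":
      data = [(x, 3.0 * x + random.gauss(0.0, 0.1)) for x in range(-5, 6)]
      w_star, l_star = simulated_annealing(toy_loss, data)
      print(f"recovered w ~ {w_star:.3f}, loss = {l_star:.5f}")
  ```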
- Source
- Quantum Machine Learning, pp. 39-66.
- Date
- 2020-01-01
- Publisher
- De Gruyter
- Subject
- Artificial Intelligence; Non-convex optimization; Quantum computing
- Coverage
- Mani A., ASET, Amity University UP Noida, Uttar Pradesh, India; Bhattacharyya S., Department of Computer Science and Engineering, CHRIST (Deemed to be University), Bangalore, India; Chatterjee A., Computer Science, California State University Dominguez Hills, United States
- Rights
- Restricted Access
- Relation
- ISBN: 978-3-11-067070-7; 978-3-11-067072-1
- Format
- Online
- Language
- English
- Type
- Book chapter
- Citation
- Mani A.; Bhattacharyya S.; Chatterjee A., “Quantum optimization for machine learning,” CHRIST (Deemed To Be University) Institutional Repository, accessed February 23, 2025, https://archives.christuniversity.in/items/show/18815.