
Quantum Optimization for Neural Network Training

EasyChair Preprint 14861

14 pages
Date: September 14, 2024

Abstract

Recent advances in quantum computing have opened new avenues for optimizing neural network training, promising significant improvements over classical methods. Quantum optimization leverages superposition and entanglement to explore complex, high-dimensional parameter spaces more efficiently than classical algorithms. This paper explores the application of quantum optimization techniques to neural network training, focusing on the Quantum Approximate Optimization Algorithm (QAOA) and Quantum Gradient Descent (QGD). We discuss the theoretical foundations of these methods, their potential advantages in overcoming the limitations of classical optimization, and practical considerations for their implementation. Through case studies and experimental results, we show how quantum optimization can improve convergence rates, enhance generalization, and reduce computational overhead when training deep learning models. The paper also highlights the challenges and future directions for integrating quantum optimization into existing neural network frameworks, aiming to bridge the gap between quantum computing theory and practical applications in machine learning.
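
To give a flavor of the Quantum Gradient Descent idea mentioned in the abstract, here is a minimal illustrative sketch (not taken from the preprint itself): a one-qubit variational circuit is simulated with NumPy, and its parameter is trained by gradient descent using the parameter-shift rule, which is the standard way to obtain exact gradients of quantum circuit expectation values. The objective of driving the expectation value of Z toward -1 is a hypothetical toy stand-in for a real training loss.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expectation(theta):
    """<psi(theta)| Z |psi(theta)> for |psi(theta)> = RY(theta)|0>."""
    state = ry(theta) @ np.array([1.0, 0.0])
    z = np.diag([1.0, -1.0])
    return float(state @ z @ state)

def parameter_shift_grad(theta, shift=np.pi / 2):
    """Exact gradient of the expectation value via the parameter-shift rule."""
    return 0.5 * (expectation(theta + shift) - expectation(theta - shift))

# Toy objective: minimize <Z>, i.e. rotate |0> toward |1>,
# treating theta as a single trainable "weight".
theta, lr = 0.1, 0.4
for step in range(25):
    theta -= lr * parameter_shift_grad(theta)

print(f"theta = {theta:.3f}, <Z> = {expectation(theta):.3f}")  # <Z> approaches -1
```

In a realistic QGD setting the expectation values would be estimated on quantum hardware or a circuit simulator rather than computed analytically, and the circuit would have many parameters; the update rule, however, has the same structure as this sketch.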

Keyphrases: machine learning, neural network training, quantum optimization

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:14861,
  author    = {Favour Olaoye and Kaledio Potter},
  title     = {Quantum Optimization for Neural Network Training},
  howpublished = {EasyChair Preprint 14861},
  year      = {EasyChair, 2024}}