Research

Dr. Mao’s research focuses on advanced computing systems, including cloud computing, data-intensive platforms, quantum computing, and quantum-based applications.

We develop algorithms to improve the performance of existing systems and propose novel system architectures to address practical issues in industry. Specifically, Dr. Mao investigates the following research problems.

  • Quantum systems and applications: we develop algorithms that utilize quantum bits (qubits) to improve classical applications such as deep neural networks, building on frameworks like TensorFlow Quantum and Qiskit.

  • Cloud systems and applications: we build cluster management algorithms for virtualized computing platforms, such as Docker and Kubernetes, to schedule resources efficiently and improve application performance (a toy scheduling sketch follows this list).
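
To make the scheduling problem concrete, here is a minimal sketch of first-fit container placement onto cluster nodes, the kind of per-pod decision a scheduler such as Kubernetes' makes. The node names, capacities, and resource requests are invented for illustration; this is neither our scheduler nor the Kubernetes algorithm.

```python
# Toy first-fit container placement (illustrative only).
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_free: float   # spare CPU cores
    mem_free: float   # spare memory, GiB

def first_fit(nodes, cpu_req, mem_req):
    """Place a container on the first node with enough spare capacity."""
    for node in nodes:
        if node.cpu_free >= cpu_req and node.mem_free >= mem_req:
            node.cpu_free -= cpu_req
            node.mem_free -= mem_req
            return node.name
    return None  # no node fits: the container stays pending

cluster = [Node("node-a", cpu_free=4.0, mem_free=8.0),
           Node("node-b", cpu_free=8.0, mem_free=16.0)]

for cpu, mem in [(2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]:
    print((cpu, mem), "->", first_fit(cluster, cpu, mem))
```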


Quantum systems and applications

The fast development of quantum computing has pushed classical designs toward the quantum stage, promising to move beyond the physical limits that classical hardware imposes on deep learning applications. For instance, Google demonstrated quantum supremacy with a 53-qubit quantum computer, completing in 200 seconds a task estimated to take 10,000 years on the most powerful classical supercomputer available at the time. Given this potential, quantum-based deep learning architectures are attracting increasing attention in both industry and academia, in the hope that certain systems might offer a quantum speedup. In this field, we have developed two quantum-based deep learning systems: one for classification and one for generative adversarial networks.
  • We developed QuGAN, a Generative Adversarial Network (GAN) realized through quantum states, targeting prominent and successful GAN applications (e.g., selfies to emojis, 3D object generation, and face aging). Exploiting the potential quantum speedup, we designed and implemented the QuGAN architecture so that it provides stable convergence when trained and tested on real datasets. Compared to classical GANs and other quantum-based GANs in the literature, QuGAN achieved significantly better training efficiency, with up to a 98% reduction in parameter count, and improved the measured similarity between the generated and original distributions by up to 125%. A toy sketch of the idea follows.

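The following is a minimal sketch of the quantum-generator idea, not the QuGAN architecture itself: a small parameterized Qiskit circuit is trained by plain finite-difference descent on a squared-distance loss to reproduce a toy target distribution. A real GAN would obtain this training signal from an adversarial discriminator instead; the circuit shape, loss, and hyperparameters are all illustrative assumptions.

```python
# Toy quantum "generator" (not the QuGAN architecture): a parameterized
# circuit trained so its output distribution matches a target distribution.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter
from qiskit.quantum_info import Statevector

n_qubits = 2
thetas = [Parameter(f"theta_{i}") for i in range(n_qubits)]

gen = QuantumCircuit(n_qubits)           # generator: RY layer + entangler
for i, t in enumerate(thetas):
    gen.ry(t, i)
gen.cx(0, 1)

def generated(params):
    """Probability distribution over basis states for given angles."""
    bound = gen.assign_parameters(dict(zip(thetas, params)))
    return Statevector(bound).probabilities()

target = np.array([0.5, 0.0, 0.0, 0.5])  # toy target: a Bell-like distribution

def loss(params):
    # Squared distance to the target; a real GAN gets this signal
    # from an adversarial discriminator instead.
    return float(np.sum((generated(params) - target) ** 2))

params = np.random.uniform(0.0, np.pi, n_qubits)
lr, eps = 0.5, 1e-3
for _ in range(200):                     # finite-difference gradient descent
    grad = np.array([(loss(params + eps * e) - loss(params - eps * e)) / (2 * eps)
                     for e in np.eye(n_qubits)])
    params -= lr * grad

print("final distribution:", np.round(generated(params), 3))
```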

  • From a practical systems point of view, we proposed QuClassi, a hybrid quantum-classical deep neural network architecture for classification problems (e.g., face recognition). With a limited number of qubits, the system reduces model size by 95% relative to comparable classical learning models. To the best of our knowledge, QuClassi is the first practical solution that tackles image classification in the quantum setting. Compared to similarly parameterized classical neural networks, QuClassi learned significantly faster (up to 53%) and achieved up to 215% higher accuracy in our experiments. Besides experiments on local simulators, we evaluated the proposed system on real quantum computers via the IBM Quantum Experience. A toy sketch of the hybrid idea follows.

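Below is a minimal sketch of the hybrid quantum-classical classification idea, not QuClassi itself: a single-qubit variational circuit angle-encodes one feature, and one trainable rotation is fit with the parameter-shift rule so that the sign of the measured Z expectation separates two toy classes. The dataset, circuit, and learning rate are illustrative assumptions.

```python
# Toy hybrid quantum-classical classifier (not QuClassi itself): the sign
# of a measured <Z> expectation decides the class; a single rotation is
# trained with the parameter-shift rule.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, Pauli

def z_expectation(x, w):
    qc = QuantumCircuit(1)
    qc.ry(x, 0)               # angle-encode the (single) input feature
    qc.ry(w, 0)               # trainable rotation, the "quantum layer"
    return Statevector(qc).expectation_value(Pauli("Z")).real

def predict(x, w):
    return 0 if z_expectation(x, w) >= 0 else 1   # <Z> near +1 -> class 0

# Toy data: angles near 0 are class 0, angles near pi are class 1.
X = np.array([0.1, 0.3, 2.8, 3.0])
y = np.array([0, 0, 1, 1])

w, lr = 0.0, 0.2
for _ in range(50):                       # fit <Z> to +1 / -1 targets
    for xi, yi in zip(X, y):
        target = 1.0 - 2.0 * yi
        err = z_expectation(xi, w) - target
        # Parameter-shift rule: d<Z>/dw = (<Z>(w+pi/2) - <Z>(w-pi/2)) / 2
        grad = (z_expectation(xi, w + np.pi / 2)
                - z_expectation(xi, w - np.pi / 2)) / 2
        w -= lr * 2 * err * grad

acc = np.mean([predict(xi, w) == yi for xi, yi in zip(X, y)])
print(f"trained w = {w:.3f}, training accuracy = {acc:.2f}")
```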


Cloud systems and resource management

In the past decades, we have witnessed a spectacular information explosion over the Internet. To utilize the data generated by its many sources, machine learning plays a key role in advanced big data analytics and enables a wide range of applications, such as social network analysis and computational biology. Because data in today's big data scenarios are collected, processed, and stored in a distributed fashion, distributed and parallelized learning systems are frequently required to train models across data centers, and a spectrum of learning systems and frameworks has been proposed to accelerate these algorithms on distributed datasets.

Generally, these methods first cast the problem as a loss function over the data, then train the model by optimizing its parameters to minimize that loss along its gradient. Training proceeds in iterations, each of which improves the model by sampling and learning from a batch of examples; this iterative process gradually converges to a (local) minimum or saddle point, yielding a set of usable parameters, as the loss reduction per iteration shrinks. In this field, our projects aim to accelerate the overall performance of heterogeneous multi-task learning from the systems side using a novel virtualization technique, namely containerization. The sketch below illustrates this iterative training pattern.
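
As a concrete illustration of the iterative pattern described above, here is a minimal mini-batch gradient descent loop on a synthetic least-squares problem: each iteration samples a batch, computes the gradient of the loss, and takes a step downhill while the full-data loss shrinks. The data, batch size, and learning rate are illustrative.

```python
# Mini-batch gradient descent on a synthetic least-squares problem.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data: y = X @ true_w + noise.
n, d = 1000, 5
true_w = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ true_w + 0.1 * rng.normal(size=n)

w = np.zeros(d)
lr, batch_size = 0.1, 32
for step in range(500):
    idx = rng.choice(n, size=batch_size, replace=False)  # sample a batch
    Xb, yb = X[idx], y[idx]
    grad = 2 * Xb.T @ (Xb @ w - yb) / batch_size         # gradient of batch loss
    w -= lr * grad                                       # descend
    if step % 100 == 0:
        loss = np.mean((X @ w - y) ** 2)                 # full-data loss
        print(f"step {step:3d}  loss {loss:.4f}")
```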