Collaborative Intelligence Between the Cloud and Mobile Edge


Intelligent applications have gained tremendous popularity in recent years. Examples include personal assistants such as Apple Siri and Amazon Alexa, and advanced driver-assistance systems in modern vehicles. These applications rely heavily on Neural Networks (NNs) to process natural inputs such as voice and images. However, NNs have orders-of-magnitude higher computation requirements than traditional machine learning methods, creating significant challenges for mobile/IoT devices.

The common status-quo approach is to offload NN computation to cloud datacenters by transferring inputs over wireless networks. However, this cloud-centric approach under-utilizes or bypasses the compute resources available on mobile devices, and its performance is highly susceptible to variability in wireless network conditions. Meanwhile, mobile hardware has grown increasingly capable of NN computation through powerful mobile SoCs and specialized co-processors.

In this proposed work, we aim to improve the performance and energy efficiency of intelligent applications by more effectively utilizing compute resources outside the cloud. A body of prior work has proposed techniques to partition NNs between mobile devices and the cloud; a simple sketch of this idea follows below. However, the evolution of NN model design and mobile hardware, together with complex real-world environments, has exposed fundamental limitations in existing approaches. We plan to design novel approaches to address those limitations.
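
To make the partitioning idea concrete, the minimal sketch below splits a toy PyTorch model at a fixed layer index, runs the head on the device, and treats the tail as the cloud side. The model, the split index SPLIT_AT, and the run_in_cloud stand-in are illustrative assumptions, not the proposed method; a real system would choose the partition point dynamically based on network conditions and device capability.

import torch
import torch.nn as nn

# A small example model; in practice this would be a large vision or speech NN.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),
)

SPLIT_AT = 3  # hypothetical partition point; a partitioning policy (not shown) would pick this

head = model[:SPLIT_AT]   # layers executed on the mobile device
tail = model[SPLIT_AT:]   # layers executed in the cloud

def run_on_device(x: torch.Tensor) -> torch.Tensor:
    # Execute the head of the network locally and return the intermediate activation.
    with torch.no_grad():
        return head(x)

def run_in_cloud(activation: torch.Tensor) -> torch.Tensor:
    # Stand-in for the cloud side: in a real system the activation would be
    # serialized, sent over the wireless link, and executed on a datacenter GPU.
    with torch.no_grad():
        return tail(activation)

if __name__ == "__main__":
    frame = torch.randn(1, 3, 224, 224)      # e.g., one camera frame
    intermediate = run_on_device(frame)      # on-device compute
    payload_bytes = intermediate.numel() * intermediate.element_size()
    print(f"intermediate activation to upload: {payload_bytes} bytes")
    logits = run_in_cloud(intermediate)      # cloud-side compute
    print("prediction:", logits.argmax(dim=1).item())

The split point trades off on-device compute against the size of the intermediate activation that must cross the wireless link, which is the core tension the proposed work targets.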

People

Lingjia Tang
CSE, Engineering

Jason Mars
CSE, Engineering


Funding

Funding: $30K (2022)
Goal: Design generalized and effective collaborative intelligence approaches between mobile devices and the cloud in dynamic, complex environments to support state-of-the-art, large-scale intelligent applications.
Token Investors: Lingjia Tang, Jason Mars


Project ID: 1041