- Investigated the Pareto frontier of the accuracy-latency trade-off for transfer learning models.
- Conducted an in-depth study of how to perform transfer learning and neural architecture search effectively and efficiently.
- Explored advanced compiler-optimization opportunities and challenges for Pareto-frontier models, e.g., GPU memory sharing across models and layer-based optimization caching.