About Json Lee
This page is my profile. I received my Ph.D. at the State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences. I am an open-source enthusiast. I love coding, long-distance running, and cycling.
Biography
I received my B.E. degree in Computer Science and Technology from Wuhan University in July 2016, and my Ph.D. degree in Computer Architecture from the Institute of Computing Technology, CAS, in July 2021, under the supervision of Prof. Feng Xiaobing.
My Ph.D. thesis, On the System Optimizations of DNN Accelerators, studies benchmarking, compiler, and runtime-system optimizations for dedicated DNN accelerators.
Research Directions
My research interests span Compiler Techniques, Runtime Systems, Programming Languages, Computer Architecture, Distributed Computing, Parallel Computing, and Machine Learning.
I am interested in everything about the underlying infrastructure, including but not limited to Compilers, Programming Languages, Operating Systems, Runtimes, and Computer Architecture.
Publications
Research Activities
- 2017.06 ~ 2019.12, design and implementation of BANG, a high-performance programming language for Cambricon neural network chips. For more details about the compiler, please check out this page. Thanks to this project, I gained the ability to build and hack a large system.
- 2019.01 ~ 2019.02, Google AI Machine Learning Winter Camp, Peking Site. Automatic App Name Generator: a tool that generates popular app names. For more details, please check out this GitHub repo.
- 2019.09 ~ 2020.03, characterizing the end-to-end deployment of DNNs on commercial AI accelerators, e.g., Cambricon MLU100 and Huawei Atlas 300. For more details, please check out this GitHub repo.
- 2019.06 ~ present, a tiny DSL for DNN accelerators with formal PL specifications. For more details, please check out this GitHub repo.
- 2020.06 ~ 2021.02, application-oblivious memory scheduling support for heterogeneous computing systems. The core idea is a runtime system that automatically pinpoints the memory behavior of each device memory block, detects memory access patterns, and generates a memory scheduling plan, thereby reducing the memory pressure on device accelerators. The implementation is based on the Apache top-level project Singa. For more details, please check out this repo; for baselines, see this repo.
- 2021.01 ~ 2021.02, Xilinx Customized Computing Winter Camp: design and optimization of a tiny TPU-like DNN accelerator with HLS and LLVM CIRCT. Thanks to Xilinx for providing the PYNQ-Z2 FPGA board.
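The memory-scheduling idea above (pinpoint per-block memory behavior, detect access patterns, generate a scheduling plan) can be sketched in a few lines. This is a minimal illustration with hypothetical names, not the actual Singa-based implementation: it records the step at which each device memory block is accessed, finds idle gaps in the access pattern, and emits a swap-out/swap-in plan that frees device memory during those gaps.

```python
# Minimal sketch of application-oblivious memory scheduling.
# All class/method names here are hypothetical, for illustration only.
from collections import defaultdict

class MemoryScheduler:
    def __init__(self):
        # block_id -> list of (monotonically increasing) access steps
        self.accesses = defaultdict(list)

    def record_access(self, block_id, step):
        """Pinpoint the memory behavior of one device memory block."""
        self.accesses[block_id].append(step)

    def plan(self, idle_threshold=2):
        """Generate a scheduling plan: if the gap between two consecutive
        accesses to a block exceeds the threshold, swap the block out
        after the first access and swap it back in just before the next."""
        plan = []
        for block_id, steps in self.accesses.items():
            for prev, nxt in zip(steps, steps[1:]):
                if nxt - prev > idle_threshold:
                    plan.append(("swap_out", block_id, prev))
                    plan.append(("swap_in", block_id, nxt - 1))
        return sorted(plan, key=lambda op: op[2])

# Usage: block "A" is touched only at steps 0 and 10, block "B" every step,
# so only "A" has a long idle gap and gets scheduled out and back in.
sched = MemoryScheduler()
sched.record_access("A", 0)
sched.record_access("A", 10)
for s in range(11):
    sched.record_access("B", s)
print(sched.plan())  # [('swap_out', 'A', 0), ('swap_in', 'A', 9)]
```

A real runtime would observe accesses transparently (e.g., by instrumenting device allocations) and overlap the swaps with computation; the sketch only captures the pattern-detection and plan-generation steps.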
Internship Experiences
- 3D Road Modeling and Simulation System, Wuhan Zoyon Technology Co., Ltd., 2015
- Real-time video style transfer with deep learning methods, Beijing TensorStack Technology Co., Ltd., 2017
Awards
- Outstanding Graduate of Wuhan University, Wuhan University, 2016
- ACM/ICPC Silver Medal, Shenyang Site, 2016
- Third Place in National Information Security Contest for College Students, China, 2015
- First Prize, American Mathematical Contest in Modeling (MCM), 2015
- First Prize, Eighth Central China Regional Mathematical Modeling Contest for College Students, China, 2015
- Multiple-time champion of the 5000m and 1500m long-distance races, Wuhan University, 2014
Useful Links
- System Conference Deadline: http://www.cs.technion.ac.il/~dan/index_sysvenues_deadline.html
- Chronological Listing of A.M. Turing Award Winners: https://amturing.acm.org/byyear.cfm
Postscript
For any questions or suggestions, feel free to open an issue on GitHub or email me at jsonlee@whu.edu.cn.