Efficient Architecture Design for Deep Neural Networks

Daquan’s main research interest is efficient architecture design for deep neural networks, pursued along two directions. The first is the design of new operators to replace popular modules such as convolutional and fully connected operators. This work has two parts: structural design and parameter design. Structural design develops new graph topologies that require less compute and a smaller memory footprint; parameter design reuses parameters within a fixed topological connection. The second direction exploits AutoML algorithms to search for an optimized architecture automatically. Recently, he has been working on transformer architectures for more accurate and efficient networks, aiming to match the performance of convolutional neural networks on vision tasks without relying on convolutional inductive biases.
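As a rough illustration of the parameter-design idea, the minimal PyTorch sketch below ties a single set of convolution weights across several steps of a fixed chain topology, so the block's parameter count stays that of one layer. The module name and hyperparameters are hypothetical, chosen only for illustration, and do not come from the project itself.

```python
import torch
import torch.nn as nn

class WeightTiedBlock(nn.Module):
    """Hypothetical sketch: reuse one convolution's parameters across a
    fixed chain topology instead of stacking independent layers."""
    def __init__(self, channels: int, steps: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)
        self.steps = steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The same conv weights are applied at every step of the chain.
        for _ in range(self.steps):
            x = self.act(self.norm(self.conv(x)))
        return x

x = torch.randn(1, 16, 32, 32)
block = WeightTiedBlock(channels=16, steps=3)
print(block(x).shape)  # torch.Size([1, 16, 32, 32])
# Parameters of a single conv (plus norm), not three independent layers:
print(sum(p.numel() for p in block.parameters()))
```

Compared with three independent convolutional layers, this weight-tied variant keeps the same depth of computation while storing roughly one third of the convolution parameters, which is the trade-off parameter design targets.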
