TNAS: Neural Network Architecture Search Sampling Module Based on Variational Autoencoder


Kefan Yan, Shunlong Wang, Yuwei Guan, Ailian Jiang

Abstract

Neural Architecture Search (NAS) automatically searches for neural network architectures in a large search space. The search space typically contains billions of candidate architectures, which makes finding the best-performing one computationally expensive. One-shot and gradient-based NAS methods have achieved good results on various computer vision tasks. Despite this success, their current sampling strategies are either fixed or hand-designed, which limits their effectiveness. In this paper, we propose a learnable sampling module for neural architecture search (NAS) named TNAS, based on the variational autoencoder (VAE). This module can be easily embedded into existing weight-sharing NAS frameworks, such as one-shot and gradient-based approaches, and significantly improves the quality of the search results. In a NASNet-like search space, TNAS produces a series of competitive results on CIFAR-10 and ImageNet. In addition, combined with the one-shot method, our approach obtains state-of-the-art ImageNet classification results under 400M FLOPs, reaching 77.4% accuracy in a ShuffleNet-like search space. Finally, we perform an in-depth analysis of TNAS on the NAS-Bench-201 benchmark to verify the effectiveness of the proposed approach.
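The abstract describes sampling candidate architectures from a VAE latent space rather than from a fixed or hand-designed distribution. The sketch below is purely illustrative and is not the paper's implementation: the op set, edge count, latent dimension, and the untrained linear encoder/decoder weights are all assumptions chosen for brevity. It shows only the sampling mechanics (encode an architecture, draw a latent via the reparameterization trick, decode to discrete op choices); in TNAS the encoder/decoder would be learned jointly with the weight-sharing supernet.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical NAS-Bench-201-style cell: 6 edges, 5 candidate operations per edge.
OPS = ["none", "skip_connect", "conv_1x1", "conv_3x3", "avg_pool_3x3"]
NUM_EDGES = 6
LATENT_DIM = 8
INPUT_DIM = NUM_EDGES * len(OPS)

def one_hot(arch):
    """Encode an architecture (one op index per edge) as a flat one-hot vector."""
    x = np.zeros((NUM_EDGES, len(OPS)))
    x[np.arange(NUM_EDGES), arch] = 1.0
    return x.ravel()

# Illustrative random weights; a real VAE sampler would learn these from search feedback.
W_mu = rng.normal(scale=0.1, size=(INPUT_DIM, LATENT_DIM))
W_logvar = rng.normal(scale=0.1, size=(INPUT_DIM, LATENT_DIM))
W_dec = rng.normal(scale=0.1, size=(LATENT_DIM, INPUT_DIM))

def encode(x):
    """Map an architecture encoding to the parameters of a Gaussian latent."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar):
    """Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Decode a latent vector back to a discrete op choice per edge."""
    logits = (z @ W_dec).reshape(NUM_EDGES, len(OPS))
    return np.argmax(logits, axis=1)

# Sample a new candidate architecture in the neighborhood of a seed architecture.
seed_arch = [3, 1, 4, 0, 2, 3]
mu, logvar = encode(one_hot(seed_arch))
candidate = decode(reparameterize(mu, logvar))
print(candidate)  # one op index per edge, e.g. array of 6 values in [0, 4]
```

In a full search loop, candidates decoded this way would be evaluated with the shared supernet weights, and the VAE would be updated so that the latent space concentrates on high-performing regions of the search space.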

Article Details

How to Cite
Kefan Yan, Shunlong Wang, Yuwei Guan, Ailian Jiang. (2021). TNAS: Neural Network Architecture Search Sampling Module Based on Variational Autoencoder. CONVERTER, 2021(7), 288-296. Retrieved from http://converter-magazine.info/index.php/converter/article/view/499
Section
Articles