Training a supernet with a single set of shared weights has become a popular approach to speed up neural architecture search (NAS). However, it is difficult for the supernet to accurately evaluate candidate architectures in a large-scale search space due to the high degree of weight coupling inherent in the weight-sharing setting. To address this, we present a shrinking-and-expanding supernet that decouples the shared parameters by reducing the degree of weight sharing, avoiding the unstable and inaccurate performance estimation observed in previous methods. Specifically, we propose a new shrinking strategy that progressively simplifies the original search space by discarding unpromising operators. Based on this, we further present an expanding strategy that appropriately increases the parameters of the shrunk supernet. We provide comprehensive evidence showing that, in a weight-sharing supernet, the proposed method SE-NAS delivers more accurate and more stable performance estimation. Experimental results on the ImageNet dataset indicate that SE-NAS achieves higher Top-1 accuracy than its counterparts under the same complexity constraint and search space. An ablation study is presented to further analyze SE-NAS.
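As an illustration of the shrink-and-expand idea only: the abstract does not specify the algorithmic details, so the operator names, the scoring proxy, and the expansion rule in the sketch below are hypothetical placeholders, not the authors' actual procedure. The sketch shows the two phases in sequence: shrinking prunes low-scoring operators from each layer's candidate set, and expanding duplicates the weights of the surviving operators so fewer sub-networks share the same parameters.

```python
# Illustrative sketch only -- operator names, the scoring function, and the
# expansion rule are hypothetical, not the procedure described in the paper.
import copy
import random


def score(layer_idx, op_name):
    """Hypothetical proxy score for an operator (e.g., validation accuracy
    of sub-networks containing it). A random stand-in is used here."""
    return random.random()


def shrink(search_space, drop_per_round=1):
    """Discard the lowest-scoring operators in each layer's candidate set."""
    shrunk = {}
    for layer_idx, ops in search_space.items():
        ranked = sorted(ops, key=lambda op: score(layer_idx, op))
        shrunk[layer_idx] = ranked[drop_per_round:]  # keep the higher-scoring rest
    return shrunk


def expand(supernet_weights, search_space, copies=2):
    """Give each remaining operator multiple weight copies, reducing the
    degree of weight sharing among sub-networks."""
    expanded = {}
    for layer_idx, ops in search_space.items():
        for op in ops:
            # Duplicate the shared weights so fewer architectures share them.
            expanded[(layer_idx, op)] = [
                copy.deepcopy(supernet_weights[(layer_idx, op)])
                for _ in range(copies)
            ]
    return expanded


if __name__ == "__main__":
    # Toy search space: 3 layers, each choosing among 4 candidate operators.
    space = {i: ["conv3x3", "conv5x5", "mbconv3", "skip"] for i in range(3)}
    weights = {(i, op): [0.0] for i in range(3) for op in space[i]}

    space = shrink(space)             # drop unpromising operators per layer
    weights = expand(weights, space)  # re-allocate parameters to the rest
    print(space)
```

In this toy version the parameter budget freed by shrinking is what makes the expansion affordable, which mirrors the high-level motivation stated above; the real method's criteria for discarding operators and allocating new parameters are described in the body of the paper.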