As deep learning applications are deployed across diverse areas, the explainability of neural networks is becoming increasingly important in the research field. Besides being desirable in its own right, explainability often helps further improve the performance of deep learning models. In this work, we introduce float neurons and fixed neurons to describe neuron-level stability in a network, based on the activation patterns of neurons for a given input. With these concepts, we quantify the expressive ability and robustness of a neural network via a neuron entropy metric and illustrate their relationship by decomposing the computational graph of the network. We find theoretically that networks with better generalization exhibit more diverse activation patterns across the input space, which results in higher neuron entropy globally. On the other hand, the prediction of a neural network is more susceptible to perturbations when there are locally more float neurons, which respond with additional impulses to local stimuli. Empirically, we show that the proposed analytical framework can be applied to downstream tasks, including network pruning and randomized smoothing of network predictions.
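To make the fixed/float distinction concrete, the following is a minimal sketch, not the paper's exact metric: it treats a neuron's activation pattern as the binary on/off state of a ReLU unit, estimates each neuron's probability of being active over a batch of inputs, and scores it with binary entropy. Neurons whose entropy is near zero keep the same state on almost all inputs (fixed), while high-entropy neurons flip often (float). The random weights, the entropy definition, and the 0.1 threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU network; weights are random placeholders.
W = rng.normal(size=(16, 8))   # 8 inputs -> 16 hidden neurons
b = rng.normal(size=16)

X = rng.normal(size=(1000, 8))   # batch of inputs sampled from the input space
pre = X @ W.T + b                # pre-activations, shape (1000, 16)
active = pre > 0                 # binary activation pattern of each neuron

# Per-neuron probability of being active across the batch.
p = active.mean(axis=0)

# Binary entropy per neuron (in bits); ~0 => "fixed" neuron (same state on
# nearly all inputs), near 1 => "float" neuron (state flips frequently).
eps = 1e-12
H = -(p * np.log2(p + eps) + (1.0 - p) * np.log2(1.0 - p + eps))

fixed_mask = H < 0.1   # illustrative threshold for calling a neuron "fixed"
print("mean neuron entropy (bits):", H.mean())
print("fixed neurons:", int(fixed_mask.sum()), "of", len(H))
```

Under this reading, a globally higher mean entropy indicates more diverse activation patterns across inputs, which the abstract associates with better generalization, whereas a local concentration of high-entropy (float) neurons signals sensitivity to perturbations.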