What do you mean by 'active neuron'? If you mean that the output of the neuron is zero, then the answer might be no; it depends on the activation function.
If you are using a sigmoid function, its output will be 0.5 when its input is zero. This is not the case for other activation functions like ReLU or tanh, which output 0 at zero input.
If by 'active' you mean that the neuron responds to changes in its input, then the answer is no: with zero weights, the neuron will always output the same number regardless of its input.
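To make this concrete, here is a minimal NumPy sketch (the toy inputs and helper functions are my own, not from the question): with zero weights and zero bias the pre-activation is always 0, so the output is a constant determined entirely by the activation function.

```python
import numpy as np

def neuron(x, w, b, activation):
    # A single neuron: activation applied to the weighted sum plus bias.
    return activation(w @ x + b)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
relu    = lambda z: np.maximum(0.0, z)

w = np.zeros(3)   # zero weights
b = 0.0           # zero bias

for x in (np.array([1.0, -2.0, 0.5]), np.array([10.0, 3.0, -7.0])):
    print(neuron(x, w, b, sigmoid),   # always 0.5
          neuron(x, w, b, np.tanh),   # always 0.0
          neuron(x, w, b, relu))      # always 0.0
```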
Just to add to the answers above: if the network is trained with backprop, the weights and biases will be updated during training even if they start at zero, and the neuron will start contributing to the final output.
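As a sketch of that, assuming a single sigmoid neuron trained on squared error with one toy example (my own setup, not from the question): even starting from all-zero parameters, the gradient is generally nonzero, so the very first backprop step already moves the weights and bias away from zero.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x, y = np.array([1.0, 2.0, -1.0]), 1.0   # one toy training example
w, b = np.zeros(3), 0.0                  # zero-initialized weights and bias
lr = 0.5

for step in range(3):
    a = sigmoid(w @ x + b)               # forward pass
    dz = (a - y) * a * (1.0 - a)         # dL/dz for L = 0.5 * (a - y)**2
    w = w - lr * dz * x                  # dL/dw = dz * x
    b = b - lr * dz                      # dL/db = dz
    print(step, round(a, 4), w, b)       # w and b drift away from zero
```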
A single neuron can have zero weight and zero bias, and as Muhammad Farooq points out, further training with backprop will likely move the weights and biases around.
If you have *two* units in the same layer with zero weights and zero biases (and identical outgoing weights, as in an all-zero initialization), then backprop will compute an identical gradient for both, so they will always stay matched.
So the real problem is not when weights and biases are zero, but when they're "redundant." If two neurons always give identical output, you might as well ignore one of them. In that sense, some neurons can become "inactive."
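Here is a rough NumPy illustration of that redundancy. The toy data, the tanh hidden layer, and the choice of identical nonzero outgoing weights are my own assumptions to make the effect visible; the point is that the two zero-initialized hidden units receive identical gradients at every step and therefore remain exact copies of each other.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 2))                 # toy inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]          # toy targets

W1 = np.zeros((2, 2))                        # hidden weights: all zero
b1 = np.zeros(2)                             # hidden biases: all zero
w2 = np.array([0.5, 0.5])                    # identical outgoing weights (assumption)
b2 = 0.0
lr = 0.1

for step in range(5):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    y_hat = h @ w2 + b2
    d = (y_hat - y) / len(X)                 # dL/dy_hat for L = mean((y_hat - y)**2) / 2
    # Backprop
    grad_w2 = h.T @ d
    grad_b2 = d.sum()
    dz = np.outer(d, w2) * (1.0 - h**2)      # gradient at the hidden pre-activations
    grad_W1 = X.T @ dz
    grad_b1 = dz.sum(axis=0)
    # Gradient descent step
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    w2 -= lr * grad_w2; b2 -= lr * grad_b2
    # Always True: the two columns move away from zero, but they move together.
    print(step, np.allclose(W1[:, 0], W1[:, 1]), W1[:, 0])
```

Running it prints True at every step: the two columns of W1 change, but they stay identical, so the second hidden unit never adds anything the first one doesn't.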
Here's a paper that goes into more detail and suggests a solution: "Skip Connections as Effective Symmetry-Breaking".
(Also, this is why we typically initialize with random weights rather than, say, all-zero weights. Random matrices are very unlikely to contain "redundant" columns in this sense.)
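And a short sketch of that contrast (same kind of toy setup as above, my own construction): with a symmetric all-zero start the two hidden units get identical gradients, while a random start gives them different gradients from the very first step.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 2))                 # toy inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]          # toy targets
w2, b2 = np.array([0.5, 0.5]), 0.0           # identical outgoing weights (assumption)

def hidden_weight_grad(W1):
    # Gradient of the squared-error loss w.r.t. the hidden-layer weights
    # (zero hidden biases, tanh activations); one column per hidden unit.
    h = np.tanh(X @ W1)
    d = (h @ w2 + b2 - y) / len(X)
    dz = np.outer(d, w2) * (1.0 - h**2)
    return X.T @ dz

g_sym  = hidden_weight_grad(np.zeros((2, 2)))         # symmetric (all-zero) start
g_rand = hidden_weight_grad(rng.normal(size=(2, 2)))  # random start
print(np.allclose(g_sym[:, 0],  g_sym[:, 1]))   # True:  identical gradients, units stay matched
print(np.allclose(g_rand[:, 0], g_rand[:, 1]))  # False: symmetry already broken
```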