An individual biological neuron can compute a variety of functions, including max and XOR, that a single perceptron can't (e.g., https://www.science.org/doi/10.1126/science.aax6239 ). In general, one needs a fairly elaborate ANN to approximate the behavior of a single biological neuron.
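To make the perceptron side of that concrete, here's a minimal sketch (numpy only; the grid search and the hand-picked weights are my own illustration, not anything from the paper). No single linear-threshold unit matches XOR anywhere on a coarse weight grid, because XOR isn't linearly separable, while a hand-wired network with one two-unit hidden layer reproduces it exactly:

    import itertools
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y_xor = np.array([0, 1, 1, 0])

    def perceptron(x, w, b):
        # Single linear-threshold unit: fires iff w.x + b > 0.
        return int(np.dot(w, x) + b > 0)

    # Brute-force a coarse grid of weights/biases: no combination fits XOR
    # (none exists at all, since XOR is not linearly separable).
    grid = np.linspace(-2, 2, 41)
    found = any(
        all(perceptron(x, np.array([w1, w2]), b) == t for x, t in zip(X, y_xor))
        for w1, w2, b in itertools.product(grid, grid, grid)
    )
    print("single perceptron fits XOR on this grid:", found)  # False

    # One hidden layer (OR and AND units) plus an output unit computing
    # "OR and not AND" gives XOR exactly.
    def xor_mlp(x):
        h_or = perceptron(x, np.array([1, 1]), -0.5)    # x1 OR x2
        h_and = perceptron(x, np.array([1, 1]), -1.5)   # x1 AND x2
        return perceptron(np.array([h_or, h_and]), np.array([1, -1]), -0.5)

    print([xor_mlp(x) for x in X])  # [0, 1, 1, 0]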
OTOH, a three-layer network (one hidden layer) is a universal function approximator and RNNs are universal approximators of dynamical systems, so in terms of what they can compute, the two are sort of trivially equivalent.
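And as a rough illustration of the single-hidden-layer claim (again numpy only, with an arbitrary smooth target function and random hidden weights fitted by least squares; a sketch of the idea, not a proof of the theorem):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 200)[:, None]
    target = np.sin(3 * x) + 0.5 * x                # arbitrary smooth target

    # One hidden layer of tanh units: random input weights and biases,
    # output weights fitted by least squares.
    n_hidden = 50
    W = rng.normal(scale=3.0, size=(1, n_hidden))
    b = rng.normal(scale=3.0, size=n_hidden)
    H = np.tanh(x @ W + b)                          # hidden activations

    w_out, *_ = np.linalg.lstsq(H, target, rcond=None)
    approx = H @ w_out

    print("max absolute error:", float(np.max(np.abs(approx - target))))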
For one, they need to engage in metabolism and reproduction, but I'd like to see an argument for how neurons are more complex apart from those needs, e.g. do they compute some radically different class of functions than typical ANNs do, or require entirely different interconnections, etc.