But to determine how many discrete points of data are visibly separate from their neighbors in an image, you have to apply that measurement from a single dimension to both dimensions. It's really not accurate to say that a 24mp sensor resolves only 9.6% more data than a 20mp sensor when it is capturing 20% more data points. It is a two-dimensional sensor, after all, and we look at two-dimensional images. If you are interested only in the number of pixels in an image, it's fair enough to compare pixel counts between sensors. But if you are interested in how well a sensor resolves detail, then the square root is what's crucial, and it is scientifically correct to use linear (one-dimensional) resolution when determining the resolution of two-dimensional images.
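The arithmetic behind those two percentages is simple enough to sketch; here's a minimal illustration (the function name is just for this example):

```python
import math

def resolution_gains(mp_new, mp_old):
    """Compare two sensors by total data points (2D) and by linear resolution (1D)."""
    ratio = mp_new / mp_old
    area_gain = ratio - 1               # more data points: the 2D figure
    linear_gain = math.sqrt(ratio) - 1  # more pixels per edge: the 1D figure
    return area_gain, linear_gain

area, linear = resolution_gains(24, 20)
print(f"data points: +{area:.1%}, linear: +{linear:.1%}")
# data points: +20.0%, linear: +9.5%
```

Both numbers describe the same pair of sensors; the argument is only about which one maps onto "resolves more detail."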

Note that this is not a mere play on words or definitions. If you get into signal processing such as with a phased array antenna, or even an imaging sensor treated as a photon detector, the 20% value is closer to telling you what can be done than the 9.6% value.

Of course, with photography, neither approach translates directly to the human impression of detail, which varies with subject, viewing size, contrast, and even emotional impact. At 8x10 I would not say that a 50mp sensor is twice as good, or even 44% better, than a 24mp one; it might not be visibly better at all. At 90" I might say it's more than twice as good, because of how things scale when there's more detail to begin with.
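For the record, the "twice as good" and "44% better" figures in the 50mp vs 24mp comparison come from the same two ratios as before:

```python
import math

# Pixel-count ratio (2D) vs linear-resolution ratio (1D) for 50mp vs 24mp
ratio = 50 / 24
print(f"pixel-count ratio: {ratio:.2f}x")        # ~2.08x: "twice as good"
print(f"linear ratio: {math.sqrt(ratio):.2f}x")  # ~1.44x: "44% better"
```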

We throw around single numbers out of convenience. If you really want to model the performance of an optical system, you need to measure lp/mm horizontally or vertically, and then also along the diagonal. You basically need two measurements, which betrays the fact that we're dealing with an array of data in two dimensions. So yes, 24mp really is resolving 20% more data than 20mp, assuming of course that every pixel captures a distinct point. That is why the resolution of sensors is given in line-pairs/mm or lines per picture height, i.e., linear resolution, and not (line-pairs/mm)².