Every compressed file format - whether JPEG or RAW - works by describing areas of identical pixels (same color and brightness) with less data than it would take to describe EACH pixel individually. Example: if you have a lot of clean blue sky in your image, you get large areas with the same color and brightness. Compressing means describing such an area as, say, a 30x300 pixel region with the SAME properties in a few bytes, instead of storing 30x300x3 bytes (the 3 standing for the three primary colors).
If you use high ISO settings you increase the noise. The 30x300 pixel area mentioned above no longer has one uniform color; it is spoiled by noise. The compression algorithm no longer sees a region with the SAME properties, so it either falls back to smaller uniform areas or stores the pixels individually - and the file grows.
This effect can even be seen between different subjects: a blue sky with a few clouds doesn't need the same storage as a close-up of a very detailed scene. I remember a shot of some objects in the sand - the sand grains forced the compression algorithm to store pixel by pixel.
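You can see the effect with any general-purpose compressor. Here is a small sketch using Python's zlib (a ZIP-style DEFLATE compressor, not the actual camera algorithm - just an illustration of the principle): a uniform "sky" region shrinks dramatically, while the same region with simulated noise barely compresses.

```python
import random
import zlib

# A "clean sky" region: 30x300 pixels, 3 bytes per pixel, all identical.
clean = bytes([70, 130, 200]) * (30 * 300)   # 27,000 raw bytes

# The same region with simulated sensor noise on every byte value.
random.seed(0)
noisy = bytes((b + random.randint(-8, 8)) % 256 for b in clean)

# The uniform data compresses to a tiny fraction of its raw size;
# the noisy data stays much closer to the full 27,000 bytes.
print("clean:", len(zlib.compress(clean)), "bytes")
print("noisy:", len(zlib.compress(noisy)), "bytes")
```

The noise values are hypothetical, but the ratio between the two compressed sizes shows why a high-ISO file can be several times larger than a clean one of the same scene.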
A larger sensor with the same megapixel count decreases the noise, so the rise in storage demand with increasing ISO should be less noticeable.
Hopefully this explains the situation - best, Michael
Some remarks about compression:
RAW compression doesn't lose information. RAW just stores the data more intelligently, like ZIP software, which doesn't alter texts or other data (otherwise it wouldn't be very useful).
JPEG compression deletes information, depending on the degree of compression/quality.
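The lossless point can be checked directly with the same ZIP-style compressor (again zlib standing in as an example of the DEFLATE technique): after a compress/decompress round trip, every byte comes back exactly as it was.

```python
import zlib

data = b"The quick brown fox jumps over the lazy dog. " * 100
packed = zlib.compress(data)

# Lossless: decompressing restores the original data byte for byte,
# even though the packed form is much smaller.
assert zlib.decompress(packed) == data
print("raw:", len(data), "packed:", len(packed))
```

JPEG, in contrast, cannot give you back the original pixel values - the discarded detail is gone for good, which is exactly the trade-off the quality setting controls.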