Apparently it (or at least, a similar prototype from 2004-2005) uses a microlens array between the main lens and the sensor. I'm not clear on how that helps, but I'd like to have it explained to me.
It seems that they are capturing the direction of incoming light as well as its intensity: each microlens spreads the light arriving from different parts of the main lens's aperture across a small patch of sensor pixels, so each pixel under a microlens records a ray from a particular direction.
It looks as though the computation to perform the "digital refocusing" is pretty heavy.
Most of the articles about it refer to a microlens array.
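As I understand it, the refocusing itself is basically a "shift-and-add" over the directional samples: each angular view is shifted in proportion to its offset from the aperture centre, then all views are averaged. Here's a toy sketch of that idea in Python/NumPy — the 4D array layout and the `alpha` refocus parameter are my assumptions, not Lytro's actual pipeline, which also has to deal with demosaicing, calibration, and sub-pixel shifts:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Toy synthetic-aperture refocus by shift-and-add.

    lightfield: 4D array indexed (u, v, y, x) -- (u, v) are the angular
    samples recorded under each microlens, (y, x) are spatial pixels.
    alpha sets the virtual focal plane: 1.0 keeps the original focus,
    other values shift it nearer or farther.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each angular view in proportion to its offset from
            # the aperture centre (integer shifts only, for simplicity).
            du = (u - (U - 1) / 2) * (1 - 1 / alpha)
            dv = (v - (V - 1) / 2) * (1 - 1 / alpha)
            out += np.roll(lightfield[u, v],
                           (int(round(du)), int(round(dv))),
                           axis=(0, 1))
    # Averaging the shifted views blurs everything except the chosen plane.
    return out / (U * V)
```

Even this crude version makes it clear why the computation is heavy: every output image touches every pixel of the full 4D capture, and a real implementation interpolates sub-pixel shifts rather than using `np.roll`.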
There is another write-up on Wired as well: http://www.wired.com/gadgetlab/2011/06/ren-ng-lytro/
I think the important point being alluded to, but not addressed directly, is that this technology takes one element of interpretation and moves it from the photographer to the viewer of the image:
Traditionally, the photographer chooses where he/she wishes to place the point of focus, in order to direct the viewer's attention. With this technology, the viewer takes on this control.
From an artistic point of view, this raises an interesting question about whether the artist wants the viewer to have control over the interpretation of a work. While the ability of a happy snapper to choose the focus point after shooting appeals to many consumers, handing the viewer a say in the interpretation is a totally different discussion. Some photographers may not like the idea, because they choose focus in a way that suits their own interpretation of a scene, and they may not want the viewer to interpret it differently!