There are three views at the sun's position facing the camera, each with a different arc (field of view). Each of them renders a depth map.
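To make the arc idea concrete, here is a minimal C sketch of that setup; the names (LightView, CASCADES) and the concrete arc values are my own illustration, not taken from the engine:

```c
/* Three light-space views, all placed at the sun and aimed at the camera.
 * A smaller arc covers a smaller area, so with the same render-target size
 * it stores more depth texels per world unit near the camera. */
typedef struct {
    float arc;          /* field of view of the light view, in degrees */
    int   size;         /* depth map resolution (size x size texels)   */
} LightView;

/* Hypothetical cascade setup: narrow, medium, wide. */
static const LightView CASCADES[3] = {
    { 15.0f, 1024 },    /* sharpest shadows, close to the camera  */
    { 45.0f, 1024 },
    { 90.0f, 1024 },    /* coarsest shadows, covers the whole scene */
};
```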
There is another view placed at the same position as the camera, always with the same parameters. The objects in this view are rendered with a shader that, for each pixel, compares the "real" distance to the light against the stored depth, always using the depth map with the highest resolution available for that pixel. If the "real" depth is greater than the one saved in the depth map, the pixel is in shadow and is rendered dark grey; otherwise it is rendered white. This result is then combined with the image rendered by the camera in a postprocessing pass and displayed.
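As a simplified CPU-side sketch of what that shader does per pixel (the function names, the bias value, and the 0..1 texture-coordinate test are my assumptions, not the original shader):

```c
/* Pick the depth map with the highest resolution that still covers this
 * pixel: try the narrowest arc first, fall back to wider ones. uv[i] is
 * the pixel projected into cascade i's texture space (hypothetical input). */
int pick_cascade(const float uv[3][2])
{
    for (int i = 0; i < 3; ++i)
        if (uv[i][0] >= 0.0f && uv[i][0] <= 1.0f &&
            uv[i][1] >= 0.0f && uv[i][1] <= 1.0f)
            return i;               /* narrowest cascade that contains it */
    return 2;                       /* widest cascade as a fallback */
}

/* The actual test: farther from the light than the stored depth means
 * something occludes this pixel. Returns the shade written to the mask. */
float shadow_test(float real_depth, float stored_depth)
{
    const float bias = 0.005f;      /* small offset against self-shadowing */
    return (real_depth - bias > stored_depth)
        ? 0.3f                      /* in shadow: dark grey */
        : 1.0f;                     /* lit: white */
}
```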
This is basically how it works. I also blur the shadow map with the smallest arc to get softer shadows near the camera. On top of that, the depth maps also keep the square of the depth, which is used for the variance part (i.e. variance shadow maps).
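The "variance part" is the variance shadow map test: from the blurred depth and blurred depth-squared you get a mean and a variance, and Chebyshev's inequality gives an upper bound on how much light reaches the pixel. A minimal sketch, with my own names and clamp value:

```c
/* Variance shadow map test (Chebyshev's inequality).
 * m1 = blurred depth (E[d]), m2 = blurred depth squared (E[d^2]),
 * t  = receiver's distance to the light. Returns visibility in 0..1. */
float vsm_visibility(float m1, float m2, float t)
{
    if (t <= m1)
        return 1.0f;                      /* in front of the occluders: lit */
    float variance = m2 - m1 * m1;        /* sigma^2 = E[d^2] - E[d]^2 */
    if (variance < 1e-5f)
        variance = 1e-5f;                 /* clamp to avoid division by zero */
    float d = t - m1;
    return variance / (variance + d * d); /* upper bound on P(depth >= t) */
}
```

Because both moments come from the blurred map, this bound fades smoothly between lit and shadowed, which is what produces the soft edges.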

It could also be done with A6 comm and render to texture.