Thank you for the great questions! The R1A uses a Livox LiDAR sensor. This system uses a single laser beam steered through an approximately 40° field of view. The mapping results achieve an accuracy and precision of 5 cm.
We have a dataset on the ROCK Cloud called Claremont Canyon - 3D GCP.
This dataset contains 6 independently surveyed ground control points, measured using a GNSS RTK system and OPUS precise point positioning. The surveyed points and aerial targets were captured at sub-cm accuracy. Here is a photo from the survey -->
From here we compare the independently surveyed targets to the ground-classified LiDAR point cloud to verify the global accuracy of the dataset. In this dataset our accuracy was very good (2.8 cm).
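To make that comparison concrete, here is a minimal sketch of the idea: for each surveyed target, gather the ground-classified points within a small horizontal radius and compare elevations, then report the RMSE. The coordinates, radius, and function names below are illustrative assumptions, not the actual ROCK Cloud processing.

```python
import numpy as np

# Hypothetical GCP coordinates (x, y, z) in meters -- illustrative values only.
gcps = np.array([
    [100.0, 200.0, 55.03],
    [150.0, 250.0, 54.98],
    [300.0, 120.0, 56.01],
])

# Hypothetical ground-classified LiDAR points near the targets.
ground = np.array([
    [100.1, 199.9, 55.05],
    [149.9, 250.2, 54.95],
    [300.2, 120.1, 56.04],
    [400.0, 400.0, 60.00],  # far from every GCP; ignored by the radius filter
])

def vertical_errors(gcps, ground, radius=0.5):
    """For each GCP, average the elevation of ground points within
    `radius` meters horizontally and return the elevation differences."""
    errors = []
    for x, y, z in gcps:
        d = np.hypot(ground[:, 0] - x, ground[:, 1] - y)
        near = ground[d <= radius]
        if len(near):
            errors.append(near[:, 2].mean() - z)
    return np.array(errors)

errs = vertical_errors(gcps, ground)
rmse = np.sqrt(np.mean(errs ** 2))
print(f"vertical RMSE: {rmse * 100:.1f} cm")
```

A real check would use the full ground point cloud and all six targets, but the structure is the same: per-target elevation residuals rolled up into a single RMSE figure.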
As for motion blur, this is not a significant factor when using ROCK Cloud post-processing. The ROCK Cloud computes a very accurate trajectory for the R1A sensor and directly georeferences each point. This is in contrast to SLAM-based (computer vision) point cloud registration, where an entire scene must be captured and then registered against the next scene. The motion smearing in LiDAR SLAM is comparable to the smearing of image pixels in the rolling shutter effect.
Lastly, the data size. This really depends on how high and how fast you fly. We provide recommended flight parameters for best results, but sometimes 10 or even 15 cm accuracy and a lower point density are sufficient. In these cases you can fly very high and very fast and vastly reduce the data size.
A good rule of thumb for the R1A — every 0.125 sq km is about 2 GB of data!
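That rule of thumb can be turned into a quick back-of-the-envelope estimator. The helper name and the linear scaling assumption are mine; actual size will vary with altitude, speed, and scene content.

```python
# Rule of thumb from above: ~2 GB of data per 0.125 sq km.
GB_PER_SQ_KM = 2.0 / 0.125  # = 16 GB per square kilometer

def estimate_size_gb(area_sq_km):
    """Rough R1A data-size estimate (hypothetical helper, linear scaling assumed)."""
    return area_sq_km * GB_PER_SQ_KM

print(estimate_size_gb(0.5))  # a 0.5 sq km site -> about 8 GB
```

So a half square kilometer site lands around 8 GB at typical flight parameters; flying higher and faster, as noted above, pulls that number down.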
Hope this helps!