A Case for Space Microdatacenters
With the above in mind, we make a case for space microdatacenters (SµDCs) for high-resolution Earth observation space missions. A SµDC is a relatively large computational satellite whose primary task is to support in-space computation on data generated by the observation satellites. Its power generation capability is commensurate with the amount of computation it supports. Inter-satellite links (ISLs) are used to offload the data generated by the observation satellites to the SµDC.
To support in-space computation of Earth-based applications, one could also simply make each EO satellite much bigger (i.e., increase its power generation and computation capability). However, a LEO EO constellation supported by SµDCs offers several advantages over a homogeneous constellation of EO satellites large enough to support the applications natively. First, by concentrating compute onto SµDCs, EO satellite design – satellite bus design, heat dissipation, power generation and power management, etc. – is simplified, keeping mission costs low [54], which is critical for growth of the EO industry. Second, changes in computational requirements (e.g., an improved neural network model, increased accuracy requirements, or a change in application) would be hard to support in a homogeneous constellation, while they can be accommodated by adding SµDCs in our model. Third, SµDCs act as data integrators, minimizing the impact of variation in data generation (not all EO satellites within a constellation generate the same amount of data, e.g., land vs. ocean, day vs. night, cloudy vs. clear). Thus, average-case design for SµDCs is more effective than average-case design for a homogeneous constellation. Finally, SµDCs may also be used to provide space-based cloud computing, supporting excess compute requirements of multiple constellations, including from multiple organizations.
Fig. 9 shows the number of 4 kW SµDCs needed to support a constellation of 64 EO satellites for various resolutions and early discard rates. We assume a 4 kW SµDC for this study since Orbits Edge SATFRAME 445 [110] uses a 19-inch server rack, which can easily support up to 4 kW of compute. We assume no images are downlinked to Earth — all are processed in space. These results were generated using measured power and delay numbers on an RTX 3090 (Table 6). The RTX 3090 is a state-of-the-art GPU that provides high energy efficiency for image processing workloads [85] and supports high-productivity programming paradigms [72]. We used CUDA version 11.7 along with cuDNN version 8.9.0 and the supported TensorFlow version 2.12. For the Panoptic Segmentation application, which uses Mask R-CNN, we employed the Mask R-CNN [74] implementation available in the application’s repository. To run this specific application on the RTX 3090, we used TensorFlow-GPU version 1.14, ensuring compatibility with the provided Mask R-CNN implementation. For all of the DNNs, we performed inference 100 times for different batch sizes and used the Python NVML (pynvml) library to measure the average GPU utilization and average GPU power; we used a timing library to measure the inference time. We ported the TM workload from a CPU implementation to a CUDA implementation, and we implemented LSC using k-means in CUDA. Batch sizes that maximize energy efficiency (maximize pixels W⁻¹ s⁻¹) are used. We assume a ground track frame period of 1.5 s, meaning each satellite in the constellation generates one image every 1.5 s, and that early discard is applied uniformly over all generated images.
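For concreteness, the sketch below outlines how such per-batch measurements can be taken with pynvml and a wall-clock timer. The run_inference callable, the pixels_per_image value, and the sampling strategy (one power/utilization sample per run rather than continuous background sampling) are illustrative assumptions, not the exact harness used to produce Table 6.

```python
import time
import pynvml

def measure_inference(run_inference, batch, n_runs=100, gpu_index=0):
    # Average inference latency, GPU power, and GPU utilization over
    # n_runs invocations of a user-supplied run_inference(batch) call.
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
    latencies, powers, utils = [], [], []
    try:
        for _ in range(n_runs):
            start = time.perf_counter()
            run_inference(batch)  # e.g., a TensorFlow predict() call on one batch
            latencies.append(time.perf_counter() - start)
            # One NVML sample per run; a background sampling thread would give
            # a finer-grained power average.
            powers.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)   # mW -> W
            utils.append(pynvml.nvmlDeviceGetUtilizationRates(handle).gpu)   # percent
    finally:
        pynvml.nvmlShutdown()
    return {
        "avg_latency_s": sum(latencies) / n_runs,
        "avg_power_w": sum(powers) / n_runs,
        "avg_util_pct": sum(utils) / n_runs,
    }

def pixels_per_watt_second(batch_size, pixels_per_image, avg_latency_s, avg_power_w):
    # Energy-efficiency metric used to choose the batch size: pixels W^-1 s^-1.
    return (batch_size * pixels_per_image) / (avg_power_w * avg_latency_s)
```

Sweeping the batch size and keeping the one that maximizes pixels_per_watt_second reproduces the batch-size selection described above.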
The results show that a single 4 kW SµDC can support the computation needs of a majority of our applications at most resolutions, especially when used in conjunction with early discard. For example, only one 4 kW SµDC is needed to support all but one application at 1 m resolution with a 95% early discard rate. At finer resolutions and low early discard rates, multiple 4 kW SµDCs may be needed; in some cases, SµDCs may need to be significantly larger (e.g., 256 kW “Space Station” class SµDCs). While the number of SµDCs needed to support some applications at fine resolutions is high, the cost of downlinking the data to Earth instead is prohibitive. Even with 99% early discard, downlink at current commercial rates would cost the constellation operator over $1000 per minute at 10 cm resolution, whereas at that early discard rate eight out of ten applications can be supported with only a small number of SµDCs computing in space. Launching these SµDCs, especially at projected future launch costs, will invariably be cheaper than paying significant recurring costs for data downlink.
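A back-of-the-envelope version of the sizing behind these counts, under the stated assumptions (64 satellites, a 1.5 s ground track frame period, uniform early discard, and a 4 kW SµDC power budget), is sketched below. The per-image energy value is a hypothetical placeholder standing in for the measured numbers in Table 6, and the model ignores the cost of running the early-discard filter itself on every image.

```python
import math

def sudcs_needed(energy_per_image_j, n_satellites=64, frame_period_s=1.5,
                 early_discard=0.0, sudc_power_w=4000.0):
    # Images that survive early discard and must be fully processed, per second.
    images_per_second = n_satellites * (1.0 - early_discard) / frame_period_s
    # Sustained GPU power needed to keep up with that image rate.
    sustained_power_w = images_per_second * energy_per_image_j
    return math.ceil(sustained_power_w / sudc_power_w)

# Hypothetical example: an application needing ~120 J of GPU energy per image
# with a 95% early discard rate leaves ~2.1 images/s, or ~256 W of sustained
# compute, which fits comfortably within a single 4 kW SuDC.
print(sudcs_needed(energy_per_image_j=120.0, early_discard=0.95))  # -> 1
```

This mirrors the single-SµDC result quoted above for 1 m resolution at a 95% early discard rate: as resolution gets finer, energy_per_image_j grows with pixel count and the required SµDC count (or SµDC size) grows with it.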