Moving Ground-based Computation into Space
We explore an unorthodox solution instead: whenever possible, move the data processing computation that would have happened on the ground into space. If the computation can be performed in space, only insights, not raw sensor data, may need to be transmitted to the ground, alleviating the massive downlink requirements of high resolution applications.
To determine the feasibility of moving ground-based computation into space, we analyzed a collection of representative Earth-based applications that process high resolution satellite image data. We consider only 'memoryless' applications, which process one frame at a time using only that frame's data. Some EO imagery applications are instead longitudinal: they assess changes to a location over days, months, or even years, and thus require significant data storage, so we exclude them.
Table 5 lists our applications. Air Pollution Prediction (APP) is used to monitor urban areas and other areas where air pollution is a concern; missions supported by NASA and the California Air Resources Board use satellite imagery to predict air pollution. Satellite imagery-based Crop Monitoring (CM) is used to identify how much of a crop is grown in a region, which is important information for commodities markets, and to monitor crop growth and performance at a macro scale. Satellite imagery is used to perform Flood Detection (FD) and flood severity estimation; satellites can provide timely identification of fast-moving flash floods [146]. In Forage Quality Estimation (FQE), satellite imagery is used to estimate the quality (and quantity) of animal forage for use by ranchers, shepherds, and others. Urban Emergency Detection (UED) is a multifaceted application which attempts to identify emergent life-threatening phenomena in built-up and urban areas, enabling timely emergency response and public awareness; processing in space enables low-latency detection, an important metric for this application. Aircraft Detection (AD) detects and classifies aircraft from satellite imagery. While 3 m resolution is sufficient for commercial airliners and large, manned combat aircraft, \(< 1\) m resolution is likely required to detect and classify small drones and loitering munitions, which have played impactful roles on recent battlefields [150]. LEO satellites provide a benefit over aircraft for this role in that they do not violate restricted or contested airspace. Panoptic Segmentation (PS) [92] is an emerging machine vision application which performs both semantic segmentation of an image and identification of individual objects within the segments; it can be used to support numerous other applications. In Oil Spill Monitoring (OSM), waterways are monitored for signs of spills of oil and refined petroleum products. As oil is often shipped via intercontinental shipping lanes, satellites offer timely and inexpensive (relative to aircraft) monitoring of sea lanes. Traffic Monitoring (TM) detects moving vehicles by exploiting the offset between different wavelengths that moving objects produce, which causes a characteristic reflectance relationship across the RGB channels; this enables effective vehicle detection with very low compute overhead (see the sketch below). Land Surface Clustering (LSC) is an unsupervised machine learning technique which segments imagery to detect changes in a landscape over time. Satellites, which periodically revisit locations at little to no additional cost per revisit (unlike aircraft), are thus a good fit for this application. The majority of the applications are machine learning based, with most using deep learning. The variety of kernels and architectures leads to a wide spread in computational complexity, with over a \(10^5\times\) difference in floating point operations per pixel between aircraft detection and traffic monitoring.
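To make the band-offset cue behind TM concrete, the following is a minimal, illustrative sketch, not the TM pipeline evaluated in this work: it assumes a co-registered RGB frame in which a moving object appears displaced across the color bands, and simply flags pixels whose channels disagree strongly. The threshold and the use of plain channel differences are assumptions made for illustration.

```python
# Illustrative sketch of the RGB band-offset cue: because color bands are
# captured a fraction of a second apart on many imagers, a moving vehicle is
# displaced between channels, producing large inter-channel differences that
# static background does not. Threshold and difference metric are assumptions.
import numpy as np

def moving_vehicle_mask(rgb, threshold=0.15):
    """Flag pixels whose R/G/B channels disagree strongly (candidate movers).

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Returns a boolean (H, W) mask of candidate moving-object pixels.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Maximum pairwise inter-channel difference per pixel.
    diff = np.maximum.reduce([np.abs(r - g), np.abs(g - b), np.abs(r - b)])
    return diff > threshold

# Example on random data (stands in for a co-registered RGB satellite frame).
if __name__ == "__main__":
    frame = np.random.rand(256, 256, 3).astype(np.float32)
    print(moving_vehicle_mask(frame).sum(), "candidate pixels")
```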
To estimate the performance and power requirements of these applications, we ran them on a Jetson AGX Xavier (32 GB), which features an NVIDIA Volta GPU with eight streaming multiprocessors. The Jetson AGX Xavier has been proposed for use in cubesat-class EO satellites [26] due to its good radiation tolerance [124]. We installed JetPack 5.0.1 with L4T 34.1.1, which supports CUDA version 11.4.315. To maintain compatibility with this hardware and software environment, we installed the corresponding cuDNN version 8.3.0.166 and TensorFlow version 2.11. This configuration allowed us to successfully run all but one of our applications on the Jetson AGX Xavier. We ran inference 100 times for different batch sizes and used the tegrastats tool to measure average GPU utilization. To approximate GPU power consumption, we used the utilization data along with the reported maximum power of the Jetson AGX Xavier, an accepted technique for estimating embedded GPU power consumption [35]. Table 6 shows the performance and power of our applications on the Jetson AGX Xavier, including pixels processed \(\mathrm{s^{-1}\,W^{-1}}\).
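The following is a minimal sketch of the utilization-based power estimation described above, assuming utilization is logged by tegrastats as percentages and that GPU power scales roughly linearly with utilization up to the module's maximum rating. The 30 W maximum and the example frame rate are illustrative placeholders, not measurements from Table 6.

```python
# Hedged sketch: approximate GPU power from tegrastats utilization samples,
# assuming power scales roughly linearly with utilization up to the module's
# maximum rating [35]. Substitute the rating for your configured power mode.

def estimate_gpu_power(utilization_samples, max_power_w=30.0):
    """Return the estimated average GPU power (W) over a run."""
    if not utilization_samples:
        raise ValueError("need at least one utilization sample")
    avg_util = sum(utilization_samples) / len(utilization_samples)
    return (avg_util / 100.0) * max_power_w

def pixels_per_second_per_watt(pixels_per_frame, frames_per_second, power_w):
    """Throughput efficiency metric of Table 6: pixels s^-1 W^-1."""
    return pixels_per_frame * frames_per_second / power_w

# Example: 4K RGB frames at a hypothetical 5 frames/s and ~70% average utilization.
if __name__ == "__main__":
    power = estimate_gpu_power([68, 71, 72, 69], max_power_w=30.0)
    print(pixels_per_second_per_watt(3840 * 2160, 5.0, power))
```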
We use the performance and power numbers of applications on the Jetson AGX Xavier to determine how much compute and power generation a satellite must support to run a given application in space. Fig. 8 shows these requirements for a single satellite at 0.10 m to 3 m resolutions and 0-99% early discard rates. As in [54], each ground frame at 3 m is represented by a single 4K RGB image; scaling resolution holds the ground frame size constant while increasing the number of pixels per frame. Thus, as resolution becomes finer, the number of pixels that must be processed each second increases. The horizontal lines in the graph represent the number of pixels per second that must be processed to run each application at a given resolution and early discard rate. The curves (lines with non-zero slope) represent the number of pixels per second that can be supported for a given power budget (x-axis) with power efficiency equal to a Jetson AGX Xavier. The point where a curve intersects a horizontal line gives the amount of power needed to support that application on a satellite. We assume computational complexity scales linearly with the number of pixels, as is exactly the case for TM and is often the case for deep learning based image processing [69].
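The sizing calculation behind Fig. 8 can be sketched as follows, under the stated assumptions: one 4K RGB frame per 3 m ground frame, a constant ground footprint across resolutions, and compute that scales linearly with pixel count. The frame rate and the pixels-per-second-per-watt efficiency in the example are hypothetical placeholders, not values from Table 6.

```python
# Hedged sketch of the per-satellite compute/power sizing in Fig. 8.

BASE_RES_M = 3.0                      # reference resolution (m/pixel)
BASE_PIXELS_PER_FRAME = 3840 * 2160   # one 4K RGB frame at 3 m

def pixels_per_second(resolution_m, frames_per_second, early_discard=0.0):
    """Pixels/s that must be processed at a given resolution and discard rate."""
    # Constant ground footprint: halving the resolution quadruples the pixels.
    scale = (BASE_RES_M / resolution_m) ** 2
    return BASE_PIXELS_PER_FRAME * scale * frames_per_second * (1.0 - early_discard)

def power_required_w(resolution_m, frames_per_second, pixels_per_s_per_w,
                     early_discard=0.0):
    """Compute power (W) needed, given an application's measured efficiency."""
    return pixels_per_second(resolution_m, frames_per_second,
                             early_discard) / pixels_per_s_per_w

# Example: a hypothetical application at 1 frame/s with an efficiency of
# 1e6 pixels s^-1 W^-1, run at 0.3 m resolution with 99% early discard.
if __name__ == "__main__":
    print(power_required_w(0.3, 1.0, 1e6, early_discard=0.99))
```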
The results in Fig. 8 show that only one application can be supported at 3 m resolution with a power budget typical of a small satellite (Table 7) without a high early discard rate. No application can be supported by a small satellite at finer resolutions, where power requirements range from hundreds to hundreds of thousands of watts.
Even with aggressive early discard (99%), many applications still require hundreds of watts at fine resolutions. Aircraft detection requires over 400 W of compute power per satellite at 30 cm. At a 99% early discard rate, several applications cannot be supported at 1 m on a cubesat, even one with a deployable solar panel. At 10 cm, several applications cannot be supported even on a typical 100 kg microsatellite. Further, for many applications a 99% early discard rate is unrealistic, as applications such as OSM, CM, AD, FD, and FQE may be interested in large portions of Earth's surface.
To summarize, the above results show that while moving Earth-based computation (that computes on EO data) into space may be promising, this computation cannot be performed on the typically small EO satellites themselves (Table 7), since most of these satellites cannot meet the corresponding power requirements. While some large satellites may be able to natively support some of the applications, many of the emerging LEO EO constellations are based on microsat- and cubesat-class satellites, including ones with < 1 m spatial resolution (Table 1) as well as the largest current and planned EO constellations. As such, an alternate approach to computing in space must be developed.