Data latency from Mars to Earth ranges from 4 to 24 minutes one way, so a round-trip response could take up to 48 minutes, depending on how close the two planets are at a given time.
Relays between the International Space Station (ISS) and ground stations on Earth have a delay of less than a second, but considering that meteors enter Earth's atmosphere at 12 km/s to 72 km/s, much can happen in that time. That makes space a compelling use case for edge computing.
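The figures above follow from simple arithmetic: one-way delay is distance divided by the speed of light, and even a sub-second relay delay is a long time at meteor speeds. A minimal sketch, using approximate published Earth-Mars distances (the exact separation varies continuously, so the results only roughly bracket the article's 4-to-24-minute range):

```python
# Sanity-check the latency figures: delay = distance / speed of light.
SPEED_OF_LIGHT_KM_S = 299_792.458

MARS_NEAREST_KM = 54.6e6   # approximate closest approach (assumption)
MARS_FARTHEST_KM = 401e6   # approximate maximum separation (assumption)

def one_way_minutes(distance_km: float) -> float:
    """One-way signal delay in minutes for a given distance."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

near = one_way_minutes(MARS_NEAREST_KM)    # roughly 3 minutes
far = one_way_minutes(MARS_FARTHEST_KM)    # roughly 22 minutes
round_trip_worst = 2 * far                 # roughly 45 minutes

# How far a meteor travels during a ~0.5 s ISS relay delay:
METEOR_SPEED_KM_S = 72.0                   # upper end of entry speeds
meteor_km = METEOR_SPEED_KM_S * 0.5        # 36 km in half a second

print(f"One-way delay: {near:.1f}-{far:.1f} min; "
      f"worst round trip ~{round_trip_worst:.0f} min")
print(f"Meteor travel during 0.5 s latency: {meteor_km:.0f} km")
```

By the time a round trip to Mars completes, or even during a half-second ISS relay, the situation a spacecraft is reacting to may have changed entirely, which is the core argument for processing data where it is collected.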
The more that space is exploited, the greater the need to provide edge computing capabilities in data-dependent spacecraft and satellites. In February, NASA ferried commercial off-the-shelf Hewlett Packard Enterprise (HPE) high-performance systems—the HPE Spaceborne Computer-2—to the International Space Station in what could fairly be characterized as extreme distributed computing.
“A successful end-to-end demonstration using the IBM Cloud on Earth and Red Hat CodeReady Containers and the HPE Spaceborne Computer-2 on the ISS in orbit will further validate and push the concept of widespread edge computing in space toward reality,” writes Naeem Altaf, IBM Distinguished Engineer and CTO of Space Tech.
HPE’s Mark R. Fernandez, Ph.D., Principal Investigator, Spaceborne Computer-2, adds that “this synergistic cloud-edge workflow will advance research and accelerate our return to the Moon and our missions to Mars.”
The current project builds on successes of an earlier version of the HPE computer and, according to NASA, is “exploring how commercial off-the-shelf computer systems can advance exploration by processing data significantly faster in space with edge computing and artificial intelligence (AI) capabilities.”
Among its many achievements, the ISS project is proving the utility of container-based applications that can be deployed across any infrastructure or cloud—including private and public data centers, or edge locations.
The ISS was recently upgraded with a 600 Mbit/s data link, but that is not sufficient for all localized data processing needs. “The computer hardware has the capability to run artificial intelligence machine-learning software that can sort through scientific data in orbit,” according to NASA. “This reduces the amount of raw data that is required to be transmitted back to Earth and sorted through by researchers and ground-based computers.”
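The data-reduction idea NASA describes can be illustrated with a toy filter: process raw samples where they are collected and downlink only the interesting ones. This is a hypothetical sketch, not the actual Spaceborne Computer-2 pipeline; the threshold and data are invented for illustration.

```python
# Illustrative edge-filtering sketch: keep only anomalous sensor
# readings for downlink instead of transmitting every raw sample.
import random

random.seed(42)
# Simulated raw sensor stream (stand-in for real science data).
raw_samples = [random.gauss(0.0, 1.0) for _ in range(10_000)]

ANOMALY_THRESHOLD = 3.0  # hypothetical cutoff, in standard deviations

# On-orbit processing: select only the outliers worth transmitting.
anomalies = [s for s in raw_samples if abs(s) > ANOMALY_THRESHOLD]

reduction = 1 - len(anomalies) / len(raw_samples)
print(f"Downlinking {len(anomalies)} of {len(raw_samples)} samples "
      f"({reduction:.1%} less raw data to transmit)")
```

For normally distributed data, only about 0.3% of samples exceed three standard deviations, so the downlink volume drops by orders of magnitude while the scientifically interesting readings are preserved.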
The software or application within a container can be moved and run consistently in any environment and on any infrastructure, independent of that environment or infrastructure's operating system. Essentially providing a fully functional and portable computing environment, containers can be built once and deployed in multiple places, giving teams the predictability and consistency needed to architect and manage edge applications.
Distributed containerized applications can be managed at massive scale using the open source Kubernetes container orchestration platform, which handles most of the work of deploying and managing containers. Red Hat OpenShift is Kubernetes for the enterprise, adding productivity and security features that are important to large-scale organizations.
As edge computing solutions mature, organizations are looking for a common, horizontal, unified platform—from the core to the edge—with a consistent development and operations experience. For proof of concept, just look into the night sky when the ISS is overhead.
Red Hat sees edge differently. See how: https://www.redhat.com/en/topics/edge-computing/approach?sc_cid=7013a000002w1CwAAI