by Tim Scannell

The technology aboard the Mars rover Perseverance: An inside look

Feature
Aug 07, 2020
Computers and Peripherals | Enterprise Applications | Machine Learning

When it lands on the surface of Mars after a seven-month journey, the newest rover will be powered by decades-old computing technology, early-stage machine learning, and a range of newer systems and software.

Rieber with the Perseverance rover in the High Bay 1 Spacecraft Assembly Facility (SAF) clean room at JPL.
Credit: JPL

When NASA finally launched its Mars 2020 rover last week following several delays, it packed along technology bestowed with all of the good-luck symbolism in the traditional wedding rhyme – something old, something new, something borrowed, and even something blue, if you consider the launch of the powerful Atlas V rocket that propelled the rover through the various shades of the Earth’s atmosphere into deep space.

The old, in this case, is the RAD750, a radiation-hardened version of the PowerPC 750 microprocessor designed by IBM and Motorola that is used primarily in satellites and avionics. It has roughly the power of an early-1990s Pentium chip and will be responsible for handling the entire avionics architecture of the rover, designed and programmed by NASA’s Jet Propulsion Laboratory (JPL). “The closer you pack your transistors, the more susceptible to radiation you get,” notes Richard Rieber, a JPL mobility flight systems engineer associated with the project (pictured above with the Perseverance rover in JPL’s Spacecraft Assembly Facility clean room). “With space hardware, you need high reliability, and the RAD750 has had a couple of hundred missions in space.”

Richard Rieber

The ‘old school’ RAD750 computer will work in tandem with a series of field-programmable gate array (FPGA) computers to control such things as the drivetrain, wheels, suspension and cameras on the rover. One FPGA, a Virtex-5, is also a bit dated in terms of the technology, but will be used for the rover’s entry, descent and landing on the surface of Mars. Once the rover has landed, this computer will be reprogrammed by commands sent from Earth by NASA engineers to perform mobility visual processing.

Some newer aspects of this 2,200-plus-pound robotic space explorer include the use of machine learning and orbital-imagery processing, conducted here on Earth months prior to the launch, to examine every rock and cranny of proposed landing sites and identify one where the rover could safely descend to the planet’s surface. “One thing we did very early is an extremely detailed traversability study of all the landing sites and basically taught an algorithm how to count rocks,” explains Rieber.
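To make that concrete, here is a minimal sketch of what “teaching an algorithm how to count rocks” in orbital imagery might look like, assuming a simple shadow-thresholding approach with connected-component labeling; the threshold, cluster size and overall method are illustrative stand-ins, not JPL’s actual pipeline.

import numpy as np
from scipy import ndimage

def count_rocks(image, shadow_threshold=0.25, min_pixels=4):
    """Crude rock counter for a grayscale orbital image.

    At low sun angles rocks cast shadows, so clusters of dark pixels
    serve as a rough proxy for rock locations. Every tuning value
    here is an illustrative placeholder.
    """
    normalized = image.astype(float) / image.max()   # scale to [0, 1]
    shadow_mask = normalized < shadow_threshold      # flag dark pixels
    labeled, count = ndimage.label(shadow_mask)      # group neighbors into blobs
    sizes = np.asarray(ndimage.sum(shadow_mask, labeled, range(1, count + 1)))
    return int(np.count_nonzero(sizes >= min_pixels))  # ignore one-pixel noise

The real traversability study combined far richer terrain data, but the core task, turning pixels into a hazard count for each candidate site, has this flavor.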

As part of this pre-landing study, the JPL team and other scientists also used images and simulation software to examine terrain maps, calculate elevations and slopes, and then developed a Monte Carlo model of 5,000 possible landing points that took into consideration all possible factors and uncertainties, with one single purpose in mind: to keep the rover from falling flat on its digital face when landing on Mars.
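The Monte Carlo idea itself is straightforward to sketch. The toy model below assumes a hypothetical Gaussian landing dispersion and made-up slope and rock-coverage limits, then estimates the fraction of 5,000 simulated touchdown points that land on safe ground; the actual study folded in many more factors and uncertainties.

import numpy as np

rng = np.random.default_rng(2020)

MAX_SLOPE_DEG = 15.0      # hypothetical slope limit
MAX_ROCK_FRACTION = 0.10  # hypothetical rock-coverage limit

def safe_landing_probability(slope_map, rock_map, n_samples=5000, dispersion_px=40):
    """Sample touchdown points around the map center and return the
    fraction that satisfy both hazard limits."""
    h, w = slope_map.shape
    # Model landing dispersion as a 2D Gaussian around the target point
    ys = np.clip(rng.normal(h / 2, dispersion_px, n_samples), 0, h - 1).astype(int)
    xs = np.clip(rng.normal(w / 2, dispersion_px, n_samples), 0, w - 1).astype(int)
    safe = (slope_map[ys, xs] <= MAX_SLOPE_DEG) & (rock_map[ys, xs] <= MAX_ROCK_FRACTION)
    return safe.mean()

# Synthetic terrain stands in for the real elevation and rock-density maps
slopes = rng.uniform(0, 30, size=(512, 512))
rocks = rng.uniform(0, 0.2, size=(512, 512))
print(f"Estimated safe-landing probability: {safe_landing_probability(slopes, rocks):.1%}")

Running a model like this over each candidate site turns “is this spot safe?” into a number that competing sites can be ranked by.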

One other very important new feature in Perseverance is an upgrade in the unit’s speed, which has a huge impact on its autonomy and auto-navigation (AutoNav) capabilities. (Perseverance can drive five times faster than Curiosity, which is still puttering about on Mars.) Because of its more efficient autonomy, scientists can collect more peripheral information on terrain as the unit makes its way up a hill without knowing exactly what obstacles might lie on the other side. The result is that more data is collected and processed back on Earth, allowing better decisions about moving forward and avoiding problems that might cripple the rover.

This accelerated autonomy is important since commands telling Perseverance what to do are sent once each Martian solar day, or Sol, which is a little over 39 minutes longer than an Earth day. These commands to and from the rover travel through the Deep Space Network and are relayed through an antenna on the Mars Reconnaissance Orbiter, an orbiter carrying high-resolution cameras that launched in 2005. “Basically, we are saying, ‘Hey rover, here is everything you are going to do today!’” Rieber said, noting that decisions on where to go are partially based on what the rover experienced the day before in terms of terrain difficulties.

Room for a better view

The multiple cameras on the latest rover are also new upgrades of the one-megapixel B&W CCD units used on Opportunity, which launched in 2003 and ceased working in mid-2018 when a massive dust storm prevented its solar panels from recharging the onboard batteries. Perseverance’s cameras are 20-megapixel full-color CMOS sensors equipped with lenses that have a 90-degree horizontal field of view and can easily scope out the terrain without having to turn the camera mast.

The images collected by the rover’s cameras will be processed on the fly by the same Virtex-5 FPGA that guided Perseverance to a safe landing on Mars before being reprogrammed. This computer can process stereo images in about 1.5 seconds, compared with 180 seconds or more on the rover’s primary RAD750 processor, notes Rieber.
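For a feel of what that stereo processing involves, the snippet below computes a disparity map from a left/right camera pair using OpenCV’s classic block matcher; the file names and matcher parameters are placeholders, not anything drawn from the flight software.

import cv2

# Placeholder paths for a rectified stereo pair from calibrated cameras
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching compares small windows along matching scan lines;
# numDisparities and blockSize are illustrative tuning values.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right)

# Nearby rocks shift more between the two views than distant ones,
# and depth = focal_length * baseline / disparity, so the disparity
# map is effectively a range map of the terrain ahead.
print("disparity range:", disparity.min(), disparity.max())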

The end result of all the high-resolution cameras working in tandem with the relatively fast Virtex-5 computer is that images can be taken and undergo some processing while the rover is on the move and scientists are essentially ‘driving’ it.

“That’s fairly revolutionary for planetary rovers because historically what you do is you say, drive a meter, take an image, turn it into a map, pick your next drive arc, drive, image, process, drive, image, and process,” Rieber explains.
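A toy timing model makes the difference vivid. The sketch below, with made-up stand-in durations scaled down a thousandfold, contrasts the historical stop-and-go loop with a drive where vision processing overlaps the motion; none of the function names come from the actual flight software.

import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in durations, scaled so 1 ms represents roughly 1 s of real time
def build_map_cpu(image):  time.sleep(0.180)   # ~180 s on a RAD750-class CPU
def build_map_fpga(image): time.sleep(0.0015)  # ~1.5 s on the Virtex-5
def drive_arc():           time.sleep(0.050)   # drive one short arc

def stop_and_go(steps):
    """Historical pattern: the rover sits idle while each map builds."""
    for _ in range(steps):
        build_map_cpu("stereo pair")
        drive_arc()

def think_while_driving(steps):
    """Perseverance pattern: vision work overlaps the driving."""
    with ThreadPoolExecutor(max_workers=1) as vision:
        for _ in range(steps):
            pending = vision.submit(build_map_fpga, "stereo pair")
            drive_arc()           # wheels keep turning during processing
            pending.result()      # map is ready before the next arc is picked

for name, loop in [("stop-and-go", stop_and_go), ("continuous", think_while_driving)]:
    start = time.perf_counter()
    loop(10)
    print(f"{name}: {time.perf_counter() - start:.2f}s")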

No road trip, even an interplanetary one, would be complete without a buddy riding shotgun. For Perseverance, it is a robotic helicopter called Ingenuity, which will be deployed soon after the rover lands on the surface of Mars to scout out driving routes and landing sites for future Mars missions. The drone will fly autonomously from nine to a little more than 30 feet above the surface as part of a trial run to test its performance in the relatively thin Martian atmosphere. It will fly up to five times over a 30-day test period, covering roughly 980 feet per flight, and will be able to communicate with Perseverance as it does.

Digging deep for answers

Perseverance’s ability to easily cruise the Martian surface and collect detailed images is a key aspect of NASA’s long-term Mars Exploration Program, which will initially rely on robotic systems not only to map the surface but also to collect core samples from rocks and soil. The rover is outfitted with a special drill that can penetrate the surface to collect these samples, which are then transferred to a unique ‘cache’ that can be sealed and later retrieved by a future Mars mission for transport back to Earth for analysis. The goal is not only to provide a snapshot of Mars’ past and identify possible microbial life, but also to look for habitable areas that might be ideal for laboratories and living quarters where scientists and colonists might one day work onsite.

In this illustration, NASA’s Mars 2020 rover uses its drill to core a rock sample on Mars. Launched in July 2020, the Mars 2020 rover represents the first leg of humanity’s first round trip to another planet. The rover will collect and store rock and soil samples on the planet’s surface that future missions will retrieve and return to Earth. NASA and the European Space Agency are solidifying concepts for a Mars sample return mission. For more information about the mission, go to https://mars.nasa.gov/mars2020/.
Credit: NASA/JPL-Caltech

While Perseverance’s cameras and onboard computers can do some pre-processing work, the bulk of the data collected by the rover will be sent back to Earth via the Mars Reconnaissance Orbiter and the Deep Space Network to Amazon servers and systems at JPL.

“Once the data is received at JPL, it will be processed by scientists and engineers to essentially plan Perseverance’s travel and work itinerary for the next day,” says James Kurien, Ground Data Systems Manager at JPL. “We run simulations and analysis and then send this as a binary package up through the Deep Space Network, the relay orbiter and then to the rover, where we’ll start the whole process again.”

“We have to do a lot of work a lot faster,” adds Guy Pyrzak, Deputy Ground Data System Manager at JPL. This first involves getting the raw information transmitted by the rover from Mars, “which is frankly a bunch of squiggly lines, warning messages, and error or success messages and turning that into something we can look at to understand what is happening,” he explains. Software at JPL then takes this, as well as all of the photographic images captured by the rover, and melds them into an integrated scene.

Guy Pyrzak, Deputy Ground Data System Manager at JPL

These images are then overlaid on a 3D mesh that incorporates a range of information collected and processed from related areas of the surface, producing an interactive three-dimensional representation. “You don’t just see a picture and guess how far away things are but can actually spin it around and look at it as a VR [virtual reality] image on your computer,” says Pyrzak.

All of the processed and merged information must be analyzed within about an hour by JPL researchers and by scientists at locations in France, Norway and on the East Coast of the U.S. to determine whether the rover is healthy or in trouble; that intel is then quickly passed to engineers who decide if it is okay for the rover to proceed with its assigned tasks, he added.

“On the ground system, a major focus of improvement over the last mission is making this much more of a production-automated system,” says Kurien. The downlink from the rover automatically kicks off a pipeline of processes, primarily based on Amazon Web Services, for running rules and doing analysis to generate terrain meshes and scenes, he explains.  “This event-driven system drives the whole process and delivers results in a way that is accessible to people here at JPL and around the world,” Kurien says.
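In code terms, the pattern Kurien describes looks something like the AWS Lambda-style handler sketched below; the stage functions and event shape are a hedged illustration with hypothetical names, not JPL’s actual services.

import json

# Hypothetical stage functions standing in for the real pipeline
def decode_telemetry(object_key):
    """Turn raw downlink 'squiggly lines' into structured data products."""
    return {"frames": [], "source": object_key}

def build_terrain_mesh(products):
    """Fuse image products into a 3D terrain scene."""
    return {"mesh": "terrain", "inputs": products}

def publish_scene(scene):
    """Make the scene available to planners at JPL and around the world."""
    print("published:", json.dumps(scene))

def handler(event, context):
    """Lambda-style entry point: each new downlink object arriving in
    storage triggers the whole chain automatically, with no operator
    kicking off individual steps by hand."""
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]   # the newly arrived data product
        publish_scene(build_terrain_mesh(decode_telemetry(key)))
    return {"statusCode": 200}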

In terms of applying machine learning to this particular Mars mission, and even to previous ones, the problem is that the machines, or more accurately the software, are still in the early stages of learning, despite all of the data being collected and analyzed. “We’re experimenting with using some forms of machine learning on some things, but most of these systems are actually pretty new,” says Pyrzak. “Even with the ones that we are using, you might say that we are building a bridge because of the nature of the mission.”

Just like in a marriage, you never really know for sure if things will work out when you are just starting on your journey.  “We’re going to a new place,” says Pyrzak. “The hardware itself interacts with the terrain and a lot of the machine learning algorithms that we have right now kind of assumes a certain level of understood variability. Unfortunately, we don’t understand how the terrain, where we’re landing, and how this integrated system works well enough to really utilize machine learning this early on.”