Conference brings international group of astrophysicists to U of T
When it comes to figuring out what’s going on in distant solar systems, astrophysicists rely on a powerful set of mathematical tools called numerical integration methods – algorithms that calculate the motion of objects in space.
But even relatively small errors quickly add up in these calculations, especially when the models are run over the long time scales involved in the life of planetary systems. Researchers have found that high precision comes at a cost, namely efficiency.
Assistant Professor Hanno Rein and postdoctoral researcher Daniel Tamayo hosted a three-day conference at U of T Scarborough’s Centre for Planetary Science this week, bringing together international experts who are using integration methods.
“It’s exciting to bring in so many international experts,” Rein said, adding that many of those who attended are pioneers in the field. “For example, we made many references to the Wisdom-Holman integrator. Well, Professor Jack Wisdom [of planetary sciences at MIT], who helped develop it, is here in attendance, so that makes it very exciting.”
Tamayo said the conference is a reflection of the quality of research taking place at the Centre for Planetary Science.
“This definitely shows that Hanno’s research group has reached the international stature where it’s able to attract the researchers who really laid the groundwork in this field,” he said. “It’s also great that U of T Scarborough has become one of those centres for this line of research.”
Rein and Tamayo spoke to writer Don Campbell about numerical integration methods and why they’re so critical for current and future planetary science research.
What on earth are numerical integration methods?
Hanno Rein: As astrophysicists, we use algorithms to calculate the motion of celestial objects such as stars, planets and moons as efficiently and accurately as we can.
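To make that concrete, here is a minimal sketch of the idea: a leapfrog integrator advancing a star and two planets under Newtonian gravity. The code is our own illustration, not from Rein’s group; the masses, positions and time step are arbitrary values chosen so the example runs out of the box.

```python
import math

# Illustrative units with G = 1 and a star of mass 1 (our choice).
G = 1.0
masses = [1.0, 3e-6, 1e-6]                     # star plus two small planets
pos = [[0.0, 0.0], [1.0, 0.0], [1.5, 0.0]]     # positions in the orbital plane
vel = [[0.0, 0.0], [0.0, 1.0], [0.0, math.sqrt(1.0 / 1.5)]]  # circular speeds

def accelerations(pos):
    """Newtonian gravitational acceleration on each body from all the others."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def leapfrog_step(pos, vel, dt):
    """One kick-drift-kick step: cheap, and very good at conserving energy."""
    acc = accelerations(pos)
    for v, a in zip(vel, acc):                 # half kick
        v[0] += 0.5 * dt * a[0]
        v[1] += 0.5 * dt * a[1]
    for p, v in zip(pos, vel):                 # full drift
        p[0] += dt * v[0]
        p[1] += dt * v[1]
    acc = accelerations(pos)
    for v, a in zip(vel, acc):                 # half kick
        v[0] += 0.5 * dt * a[0]
        v[1] += 0.5 * dt * a[1]

# Advance the system for one inner-planet orbit (period 2*pi in these units).
dt = 0.01
for _ in range(int(2 * math.pi / dt)):
    leapfrog_step(pos, vel, dt)
print(pos[1])  # the inner planet should be back near its starting point (1, 0)
```

Production methods are far more sophisticated (the Wisdom-Holman integrator mentioned above, for instance, splits the motion into exact Kepler orbits plus small perturbations), but the basic loop is the same: compute the gravitational forces, advance the positions and velocities by a small time step, and repeat.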
This is a very old topic, dating back to the time of Isaac Newton, and famous mathematicians have worked on it ever since. But there’s been a renaissance over the last 50 years, since computers first became capable of doing these calculations efficiently. Especially over the last few years, there’s been growth in the field because of all the planets discovered outside our own solar system.
Where scientists in the past could only apply these calculations to our own solar system, we can now take the methods they developed and apply them to the entirely new field of exoplanets that we’re a part of.
A significant part of the conference is looking at unresolved problems in these methods. Why is that important?
Daniel Tamayo: Planetary systems live for billions to hundreds of billions of orbits, which is an incredibly long period of time, and any tiny mistake can render an entire solution invalid. In some sense, the numerical algorithms we’re working with are like the precision experiments physicists use to measure something delicate like the gravitational constant. There are plenty of things that can go wrong and mess up your measurements, so you need to ensure a high level of precision.
We’re aiming not just to sort out how these precision errors can be avoided, but also to come up with smart ways of making the errors cancel each other out. We also hope to come up with creative ways to ensure consistency, so we can trust the solutions we end up with.
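To see why the choice of algorithm matters so much over long time scales, consider the self-contained sketch below (again our own construction, with illustrative step sizes). It integrates the same circular orbit with two methods: a naive forward-Euler scheme, whose small errors all push the energy in the same direction and so accumulate, and a symplectic leapfrog scheme, whose errors largely cancel from step to step.

```python
import math

GM = 1.0  # gravitational parameter of the star, illustrative units (our choice)

def accel(x, y):
    """Newtonian acceleration toward a star at the origin."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

def energy(x, y, vx, vy):
    """Specific orbital energy; the true dynamics conserve this exactly."""
    return 0.5 * (vx * vx + vy * vy) - GM / math.hypot(x, y)

def energy_error(symplectic, n_steps, dt=0.01):
    """Integrate a circular orbit and return the accumulated energy error."""
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
    e0 = energy(x, y, vx, vy)
    for _ in range(n_steps):
        if symplectic:
            # Leapfrog (kick-drift-kick): successive errors largely cancel.
            ax, ay = accel(x, y)
            vx += 0.5 * dt * ax
            vy += 0.5 * dt * ay
            x += dt * vx
            y += dt * vy
            ax, ay = accel(x, y)
            vx += 0.5 * dt * ax
            vy += 0.5 * dt * ay
        else:
            # Forward Euler: every step nudges the energy the same way.
            ax, ay = accel(x, y)
            x += dt * vx
            y += dt * vy
            vx += dt * ax
            vy += dt * ay
    return abs(energy(x, y, vx, vy) - e0)

for n in (1_000, 10_000, 100_000):
    print(n, energy_error(False, n), energy_error(True, n))
# Euler's energy error keeps growing with the number of steps;
# leapfrog's stays small and bounded, even over many orbits.
```

Over a planetary system’s billions of orbits, only the second kind of behaviour is acceptable, which is why so much effort goes into integrators whose errors cancel rather than accumulate.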
What are some examples of where these methods are applied?
Hanno Rein: With many exoplanets being discovered, one thing that is hard to pin down is their parameters – things like mass, diameter and orbital frequency. One challenge is that many different combinations of parameters can produce similar results when observed, so we’re trying to find ways to determine which parameters are the most probable.
Since we need to run these numerical experiments many times, it’s important that they’re as efficient as possible, so that when we use them to determine the parameters of other planets we can do so with certainty.
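As a toy version of that workflow, the sketch below is entirely our own construction: a hypothetical star whose measured radial velocity wobbles sinusoidally because of an unseen planet, with the noise level and the parameter grid chosen arbitrarily. It runs the same cheap “experiment” once per candidate pair of parameters and keeps the most probable one; in real work each evaluation would be a full orbital integration, which is why the integrator’s efficiency matters so much.

```python
import math
import random

random.seed(1)

def model(t, amplitude, frequency):
    """Hypothetical radial-velocity signal induced by a single planet."""
    return amplitude * math.sin(frequency * t)

# Synthetic "observations": true values and noise are illustrative choices.
true_amp, true_freq = 30.0, 0.8          # m/s and rad/day
sigma = 5.0                              # measurement noise, m/s
times = [0.5 * i for i in range(60)]
data = [model(t, true_amp, true_freq) + random.gauss(0.0, sigma) for t in times]

def log_likelihood(amp, freq):
    """Gaussian log-likelihood of the data for one candidate parameter pair.
    Each call is one 'numerical experiment'; in real applications this would
    be a full N-body integration, so it has to be as cheap as possible."""
    return -0.5 * sum(((d - model(t, amp, freq)) / sigma) ** 2
                      for t, d in zip(times, data))

# Brute-force grid over the two parameters, keeping the most probable pair.
best = max(((a, 0.01 * k) for a in range(10, 60, 2) for k in range(40, 120)),
           key=lambda p: log_likelihood(*p))
print(best)  # lands near (30, 0.8), but nearby pairs score almost as well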