Finding Kuiper Belt Objects With Machine Learning

I have something to admit. I have a dirty little secret, and it's time that I air the laundry. Here we go...

I used machine learning. Cough. I know! How could I? How dare I? For years I have been complaining about how machine learning is a lazy person's route to solving a problem, so how could I join the club? Hypocritical? Probably. Worth it? Totally!

My reticence about using machine learning techniques like the convolutional neural network - now a household name in machine learning - has always stemmed from the difficulty of determining just exactly what is going on inside the machine itself. Give a network some examples of labelled images (e.g., cats or dogs, cars or bridges, asteroids or stars) and out come classifications, automatically. It's like magic. And scientists hate magic. But it turns out my use case is almost perfect.

As part of my recent science efforts, I am running a search for moving Kuiper Belt Objects that are near enough to the New Horizons spacecraft that we can actually take pictures of them with the spacecraft's on-board telescope. So the application is "real object or not?" That is, I have a list of images, each of which may contain a Kuiper Belt Object, or may contain something else like a star, a subtraction residual, or whatever other garbage we aren't concerned about. And we want to be able to identify the good ones. Here's an example of what I mean.

A real, newly discovered Kuiper Belt Object
This is a real, new Kuiper Belt Object that I found... with machine learning.

This panel of animated images shows stacks we made from data taken with Hyper Suprime-Cam on the Subaru Telescope. The idea is that moving objects, well, move, and so by shifting the images at different rates and angles that correspond to the rates of motion Kuiper Belt Objects may have, we can find objects hidden in the noise. At each trial rate we produce a stack. The animation cycles through ever faster rates of motion. Left to right are stacks for different times in the night, and top to bottom are different angles. If you get the wrong rate or angle, nothing shows up, but as you get near the right rate, voila, the object pops out!
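
To make the idea concrete, here is a minimal sketch of shift-and-stack in Python. It assumes `images` is a list of already-aligned 2D numpy arrays, `times` are exposure mid-times in hours, and `rate` is a trial motion in pixels per hour - all illustrative stand-ins, not the actual pipeline.

```python
import numpy as np
from scipy.ndimage import shift

def shift_and_stack(images, times, rate, angle_deg):
    """Shift each exposure to undo a trial motion, then mean-combine.

    A moving object adds up coherently only when the trial rate and
    angle match its true motion; everything else stays smeared out.
    """
    theta = np.deg2rad(angle_deg)
    vx, vy = rate * np.cos(theta), rate * np.sin(theta)  # pixels/hour
    t0 = times[0]
    shifted = [shift(img, (-vy * (t - t0), -vx * (t - t0)))
               for img, t in zip(images, times)]
    return np.mean(shifted, axis=0)

# Search a grid of trial rates and angles; the right combination makes
# the object pop out of the noise.
# stacks = {(r, a): shift_and_stack(images, times, r, a)
#           for r in rates for a in angles}
```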

All the stuff around the central source consists of artifacts from bright stars, which we have done our best to remove. But sadly, those other blobs can confuse our usual methods for identifying real moving sources, as shown below.

Bad source
Bad source identified as candidate good source through classical methods.

In the above, we see a source that is actually just residual garbage from a bright galaxy that our image subtraction routines didn't remove perfectly. Unfortunately, stuff like this is all too common, and really, really hard to get rid of through old-fashioned means. Aside: the keen reader will spot the faint good moving object to the left of centre...

We can tell what good sources look like, because we artificially injected some. Thousands of them, actually. This is useful in many ways, because we can do things like determine how many objects are lost to falling in front of stars, or what our detection limits are. Unfortunately, the bad outweigh the good by a very large amount, because our data contain a huge number of bright stars. We have already searched these data using the trained eyes of many people (I wrote about that here), and that was relatively successful. We found about 50 new objects! But it was also extremely painful, and only about half as productive as we expected - there should be another 50 in the data that we missed.

So in comes machine learning. This use case is perfect for it. Binary classification (dog/cat, asteroid/star) has long been a standard application of machine learning, and so I figured I would give in and give it a try.

I made use of a convolutional neural network to take sequences of stacks like those above, and trained the network to recognize what is a moving source and what is not. Roughly, the network consists of multiple layers - in my case, three. At each layer, a convolution is performed between a kernel (itself just a small 3x3 image) and the stacks themselves. Think of this as a way of producing some kind of image enhancement, like sharpening or contrast boosting. The output of this convolution is just another image. So one convolution occurs at each of the three layers, and the output image from the first layer is passed to the second, and so on. The interesting thing about machine learning is that the convolution parameters themselves are not chosen by me, but are learned. That is, by repeatedly passing training images through the network, we iterate to find the values of the convolution kernels that best highlight real objects and hide away the bad ones. The last step of the network is to pass the convolved images through a normal "fully connected" layer that does the yes/no decision making. Voila, a list of real sources. Black magic!
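
For the curious, here is roughly what such a network looks like in code. This is a minimal sketch in PyTorch; the cutout size, channel counts, and layer widths are illustrative stand-ins, not the values from my actual network.

```python
import torch
import torch.nn as nn

N_STACKS = 5   # number of shifted stacks fed in as image channels
CUTOUT = 64    # pixel size of each candidate cutout

class MovingSourceClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Three convolutional layers; each 3x3 kernel is learned during
        # training rather than hand-designed.
        self.features = nn.Sequential(
            nn.Conv2d(N_STACKS, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # The fully connected layer that makes the yes/no call.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (CUTOUT // 8) ** 2, 1),
        )

    def forward(self, x):
        # Returns a logit; a sigmoid turns it into P(real moving source).
        return self.classifier(self.features(x))

model = MovingSourceClassifier()
dummy = torch.randn(1, N_STACKS, CUTOUT, CUTOUT)   # one fake candidate
print(torch.sigmoid(model(dummy)))                 # probability it's real
```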

One of the hardest parts of machine learning is finding a good training set. That is, a list of images you know contain good sources, and another list containing only bad sources. It just so happens that our injected artificial sources act as a good training sample. And luckily, the data were already partially searched by humans, allowing us to confirm that the network we trained was in fact returning real objects, and not garbage.
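
Assembling the training sample is then mostly bookkeeping. A sketch, assuming the cutouts have been saved to hypothetical planted_cutouts.npy and junk_cutouts.npy files:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical files: cutouts of the planted (injected) sources are the
# "good" examples; cutouts of everything else the pipeline flagged are
# the "bad" ones.
planted = np.load("planted_cutouts.npy")  # shape (N_good, N_STACKS, 64, 64)
junk = np.load("junk_cutouts.npy")        # shape (N_bad, N_STACKS, 64, 64)

X = np.concatenate([planted, junk]).astype(np.float32)
y = np.concatenate([np.ones(len(planted)), np.zeros(len(junk))])

# Hold out a stratified slice so the trained network can be checked
# against the human-vetted detections later.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
```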

Here's the proof in the pudding.


Here's one of 28 new objects we have found so far with the new machine learning technique. It works really well. For the human search, it took 3 people 3 days each to go through the data we would gather in a single night. With 16 nights of telescope time, the human commitment was enormous. My new machine cuts the human effort down to about 3 hours of work for 1 person. Each source the network flags as real still requires human confirmation, because the network still reports about 10% false positives. But the workload is drastically reduced - with the classical methods we had a 200% false positive rate, meaning two pieces of garbage for every real object.
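
The vetting step itself is then simple: score every candidate with the trained network and only show a human the ones above some confidence threshold. A sketch, reusing the hypothetical model from above and a `candidates` tensor of flagged cutouts:

```python
import torch

# Score every candidate cutout; no gradients needed at inference time.
with torch.no_grad():
    scores = torch.sigmoid(model(candidates)).squeeze(1)

# Only high-confidence candidates go on to a (quick) human look.
# The 0.9 threshold is illustrative; it trades false positives
# against completeness.
to_vet = (scores > 0.9).nonzero(as_tuple=True)[0]
print(f"{len(to_vet)} of {len(candidates)} candidates need human vetting")
```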

The output of the ML algorithm is also deeper by half a magnitude; for the astronomers reading, our limiting magnitude is now about r=26.3. Even better, our completeness has improved from a 50% lost-object rate to a 20% loss rate. That's a big improvement. We don't expect to do much better than 20% loss, just because there are sooo many stars in the frame, and if an object falls directly on top of a bright star, it will be forever hidden to us, no matter how good my search is.

Clearly not everything is perfect yet. I want to decrease the false positive rate to less than 1%. Plus, I think I can get deeper still. For my thesis, I got down to r=27 using similar data, so I want to reach at least r=26.5! It's become an obsession.

So I will swallow my pride and say that machine learning has its uses. I still can't tell you exactly how the machine does what it does. But that doesn't matter all that much here. And I think that's the lesson I've learned: machine learning can be super powerful in cases where you don't really need to know how it's working, only that it is working. Just apply with an appropriate amount of sugar to hide any sour tastes left behind.