Not directly apropos, but the current best answer on that page is probably good enough to have been worth money to NASA, and somebody just did it for free because it was a neat problem to solve and they could.
This is probably my favorite thing about the internet.
It's a very good answer, and it worked very well in this case, but it's nothing revolutionary. Background modelling with a single Gaussian is a pretty standard computer vision technique, and he's essentially finding anomalies by backprojecting the model[1]. This is great for one image when you can grab a sand patch, but you'll need to recalculate the model if/when the lighting changes.
If you're interested, more advanced techniques will look at Gaussian Mixture Models, which essentially can model the background (i.e. the sand) as the sum of several Gaussians (thereby permitting multiple distributions, e.g. sand in light vs. sand in shade); for example, GrabCut[2] uses GMMs to model foreground and background distributions for background segmentation. This still won't work for changing lighting, for that you need something more advanced which changes the model over time.
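To make the difference concrete, here is a toy sketch (all intensity values and weights are made up for illustration, not taken from the actual answer): a single Gaussian trained on sunlit sand assigns shaded sand essentially zero likelihood, while a two-component mixture covers both lighting conditions and still rejects a genuinely foreign bright object.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Univariate Gaussian density (one grayscale channel, for simplicity)."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Hypothetical grayscale intensities: sand in light ~180, sand in shade ~80.
lit_mean, lit_var = 180.0, 100.0
shade_mean, shade_var = 80.0, 100.0

def single_gaussian_score(x):
    # Model trained only on a sunlit patch: shaded sand looks anomalous.
    return gaussian_pdf(x, lit_mean, lit_var)

def gmm_score(x, weights=(0.7, 0.3)):
    # Two-component mixture covers both lighting conditions.
    return (weights[0] * gaussian_pdf(x, lit_mean, lit_var)
            + weights[1] * gaussian_pdf(x, shade_mean, shade_var))

shaded_pixel = 82.0   # ordinary sand, just in shadow
metal_pixel = 250.0   # genuinely foreign bright object
# The single Gaussian scores shaded sand as badly as real anomalies;
# the mixture keeps shaded sand likely while the bright object stays unlikely.
```

(In practice the mixture parameters are fitted with EM rather than set by hand, which is what GrabCut's GMM stage does.)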
EDIT: I realised that I sound very negative in this reply, which was not my intention. The technique used is great for the problem they were trying to solve, and I'm glad to see people sharing these sorts of answers on the internet for free.
Somewhere his boss is wondering what on earth that guy was doing all day.
Yet he has demonstrated to a wide audience what good image processing and lateral thinking can do, he has brushed up on a few parts of Mathematica he hasn't used this past quarter, and the world is a little bit richer.
A TED talk recently mentioned that robots are replacing us and that we need to manage a jobless future - this indicates how we might do it.
Plus, I cannot work out which is worse - that bits of metal are dropping off the Rover. Or not ...
Do you mind providing a link to the TED talk in question? I've been thinking about that kind of future lately, and was wondering what other people (who've probably thought about it for longer) have to say about such a future.
I second that. I would like to see what that TED talk has to say about keeping humans employed in something for as long as possible.
I want to think software engineering/programming would be one of the last jobs to be replaced by machines. It would last until the "singularity" is reached, which is when computers would be able to program themselves (re-writing and improving their own code exponentially).
One idea I came across a while ago is that of a 'national dividend'. Basically, everyone gets a base amount of money they could live off. If people want to work harder and earn more money, they are free to.
Unfortunately I can't recall the book I read it in.
I've heard this view before. It's what I have been saying for some time now: With vastly increased wealth for the richest due to technological leverage, what will need to be done is increased taxation, especially for the richest. The taxes can be used to pay for a "dividend" system, which could for instance be implemented through negative taxation up to a certain low income. We might not be there today, but it is where we'll end up if automation keeps spreading at the rate it has until now.
Norway almost de facto has a system like this already (through an incredibly generous welfare system), but the only reason we are able to pay for it is through the huge petroleum income.
I have seen it from Charles Handy and others as well. It is all about the idea that currently the vast wealth of society is generated unevenly, so some form of redistribution is needed - a job for everyone is the current approach. But when those jobs get replaced, how do we redistribute?
Maybe 'The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future' by Martin Ford?
I haven't finished the whole thing yet, but so far it has been a good read about some of the problems with increased automation and machine intelligence. He also outlines some possible solutions.
I agree, that is an awesome answer. It really brings home how much more powerful tools are today than they were when this sort of analysis took 'big iron' (like a PDP-10 with a Vector Coprocessor).
I'm pretty sure that NASA scientists are able to do this kind of anomaly detection and that the primary goal of this (interesting!) question is only trivia.
I'm not sure if very tiny and close objects like this one are something anyone has focused on. At the moment, the MSL team has so many very well trained eyeballs that they don't need much automation of relatively high level tasks like this one. After the initial 90 days, the team will shrink, and automation will be more important.
Detecting objects in medium to long range Mars rover imagery is a problem that has received a lot of attention, including systems that run on board the rover.
The people who have pushed this the farthest are here:
Just a recognition algorithm is not enough. You also need to integrate prioritization, planning (is it OK to pivot the camera to point in that direction, do we have enough room to store the data, ...), and acquisition to have a system that actually gathers quality data without sacrificing other mission objectives. The described algorithm has been run on Opportunity.
The selling point for on board automation in this context is light time. By the time you send the images back, analyze them, and upload a new command sequence to take the data, you would have driven past the interesting rock. So you have to do some things on board. Limited bandwidth also plays a role.
The next time some alien life form comes to earth and wants to destroy us for all the pain and suffering we are causing each other, let's just point it at this link.
It is by far the best example of "Why the human race deserves to exist" that I've seen this week.
This is why I both love and hate reading hacker news / stack exchange. On one hand this is one of the most amazing posts I've seen and made me want to learn Mathematica and image analysis, but on the other hand, it made me want to learn Mathematica and image analysis.
(which has no real connection to my day job or side projects)
Interesting that one of the responders advises the asker to leave the question open to garner stronger responses - in stark contrast to SO where quick turnaround (by both parties) is strongly encouraged. Different models for different communities.
Basically, the accepted answer did something like this:
First, he noticed that the sand looks pretty uniform. You wouldn't be able to tell one patch of sand from another. So he just picked a random 200x200px square of sand.
Using this sample of sand, he analyzed the color distribution. Looking at it, he decided the color of a given pixel of the sand sample can be modelled reasonably well by a Gaussian distribution (think bell curve).
Next, he assumed the whole picture was sand. The black-and-white image at the end is essentially a plot of the likelihood that the corresponding pixel in the original picture would have been a grain of sand, assuming the normal distribution found earlier. Black means "very likely" and white means "not so likely".
Since we argued earlier that any patch of sand in this picture is as good as any other, all of the sand appears black. The rover, and crucially, the foreign object, do not fit the modelled distribution, so they stand out in stark contrast.
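For anyone who wants to play with the idea, here is a minimal numpy sketch of those steps, using a synthetic image in place of the actual photo (the intensity values and the z-score threshold are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the photo: "sand" pixels drawn from one
# Gaussian, with a small bright patch of "debris" pasted in.
image = rng.normal(150.0, 10.0, size=(64, 64))
image[30:33, 40:43] = 250.0  # the foreign object

# Step 1: pick a patch of sand as the sample.
patch = image[:20, :20]

# Step 2: fit a Gaussian to the patch's intensity distribution.
mu, sigma = patch.mean(), patch.std()

# Step 3: score every pixel under that model; low likelihood = anomaly.
# (The z-score is used here; the Gaussian likelihood is monotone in it.)
z = np.abs(image - mu) / sigma
anomaly_mask = z > 4.0  # pixels very unlikely to be sand
```

Almost all of the sand lands well inside four standard deviations and stays "black", while the pasted-in bright patch stands out, which matches the description above.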
A very good explanation, but I believe you are missing some steps at the end.
After he had the rover and bright object showing up white and the sand showing up as black, he removed any large connected white objects (In any image the largest block of 'non sand' would always be the rover). He then drew a circle around anything left over, which in this case was the mystery bright object.
The beauty of his approach is that it's generic and can be applied to any photo of sand and rover to find interesting anomalies.
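A sketch of that last stage, assuming we already have a boolean anomaly mask (this is my own toy flood-fill version, not the Mathematica code from the answer):

```python
from collections import deque

def connected_components(mask):
    """Collect 4-connected True regions in a 2D boolean grid (list of lists)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                comp, queue = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                comps.append(comp)
    return comps

def anomalies(mask):
    """Drop the largest blob (assumed to be the rover); the rest are candidates."""
    comps = sorted(connected_components(mask), key=len, reverse=True)
    return comps[1:]
```

Anything that survives `anomalies` would then get a circle drawn around it, like the mystery object did.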
In that case, wouldn't the 200x200px square with the anomaly still be highlighted as unique from the rest of the sand, since none of the sand would match it?
If I were implementing this for NASA, I'd run the analysis a number of times using different patches for my samples. Any areas that come up as anomalies more than once are probably worth checking out.
You can choose patches randomly, but they probably also have good enough telemetry and modeling to be able to predict which areas in any given picture are going to show part of the rover. That data can be used to ensure that your random patches don't include rover parts.
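A hedged sketch of that voting idea, again on a synthetic image with made-up numbers (in practice you'd also mask out predicted rover pixels, as suggested above):

```python
import numpy as np

rng = np.random.default_rng(1)

image = rng.normal(150.0, 10.0, size=(64, 64))
image[10:12, 10:12] = 250.0  # hypothetical debris

votes = np.zeros(image.shape, dtype=int)
n_trials, patch_size = 10, 16

for _ in range(n_trials):
    # Pick a random patch as the sand sample (it may even contain the anomaly;
    # a few bad pixels barely shift the fitted mean and spread).
    y = rng.integers(0, image.shape[0] - patch_size)
    x = rng.integers(0, image.shape[1] - patch_size)
    patch = image[y:y + patch_size, x:x + patch_size]
    mu, sigma = patch.mean(), patch.std()
    votes += (np.abs(image - mu) / sigma > 4.0).astype(int)

# Pixels flagged in more than one trial are probably worth checking out.
persistent = votes > 1
```

This also answers the "what if the anomaly is inside the sample patch" worry: a handful of bright pixels in a 16x16 sample inflate the fitted spread only slightly, so the debris still scores as anomalous in every trial.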
A mod named "rm -rf" lock-protected the thread. That's a terrible username. Can you imagine thinking you're logging into your account but the focus is accidentally on a command line window?
Actually, you can't log in to any of the Stack Exchange websites with a username and password. The only options are OpenID and an email-based Stack Exchange login.
And the username (which is effectively a "display name") can be changed at any time, so anyone can switch theirs to 'rm -rf' if they want to, IIRC.
NASA spotted that bright spot after deploying the scooping robotic arm for the first time, after its first scooping.[1]
You can see the scoop full of sand very clearly on the images.
This kind of maneuver (sampling Martian material) is certainly crucial for Curiosity's mission. You would guess that they were watching this first attempt very carefully, with a lot of engineers, scientists, and attention. All of this makes me think that they spotted this bright thing with their bare eyes.
When we got first light on LROC, pretty much everyone on our team looked over images like crazy. The PI was a madman in that regard. I imagine that some person spotted this.
I would not be so sure - they have a lot of image processing, but they are not sending back realtime video. They only have limited bandwidth, so the number of images is fairly low. I would guess NASA spotted it pretty much the same way the rest of us did - suddenly going "wtf is that?"
How can these sorts of problems be posed to a wider audience?
Is NASA and SpaceX's work so narrowly defined that the community could not help in other ways?
How great would that be? Get great industry minds involved in government work furthering the space program and encouraging open source spirit all the while?