I don't know whether this is a factor, but apparently the Goldstone Solar System Radar (GSSR) facility can operate in either of two modes: monostatic, in which the Goldstone dish both sends and receives the radar transmissions, and bistatic, in which Goldstone transmits while two other facilities receive the bounced signal.
It seems possible to me that for a near-Earth object, a bistatic image might show a perspective difference between the sides nearest the transmitting and receiving antennae.
Though at five times the Earth-Moon distance (about 2 million km), any such parallax baseline would be minimal (at most about 13 thousand km, roughly one Earth diameter).
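For scale, here's a quick back-of-envelope check of that parallax angle, using the numbers from the comment above (the baseline and distance are the rough figures quoted there, not measured values):

```python
import math

# Longest possible ground baseline for a bistatic pair is roughly one
# Earth diameter; the distance is the ~2 million km quoted above.
baseline = 12_700e3    # m, ~one Earth diameter
distance = 2_000e6     # m, ~2 million km

parallax = math.degrees(baseline / distance)
print(f"parallax ~{parallax:.2f} degrees")
```

About a third of a degree, which supports the point that any bistatic perspective difference would be tiny.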
The postprocessing explanation offered elsewhere seems far more likely.
These images are confusing. The "lit" side is where the radar beam is coming from. The perspective is looking "down" on the asteroid, with "up" on the image being the direction of Earth. The radar scans a beam across the object and records the reflections. The brightness of a pixel is a function of the strength, phase, and polarization of the returned signal. The position of a pixel is a function of its distance from the receiver.
From the perspective of the radar receiver it's just receiving a series of reflections over a period of time. These are processed for the above measures and then perspective transformed to show an "overhead" image that we see here.
It's not a bad heuristic that the "lit" (up) side is where the radar is coming from. That's the "delay" coordinate: the top of the image means small delay, which means close to the observer.
But the other (left/right) coordinate is doppler, which does not map 1:1 to a physical location -- doppler will depend on the rotation and geometry of the asteroid.
The radar beam is not "scanning" across the object. The asteroid is too small to focus the beam on a part of it -- the beam illuminates the whole object. We get lucky that the object is spinning, so that different surfaces on the asteroid cast energy into different doppler offsets.
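To see why the spin helps, here's a rough doppler-spread estimate. The asteroid's diameter and spin period below are invented for illustration; only the Goldstone X-band frequency is a real system parameter:

```python
import math

# Hypothetical asteroid -- neither the size nor the spin period is from
# the article.  Wavelength is Goldstone's 8560 MHz X-band transmitter.
wavelength = 0.0357     # m
diameter = 500.0        # m (assumed)
period = 8 * 3600.0     # s (assumed 8 h rotation)

# A point on the equatorial limb moves toward/away from us at the
# equatorial speed v = pi * D / P; its two-way doppler shift is 2v/lambda.
limb_speed = math.pi * diameter / period    # m/s
max_shift = 2 * limb_speed / wavelength     # Hz, one limb
bandwidth = 2 * max_shift                   # Hz, limb to limb

print(f"limb speed   {limb_speed:.3f} m/s")
print(f"doppler span {bandwidth:.1f} Hz")
```

The whole object only spans a few Hz of doppler, which is why you need a very stable transmitter and long, coherent integration to resolve surface features this way.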
Anyway, the returned signal is then binned into (delay, doppler) coordinates. This is repeated for many separate pings to beat down the receiver noise.
During the observation window (series of pings), you have to compensate for the relative motion of the earth and the centroid of the target, because the relative velocity (zero-point of the doppler coordinate) is changing the whole time.
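A toy sketch of that processing chain, assuming an idealized echo matrix (rows are successive pings, columns are delay bins) and a made-up centroid drift standing in for the real ephemeris correction:

```python
import numpy as np

# All numbers here are invented for illustration; real asteroid radar
# uses coded waveforms, far more pings, and an ephemeris-based correction.
rng = np.random.default_rng(0)
n_pings, n_delay = 64, 32
prf = 10.0                              # pings per second (assumed)
t = np.arange(n_pings) / prf            # "slow time" of each ping

# Pretend the ephemeris predicts a 3 Hz doppler drift of the target centroid.
centroid_doppler = 3.0                  # Hz (assumed)

# One scatterer at delay bin 12, offset +1.5 Hz from the centroid, in noise.
echoes = 0.05 * (rng.standard_normal((n_pings, n_delay))
                 + 1j * rng.standard_normal((n_pings, n_delay)))
echoes[:, 12] += np.exp(2j * np.pi * (centroid_doppler + 1.5) * t)

# Step 1: compensate the centroid motion so the doppler zero-point stays
# fixed over the whole observation window.
echoes = echoes * np.exp(-2j * np.pi * centroid_doppler * t)[:, None]

# Step 2: an FFT across pings bins the echoes in doppler; columns are delay.
ddmap = np.fft.fftshift(np.fft.fft(echoes, axis=0), axes=0)
power = np.abs(ddmap) ** 2

dop_bin, delay_bin = np.unravel_index(power.argmax(), power.shape)
freqs = np.fft.fftshift(np.fft.fftfreq(n_pings, d=1 / prf))
print(f"brightest cell: delay bin {delay_bin}, doppler {freqs[dop_bin]:+.2f} Hz")
```

Without step 1, the scatterer's energy would smear across doppler bins over the observation window instead of piling up in one cell.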
The Sun emits microwaves that can be reflected by asteroids. Depending on the surface/composition it could be much more reflective to microwaves than visible light.
A 70 meter dish can emit and focus a powerful signal but probably can't outshine the Sun at these distances. Radar emissions also consist of short pulses, while the Sun's emissions are continuous.
Since the article mentions precise distance information being acquired, the radar system was able to detect its own emissions (likely only from the unlit regions).
This is related to the periodic outages of geostationary satellites when the Sun is directly behind them. Ground receivers are essentially "jammed" by the microwaves emitted by the Sun itself.
Alas, the above comment is quite mistaken. The microwave reflection from sunlight is too faint to detect. All the information shown comes from the radar beam from Goldstone.
The radar imaging process is complex, but suffice to say the shown "image" is not in physical coordinates. If it's a conventional radar image, the vertical coordinate of the shown image is "delay" (distance from observer). And also conventionally, the horizontal image coordinate is "doppler", which is the doppler shift given to the returned signal by the rotating asteroid.
So, stuff on the left side of the image was moving away from the observer, and stuff on the right side was moving towards the observer. And of course the brightness is essentially the "amount of stuff" at that delay-doppler locus.
The reason we can't plot an "image" in physical coordinates, and have to be content with the altered coordinates, is that all we get from the returned radar carrier signal is a delay, and a doppler shift. That's it - "delay-doppler" coordinates.
So any set of sites on the asteroid surface with the same distance and the same relative velocity (w/r/t the observer) will be binned into the same place in the radar image. There is no guarantee that these sites are near each other, and for complex geometries (rough asteroids), they often will not be.
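Here's a minimal demonstration of that many-to-one folding, assuming an idealized unit sphere spinning about the y axis, observed from far away along +z (so delay is distance along z, and doppler is proportional to the line-of-sight velocity):

```python
import numpy as np

# Idealized geometry for illustration only: a sphere spinning about the
# y axis, observer at +infinity on the z axis.  Units are arbitrary.
omega = 1.0                              # spin rate (assumed)

def delay_doppler(p):
    x, y, z = p
    v = omega * np.array([z, 0.0, -x])   # velocity = omega * (y_hat x p)
    delay = -z                           # larger z = closer = smaller delay
    doppler = v[2]                       # line-of-sight (z) velocity
    return delay, doppler

# Two distinct points on the unit sphere: same x and z, mirrored in y.
north = np.array([0.6, +0.6, 0.52915026])
south = np.array([0.6, -0.6, 0.52915026])

print(delay_doppler(north))
print(delay_doppler(south))
```

Both points land in exactly the same (delay, doppler) cell even though they sit on opposite hemispheres -- the radar image folds them together, which is the inversion ambiguity described above.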
If you want to get a real image in physical coordinates, you have a separate inversion problem to solve, and you'll probably need more images and some model constraints.
Eh, your explanation makes it sound like the radar "image" is as divorced from physical coordinates as a spectrogram or a bar chart. Radar images are at least physical-ish... the asteroid does look similar to the image in real life. Doppler and range do roughly map to x/y. I hear it a lot with SAR imagery as well - that we're not looking at a "physical representation". We are - there are just different anomalies to account for.
I’d agree partway (and I appreciate the counterpoint): the delay coordinate is simple. But Doppler for a rotating, irregular asteroid is not. See Fig. 1 of the paper I linked for an example. It really is many-to-one.
Delay-Doppler for planar surfaces, like a remote sensing radar zipping along Earth’s surface, is pretty straightforward, as you note…and we get to set up the system parameters, like beam width, pulse rate and ground speed, so it works out nicely.
I'm not at all plugged into astronomy, though I have worked in radar for quite a while.
Can you comment at all why a technique like Inverse Synthetic Aperture Radar (ISAR) is not used? That relies on the rotation of an object to generate cross-range resolution through sampling a diverse set of aspect angles (and is certainly useful for non-uniform, non-planar surfaces). If the rotation rates of the asteroids are known, then that minimizes one of the main challenges in forming quality ISAR imagery. For the use-cases I'm familiar with, we need to estimate the rotational motion because vehicles do unfortunate things like accelerate and turn while we're trying to look at them -- the nerve! And ISAR has certainly existed before the paper you linked in another comment was written (and also before the previous study the paper itself references).
As a side note, that paper by Ostro et al. is very interesting to me; it's like being familiar with Leibniz's notation for calculus and seeing something written using Newton's notation (or vice-versa). I skimmed the references and all of the ones I saw seemed to be from astronomy / astrophysics sources. It's almost as though we have two fields using similar methodologies to look at different objects that don't seem to talk at all and have developed different dialects.
Edit: maybe delay-Doppler imaging is akin to ISAR, like medical tomography and SAR were shown to be mathematically related?
I know something about SAR and InSAR because I use radar (Earth) remote sensing in my day job, and I have a signal processing background. But I’m not a serious radar expert.
ISAR sounds like its operating principle is the same as the radar imaging technique used for these asteroid results. In general, it’s all radar imaging so you get reflectances in delay/Doppler coordinates as your observable.
One possible difference is the poor SNR of the asteroid problem — you have to average many radar images to beat down the noise. The velocity of the scattering elements on the asteroid is assumed to be the same across all these images (as far as I know). (After correcting for a known offset due to the Earth’s changing motion.)
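The payoff from averaging many looks is easy to check numerically: the noise standard deviation falls as sqrt(N), so SNR grows as sqrt(N). All numbers below are arbitrary:

```python
import numpy as np

# N independent noisy "looks" at the same constant signal.
rng = np.random.default_rng(1)
n_looks = 100
signal = 1.0
looks = signal + rng.standard_normal((n_looks, 10_000))

single_std = looks[0].std()               # noise of one look
averaged_std = looks.mean(axis=0).std()   # noise after averaging N looks

print(f"single look noise ~{single_std:.2f}, "
      f"after {n_looks} looks ~{averaged_std:.3f}")
```

Averaging 100 looks cuts the noise by about a factor of 10, which is why these asteroid images are built from many separate pings rather than one.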
This seems to be a little different than the ISAR “swaying boat” type of application in which the body is indeed accelerating and your radar image must be adjusted for that, or else the mast of the swaying boat will smear across the image.
It may also be worth saying that there is no synthetic aperture in the OP - it’s a physical aperture.
No, that’s not it. I believe that these are not true images. They are instead effectively graphs of the time of flight of the radar signals. The vertical axis is flipped, putting Earth (or rather the radio telescope) “above” the image and the bearing of the radar signal along the x axis. The shading happens because some fraction of the radar signal refracts instead of reflecting. You can think of this as reflecting off of the interior of the asteroid rather than the surface, or as reflecting off of internal features such as density or composition changes.
Our eyes interpret it as a top-down view, or as a face-on view with lighting from one side, but it's really more like a slice through the middle. Or rather, like a bunch of slices through the middle all stacked on top of each other, since the beam probably isn't all _that_ narrow after traveling for a few million miles.
I wish I could find the paper though; press releases so often don’t bother linking to them. Maybe it hasn’t even been published yet? The observations were just 19 days ago.
Any idea how this picture was made? Was the Goldstone dish scanning a tightly focused beam across the region of space where the asteroid was, or were they relying on some kind of Doppler or synthetic-aperture-radar effect, with the relative movement of the dish (on a rotating Earth) and the asteroid letting the image be recovered from Doppler and timing info?
I went looking for some details on how the picture was produced on the NASA page linked at the bottom of the article, but I can't find much technical info on the process. However it was done, I'm sure it's a cool technique!
> ... the image be recovered out of Doppler and timing info...
Yes, that's right: they are using doppler shift of the rotating asteroid, and the delay of the propagating signal, to form the "image". See my other comment nearby.
I think it depends quite a bit on things like angle and speed. I'm sure that if it came in behind Earth, in a similar orbit, at a shallow angle, it'd be very different from a head-on collision with it headed in the opposite direction. Size alone is probably not a great way to estimate damage. A starting point, but not the final word.
Smartass: "If our distinguished colleague Optimist here is suggesting a living space potato, then I believe we would be better off if it was the other thing."