Question about downloading sweep catalogs

Hi Dustin,

I had a quick question about using Tractor sweep/catalog values. I've figured out how to construct URLs to the NERSC portal that identify the appropriate brick_min / brick_max values for a given set of input coordinates, and I can download sweep files.

My problem is that I have a particular sample of N galaxies (N anywhere from dozens to tens of thousands), and each sweep file is ~700 MB or more. Is it true that the only way to obtain Legacy Surveys photometry/sweep catalog values (like size) is to download, for every object, the sweep whose brick bounds contain it, then crossmatch to figure out which source(s) match (see the sketch below)? I don't think I have the storage for 700 MB x 10,000 downloads (though I could in principle delete each sweep file after extracting the catalog values for just the galaxy I need). Moreover, downloading 700 MB x 10,000 times would be very slow and probably taxing on the LS servers.
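
For concreteness, here is roughly what I mean by the per-sweep crossmatch step. This is just a minimal sketch using astropy, with a hypothetical sweep filename, made-up target coordinates, and DR9-style column names (adjust for your data release):

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table

# Hypothetical sweep filename and example target coordinates (deg).
sweep = Table.read('sweep-150p020-160p025.fits')
my_ra = [150.1163, 151.2345]
my_dec = [2.2058, 2.5678]

targets = SkyCoord(ra=my_ra, dec=my_dec, unit='deg')
sources = SkyCoord(ra=sweep['RA'], dec=sweep['DEC'], unit='deg')

# Nearest-neighbour match; keep only pairs closer than 1 arcsec.
idx, sep2d, _ = targets.match_to_catalog_sky(sources)
matched = sweep[idx[sep2d < 1 * u.arcsec]]
print(matched['RA', 'DEC', 'TYPE', 'SHAPE_R'])
```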

I'd appreciate any advice on how to extract catalog values for something like 10,000 objects from the overall Legacy Surveys catalogs in the fastest/most lightweight manner! I feel like I'm missing something obvious here…

Cheers,
Imad

The LS catalogs are loaded into the NOIRLab Astro Data Lab service, which has a cross-match service that lets you match your input catalog against the LS catalogs:
https://datalab.noirlab.edu/xmatch.php
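
If you prefer to do it programmatically, you can also query the Data Lab database directly with their astro-datalab Python client, so nothing is downloaded except the rows you want. A minimal sketch, assuming the ls_dr9.tractor table, hypothetical target coordinates, and a 1-arcsec match radius (table and column names may differ for your data release):

```python
# pip install astro-datalab
from dl import queryClient as qc

# Hypothetical coordinates (deg) of one target galaxy.
ra, dec = 150.1163, 2.2058
radius = 1.0 / 3600.0  # 1 arcsec, in degrees

# q3c_radial_query uses the database's q3c spatial index
# for a fast cone search around (ra, dec).
sql = f"""
SELECT ra, dec, type, shape_r, flux_g, flux_r, flux_z
FROM ls_dr9.tractor
WHERE q3c_radial_query(ra, dec, {ra}, {dec}, {radius})
"""
df = qc.query(sql=sql, fmt='pandas')
print(df)
```

For thousands of objects you could loop over coordinates, but it's usually better to upload your whole table through the xmatch page above and let the service do the join server-side.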