From Skyfield to SQL
Skyfield is an excellent Python library for positional astronomy — well-documented, well-tested, and built on the same JPL ephemeris data used by spacecraft navigation teams. If you already use Skyfield, you’ll recognize the computations pg_orbit performs. The difference is where they happen.
This page shows the same tasks done both ways. Not to argue one is better than the other — they make different trade-offs — but to help you decide which fits your workflow.
Observing a planet
The most common starting point: where is Jupiter from my location, right now?
```python
from skyfield.api import load, Topos

ts = load.timescale()    # downloads finals2000A.all
eph = load('de421.bsp')  # downloads 17 MB BSP file

observer = Topos('40.0 N', '105.3 W', elevation_m=1655)
t = ts.now()

astrometric = (eph['earth'] + observer).at(t).observe(eph['jupiter barycenter'])
alt, az, distance = astrometric.apparent().altaz()

print(f"Az: {az.degrees:.2f} El: {alt.degrees:.2f} Dist: {distance.au:.4f} AU")
```

Before this runs, Skyfield downloads two files:

- `de421.bsp` (17 MB) — JPL planetary ephemeris
- `finals2000A.all` (3.5 MB) — Earth orientation parameters
These files expire. finals2000A.all needs refreshing every few months. The BSP file
itself is stable, but managing file paths across environments (local dev, CI, production)
adds friction.
```sql
SELECT topo_azimuth(t) AS az,
       topo_elevation(t) AS el,
       topo_range(t) / 149597870.7 AS distance_au
FROM planet_observe(5, '40.0N 105.3W 1655m'::observer, now()) t;
```

No files to download. No timescale object. The VSOP87 coefficients are compiled
into the extension — they ship with `CREATE EXTENSION pg_orbit` and never expire.
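The division by 149597870.7 in the query converts kilometers to astronomical units (the IAU definition of the AU is exactly 149,597,870.7 km). A quick plain-Python check of the same conversion, with a hypothetical `km_to_au` helper that is not part of pg_orbit:

```python
# One astronomical unit in kilometers (IAU 2012 definition).
AU_KM = 149_597_870.7

def km_to_au(range_km: float) -> float:
    """Convert a topocentric range in kilometers to astronomical units."""
    return range_km / AU_KM

# Jupiter near opposition is roughly 6.3e8 km from Earth.
print(km_to_au(6.28e8))  # ~4.2
```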
Batch satellite observation
Observe many satellites at the same timestamp. This is where the architectural difference starts to matter.
```python
from skyfield.api import load, EarthSatellite, Topos

ts = load.timescale()
observer = Topos('40.0 N', '105.3 W', elevation_m=1655)
t = ts.now()

# Load TLE file
with open('catalog.tle') as f:
    lines = f.readlines()

results = []
for i in range(0, len(lines), 3):
    name = lines[i].strip()
    sat = EarthSatellite(lines[i+1], lines[i+2], name, ts)
    topo = (sat - observer).at(t)
    alt, az, dist = topo.altaz()
    if alt.degrees > 0:
        results.append({
            'name': name,
            'az': az.degrees,
            'el': alt.degrees,
            'range_km': dist.km,
        })

# Now you have results in a Python list.
# To correlate with other data, you need to:
# 1. Load that data from your database
# 2. Match by satellite name or NORAD ID
# 3. Merge in Python (pandas, dict lookup, etc.)
```

The results live in Python memory. If you need to correlate with operator contact schedules, frequency assignments, or historical observation logs that live in PostgreSQL, you have to bridge two systems.
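The three-lines-per-satellite slicing in the loop above is worth isolating. A minimal stdlib sketch, using the TLE from later in this page as sample data (the `parse_tle_catalog` helper is hypothetical):

```python
def parse_tle_catalog(text: str) -> list[tuple[str, str, str]]:
    """Split a 3-line-element catalog (name, line1, line2 per satellite)
    into (name, line1, line2) tuples, skipping blank lines."""
    lines = [ln.rstrip() for ln in text.splitlines() if ln.strip()]
    if len(lines) % 3 != 0:
        raise ValueError("catalog is not a multiple of 3 lines")
    return [(lines[i].strip(), lines[i + 1], lines[i + 2])
            for i in range(0, len(lines), 3)]

catalog = """ISS (ZARYA)
1 25544U 98067A 24001.50000000 .00016717 00000-0 10270-3 0 9025
2 25544 51.6400 208.9163 0006703 30.1694 61.7520 15.50100486 00001
"""
sats = parse_tle_catalog(catalog)
print(sats[0][0])  # ISS (ZARYA)
```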
```sql
-- TLEs already in your database? Just JOIN.
SELECT s.norad_id, s.name,
       topo_azimuth(observe(s.tle, obs.location, now()))   AS az,
       topo_elevation(observe(s.tle, obs.location, now())) AS el,
       topo_range(observe(s.tle, obs.location, now()))     AS range_km
FROM satellites s,
     (VALUES ('40.0N 105.3W 1655m'::observer)) AS obs(location)
WHERE topo_elevation(observe(s.tle, obs.location, now())) > 0;

-- Correlate with frequency assignments in the same query
SELECT s.norad_id, s.name, f.downlink_mhz,
       topo_elevation(observe(s.tle, obs.location, now())) AS el
FROM satellites s
JOIN freq_assignments f ON f.norad_id = s.norad_id,
     (VALUES ('40.0N 105.3W 1655m'::observer)) AS obs(location)
WHERE topo_elevation(observe(s.tle, obs.location, now())) > 10
ORDER BY el DESC;
```

The computation and the correlation happen in the same process. No data transfer between Python and PostgreSQL. The query planner can parallelize across cores when scanning large catalogs.
Time series generation
Generate positions over a time range — for plotting an elevation profile, building a ground track, or analyzing visibility windows.
```python
from skyfield.api import load, Topos
import numpy as np

ts = load.timescale()
eph = load('de421.bsp')
observer = Topos('40.0 N', '105.3 W', elevation_m=1655)

# Generate 144 timestamps over 24 hours
t0 = ts.now()
t1 = ts.tt_jd(t0.tt + 1.0)
times = ts.linspace(t0, t1, 144)

earth_obs = eph['earth'] + observer
jupiter = eph['jupiter barycenter']

elevations = []
for t in times:
    alt, az, dist = earth_obs.at(t).observe(jupiter).apparent().altaz()
    elevations.append(alt.degrees)

# Plot with matplotlib
import matplotlib.pyplot as plt
plt.plot(range(len(elevations)), elevations)
plt.ylabel('Elevation (degrees)')
plt.show()
```

The loop is explicit. For 144 points this is fast, but the pattern doesn’t parallelize automatically. For larger sweeps (thousands of satellites, days of 1-minute resolution), you manage the iteration yourself.
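The even sampling that `ts.linspace` performs can be reproduced with the standard library if you want to see the spacing explicitly. A sketch with a hypothetical `sample_times` helper:

```python
from datetime import datetime, timedelta, timezone

def sample_times(start: datetime, span: timedelta, n: int) -> list[datetime]:
    """Return n evenly spaced timestamps covering [start, start + span],
    the same fixed-step sampling ts.linspace produces."""
    step = span / (n - 1)
    return [start + i * step for i in range(n)]

t0 = datetime(2024, 1, 1, tzinfo=timezone.utc)
times = sample_times(t0, timedelta(hours=24), 144)
print(times[1] - times[0])  # ~10-minute steps
```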
```sql
SELECT t AS time,
       topo_azimuth(obs)   AS az,
       topo_elevation(obs) AS el
FROM generate_series(
       now(),
       now() + interval '24 hours',
       interval '10 minutes'
     ) AS t,
LATERAL planet_observe(5, '40.0N 105.3W 1655m'::observer, t) AS obs;
```

`generate_series` replaces the Python loop. To change the resolution from
10 minutes to 1 minute, change one parameter. The same pattern works for
any observable — planets, satellites, moons, stars.
Export for plotting:
```sql
COPY (
  SELECT t,
         topo_elevation(planet_observe(5, '40.0N 105.3W 1655m'::observer, t)) AS el
  FROM generate_series(now(), now() + interval '24 hours', interval '10 minutes') t
) TO '/tmp/jupiter_elevation.csv' WITH CSV HEADER;
```

Then plot with whatever tool you prefer — gnuplot, matplotlib, Observable, a spreadsheet. pg_orbit produces data; visualization is a separate concern.
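Consuming that CSV needs nothing beyond the standard library. A sketch with `csv.DictReader` over rows in the shape `COPY ... WITH CSV HEADER` emits; the sample timestamps and elevations here are illustrative, not real output:

```python
import csv
import io

# A few rows shaped like the exported file (header: t, el).
sample = io.StringIO(
    "t,el\n"
    "2024-01-01 00:00:00+00,12.5\n"
    "2024-01-01 00:10:00+00,14.1\n"
    "2024-01-01 00:20:00+00,-3.2\n"
)

rows = list(csv.DictReader(sample))
visible = [r for r in rows if float(r["el"]) > 0]
print(len(visible))  # 2
```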
Pass prediction
Predict when a satellite will be visible from a location. This is where Skyfield and pg_orbit take genuinely different approaches.
```python
from skyfield.api import load, EarthSatellite, Topos

ts = load.timescale()
observer = Topos('40.0 N', '105.3 W', elevation_m=1655)

line1 = '1 25544U 98067A 24001.50000000 .00016717 00000-0 10270-3 0 9025'
line2 = '2 25544 51.6400 208.9163 0006703 30.1694 61.7520 15.50100486 00001'
sat = EarthSatellite(line1, line2, 'ISS', ts)

t0 = ts.now()
t1 = ts.tt_jd(t0.tt + 1.0)

# find_events returns (times, event_types)
# event_type: 0=rise, 1=culminate, 2=set
times, events = sat.find_events(observer, t0, t1, altitude_degrees=10.0)

for ti, event in zip(times, events):
    name = ('rise', 'culminate', 'set')[event]
    print(f"{ti.utc_iso()} — {name}")
```

Skyfield’s `find_events` uses root-finding to locate the exact moments when
elevation crosses the threshold. This gives sub-second precision for AOS and
LOS times.
```sql
WITH iss AS (
  SELECT '1 25544U 98067A 24001.50000000 .00016717 00000-0 10270-3 0 9025
2 25544 51.6400 208.9163 0006703 30.1694 61.7520 15.50100486 00001'::tle AS tle
)
SELECT pass_aos(p)    AS rise_time,
       pass_tca(p)    AS max_el_time,
       pass_max_el(p) AS max_elevation,
       pass_los(p)    AS set_time,
       pass_aos_az(p) AS rise_azimuth,
       pass_los_az(p) AS set_azimuth
FROM iss,
     predict_passes(tle, '40.0N 105.3W 1655m'::observer,
                    now(), now() + interval '24 hours', 10.0) p;
```

`predict_passes` returns structured `pass_event` records. Batch prediction
across many satellites is a JOIN:
```sql
SELECT s.name,
       pass_aos(p)    AS rise,
       pass_max_el(p) AS max_el,
       pass_los(p)    AS set
FROM satellites s,
LATERAL predict_passes(s.tle, '40.0N 105.3W 1655m'::observer,
                       now(), now() + interval '24 hours', 10.0) p
WHERE s.constellation = 'Iridium'
ORDER BY pass_aos(p);
```

Every Iridium pass in the next 24 hours, filtered by constellation, sorted
chronologically. Adding `JOIN schedules` or `JOIN ground_contacts` keeps the
correlation inside the database.
Where Skyfield wins
Precision. Skyfield uses the full IAU 2000A nutation model, polar motion corrections, and delta-T from IERS data. When you need sub-arcsecond accuracy — dish pointing at microwave frequencies, occultation timing, precision astrometry — Skyfield with DE441 ephemerides is the right tool.
Rise/set finding. find_events() uses numerical root-finding to pinpoint the exact moment a body crosses an elevation threshold. pg_orbit’s predict_passes uses a step-and-refine approach that’s fast for batches but less precise for individual events.
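A step-and-refine pass finder can be sketched on a toy elevation profile: coarse fixed steps bracket the threshold crossing, then bisection narrows it. This illustrates the general technique, not pg_orbit's actual implementation:

```python
def find_rise(elevation, t0, t1, threshold=10.0, step=60.0, tol=0.5):
    """Coarse-step through [t0, t1] in seconds until the elevation
    function crosses the threshold, then bisect the bracketing
    interval down to `tol` seconds. Toy step-and-refine sketch."""
    t = t0
    while t + step <= t1:
        if elevation(t) < threshold <= elevation(t + step):
            lo, hi = t, t + step          # crossing bracketed here
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if elevation(mid) < threshold:
                    lo = mid
                else:
                    hi = mid
            return hi
        t += step
    return None

# Toy profile: elevation rises linearly, crossing 10 deg at t = 300 s.
rise = find_rise(lambda t: t / 30.0, 0.0, 900.0)
print(round(rise))  # ~300
```

The coarse step bounds the total number of evaluations, which is what makes the approach cheap in a batch; the bisection refinement trades precision for a fixed, small number of extra evaluations per event.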
Aberration and light-time. Skyfield iterates to correct for light travel time and applies stellar aberration. pg_orbit uses geometric positions without light-time iteration — the difference is under 20 arcseconds for planets and irrelevant for satellite tracking, but it matters for some applications.
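The scale of that figure matches the classical constant of aberration, Earth's orbital speed divided by the speed of light, about 20.5 arcseconds at its maximum. A quick arithmetic check:

```python
import math

V_EARTH_KM_S = 29.78     # Earth's mean orbital speed
C_KM_S = 299_792.458     # speed of light

# Aberration angle is approximately v/c radians; convert to arcseconds.
arcsec = (V_EARTH_KM_S / C_KM_S) * (180 / math.pi) * 3600
print(round(arcsec, 1))  # 20.5
```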
Visualization integration. Skyfield works directly with matplotlib, numpy, and the rest of the Python scientific stack. pg_orbit produces rows and columns — you export to CSV or JSON and then plot separately.
Extensibility. Skyfield handles arbitrary BSP kernels — Pluto, spacecraft, asteroids with precise ephemerides. pg_orbit’s body catalog is fixed at compile time.
Where pg_orbit wins
No file management. No BSP kernels, no timescale data files, no expiring Earth orientation parameters. The computation ships with the extension.
Batch operations at database speed. Observing 12,000 satellites in 17ms. 22,500 Lambert solves in 8.3 seconds. These aren’t optimized benchmarks — they’re SELECT statements running on commodity hardware.
Data correlation. The computation happens where your data lives. JOIN orbital results with contact schedules, frequency assignments, observation logs, or any other table. No ETL pipeline between Python and PostgreSQL.
Automatic parallelism. PostgreSQL’s query planner distributes PARALLEL SAFE functions across available cores. You don’t manage threads or multiprocessing pools.
Reproducibility. A SQL query is a complete, self-contained specification of a computation. No virtual environment, no package versions, no file paths. The same query produces the same result on any PostgreSQL instance with pg_orbit installed.
Migrating gradually
You don’t have to choose one or the other. A practical migration path:
1. Keep Skyfield for precision work. Anything requiring sub-arcsecond accuracy, aberration corrections, or custom BSP kernels stays in Python.
2. Move batch observation to SQL. If you’re computing positions for hundreds of objects to filter or correlate with database records, pg_orbit eliminates the Python-to-PostgreSQL round trip.
3. Move scheduling to SQL. Pass prediction and visibility windows over time ranges are natural `generate_series` + `predict_passes` queries.
4. Move reporting to SQL. “What was above 20 degrees from each of our 5 observers last night?” is a single query with a CROSS JOIN, not a Python loop over observers and timestamps.