What it does
Every morning at 06:30, Daglig Vejr pulls weather and pollen data from two Danish public APIs, runs it through a small classification layer, and sends me an HTML email with three recommendations for the day: whether to apply sunscreen, whether to take a hay fever pill, and whether to bring an umbrella.
I have a grass pollen allergy and I run outside most mornings, so getting these three things wrong in the same day is genuinely unpleasant. The project exists because I wanted something that actually runs and produces a tangible output I use — not a notebook sitting in a folder.
Data sources
Weather data comes from the DMI Open Data API, which provides free access to Danish meteorological observations including UV index, precipitation probability, cloud cover, and temperature. Pollen forecasts come from SMHI, the Swedish meteorological institute, which publishes Scandinavian pollen levels including grass pollen concentrations.
Why SMHI for pollen? DMI does not expose a public pollen forecast API. SMHI covers southern Scandinavia and the data is accurate enough for Odense. If a Danish pollen API becomes available, swapping it in is a one-line change.
Architecture
The whole system is a single Python process scheduled by Railway's cron feature. No containers, no orchestration framework — just a script that runs, does its work, and exits.
Fetcher
fetcher.py calls both APIs, normalises the responses into a shared schema, and returns a single data object for the day. Each API call is wrapped in a retry decorator with exponential backoff — Railway's free tier occasionally has cold starts, and the APIs sometimes return 503 under load.
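The retry wrapper described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual code from fetcher.py; the decorator name, parameters, and delay schedule are assumptions.

```python
import random
import time
from functools import wraps

def retry(attempts=4, base_delay=1.0):
    """Retry a flaky call with exponential backoff plus jitter (sketch)."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of attempts: let the caller see the error
                    # 1s, 2s, 4s, ... scaled jitter so retries don't align
                    time.sleep(base_delay * 2 ** attempt + random.random() * base_delay)
        return wrapper
    return decorator

@retry(attempts=3)
def fetch_dmi(url):
    ...  # the real call wraps an HTTP request to the DMI API
```

The jitter matters more than it looks: if both API calls fail and retry on the same fixed schedule, they hit the upstream again at the same instant, which is exactly when a 503 under load is most likely to repeat.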
Classifier
classifier.py takes the day's data and produces three binary recommendations. The current implementation is rule-based with tunable thresholds stored in a config file: UV index above one threshold triggers the sunscreen recommendation, pollen count above a second triggers the hay fever recommendation, and precipitation probability above a third triggers the umbrella recommendation.
I log my actual experience each day into history.db. Once enough labelled data has accumulated, I plan to replace the rule-based thresholds with a small logistic regression model trained on that data. The classification interface is abstracted behind a single function call, so swapping the implementation requires no changes elsewhere.
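A minimal sketch of that single-function interface, assuming made-up field names and threshold values (the real schema and config keys in classifier.py are not shown here):

```python
from dataclasses import dataclass

@dataclass
class DayData:
    uv_index: float            # dimensionless UV index
    grass_pollen: float        # grains per cubic metre (assumed unit)
    precip_probability: float  # percent, 0-100

# Stand-ins for the values kept in the config file
THRESHOLDS = {"uv": 3.0, "pollen": 30.0, "rain": 40.0}

def classify(day: DayData, t=THRESHOLDS) -> dict:
    """Single entry point: a trained model can replace these rules
    later without touching the fetcher or the mailer."""
    return {
        "sunscreen": day.uv_index > t["uv"],
        "hay_fever_pill": day.grass_pollen > t["pollen"],
        "umbrella": day.precip_probability > t["rain"],
    }
```

Because everything downstream consumes only the returned dict, the planned logistic regression only has to produce the same three booleans.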
Mailer
mailer.py renders a Jinja2 HTML template and sends it via smtplib using a Gmail app password. The email is intentionally simple: three coloured indicators at the top, the raw numbers underneath, and a one-line summary. It is designed to be readable in the notification preview without opening it.
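The render-and-send flow can be sketched like this. The template string, environment variable names, and function names are illustrative assumptions; only the general Jinja2-plus-smtplib shape comes from the post.

```python
import os
import smtplib
from email.mime.text import MIMEText

from jinja2 import Template

def render_report(recs: dict) -> str:
    """Render the recommendations into a tiny HTML body (sketch)."""
    return Template(
        "<h1>Daglig Vejr</h1>"
        "{% for name, on in recs.items() %}"
        "<p>{{ name }}: {{ 'yes' if on else 'no' }}</p>"
        "{% endfor %}"
    ).render(recs=recs)

def send_report(recs: dict) -> None:
    """Send the rendered report via Gmail using an app password."""
    msg = MIMEText(render_report(recs), "html")
    msg["Subject"] = "Daglig Vejr"
    msg["From"] = os.environ["MAIL_FROM"]
    msg["To"] = os.environ["MAIL_TO"]
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(os.environ["MAIL_FROM"], os.environ["GMAIL_APP_PASSWORD"])
        server.send_message(msg)
```

Keeping rendering separate from sending makes the HTML testable without SMTP credentials.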
Deployment
The service runs on Railway's free tier as a cron job. Railway injects environment variables for API keys and email credentials, so no secrets are stored in the repository. The SQLite database is on a persistent volume so the history survives redeployments.
Total monthly cost: zero. The DMI API is free, SMHI is free, Gmail app passwords are free, and the Railway free tier covers a cron job of this size comfortably.
What I learned
The main technical lesson was around API reliability. Both APIs return inconsistent response shapes depending on whether the nearest station has data for a given parameter on a given day. Writing defensive deserialisation — where the fetcher handles missing fields gracefully rather than crashing — turned out to be most of the actual engineering work.
The second lesson was about production discipline. It is easy to write code that works once on your laptop. It is harder to write code that runs at 06:30 every morning, sends an alert when something goes wrong, and recovers cleanly when an upstream API is down. This project forced me to think about observability and failure modes in a way that a Jupyter notebook never does.