
Supabase Egress Optimization: From 19 GB to Near Zero

Supabase egress spiked to 19 GB from a single .select('*') query. Here's how to detect it, fix it with column selection and pagination, and prevent it in production.

ReadyToRelease Team · April 8, 2026 · 6 min read

How a Single .select('*') in Supabase Turned Into 19 GB of Egress (and How to Fix It)


The problem nobody sees coming

One morning I opened the Supabase dashboard for ReadyToRelease and found a usage spike that made no sense. The project had minimal traffic — a handful of users, mostly me testing features. But the egress graph looked like a mountain range. 19 GB out in a few days, from a single internal screen.

The culprit? One dashboard page doing .select('*') on a table that had grown much larger than I'd realized. No pagination, no column filtering, just give me everything on every page load. This post covers how to detect this pattern, why it's dangerous, and the exact fixes I applied to bring egress back to near zero.


What Supabase counts as egress

Egress (also called bandwidth) is any data that leaves Supabase toward your application — database query results, Storage file downloads, Edge Function responses, and even response headers all count. The free tier includes 5 GB of egress per month; the Pro plan includes 250 GB. Beyond that, you pay per GB.

The critical thing to understand is that egress is proportional to bytes returned, not to the number of queries. A single query that returns 2 MB of data generates 2 MB of egress every time it runs. If that query fires on every page load, the math compounds fast.
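To see why the math compounds, here's a back-of-envelope sketch. The row count, row size, and load count are illustrative assumptions, not numbers from my dashboard:

```typescript
// Back-of-envelope egress estimate for an unbounded query.
// All three inputs are illustrative assumptions.
const rows = 50_000           // rows in the table
const avgRowBytes = 4_000     // wide rows: JSON blobs, long text fields
const loadsPerDay = 100       // page loads that fire the query

const egressPerLoadMb = (rows * avgRowBytes) / 1_000_000        // 200 MB per load
const egressPerDayGb = (egressPerLoadMb * loadsPerDay) / 1_000  // 20 GB per day
```

At those (modest) numbers you blow through a free tier's monthly allowance in a single day, without a single real user.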


How .select('*') silently scales against you

When your table has 10 rows, .select('*') is harmless. When it has 50,000 rows with wide columns — JSON blobs, long text fields, timestamps, metadata — each query pulls everything. In my case the table stored market research data with large JSON payloads per row. I never noticed it growing because I was focused on features, not on byte counts.

// What I was doing — pulls every column, every row, on every load
const { data, error } = await supabase
  .from('research_results')
  .select('*')

This is not a Supabase bug. It's a database fundamentals issue: SELECT * is a development shortcut, not a production pattern. The difference is that with Supabase you feel it directly in the billing dashboard, not buried in a server log somewhere.


How to detect the source of the spike

Before fixing anything, find the source. Supabase gives you a few places to look:

  1. Dashboard → Reports → Database — shows query volume and data transferred over time. Look for spikes that don't correlate with user traffic.
  2. Dashboard → Logs → API — filter by table name or endpoint to find which routes are the busiest. Sort by response size.
  3. pg_stat_statements — if you have access via the SQL editor, query it to see calls, rows returned, and total execution time per query pattern. It doesn't report bytes directly, but cumulative rows returned is a good proxy for egress.

In my case the spike was obvious: one route, consistent volume, happening every time the internal dashboard was opened — even by me during development. The traffic was nearly zero but egress was enormous.
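If you go the pg_stat_statements route, something along these lines (pasted into Dashboard → SQL Editor) surfaces the heaviest query patterns. The column choices here are a sketch; sorting by rows returned is the key part:

```typescript
// SQL to paste into the Supabase SQL editor. pg_stat_statements tracks
// calls, rows, and timing per normalized query — not bytes — so rows
// returned is the proxy to sort by when hunting egress.
const topQueriesSql = `
  select calls,
         rows,
         round(total_exec_time::numeric, 1) as total_ms,
         left(query, 120) as query_start
  from pg_stat_statements
  order by rows desc
  limit 10;
`
```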


The fix: four changes, near-zero egress

1. Select only the columns you need

This is the highest-impact change. Identify what the UI actually renders and request only those fields.


// Before
const { data } = await supabase
  .from('research_results')
  .select('*')

// After — only what the dashboard table displays
const { data } = await supabase
  .from('research_results')
  .select('id, title, status, created_at, score')

If a column holds a JSON blob or long text that isn't shown in the list view, don't fetch it. Load it on demand when the user opens the detail view.
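That on-demand load might look like this. `payload` is a hypothetical name for the heavy JSON column, and the helper takes a loosely-typed client parameter so the sketch stands alone — in the app it's the real supabase client:

```typescript
// Loosely-typed stand-in for the supabase client; `payload` is an
// assumed column name for the large JSON blob.
type ClientLike = { from: (table: string) => any }

async function fetchResultDetail(client: ClientLike, id: string) {
  const { data, error } = await client
    .from('research_results')
    .select('id, title, payload')  // heavy column requested only here
    .eq('id', id)
    .single()
  if (error) throw error
  return data
}
```

The list view never touches `payload`; only opening a single record pays for those bytes.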

2. Add real pagination

An unbounded query is a time bomb. As the table grows, so does every page load. Supabase's range(from, to) method maps to SQL LIMIT / OFFSET, with both bounds inclusive.

const PAGE_SIZE = 25

const { data, count } = await supabase
  .from('research_results')
  .select('id, title, status, created_at', { count: 'exact' })
  .order('created_at', { ascending: false })
  .range(page * PAGE_SIZE, (page + 1) * PAGE_SIZE - 1)

This caps your worst-case egress per request to PAGE_SIZE × average_row_size, regardless of how large the table gets.

3. Cache on the client

An internal dashboard doesn't need real-time data. If the data is valid for 60 seconds, don't re-fetch it on every navigation. In Next.js you can use fetch cache options, React Query's staleTime, or even a simple module-level variable for truly static reference data.

// React Query example — won't re-fetch for 60 seconds
const { data } = useQuery({
  queryKey: ['research_results', page],
  queryFn: () => fetchResults(page),
  staleTime: 60 * 1000,
})
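The "simple module-level variable" option can be as small as this — the TTL and the fetcher are illustrative, and in the app the fetcher would wrap a real Supabase query:

```typescript
// Module-level cache for truly static reference data. It survives
// client-side navigations within the same JS session and is cleared
// on a full page reload.
let cached: { data: unknown; fetchedAt: number } | null = null
const TTL_MS = 60_000

async function getReferenceData(fetcher: () => Promise<unknown>) {
  if (cached && Date.now() - cached.fetchedAt < TTL_MS) return cached.data
  const data = await fetcher()
  cached = { data, fetchedAt: Date.now() }
  return data
}
```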

4. Use .select() on inserts and updates too

This one is less obvious. In supabase-js v1, inserts and updates returned the full modified row by default; in v2 you opt in by chaining .select(). Either way, if you only need the id back, say so:

// Returns only the id after insert — not the full row
const { data } = await supabase
  .from('research_results')
  .insert({ title, status })
  .select('id')
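The same applies to updates. Sketched with a loosely-typed client parameter so it stands alone — the `status: 'archived'` payload is illustrative:

```typescript
type DbClient = { from: (table: string) => any }

// After the update, only the id travels back over the wire.
async function archiveResult(client: DbClient, id: string) {
  const { data, error } = await client
    .from('research_results')
    .update({ status: 'archived' })
    .eq('id', id)
    .select('id')
  if (error) throw error
  return data
}
```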

Results

After applying these four changes the egress for that dashboard dropped from ~19 GB over a few days to effectively zero — well under 1 MB per day with normal usage. The table has continued growing but the query cost stays flat because it's bounded by PAGE_SIZE and a fixed column list.


A note on monitoring going forward

Set up a usage alert in Supabase (Dashboard → Settings → Billing → Usage alerts) so you get notified before hitting plan limits, not after. Running .select('*') once during prototyping is fine. Shipping it to a route that runs on every load is where it becomes a real cost.


ReadyToRelease uses Supabase as its primary database for storing market research data. If you're building an indie SaaS and want to validate your idea before writing production-quality queries, check it out — market analysis for $3, no subscription.


The rule I apply now: never use .select('*') in production on any table that can grow. Always specify columns, always paginate, always ask "what's the worst-case byte count if this table has 100k rows?" before shipping a query.
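One small guard rail that makes the rule hard to break is centralizing the bounds math for .range(), so no call site can hand-roll offsets or forget pagination. A sketch, with PAGE_SIZE as before:

```typescript
const PAGE_SIZE = 25

// Inclusive [from, to] bounds for supabase's .range(), derived from a
// zero-based page number.
function pageRange(page: number, pageSize: number = PAGE_SIZE): [number, number] {
  const from = page * pageSize
  return [from, from + pageSize - 1]
}
```

Every list query then goes through `...range(...pageRange(page))`, and the worst-case byte count stays bounded by construction.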


Frequently Asked Questions

How much egress does Supabase's free tier include?

The free tier includes 5 GB of combined egress per month. The Pro plan includes 250 GB. Beyond those limits, additional egress is billed per GB.

What causes high egress in Supabase?

The most common cause is using .select('*') on large tables without pagination. Every page load fetches all rows and all columns, and egress is billed by bytes returned, not by the number of queries.

How do I reduce Supabase egress on database queries?

Select only the columns your UI needs, add .range() pagination to cap rows per request, cache query results on the client with tools like React Query, and avoid returning full rows from inserts by chaining .select('id') instead.

How can I find which query is causing a Supabase egress spike?

Check Dashboard → Reports → Database for data transfer spikes, then cross-reference with Dashboard → Logs → API filtering by table name. You can also query pg_stat_statements via the SQL editor to find the heaviest queries by bytes scanned.

Is .select('*') always bad in Supabase?

Not during development or on small, stable tables. The risk appears in production when a table grows over time and an unbounded query runs on every page load. The safe rule is: always specify columns and always paginate on any table that can scale.

Tags:

#supabase #egress #optimization #database
