Alexander Garcia
Analyzing the security implications of refreshing OAuth tokens before they expire on the frontend.
Alexander Garcia is an effective JavaScript Engineer who crafts stunning web experiences.
I recently reviewed a PR that implements proactive token refresh on the frontend. The approach checks if an access token is within 60 seconds of expiring before making an API request, and if so, refreshes it preemptively. While this sounds like a nice UX improvement, it raises some security considerations worth discussing - especially when you already have a secure session management architecture in place.
Before diving into the PR, let me explain the current session management architecture. It uses a three-cookie pattern designed to balance security with frontend usability:
| Cookie | HTTPOnly | Lifetime | Purpose |
|---|---|---|---|
| Access Token | Yes | 5 minutes | Authenticates API requests |
| Refresh Token | Yes | 30 minutes | Obtains new access tokens |
| Info Token | No | Matches tokens | Frontend session state |
The Access Token and Refresh Token are both HTTPOnly, meaning JavaScript cannot read them. This protects against XSS attacks - even if malicious code runs on the page, it cannot exfiltrate the actual credentials. The browser automatically includes these cookies in requests to the API.
The Info Token is readable by JavaScript but contains only metadata: expiration timestamps for both HTTPOnly tokens. This allows the frontend to:
```javascript
// Info token structure (readable by JS)
{
  access_token_exp: 1706300400,   // When access token expires
  refresh_token_exp: 1706301900   // When refresh token expires
}

// Actual tokens are HTTPOnly - JS can't access them
// Browser sends them automatically with requests
```
This separation means the frontend can make intelligent decisions about session state without ever touching the actual credentials.
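To make this concrete, here's a minimal sketch of how the frontend might read that metadata. The cookie name `info_token` and the URL-encoded-JSON value format are assumptions for illustration - adjust them to whatever your auth service actually sets.

```javascript
// Sketch of an info-token reader. Assumes (hypothetically) that the
// cookie is named "info_token" and holds URL-encoded JSON.
function parseInfoToken(cookieString) {
  const pair = cookieString
    .split("; ")
    .find((c) => c.startsWith("info_token="));
  if (!pair) return null;
  try {
    return JSON.parse(decodeURIComponent(pair.slice("info_token=".length)));
  } catch {
    return null; // malformed cookie - treat as no session info
  }
}

// In the browser this would be called as parseInfoToken(document.cookie).
const cookies =
  "theme=dark; info_token=" +
  encodeURIComponent(
    JSON.stringify({ access_token_exp: 1706300400, refresh_token_exp: 1706301900 })
  );
console.log(parseInfoToken(cookies).access_token_exp); // 1706300400
```

Note that the HTTPOnly access and refresh tokens never appear in `document.cookie` at all - the browser simply refuses to expose them to JavaScript.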
Currently, token refresh is handled reactively:
```javascript
async function apiRequest(url, options) {
  const response = await fetch(url, options);

  if (response.status === 403) {
    const error = await response.json();
    if (error.message === "Access Token Expired") {
      await refreshTokens(); // calls the /refresh endpoint
      // Retry the original request with new tokens
      return fetch(url, options);
    }
  }

  return response;
}
```
The server is the source of truth. It validates the actual HTTPOnly access token and tells the client definitively whether it's expired. No client-side guessing, no clock synchronization issues, no trust problems.
The "cost" is one extra round-trip when tokens expire: roughly 200ms of added latency, once every 5 minutes.
The PR proposes checking token expiration before making API requests:
```javascript
// Before making an API request
if (isApiRequest && infoTokenExists() && serviceName) {
  try {
    await refreshIfAccessTokenExpiringSoon({
      thresholdSeconds: 60,
      type: serviceName,
    });
  } catch (e) {
    // Intentionally swallow refresh errors
  }
}
// Then proceed with the actual API call
```
The supporting utilities read expiration from the info token:
```javascript
function getAccessTokenExpiration() {
  const infoToken = getInfoTokenCookie();
  return infoToken?.access_token_exp;
}

function isAccessTokenExpiringSoon(thresholdSeconds = 60) {
  const expiration = getAccessTokenExpiration();
  const now = Date.now() / 1000;
  return expiration - now <= thresholdSeconds;
}

async function refreshIfAccessTokenExpiringSoon({ thresholdSeconds, type }) {
  if (isAccessTokenExpiringSoon(thresholdSeconds)) {
    await refreshToken(type);
  }
}
```
The proactive approach doesn't eliminate the need for reactive 403 handling. Even with a perfect preemptive check, a token can be revoked server-side, a request can sit in flight past the expiration, or the refresh itself can fail - so the 403 retry path has to stay.

You're adding a layer of complexity that doesn't remove the existing mechanism - it just tries to race ahead of it.
The 60-second threshold assumes client and server clocks are synchronized:
```javascript
const now = Date.now() / 1000;
return expiration - now <= 60;
```
In practice:
A client clock 2 minutes ahead will refresh constantly. A clock 2 minutes behind won't refresh in time.
With reactive refresh, this doesn't matter - the server determines expiration based on its own clock.
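A toy demonstration makes the skew problem tangible: the same token looks "expiring soon", healthy, or already expired depending only on how far the client's clock drifts. The `skewSeconds` parameter here is a hypothetical stand-in for that drift.

```javascript
// skewSeconds simulates how far the client clock drifts from the
// server's (positive = client clock runs ahead).
function isExpiringSoon(expiration, skewSeconds, thresholdSeconds = 60) {
  const clientNow = Date.now() / 1000 + skewSeconds;
  return expiration - clientNow <= thresholdSeconds;
}

const exp = Date.now() / 1000 + 150; // server says: 150s of validity left

console.log(isExpiringSoon(exp, 0));    // false - 150s left, no refresh needed
console.log(isExpiringSoon(exp, 120));  // true  - clock 2 min ahead: refreshes constantly
console.log(isExpiringSoon(exp, -120)); // false - clock 2 min behind: won't refresh in time
```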
Multiple concurrent API requests could each trigger refresh:
```javascript
// User clicks a button that fires 3 API calls
// Request 1: token expiring soon, starts refresh
// Request 2: token expiring soon, starts refresh
// Request 3: token expiring soon, starts refresh
// Result: 3 refresh requests hit the auth server
```
With refresh token rotation (where each refresh invalidates the previous token), this could cause legitimate requests to fail.
If implementing proactive refresh, use a mutex:
```javascript
let refreshPromise = null;

async function refreshIfExpiringSoon(options) {
  if (isAccessTokenExpiringSoon(options.thresholdSeconds)) {
    if (!refreshPromise) {
      refreshPromise = refreshToken(options.type).finally(() => {
        refreshPromise = null;
      });
    }
    return refreshPromise;
  }
}
```
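To sanity-check the mutex idea, here's a self-contained sketch with `refreshToken` stubbed out and the "expiring soon" check assumed true - three concurrent callers should produce exactly one refresh:

```javascript
// Standalone demo: refreshToken is a stub, and the token is assumed
// to always be "expiring soon" so every caller enters the guard.
let refreshCount = 0;
let refreshPromise = null;

async function refreshToken(type) {
  refreshCount += 1;
  await new Promise((r) => setTimeout(r, 10)); // simulate network latency
}

async function refreshIfExpiringSoon(options) {
  if (!refreshPromise) {
    refreshPromise = refreshToken(options.type).finally(() => {
      refreshPromise = null;
    });
  }
  return refreshPromise;
}

Promise.all([
  refreshIfExpiringSoon({ type: "api" }),
  refreshIfExpiringSoon({ type: "api" }),
  refreshIfExpiringSoon({ type: "api" }),
]).then(() => console.log(refreshCount)); // 1 - one refresh, not three
```

The key detail is that `refreshPromise` is assigned synchronously before any caller yields, so the second and third callers see the in-flight promise and await it instead of starting their own.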
While the three-cookie pattern is secure (actual tokens are HTTPOnly), the info token is readable and modifiable by JavaScript. A malicious script could:
```javascript
// Attacker modifies info token to always appear "expiring soon"
document.cookie = "info_token=" + JSON.stringify({
  access_token_exp: Math.floor(Date.now() / 1000) + 30, // Always 30 seconds away
});
```
This wouldn't grant unauthorized access (the real tokens are HTTPOnly), but it could force the client to fire a refresh before nearly every API call, hammering the auth server - and with refresh token rotation, potentially invalidating tokens that other tabs or in-flight requests still depend on.
With reactive refresh, manipulation of the info token has no effect - the server decides when tokens are expired.
The PR intentionally swallows refresh errors:
```javascript
try {
  await refreshIfAccessTokenExpiringSoon({
    thresholdSeconds: 60,
    type: serviceName,
  });
} catch (e) {
  // Intentionally swallow refresh errors
}
```
I understand the intent - don't block requests if proactive refresh fails. But silent failures make debugging difficult. If refresh is consistently failing, you won't know until users report session issues.
If implementing, at least log failures:
```javascript
try {
  await refreshIfAccessTokenExpiringSoon(options);
} catch (e) {
  // Log for observability, but don't block
  logRefreshFailure({ errorType: e.name, timestamp: Date.now() });
}
```
Let's quantify what proactive refresh actually saves:
| Metric | Value |
|---|---|
| Access token lifetime | 5 minutes |
| Extra round-trip latency | ~200ms |
| Frequency of reactive refresh | Once per 5 minutes |
| Time "lost" per hour | ~2.4 seconds |
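The last row follows directly from the others - a quick back-of-the-envelope check:

```javascript
// Verifying the "time lost per hour" figure from the table above.
const tokenLifetimeMin = 5;
const retryLatencyMs = 200;

const refreshesPerHour = 60 / tokenLifetimeMin;              // 12 refreshes/hour
const lostPerHourSec = (refreshesPerHour * retryLatencyMs) / 1000;

console.log(lostPerHourSec); // 2.4 seconds per hour
```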
For most applications, 200ms of latency every 5 minutes is imperceptible. Users aren't sitting there watching a spinner for that 200ms - the retry happens automatically.
What proactive refresh adds, as covered above: a dependency on client clock accuracy, a thundering-herd risk when concurrent requests race to refresh, a manipulation surface via the JavaScript-writable info token, and a silent failure path when refresh errors are swallowed.
Proactive refresh isn't inherently bad. It makes sense when requests are long-running or can't safely be retried, or when token lifetimes are short enough that reactive retries become frequent and user-visible.
For a traditional web application with 5-minute access tokens and standard API requests, reactive refresh is sufficient.
If you're considering proactive token refresh: deduplicate concurrent refreshes with a mutex, log refresh failures instead of swallowing them, keep the reactive 403 handler as the fallback, and consider letting the server drive the decision (e.g., an X-Token-Refresh-Soon header).

The three-cookie pattern (HTTPOnly tokens + readable info token) is a solid architecture that balances security with frontend usability. The existing reactive refresh approach lets the server drive token validity decisions, avoiding client trust issues and clock synchronization problems.
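A server-driven variant would look something like the sketch below. The `X-Token-Refresh-Soon` header name is hypothetical (it's not any standard), and `refreshTokens` / `logRefreshFailure` are the helpers from the earlier snippets - the point is that the server decides using its own clock, and the client merely reacts.

```javascript
// The server sets a hypothetical X-Token-Refresh-Soon: true header on
// responses when the token it validated is close to expiring.
function shouldRefresh(headers) {
  return headers.get("X-Token-Refresh-Soon") === "true";
}

async function apiRequest(url, options) {
  const response = await fetch(url, options);
  if (shouldRefresh(response.headers)) {
    // Fire-and-forget: refresh in the background, don't block the caller.
    refreshTokens().catch((e) => logRefreshFailure(e));
  }
  return response;
}
```

This keeps the server as the source of truth - no clock synchronization, no trust in a client-readable cookie - while still refreshing before the hard 403.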
Proactive refresh is a UX optimization that trades simplicity for marginal latency improvements. For an application handling sensitive data with 5-minute access tokens, the complexity cost outweighs the ~200ms saved every 5 minutes.
Sometimes the best code is the code you don't write.
If you want to learn more about OAuth token management, check out OAuth 2.0 Security Best Current Practice (RFC 9700) and the OAuth 2.0 for Browser-Based Apps draft.
Hopefully some of you found that useful. Cheers!