David Dunn

Senior Software Developer



XSS & Injection On The Frontend

5 min read
security javascript

Security is often associated with the server or with backend developers, but frontend developers need to be aware of it as well. In this post we will discuss one of the most common security issues on the frontend: XSS.

Cross-site Scripting, or XSS, is a very common security concern within the frontend space. Most of us will have heard mention of it, even if we don’t fully understand it.

However, as frontend developers, we need to understand what it is, why it happens, and how to prevent it. This allows us to write safer code and, as an added bonus, it is a common interview question.

What is XSS?

We really only need to remember the following sentence:

XSS is when untrusted input reaches an execution sink.

That is it, XSS in a nutshell… but let's dig into what that sentence actually means. First, what is an untrusted input?

Untrusted Inputs

Well as frontend developers we often create features/write code that allows users, or some third party, to submit some input within our application. How can we trust that input?

Well, the answer is we can’t. Untrusted input is any input we, as developers, don’t control.

They are more common than we may think. Some examples of untrusted input are:

- form fields and text inputs
- URL query parameters and fragments
- data returned from APIs and third-party services
- values read from cookies, localStorage, or postMessage

… and the list goes on and on.

Really, if the data can be influenced by anything outside our own code (a user, a third party, another system), then we must treat it as untrusted until it has been validated.

Execution Sinks

So now that we understand what untrusted inputs are, we can look at execution sinks.

We can think of execution sinks as APIs or attributes that make the browser interpret a string, not just display it. And yes, by interpret we mean that the browser stops treating the string as literal text and processes it as instructions in some language or context, such as JavaScript.

Now we can see the real problem here. A malicious user could attempt to input dangerous code into our application which the browser would parse and attempt to execute.
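To make this concrete, here is a minimal sketch of the problem (the names and URL are illustrative, not from a real app). The untrusted string flows, unmodified, into a sink:

```javascript
// Imagine `name` arrives from the URL, e.g. https://example.com/?name=...
const name = '<img src=x onerror="alert(document.cookie)">'; // attacker-controlled

// Vulnerable: handing this string to an execution sink (innerHTML) means
// the browser parses it as HTML and fires the onerror handler.
const greetingHTML = `<h1>Hello, ${name}!</h1>`;
// el.innerHTML = greetingHTML; // in a browser, this runs the payload
```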

Some common examples of execution sinks:

HTML

These turn a string into DOM nodes, which can activate event handlers, scriptable elements, etc. Common examples are innerHTML, outerHTML, insertAdjacentHTML, and document.write.

JavaScript

These execute a string directly as JavaScript. Common examples are eval, new Function, and the string forms of setTimeout / setInterval.
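For example, anything inside a string passed to `eval` runs as code:

```javascript
// The engine parses and executes the string as JavaScript.
const result = eval("2 + 2"); // → 4

// If that string were untrusted, an attacker would control what runs:
// eval(untrustedInput); // never do this
```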

URL & Navigation Sinks

These interpret strings as URLs. The risk is often scheme injection, such as javascript: and data:, or open redirects.

- DOM properties and attributes: location.href, a.href, form.action
- Resource-loading attributes: script.src, iframe.src, link.href
- JS APIs that fetch: fetch(), XMLHttpRequest, dynamic import()
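The URL constructor makes it easy to see what scheme an untrusted string actually resolves to, which is the basis for defending these sinks:

```javascript
// A harmless-looking link string can resolve to a javascript: URL.
const link = "javascript:alert(1)";
const url = new URL(link, "https://example.com");
console.log(url.protocol); // "javascript:"
```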

CSS

These interpret strings as CSS. Depending on the browser and context, this can enable nasty behaviour such as data-exfiltration tricks, UI redress, and the historical expression() issues in old IE.

Above are just a few examples of execution sinks; there are many more we could discuss, but the main thing we need to remember is:

These execution sinks are not dangerous by themselves

We only need to be concerned when we have some untrusted input being passed into those sinks before we have validated and/or sanitized it.

So how do we make untrusted input safe?

Validating / Sanitizing untrusted input

Sadly there is no magic rule we can follow here. We can't just sanitize or validate input in general; it needs to be done for a specific context, right before we pass the input off to a specific sink.

HTML

Where possible we should always rely on battle tested libraries for sanitizing HTML. A very popular option is DOMPurify:

import DOMPurify from "dompurify";

export function sanitizeHTML(dirty) {
  return DOMPurify.sanitize(dirty, {
    USE_PROFILES: { html: true },
  });
}

el.innerHTML = sanitizeHTML(untrustedHtml);

In HTML we can also just avoid using the sink altogether. So instead of doing:

el.innerHTML = untrusted;

We could do:

el.textContent = untrusted;

The textContent property is not an execution sink.
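If we do need to build markup from untrusted text ourselves, a third option is escaping HTML's reserved characters so the browser can only ever display them. A minimal sketch (for anything beyond simple text, prefer a library like DOMPurify):

```javascript
// Replace HTML's reserved characters with their entity equivalents
// so the string can no longer be parsed as markup.
function escapeHTML(input) {
  return String(input)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// escapeHTML('<img src=x onerror=alert(1)>')
// → '&lt;img src=x onerror=alert(1)&gt;'
```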

URL

We want to prevent javascript: and unexpected schemes, and often prevent open redirects.

A popular option is to create a safeURL function:

export function safeURL(
  input: unknown,
  {
    allowProtocols = ["http:", "https:"],
    allowOrigins,
    base = window.location.origin,
  }: {
    allowProtocols?: string[],
    allowOrigins?: string[],
    base?: string,
  } = {}
): string | null {
  try {
    const raw = String(input).trim();
    const url = new URL(raw, base);

    if (!allowProtocols.includes(url.protocol)) return null;
    if (allowOrigins && !allowOrigins.includes(url.origin)) return null;

    return url.toString();
  } catch {
    return null;
  }
}

JS

For JS there is no robust way to sanitize untrusted input into safe JavaScript code. So really, we need to forbid passing untrusted data into these execution sinks entirely.

But how do we work around it? Well a common approach is to create a list of actions we know to be safe and then map the untrusted data, if possible, to those actions:

const ACTIONS = {
  openSettings: () => openSettings(),
  logout: () => logout(),
};

export function runAction(actionName) {
  const fn = ACTIONS[actionName];
  if (!fn) return; // unknown action
  fn();
}

So now instead of accepting untrusted code we accept a key that maps into a known set of actions. We avoid ever having to parse that untrusted data.
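A quick demonstration of the mapping above, with the action bodies stubbed out for illustration:

```javascript
// Stubbed actions for the demo; in a real app these would be imported.
let settingsOpened = false;
const ACTIONS = {
  openSettings: () => { settingsOpened = true; },
  logout: () => {},
};

function runAction(actionName) {
  const fn = ACTIONS[actionName];
  if (!fn) return; // unknown action: silently ignore
  fn();
}

runAction("openSettings");              // known key: the action runs
runAction('<script>alert(1)</script>'); // unknown key: ignored, never executed
```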

CSS

These follow the same idea as the other contexts: never interpolate untrusted strings directly into styles. Instead, validate values against a strict allowlist for the property in question, and use CSS.escape when an untrusted string must appear in a selector.
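For CSS, a common approach is to validate untrusted values against a strict pattern before they reach any style sink. A minimal sketch, assuming our feature only needs to accept hex colours (the hex-only rule and the names here are illustrative):

```javascript
// Only accept values matching a known-safe shape; here, hex colours.
const HEX_COLOR = /^#(?:[0-9a-fA-F]{3}|[0-9a-fA-F]{6})$/;

function safeColor(input, fallback = "#000000") {
  const value = String(input).trim();
  return HEX_COLOR.test(value) ? value : fallback;
}

// In the browser:
// el.style.setProperty("color", safeColor(untrusted));
```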

Conclusion

So we've covered the basics of what an XSS attack is, how it can happen, and how we can prevent it. As frontend engineers we need to understand these patterns and be on the lookout for them when writing code or reviewing a PR.

In a future post we will look at how XSS plays out in React: what React does for us, and what we still need to be wary of.