Site isolation

The feature was first proposed publicly by Charles Reis and others, although Microsoft was independently working on an implementation in the Gazelle research browser at the same time.

The approach initially failed to gain traction because of the large engineering effort required to implement it in a fully featured browser, and because of concerns about the real-world performance impact of potentially unbounded process use.

The main tradeoff of site isolation is the added resource consumption from the additional processes it requires.
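The resource cost scales with the number of distinct sites open, not the number of tabs. A minimal sketch of this relationship, assuming a simplified notion of "site" (scheme plus the last two host labels; real browsers consult the Public Suffix List, and the function names here are illustrative, not Chromium's):

```python
# Illustrative sketch, NOT Chromium code: estimating renderer-process count
# under a site-per-process model. site_key() naively approximates a URL's
# "site" as scheme plus the last two hostname labels (eTLD+1 approximation).
from urllib.parse import urlparse

def site_key(url: str) -> str:
    """Approximate a URL's site as scheme://<last two host labels>."""
    parts = urlparse(url)
    registrable = ".".join(parts.hostname.split(".")[-2:])
    return f"{parts.scheme}://{registrable}"

def process_count(open_urls) -> int:
    """One renderer process per distinct site, regardless of tab count."""
    return len({site_key(u) for u in open_urls})

tabs = [
    "https://mail.example.com/inbox",
    "https://example.com/home",   # same site as above -> shared process
    "https://news.example.org/",  # different site -> one extra process
]
print(process_count(tabs))  # -> 2
```

Three tabs map to two processes here; without site isolation a strict process-per-tab model would use three, while older models could share even fewer.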

[1][2] Although this model successfully prevented malicious JavaScript from gaining access to the operating system, it could not adequately isolate websites from each other.

[12][13] In May 2013, a member of Google Chrome's Site Isolation Team announced on the chromium-dev mailing list that they would begin landing code for out-of-process iframes (OOPIF).
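With out-of-process iframes, a single page's frame tree can span several renderer processes, with same-site frames sharing one. A hedged sketch of that assignment, assuming a toy frame tree (the `Frame` class and process-id table are hypothetical, not Chromium's actual classes):

```python
# Hypothetical sketch of out-of-process iframes (OOPIF): frames within one
# page are assigned renderer processes by site, so a cross-site iframe no
# longer shares a process with its embedding page.
from dataclasses import dataclass, field

@dataclass
class Frame:
    site: str                     # e.g. "https://news.example"
    children: list = field(default_factory=list)

def assign_processes(root: Frame, table=None) -> dict:
    """Map each frame's site to a process id; same-site frames share one."""
    if table is None:
        table = {}
    table.setdefault(root.site, len(table))  # new pid for an unseen site
    for child in root.children:
        assign_processes(child, table)
    return table

page = Frame("https://news.example", children=[
    Frame("https://news.example"),           # same-site iframe -> shared
    Frame("https://ads.example", children=[  # cross-site -> own process
        Frame("https://tracker.example"),
    ]),
])
print(assign_processes(page))  # three distinct sites -> three process ids
```

The point of the sketch is the boundary: the ad frame and its nested tracker each render in a process that never holds the news site's data.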

[14] This was followed by a Site Isolation Summit at BlinkOn in January 2015, which introduced the eight-engineer team and described the motivation, goals, architecture, proposed schedule, and progress made so far.

[22] Chrome's implementation of site isolation allowed it to eliminate multiple universal cross-site scripting (uXSS) attacks.

[32][33] Chrome was the industry's first major web browser to adopt site isolation as a defense against uXSS and transient execution attacks.

[35] In 2021, Agarwal et al. developed an exploit called Spook.js that broke Chrome's Spectre defenses and exfiltrated data across web pages in different origins.

[37] In 2023, researchers at Ruhr University Bochum showed that the process architecture required by site isolation could be leveraged to exhaust system resources and to mount advanced attacks such as DNS poisoning.

[Figure: A depiction of how site isolation separates different websites into different processes]