XS-Leaks are a class of vulnerability in web applications and web browsers that allows an attacker to infer state information from a different website. This can be used to infer personal and sensitive information, e.g., location, political opinion, or sexual orientation, which can in turn be used for targeted scamming attacks, discrimination, or market research.
This website shows the results of our automated XS-Leak detection tool.
A JavaScript program that runs in a web browser and exports all properties and objects it can find as a graph. It does so by starting at an object, e.g., window or globalThis. It then enumerates all properties of that object and collects data such as readability, data type, and serialized representations. For every property it creates an edge, and for every unique object a node. The crawler then repeats this with one of the un-traversed objects until all reachable objects have been traversed.
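The crawling loop can be sketched roughly like this. This is a simplified illustration under our own assumptions, not the tool's actual code; the function name and the fields collected per edge are made up:

```javascript
// Minimal sketch of the extractor idea: walk an object graph breadth-first,
// recording one node per unique object and one edge per property.
function crawl(root) {
  const nodes = new Map();   // object -> node id (also acts as "visited" set)
  const edges = [];          // { from, key, type, to }
  const queue = [root];
  nodes.set(root, 0);
  while (queue.length > 0) {
    const obj = queue.shift();
    const from = nodes.get(obj);
    for (const key of Object.getOwnPropertyNames(obj)) {
      let value;
      try {
        value = obj[key];    // getters may throw, e.g., on cross-origin objects
      } catch {
        edges.push({ from, key, type: "unreadable", to: null });
        continue;
      }
      const type = typeof value;
      if (value !== null && (type === "object" || type === "function")) {
        if (!nodes.has(value)) {   // unseen object: new node, traverse it later
          nodes.set(value, nodes.size);
          queue.push(value);
        }
        edges.push({ from, key, type, to: nodes.get(value) });
      } else {
        // leaf value: store a serialized representation instead of a node id
        edges.push({ from, key, type, to: String(value) });
      }
    }
  }
  return { nodes: nodes.size, edges };
}
```

Starting at `window` or `globalThis` works the same way; the `Map` keyed by object identity is what guarantees each object becomes exactly one node, even when many properties point to it.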
The COMPARATOR component from the paper is codenamed DOMparator, because it is a comparator for DOMs. It stores graphs in a database and can compute the difference between two (or more) graphs. The difference algorithm takes graphs as input and removes everything they have in common; what remains are the parts that differ. DOMparator also normalizes these changes by ignoring irrelevant ones (e.g., timestamps) and summarizes them (e.g., "there are 6123 differing edges at window[0]").
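The core of "remove everything they have in common" is a symmetric set difference over edges. A minimal sketch, assuming edges are the records produced by the crawler (DOMparator's real algorithm additionally normalizes and summarizes, which is omitted here):

```javascript
// Sketch of the graph-difference idea: serialize each edge to a stable key and
// keep only edges that do not appear in both graphs (symmetric difference).
function diffEdges(edgesA, edgesB) {
  const key = (e) => `${e.from}|${e.key}|${e.type}|${e.to}`;
  const setA = new Set(edgesA.map(key));
  const setB = new Set(edgesB.map(key));
  return {
    onlyInA: edgesA.filter((e) => !setB.has(key(e))),
    onlyInB: edgesB.filter((e) => !setA.has(key(e))),
  };
}
```

Everything shared by both graphs cancels out, so the result contains exactly the parts that differ between the two states.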
Autograph AUTOmatically creates many GRAPHs. It is a test case runner that combines browsers, inclusion methods, differences, and file types in order to test whether a given combination results in an XS-Leak. It invokes the crawler inside the browsers to extract graphs, and then lets DOMparator calculate the difference.
Inclusion methods include resources via URLs. For example, a website can use an <img> tag to load an image from another website, and an <iframe> tag can be used to embed another website.
We tested a set of 30 inclusion methods: iframe, iframeCSP, iframeCSPHashreload, iframeHashreload, iframeSandbox, object, objectHashreload, embed, embedHashreload, image, stylesheet, script, audio, video, windowOpen, fetch, fetchCORS, fetchCORSCredLess, fetchAll, preloadScript, preloadStyle, prerender, frame, track, favicon, import, importScript, svg, websocket, eventSource.
Differences, in the context of the table, mean which part of the HTTP response differs. For example, in the first state the HTTP response contains the header X-Frame-Options: Deny, and in the other state it doesn't. That is a difference. Click on a difference in the table to see more details.
In order to test how inclusion methods react to semantic differences, we implemented a small set of file types. For example, we try to load HTML content into an <img>, or a PDF into an <iframe>, and observe whether the browser reacts differently.
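As an illustration of why this matters, a single inclusion method can act as a probe: whether the response actually parses as the expected file type often determines which event the browser fires. The sketch below is hypothetical (the function name and URL are placeholders, not part of our tool) and shows the idea for the image inclusion method:

```javascript
// Hypothetical probe: load a URL into an <img> element and observe which
// event fires. A response that parses as an image fires "load"; other
// content (e.g., HTML or a PDF) typically fires "error" instead — an
// observable cross-origin difference.
function probeImage(url) {
  return new Promise((resolve) => {
    const img = new Image();
    img.onload = () => resolve("loaded");
    img.onerror = () => resolve("error");
    img.src = url; // setting src starts the request
  });
}
```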
You are using one right now. We tested Firefox, Chrome, and Safari, automated with Playwright.
This is the number of properties that differ between the graphs of state 0 and 1.
For example, if you create two graphs by running the crawler (extractor) twice in the same browser window, but before the second run you create a global variable window.foo='bar', the number of differences will be 1.
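A simplified, stand-alone version of this experiment, comparing only the global object's own property names rather than full graphs (variable names are illustrative):

```javascript
// Snapshot the global object's own property names twice; adding one global
// variable between the snapshots yields exactly one difference.
function snapshot() {
  return new Set(Object.getOwnPropertyNames(globalThis));
}

const before = snapshot();
globalThis.foo = "bar";   // the change under test
const after = snapshot();

const added = [...after].filter((name) => !before.has(name));
console.log(added);       // ["foo"] — a single difference
```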
Manually reviewing many thousands of changes per test case isn't feasible. Therefore, we summarize the changes per test case. For example, if the browser window size changes, all changes to visual dimensions are summarized as "element dimensions". This makes spotting the root cause of changes easier.
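Such a summarization step can be sketched as a simple bucketing pass. The grouping rule below is made up for illustration; the real tool's rules and categories differ:

```javascript
// Hypothetical summarizer: map each changed property to a coarse category so
// that a test case with thousands of raw changes reads as a few summaries.
function summarize(changes) {
  const counts = new Map();
  for (const change of changes) {
    // Illustrative rule: all width/height changes collapse into one bucket.
    const bucket = /width|height/i.test(change.key)
      ? "element dimensions"
      : change.key;
    counts.set(bucket, (counts.get(bucket) ?? 0) + 1);
  }
  return counts; // category -> number of raw changes
}
```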
You can view the complete list of changes for every graph in the JSON format. You can open a test case in the "manual evaluation mode", meaning you have direct access to the SD-URL. That is useful if you want to manually evaluate a test case, e.g., using the devtools console.
No, this website is a read-only view of the data set. If you wish to run new tests, you can get our code from GitHub, or contact us and we may add them for you.
You can contact us via Twitter or email.