<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
		<title>Nathan Gauer</title>
		<description>Personal blog where I share some work and some thoughts.</description>
		<link>https://www.studiopixl.com</link>
		<atom:link href="https://www.studiopixl.com/feeds/full" rel="self" type="application/rss+xml" />
		<lastBuildDate>Sat, 16 Nov 2024 00:00:00 +0000</lastBuildDate>
		
			<item>
				<title>Boursorama: auth flow &amp; scraping</title>
				<description>&lt;h1 id=&quot;introduction&quot;&gt;Introduction&lt;/h1&gt;

&lt;p&gt;My accounting software is useless without automatic transaction fetching.
Open Banking, enabled by DSP2, is theoretically an option, but the official certification process is a major hurdle.
Given that they offer a mobile and web app, I suspect there might be an API I can leverage.&lt;/p&gt;

&lt;p&gt;Figuring out how this works might be fun, after all, the &lt;a href=&quot;/2024-02-21/lcl-login.html&quot;&gt;last bank auth process I looked into&lt;/a&gt;
was… peculiar 🙃.&lt;/p&gt;

&lt;h2 id=&quot;step-1-login&quot;&gt;Step 1: Login&lt;/h2&gt;

&lt;p&gt;As with most French banks, the authentication flow typically involves:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Entering a digits-only username.&lt;/li&gt;
  &lt;li&gt;Inputting a digits-only password using a visual keyboard.&lt;/li&gt;
  &lt;li&gt;Potentially validating a second factor via the bank’s app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2024-boursorama-keypad.webp&quot; alt=&quot;This bank&apos;s keypad&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;step-1-magic-cookie-the-gathering&quot;&gt;Step 1: Magic cookie the gathering&lt;/h3&gt;

&lt;p&gt;Before sending any login info, Boursorama requires a few fixed requests to gather some
magic cookies. The returned values seem quite stable for a given IP.&lt;/p&gt;

&lt;div class=&quot;language-html highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ curl &apos;https://clients.boursobank.com/connexion/&apos;
[...]
&lt;span class=&quot;nt&quot;&gt;&amp;lt;script&amp;gt;&lt;/span&gt;
    &lt;span class=&quot;nb&quot;&gt;document&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nx&quot;&gt;cookie&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;dl&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;__brs_mit=&amp;lt;128 bit hex key&amp;gt;; domain=.&lt;/span&gt;&lt;span class=&quot;dl&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;window&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nx&quot;&gt;location&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nx&quot;&gt;hostname&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;dl&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;; path=/; &lt;/span&gt;&lt;span class=&quot;dl&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
&lt;span class=&quot;nt&quot;&gt;&amp;lt;/script&amp;gt;&lt;/span&gt;
[...]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;To continue, this key must be included in every subsequent request as the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;__brs_mit&lt;/code&gt; cookie.
Doing the same request again, but this time providing &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;__brs_mit&lt;/code&gt;, yields 2 new cookies:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;brsxd_secure=&amp;lt;some 142 char long key, not base64&amp;gt;
navSessionId=web&amp;lt;sha-256&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In addition to those, we also need a third value: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;form[_token]&lt;/code&gt;!
This token can be found in the returned HTML page. It looks like a JWT (base64 sections separated by dots),
but is not.
It will have to be sent back with the username/password POST request.&lt;/p&gt;
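&lt;p&gt;The gathering steps above can be sketched in Python. The regexes below are assumptions based on the captured responses (attribute order in the form markup is a guess), not an official format:&lt;/p&gt;

```python
import re

# Hedged sketch of extracting the two magic values from the
# /connexion/ HTML. The markup patterns are assumptions.

def extract_brs_mit(html: str):
    """Pull the hex key out of the inline document.cookie script."""
    match = re.search(r'__brs_mit=([0-9a-f]+)', html)
    return match.group(1) if match else None

def extract_form_token(html: str):
    """Pull form[_token] (the JWT lookalike) out of the login form."""
    match = re.search(r'name="form\[_token\]"\s+value="([^"]+)"', html)
    return match.group(1) if match else None
```

&lt;p&gt;With a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;requests.Session&lt;/code&gt;, the second and third cookies arrive as regular Set-Cookie headers once &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;__brs_mit&lt;/code&gt; is set, so only these two values need explicit parsing.&lt;/p&gt;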

&lt;h3 id=&quot;step-2-svg-based-obfuscation&quot;&gt;Step 2: SVG based obfuscation&lt;/h3&gt;

&lt;p&gt;So far, no user information has been required. It was simply a matter of making the right requests
and collecting bits from the cookies or returned HTML.&lt;/p&gt;

&lt;p&gt;Now comes the last hurdle: the keypad.&lt;/p&gt;

&lt;p&gt;As usual, the user needs to type the password using a visual keyboard. No direct key input is allowed.
Perhaps to prevent a keylogger from discovering your highly secure 8-digit password?&lt;/p&gt;

&lt;p&gt;What’s interesting is that the request doesn’t contain the password itself, but rather a series of 3-letter groups.
So if you type &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;12345567&lt;/code&gt;, you get &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NXS|GKY|KJE|YOL|JXA|JXA|YFM|YSP&lt;/code&gt;.
The sequence changes with each page reload, but the digits remain unshuffled, and the same digits are always encoded
the same way.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;NXS|GKY|KJE|YOL|JXA|JXA|YFM|YSP
 1 | 2 | 3 | 4 | 5 | 5 | 6 | 7
                 ^   ^
                 same!
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The idea is that the server sends down 10 SVG files, each associated with a 3-letter sequence.
Then, when the user clicks on an SVG button, the JS records the associated letters and appends them to the
“password”.&lt;/p&gt;

&lt;p&gt;This SVG-group association can be found by loading
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://clients.boursobank.com/connexion/clavier-virtuel?_hinclude=1&lt;/code&gt; (don’t forget to include all the cookies we
gathered so far).&lt;/p&gt;

&lt;p&gt;On the returned HTML page, you’ll find a list of SVGs, each linked to a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;data-matrix-key&lt;/code&gt; attribute:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2024-boursorama-svg-html.webp&quot; alt=&quot;One of the 10 buttons in HTML&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Now that we have 10 SVG images and their corresponding groups, how do we know what digits each image represents?&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c1&quot;&gt;# The SVGs are always the same, and the path length differs for each digit.
&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;B64_SVG_LEN_MAP&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
        &lt;span class=&quot;mi&quot;&gt;419&lt;/span&gt;  &lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;mi&quot;&gt;259&lt;/span&gt;  &lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;mi&quot;&gt;1131&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;mi&quot;&gt;979&lt;/span&gt;  &lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;mi&quot;&gt;763&lt;/span&gt;  &lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;4&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;mi&quot;&gt;839&lt;/span&gt;  &lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;mi&quot;&gt;1075&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;6&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;mi&quot;&gt;1359&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;7&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;mi&quot;&gt;1023&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;8&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;mi&quot;&gt;1047&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;9&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;key&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;B64_SVG_LEN_MAP&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;len&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;my_svg_path&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
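&lt;p&gt;Putting it together: given the ten scraped buttons, we can map each digit to its 3-letter group and encode a PIN. The helper names below are hypothetical, and the length table is the one from the snippet above:&lt;/p&gt;

```python
# Hedged sketch: buttons is a list of (matrix_key, svg_path_data)
# pairs scraped from the clavier-virtuel page; len_map is the
# path-length-to-digit table shown above.

def build_digit_map(buttons, len_map):
    """Associate each digit with its 3-letter matrix key."""
    digit_to_key = {}
    for matrix_key, path in buttons:
        digit_to_key[len_map[len(path)]] = matrix_key
    return digit_to_key

def encode_password(pin: str, digit_to_key: dict) -> str:
    """Turn a digit PIN into the pipe-separated group sequence."""
    return "|".join(digit_to_key[int(d)] for d in pin)
```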

&lt;h3 id=&quot;step-3-loggin-in&quot;&gt;Step 3: Logging in!&lt;/h3&gt;

&lt;p&gt;Equipped with the magic cookies and an SVG-to-digit conversion method, we can tackle the login step!&lt;/p&gt;

&lt;p&gt;The login request is a multipart form &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;POST&lt;/code&gt; request.
It requires a few fields to be accepted.&lt;/p&gt;

&lt;p&gt;Some make sense:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;clientNumber&lt;/strong&gt;: the username/client ID.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;password&lt;/strong&gt;: the 3-letter chain we created by understanding the digit images.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;matrixRandomChallenge&lt;/strong&gt;: a long hex key found in the HTML alongside the keypad. Probably the key used to
generate the 3-letter groups, allowing the server to stay stateless.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;_token&lt;/strong&gt;: one of the many magic values we got by fetching a particular URL (part 1, ‘form[_token]’)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;ajx&lt;/strong&gt;: always ‘1’&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;platformAuthenticatorAvailable&lt;/strong&gt;: always “-1”; I guess it indicates whether the device supports passkeys?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some are just plain weird:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;fakePassword&lt;/strong&gt;: one instance of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;•&lt;/code&gt; character for each digit in the password. So ‘••••••••’, since Boursorama
enforces an 8-digit password.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And some are probably related to some analytics and can be discarded:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;passwordAck&lt;/strong&gt;: a JSON object containing the timestamp of each digit tap, and the X/Y coordinates of the
tap relative to the button.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Initially, I feared that this last field would require a more complex “human” check, such as emulating keypad layout
and calculating realistic click delays based on button distances.
However, it turns out that a simple &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;{}&lt;/code&gt; is a sufficient value for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;passwordAck&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The final form request looks like this:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;form[clientNumber]: &amp;lt;actual client number, plain text, e.g: 12341234&amp;gt;
form[password]: &quot;CEF|UGR|O....E|IKR|KNE&quot; # 3-key sequence we computed before.
form[ajx]: 1
form[platformAuthenticatorAvailable]: &quot;-1&quot;
form[passwordAck]: &quot;{}&quot;
form[fakePassword]: &quot;••••••••&quot;
form[_token]: &amp;lt;kinda JWT token, not quite, fetched from the previous steps&amp;gt;
form[matrixRandomChallenge]: &amp;lt;very long key, 13k characters, looks like B64&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Sending this as a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;POST&lt;/code&gt; request to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://clients.boursobank.com/connexion/saisie-mot-de-passe&lt;/code&gt;, along with the
previously gathered cookies, should yield 2 final cookies:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;brsxds_d6e4a9b6646c62fc48baa6dd6150d1f7 = &amp;lt;actual JWT token&amp;gt;
ckln&amp;lt;sha256&amp;gt; = &amp;lt;HTML quoted 2048-bit RSA?&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The first is a simple JWT token. But what’s interesting is the cookie name: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;brsxds_d6e4a9b6646c62fc48baa6dd6150d1f7&lt;/code&gt;!
Did you know &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;d6e4a9b6646c62fc48baa6dd6150d1f7&lt;/code&gt; is the MD5 hash of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;prod&lt;/code&gt; ? 🙃
Turns out naming the cookie &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;brsxds_prod&lt;/code&gt; wasn’t enough, they needed to hash the suffix.&lt;/p&gt;

&lt;p&gt;The second cookie is a bit more mysterious. The name seems to be a SHA-256 hash prefixed with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ckln&lt;/code&gt;. Not sure why.
The value itself looks to be a twice URL-encoded 2048-bit base64 key, but I wasn’t able to figure out more.
But as always: just add those to the next request, and everything works!&lt;/p&gt;
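&lt;p&gt;For reference, the whole login request can be sketched like this. The field names come from the listing above; the multipart encoding via &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;files=&lt;/code&gt; is an assumption about what the server accepts, mimicking the browser:&lt;/p&gt;

```python
# Hedged sketch of the final login POST payload (field names
# captured from the browser, not an official API).
def build_login_form(client_number, encoded_password, form_token, challenge):
    return {
        "form[clientNumber]": client_number,
        "form[password]": encoded_password,          # "CEF|UGR|..." groups
        "form[ajx]": "1",
        "form[platformAuthenticatorAvailable]": "-1",
        "form[passwordAck]": "{}",                   # empty JSON is enough
        "form[fakePassword]": "•" * 8,               # one bullet per digit
        "form[_token]": form_token,
        "form[matrixRandomChallenge]": challenge,
    }

# Posting as multipart/form-data with requests (None filenames turn
# each entry into a plain form field):
#   session.post(
#       "https://clients.boursobank.com/connexion/saisie-mot-de-passe",
#       files={k: (None, v) for k, v in form.items()},
#   )
```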

&lt;h3 id=&quot;step-4-fetching-my-data&quot;&gt;Step 4: Fetching my data&lt;/h3&gt;

&lt;p&gt;As any 90’s movie hacker would say: &lt;em&gt;I’m in!&lt;/em&gt;
Time to grab some transaction and account information to feed my software.&lt;/p&gt;

&lt;p&gt;Unfortunately, Boursorama doesn’t appear to offer a JSON API for easy data access; it seems to rely heavily on
server-side rendering.
To retrieve my account list, I fetched &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://clients.boursobank.com/mon-budget/generate&lt;/code&gt; and parsed the HTML.&lt;/p&gt;

&lt;p&gt;As for recent transactions, there’s at least a CSV exporter available:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;params&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
        &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;movementSearch[selectedAccounts][]&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;account_id&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;movementSearch[fromDate]&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;from_date&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;strftime&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;%d/%m/%Y&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
        &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;movementSearch[toDate]&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;to_date&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;strftime&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;%d/%m/%Y&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
        &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;movementSearch[format]&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;CSV&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;movementSearch[filteredBy]&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;filteredByCategory&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;movementSearch[catergory]&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;sh&quot;&gt;&apos;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;movementSearch[operationTypes]&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;sh&quot;&gt;&apos;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;movementSearch[myBudgetPage]&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;movementSearch[submit]&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;sh&quot;&gt;&apos;&apos;&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;url&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;https://clients.boursobank.com/budget/exporter-mouvements&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;session&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;requests&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;Session&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;resp&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;session&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;request&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;GET&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;url&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;cookies&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;cookies&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;params&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;params&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Note: if the range is invalid or returns no results, the response is no longer a CSV but an HTML page
showing an error message.&lt;/p&gt;
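&lt;p&gt;Since the failure mode is an HTML page rather than an HTTP error, a small guard on the response body helps. The first-character sniffing and the semicolon delimiter are assumptions, not a documented contract:&lt;/p&gt;

```python
import csv
import io

def parse_export(resp_text: str):
    """Parse the CSV export, refusing HTML error pages."""
    stripped = resp_text.lstrip()
    # HTML starts with a left angle bracket (chr(60)); CSV never does.
    if stripped.startswith(chr(60)):
        raise RuntimeError("export failed: got an HTML page instead of CSV")
    return list(csv.reader(io.StringIO(stripped), delimiter=";"))
```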

&lt;h2 id=&quot;final-thoughts&quot;&gt;Final thoughts&lt;/h2&gt;

&lt;p&gt;As always with bank logins, I find this very convoluted, and I’m not sure of the added benefit.
In fact, most magic cookies are simply fetched from the server once and sent back as-is in all subsequent requests,
and the only challenge (SVG-&amp;gt;key) is quite trivial.&lt;/p&gt;

&lt;p&gt;I’d be curious to know the rationale behind all that. Initially I thought the magic cookies might prevent
some kind of MITM or replay attack, but unlike OVH, which uses time-based request signatures, these keys seem
quite stable.&lt;/p&gt;

&lt;p&gt;In the future, I’d like to explore the mobile app and see if there is some JSON API I could use, because parsing HTML
feels wrong.&lt;/p&gt;

&lt;p&gt;I hope you found this post interesting!&lt;/p&gt;
</description>
				<link>https://www.studiopixl.com/2024-11-16/boursorama-login</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2024-11-16/boursorama-login</guid>
				<pubDate>Sat, 16 Nov 2024 00:00:00 +0000</pubDate>
				
					<category>web</category>
				
					<category>reverse</category>
				
			</item>
		
			<item>
				<title>Banks, Keypad &amp; Statements</title>
				<description>&lt;blockquote&gt;
  &lt;p&gt;The details of this article have been communicated to the bank, but after
6 months of silence, I’m assuming it is not an issue for them, and have decided
to release this (see timeline below).&lt;/p&gt;

  &lt;p&gt;I think they are breaking PSD2 regulation around strong authentication, but&lt;br /&gt;
I’m not an expert on that subject.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1 id=&quot;introduction&quot;&gt;Introduction&lt;/h1&gt;

&lt;p&gt;When I was a child, I had a 10€ allowance per month. I remember keeping a small piece of paper with all my transactions,
but also planning future investments. For example, I knew I had to save for 8 years to afford my driving license.&lt;/p&gt;

&lt;p&gt;Fast forward to 2017: student, new flat, and thus the start of &lt;em&gt;the great accounting spreadsheet™&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;After 6 years, it became &lt;em&gt;The Humongous Accounting Spreadsheet™&lt;/em&gt;. &lt;br /&gt;
Turns out, a single spreadsheet is not the ideal tool to track your every expense across multiple countries.
So here we are, with &lt;em&gt;My Own Accounting Tool®&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;It is fancy-enough, auto-categorizes most transactions, and can display pretty graphs.&lt;/p&gt;

&lt;p&gt;Problem is, &lt;strong&gt;I still have to record transactions manually&lt;/strong&gt;.&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;When I’m lucky, it’s a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curl&lt;/code&gt; gathered from Firefox (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;DevTools &amp;gt; copy as cURL&lt;/code&gt;).&lt;/li&gt;
  &lt;li&gt;For others, a wonky regex-based python script to parse statements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My goal: automatically fetch transactions directly from my bank account.
This should reduce input mistakes and accounting errors.
My bank should have an API, right?&lt;/p&gt;

&lt;h1 id=&quot;the-official-api&quot;&gt;The official API&lt;/h1&gt;

&lt;p&gt;The bank seems to have a public API: &lt;a href=&quot;https://developer.lcl.fr/&quot;&gt;https://developer.lcl.fr/&lt;/a&gt;. &lt;br /&gt;
But as far as I understand, one needs to sign an agreement with the authorities or something before getting
some kind of certificate to sign requests. Not going down that path tonight!&lt;/p&gt;

&lt;p&gt;This bank also offers a website, so unless it’s full SSR, they should have
some API I can plug into.&lt;/p&gt;

&lt;h1 id=&quot;the-other-api&quot;&gt;The other API&lt;/h1&gt;

&lt;p&gt;A quick look at the network requests, and here we are: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://monespace.lcl.fr/api/*&lt;/code&gt;!&lt;/p&gt;

&lt;p&gt;The most interesting routes seem to be:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://monespace.lcl.fr/api/login&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://monespace.lcl.fr/api/login/keypad&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://monespace.lcl.fr/api/login/contract&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://monespace.lcl.fr/api/user/accounts?type=current&amp;amp;contract_id=XXXXXXXX&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://monespace.lcl.fr/api/user/&amp;lt;account-id&amp;gt;/transactions&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those should be enough to fetch my own banking information.&lt;/p&gt;

&lt;h2 id=&quot;step-1-login&quot;&gt;Step 1: Login&lt;/h2&gt;

&lt;p&gt;To access my own data, I need to login. &lt;br /&gt;
For some unknown reason, banks in France &lt;strong&gt;LOVE&lt;/strong&gt; weird SeCuRe visual keypads. &lt;br /&gt;
This bank doesn’t deviate: a 6-digit pin is the only password you need.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2023-lcl-keypad.webp&quot; alt=&quot;This bank&apos;s keypad&quot; /&gt;&lt;/p&gt;

&lt;p&gt;First surprising element: no 2FA by default?
This bank does provide one (a prompt on a trusted device), but it is only required for a few specific operations.
I tried logging in from a blank browser, from a phone, with a new IP, and still, only the 6-digit password.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;⚠ When traveling abroad, I noticed 2FA was required on the web page once,
even when logging in from an already trusted device. &lt;br /&gt;
I rented a VPN and tried my script from a few locations in France and Europe,
and 2FA was never required.
I’m not sure what heuristic they chose, but since I can log in from an
untrusted location and an untrusted device, it seems weak.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The 2 important network requests during the login are:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://monespace.lcl.fr/api/login&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://monespace.lcl.fr/api/login/keypad&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you load the page, a first &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;GET&lt;/code&gt; request is sent to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;api/login/keypad&lt;/code&gt;. &lt;br /&gt;
Upon login, a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;POST&lt;/code&gt; request is sent to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;api/login&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;⚠ I redacted some parts of the request samples, because I don’t know
what those values are, nor whether they contain secrets I shouldn’t share.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3 id=&quot;apiloginkeypad-get-request&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;api/login/keypad&lt;/code&gt; GET request&lt;/h3&gt;

&lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;keypad&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;13236373539383433303XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;keypad&lt;/code&gt;: A long, apparently random, digit-only sequence (partially redacted).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;apilogin-post-request&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;api/login&lt;/code&gt; POST request&lt;/h3&gt;

&lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
   &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;callingUrl&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;/connexion&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
   &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;clientTimestamp&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1692997262&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
   &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;encryptedIdentifier&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;kc&quot;&gt;false&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
   &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;identifier&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;XXXXXXXXXXX&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
   &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;keypad&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;030303939303XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
   &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;sessionId&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;00000000000000000000001&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;clientTimestamp&lt;/code&gt;: timestamp of the request.&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;encryptedIdentifier&lt;/code&gt;: always &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;false&lt;/code&gt;, not sure why. Maybe something for plain HTTP requests?&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;identifier&lt;/code&gt;: the customer number.&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;keypad&lt;/code&gt;: A long, digit-only sequence (partially redacted). Maybe a challenge response?&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sessionId&lt;/code&gt;: some client-side value derived from the timestamp. It seems to accept any numerical value as long as it respects the expected format.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;digit-mangling&quot;&gt;Digit mangling&lt;/h3&gt;

&lt;p&gt;A large random number received, some client-side process with a keypad, and a large random number sent back.
Some kind of challenge-response? Not exactly.&lt;/p&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;keypad&lt;/code&gt; parameter is composed of 2 parts:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;13236373539383433303&lt;/code&gt;: a sequence determining the order of the keys on the keypad.&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;XXXXXXXXX...&lt;/code&gt;: the random seed used to generate that order?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So what does my login request look like with the code &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;011000&lt;/code&gt;?&lt;br /&gt;
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&quot;keypad&quot;: &quot;030303939303XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX&quot;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The repetition pattern looks familiar.&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;03 03 03 93 93 03 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 0  0  0  1  1  0 ??
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Yes, that’s the pin code, mangled digit by digit, and reversed.
The mangling is a bit weird:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;take the received &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;keypad&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;reverse the string&lt;/li&gt;
  &lt;li&gt;split the digits into pairs, and parse each pair as a hex value&lt;/li&gt;
  &lt;li&gt;take the last 10 pairs&lt;/li&gt;
  &lt;li&gt;take the ASCII character corresponding to each value (pairs 30 to 39 decode to the digits 0 to 9)&lt;/li&gt;
  &lt;li&gt;those characters are your keypad numbers&lt;/li&gt;
&lt;/ul&gt;
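
&lt;p&gt;Applied to the order sequence shown above, those steps can be sketched in Python (this is my reconstruction, not the site’s own code):&lt;/p&gt;

```python
import re

# Order part of a keypad value (taken from the example above).
order = "13236373539383433303"

# Reverse, split into 2-digit pairs, parse each pair as a hex byte,
# and map it to its ASCII character: pairs 30..39 decode to digits 0..9.
pairs = re.findall("..", order[::-1])
keys = [chr(int(p, base=16)) for p in pairs]

print("".join(keys))  # on-screen key order, left to right
```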

&lt;p&gt;I’ll spare you the JS handling the keypad, but here is the Python code to log in.&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;answer&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;get_json&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;https://monespace.lcl.fr/api/login/keypad&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;keypad&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;answer&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;keypad&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;# Weird mangling/obfuscation for the keypad values.
# The HEX digits, interpreted as base-10 are the keypad digits.
&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;keys&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;chr&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;int&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;base&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;16&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;))&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;re&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;findall&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;..&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;keypad&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[::&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;])&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;][&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;10&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:]&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;seed&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;sh&quot;&gt;&quot;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;join&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;([&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;chr&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;int&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;base&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;16&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;))&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;re&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;findall&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;..&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;keypad&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[::&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;])&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;][:&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;10&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;])&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;password&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;input&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Your 6 digit pin? &lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;mangled&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;sh&quot;&gt;&quot;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;join&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;([&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;str&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;keys&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;index&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;))&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;password&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;])&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;token&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;sh&quot;&gt;&quot;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;join&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;([&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;str&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;hex&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;ord&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)))[&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:]&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;seed&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;+&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;mangled&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;])[::&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;payload&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;callingUrl&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;/connexion&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;encryptedIdentifier&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;False&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;identifier&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;lt;customer-id&amp;gt;&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;keypad&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;token&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;clientTimestamp&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;now&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;sessionId&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;lt;some-random-value&amp;gt;&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;nf&quot;&gt;post_json&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;https://monespace.lcl.fr/api/login&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;payload&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;getting-the-transactions&quot;&gt;Getting the transactions&lt;/h2&gt;

&lt;p&gt;Now that we are logged in, we want to list transactions.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Each transaction is tied to an account.&lt;/li&gt;
  &lt;li&gt;Each account is tied to a contract.&lt;/li&gt;
  &lt;li&gt;Each contract is tied to a user.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So to get my transactions, I need to get the contract, then the account, and only then the transactions.&lt;/p&gt;
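
&lt;p&gt;That chain can be sketched over sample response shapes (the shapes mirror the JSON shown in this post; every value here is a hypothetical placeholder):&lt;/p&gt;

```python
# Walking the user -> contract -> account chain over sample responses.
# All values are hypothetical placeholders.
login = {"accessToken": "tok-user", "contracts": [{"id": "1234567890"}]}
contract = login["contracts"][0]                      # user -> contract
accounts = {"accounts": [{"internal_id": "acc-1"}]}   # contract -> accounts
account_id = accounts["accounts"][0]["internal_id"]   # account -> transactions

print(account_id)
```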

&lt;p&gt;The initial login request returns some information:&lt;/p&gt;

&lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;accessToken&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;Bearer token&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;refreshToken&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;Refresh token&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;expiresAt&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;timestamp&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;multiFactorAuth&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;kc&quot;&gt;null&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;userName&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;name&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;birthdate&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;birthdate&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;[...]&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;contracts&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;id&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;contract-id&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;[...]&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;],&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;As-is, the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;accessToken&lt;/code&gt; cannot be used to fetch transactions. Instead, it is used to obtain a second token,
which authenticates requests made to a specific “contract”.
I’m not sure how accounts are tied to “contracts”, but in my case, I have 1 contract tied to 1 account.&lt;/p&gt;

&lt;h3 id=&quot;apilogincontract-post-request&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;api/login/contract&lt;/code&gt; POST request&lt;/h3&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;clientTimestamp&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;timestamp&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;contractId&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;base64&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;b64encode&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;contract&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;id&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;].&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;encode&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()).&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;decode&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()[:&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Why is the contract ID base64 encoded? Maybe some code sharing with the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;user/accounts&lt;/code&gt; GET route?&lt;/p&gt;
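
&lt;p&gt;For reference, the encoding (including the padding strip) looks like this in Python; the contract id is a placeholder:&lt;/p&gt;

```python
import base64

# Hypothetical contract id; the real one comes from the login response.
contract_id = "1234567890"

# Base64-encode, then drop the trailing "==" padding, as the site's JS does.
encoded = base64.b64encode(contract_id.encode()).decode()[:-2]

print(encoded)
```

&lt;p&gt;Dropping exactly 2 characters only works when the encoded id ends with two padding characters; stripping trailing padding would be more robust.&lt;/p&gt;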

&lt;h3 id=&quot;apilogincontract-response&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;api/login/contract&lt;/code&gt; response&lt;/h3&gt;

&lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;accessToken&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;another-token&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;refreshToken&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;refresh-token&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;expiresAt&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;timestamp&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This access token can be used on 2 routes:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://monespace.lcl.fr/api/user/accounts?type=current&amp;amp;contract_id=XXXXXXXX&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://monespace.lcl.fr/api/user/&amp;lt;account-id&amp;gt;/transactions&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;apiuseraccounts-get-request&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;api/user/accounts&lt;/code&gt; GET request&lt;/h3&gt;

&lt;p&gt;This request takes 4 parameters:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;type&lt;/code&gt;: the type of the contract/account to fetch? Here set to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;current&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;contract_id&lt;/code&gt;: the base64 encoded contract ID.&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;is_eligible_for_identity&lt;/code&gt;: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;false&lt;/code&gt;. Not sure what this is about.&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;include_aggregate_account&lt;/code&gt;: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;boolean&amp;gt;&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
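
&lt;p&gt;Putting the four parameters together, the request URL can be built like this (the values are placeholders):&lt;/p&gt;

```python
from urllib.parse import urlencode

# Query parameters listed above; the values here are placeholders.
params = {
    "type": "current",
    "contract_id": "MTIzNDU2Nzg5MA",  # base64-encoded contract id
    "is_eligible_for_identity": "false",
    "include_aggregate_account": "false",
}
url = "https://monespace.lcl.fr/api/user/accounts?" + urlencode(params)

print(url)
```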

&lt;p&gt;It returns some information about the fetched account:&lt;/p&gt;

&lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;total&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;balance-in-euro&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;accounts&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;type&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;current&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;iban&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;the iban&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;amount&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;date&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;2023-08-25T22:45:46.892+0200&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;value&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;balance&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;currenty&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;EUR&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;},&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;internal_id&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;internal-account-id&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;external_id&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;external-account-id&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;[...]&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;apiuseraccount-idtransactions-get-request&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;api/user/&amp;lt;account-id&amp;gt;/transactions&lt;/code&gt; GET request&lt;/h3&gt;

&lt;p&gt;This request takes 2 parameters:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;contract_id&lt;/code&gt;: this time, the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;internal_id&lt;/code&gt; received in the previous request.&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;range&lt;/code&gt;: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;int32&amp;gt;-&amp;lt;int32&amp;gt;&lt;/code&gt;. From-To range of transactions to fetch. 0 is the most recent transaction.&lt;/li&gt;
&lt;/ul&gt;
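
&lt;p&gt;Since the route paginates through a from-to range, fetching in fixed-size windows can be sketched like this; the page size of 100 is my own choice, not something the API mandates:&lt;/p&gt;

```python
def transaction_ranges(total, page_size=100):
    """Yield "from-to" range strings ("0-99", "100-199", ...), newest first."""
    for start in range(0, total, page_size):
        end = min(start + page_size, total) - 1
        yield f"{start}-{end}"

ranges = list(transaction_ranges(250))
print(ranges)
```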

&lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;isFailover&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;boolean&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;accountTransactions&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;label&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;CB some shop&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;booking_date_time&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;1970-01-01T00:00:00.000Z&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;is_accounted&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;boolean&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;are_details_available&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;boolean&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;amount&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;value&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;mf&quot;&gt;-5.32&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;currency&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;EUR&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;},&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;movement_code_type&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;code&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;nature&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;I/CARTE/VIREMENT SEPA RECU/PRELVT SEPA RECU XXX&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;movement_code_type&lt;/code&gt;: unclear; sometimes absent, sometimes an int (like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;948&lt;/code&gt;).&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;nature&lt;/code&gt;: seems to be a free-form field, as SEPA order text can be seen there.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;getting-old-transactions&quot;&gt;Getting old transactions&lt;/h3&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;api/user/&amp;lt;account-id&amp;gt;/transactions&lt;/code&gt; request takes a date range. But if this range contains any transaction
older than 90 days, the request fails: &lt;strong&gt;2FA is required to make such a request.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Digging a bit, I found 2 other API routes:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;api/user/documents/accounts_statements&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;api/user/documents/documents&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those routes have no limit on the dates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WAIT, WHAT?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes, they do require 2FA to call &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://monespace.lcl.fr/api/user/&amp;lt;account-id&amp;gt;/transactions&lt;/code&gt; for transactions
older than 90 days, but PDF statements since the dawn of time? Sure, &lt;strong&gt;NO PROBLEM&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The returned values have this format:&lt;/p&gt;

&lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;codsoufamdoc_1&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;AST&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;datprddoccli&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;2020-12-02&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;downloadToken&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&amp;lt;some-token&amp;gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;liblg_typdoc&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;Some human-readable document title&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;libsoufamdoc_1&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;Some human-readable category&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;To download the PDF, send a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;GET&lt;/code&gt; request with the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;downloadToken&lt;/code&gt; fetched in the previous request:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;https://monespace.lcl.fr/api/user/documents/download?downloadToken=&amp;lt;token&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
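&lt;p&gt;Putting the listing and the download route together is mechanical. A quick sketch in Python (the entry dict mirrors the fields above; the token value is of course illustrative):&lt;/p&gt;

```python
DOWNLOAD_ROUTE = "https://monespace.lcl.fr/api/user/documents/download"

def statement_url(doc):
    """Build the PDF download URL for one entry of the statements listing."""
    return DOWNLOAD_ROUTE + "?downloadToken=" + doc["downloadToken"]

# Illustrative entry, shaped like the listing above.
doc = {"datprddoccli": "2020-12-02", "downloadToken": "some-token"}
print(statement_url(doc))
```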

&lt;h2 id=&quot;final-thoughts&quot;&gt;Final thoughts&lt;/h2&gt;

&lt;blockquote&gt;
  &lt;p&gt;No 2-factor authentication&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A 6-digit PIN. Really? &lt;br /&gt;
Why isn’t 2FA enforced by default? Even my empty Twitter account is more secure.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Why is the PIN code mangled?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Isn’t SSL enough to secure your payload? &lt;br /&gt;
This ROT13-like obfuscation seems really weak if that’s the worry.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Auth tokens remain valid for &lt;strong&gt;21 days&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The web session does auto-exit after ~30 minutes of inactivity. &lt;br /&gt;
But did you know the auth tokens remain valid for &lt;strong&gt;21 days&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;Anyway, I do have what I need to interoperate with my accounting application, and I can rest peacefully, knowing my
personal information is safe 🙃.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;All the information disclosed here is public and freely accessible with any
web browser. I had to figure this out to build interoperability with my own software.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&quot;disclosure-timeline&quot;&gt;Disclosure timeline&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;28-08-2023: found those weaknesses, documented them.&lt;/li&gt;
  &lt;li&gt;01-09-2023: contacted the bank on Twitter via private message to ask about this.&lt;/li&gt;
  &lt;li&gt;04-09-2023: contacted the bank by email since the Twitter message had not been answered.&lt;/li&gt;
  &lt;li&gt;05-09-2023: received a Twitter message saying “we received the email, we’ll reply”.&lt;/li&gt;
  &lt;li&gt;21-02-2024: No news. Same behavior observed. Published this article.&lt;/li&gt;
&lt;/ul&gt;
</description>
				<link>https://www.studiopixl.com/2024-02-21/lcl-login</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2024-02-21/lcl-login</guid>
				<pubDate>Wed, 21 Feb 2024 00:00:00 +0000</pubDate>
				
					<category>web</category>
				
					<category>reverse</category>
				
			</item>
		
			<item>
				<title>Hosting this blog</title>
				<description>&lt;p&gt;This blog is simple: some &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.md&lt;/code&gt; files generated into static HTML. No backend or complex
CMS. It’s light, loads fast, and readable enough on mobile (except code blocks).
Versioning is done with Git.
But it had one drawback: a “high” cost to publish.&lt;/p&gt;

&lt;p&gt;Building is done with Jekyll, then the files are pushed to an FTP server. Since I publish rarely, I had no warm setup:
sometimes my Ruby installation was broken, sometimes a dependency was. Once built, pushing to the FTP server
was a mix of FTP FuseFS + rsync (my OVH hosting had no ssh/sshfs access).
As always with manual intervention, errors could happen!&lt;/p&gt;

&lt;p&gt;Anyway, I had some free credits on GCP, so I tried Cloud Build (I had used it in the past to set up CIs), and quickly stopped.
The goal was to simplify the whole process, and using GCP is &lt;strong&gt;not&lt;/strong&gt; a step in the right direction.&lt;/p&gt;

&lt;p&gt;I found out about Firebase and decided to give it a try (spoiler: this blog is hosted on Firebase as of today). It has
everything I need: it’s fast, simple, and very cheap for my use case!&lt;/p&gt;

&lt;p&gt;Deploying the website was simple:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;firebase deploy
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This is combined with a GitHub workflow that spins up a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ruby&lt;/code&gt; Docker container, builds the website, and pushes it to Firebase:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&apos;on&apos;:
  push:
    branches:
      - master
jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Prepare tree
        run: &apos;mkdir build&apos;
      - name: Build docker image
        run: &apos;cd docker &amp;amp;&amp;amp; docker build -t builder . &amp;amp;&amp;amp; cd ..&apos;
      - name: Build website
        run: &apos;docker run -t --rm --mount type=bind,src=/home/runner/work/blog/blog,dst=/mnt/src --mount type=bind,src=/home/runner/work/blog/blog/build,dst=/mnt/output builder&apos;
      - uses: FirebaseExtended/action-hosting-deploy@v0
        with:
          repoToken: &apos;$&apos;
          firebaseServiceAccount: &apos;$&apos;
          channelId: live
          projectId: blog-1234
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This also brings another advantage: GitHub becomes my CMS. I can write a new article as long as I have GitHub
access, which is quite convenient!&lt;/p&gt;
</description>
				<link>https://www.studiopixl.com/2023-02-23/hosting</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2023-02-23/hosting</guid>
				<pubDate>Thu, 23 Feb 2023 00:00:00 +0000</pubDate>
				
					<category>infra</category>
				
			</item>
		
			<item>
				<title>Sparse virtual textures</title>
				<description>&lt;p&gt;       While working on &lt;a href=&quot;/2019-02-28/game-engine-parrallax.html&quot;&gt;parallax mapping&lt;/a&gt;,
somebody told me about a cool presentation: &lt;a href=&quot;https://silverspaceship.com/src/svt/&quot;&gt;Sparse virtual textures&lt;/a&gt;.
The idea is quite simple: reimplement pagination in your shaders, allowing you
to have infinite textures while keeping the GPU memory usage constant.&lt;/p&gt;

&lt;p&gt;Goal was set: &lt;strong&gt;add SVT support to my renderer!&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&quot;vimeo mt-3 mb-3&quot;&gt;
  &lt;iframe src=&quot;https://player.vimeo.com/video/706176808&quot; loading=&quot;lazy&quot; frameborder=&quot;0&quot; allowvr=&quot;&quot; allowfullscreen=&quot;&quot; mozallowfullscreen=&quot;&quot; webkitallowfullscreen=&quot;&quot;&gt;
  &lt;/iframe&gt;
&lt;/div&gt;

&lt;h1 id=&quot;step-1---hand-made-pagination&quot;&gt;Step 1 - Hand-made pagination&lt;/h1&gt;

&lt;h2 id=&quot;pagination-overview&quot;&gt;Pagination overview&lt;/h2&gt;

&lt;p&gt;To understand how SVT works, it is useful to understand what pagination is.&lt;/p&gt;

&lt;p&gt;On most computers, data is stored in RAM. RAM is a linear buffer: its
first byte is at address 0, and its last at address &lt;em&gt;N&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;For practical reasons, using the real address directly is not very convenient.
So some clever folks invented segmentation, which then evolved into
pagination.&lt;/p&gt;

&lt;p&gt;The idea is simple: use a &lt;strong&gt;virtual&lt;/strong&gt; address that is translated by the CPU
into the real RAM address (the &lt;strong&gt;physical&lt;/strong&gt; address). The whole mechanism is well explained
by Intel&lt;sup id=&quot;fnref:1&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot; role=&quot;doc-noteref&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;This translation is possible thanks to &lt;strong&gt;pagetables&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Translating every address into a new independent one would be costly and is not
needed. That’s why the whole space is divided into pages. A page is a set of N
contiguous bytes; on x86, for example, we often talk about 4kB pages.&lt;/p&gt;

&lt;p&gt;What the CPU translates are page addresses: each page is translated as a
contiguous unit, and the internal offset remains the same.
This means that for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;N&lt;/code&gt; bytes, we only have to store &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;N/page_size&lt;/code&gt; translations.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2022-svt-01.webp&quot; alt=&quot;pagination recap&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Here, on the left, you have the virtual memory, divided into 4 blocks (pages).
Each block is linearly mapped to an entry in the pagetable.&lt;/p&gt;

&lt;p&gt;The mapping can be understood as follows:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Take your memory address.
    &lt;ul&gt;
      &lt;li&gt;address = 9416&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Split it into a page-aligned value and the rest.
    &lt;ul&gt;
      &lt;li&gt;9416 =&amp;gt; 8192 + 1224.&lt;/li&gt;
      &lt;li&gt;aligned_address = 8192&lt;/li&gt;
      &lt;li&gt;rest = 1224&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Take the aligned value, and divide it by the page size.
    &lt;ul&gt;
      &lt;li&gt;8192 / 4096 = 2&lt;/li&gt;
      &lt;li&gt;index = 2&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;This result is the index in the pagetable.&lt;/li&gt;
  &lt;li&gt;Read the pagetable entry at this index; this is your new aligned address:
    &lt;ul&gt;
      &lt;li&gt;pagetable[2] = 20480&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Add the rest back to this address:
    &lt;ul&gt;
      &lt;li&gt;physical_address = 20480 + 1224&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;You have your physical address.&lt;/li&gt;
&lt;/ul&gt;
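&lt;p&gt;The walk-through above fits in a few lines of code. A sketch in Python (the numbers match the 4kB-page example):&lt;/p&gt;

```python
PAGE_SIZE = 4096
# pagetable[i] holds the physical base address of virtual page i.
pagetable = {0: 0, 1: 12288, 2: 20480, 3: 4096}

def translate(virtual_address):
    index = virtual_address // PAGE_SIZE  # which pagetable entry
    rest = virtual_address % PAGE_SIZE    # offset inside the page
    return pagetable[index] + rest

print(translate(9416))  # 20480 + 1224 = 21704
```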

&lt;h2 id=&quot;adding-the-page-concept-to-the-shader&quot;&gt;Adding the page concept to the shader&lt;/h2&gt;

&lt;p&gt;To implement this technique, I’ll need to:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;find which pages to load&lt;/li&gt;
  &lt;li&gt;load them in the “main memory”&lt;/li&gt;
  &lt;li&gt;add this pagetable/translation technique.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This could be done using compute shaders and linear buffers, but why not use
textures directly? This way I can just add a special rendering pass to compute
visibility, and modify my pre-existing forward rendering pass to support
pagetables.&lt;/p&gt;

&lt;p&gt;The first step is to build the pagetable lookup system. This is done in GLSL:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;take the UV coordinates&lt;/li&gt;
  &lt;li&gt;split them into page-aligned address, and the rest&lt;/li&gt;
  &lt;li&gt;compute page index in both X and Y dimensions&lt;/li&gt;
  &lt;li&gt;look up a texture at the computed index (our pagetable)&lt;/li&gt;
  &lt;li&gt;add the rest to the value&lt;/li&gt;
&lt;/ul&gt;
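&lt;p&gt;Before writing the GLSL, the same lookup can be prototyped on the CPU. A Python stand-in (names are mine; the pagetable maps a page index to the page’s slot in the main-memory texture):&lt;/p&gt;

```python
PAGES_PER_SIDE = 4  # the pagetable has PAGES_PER_SIDE x PAGES_PER_SIDE entries

def lookup(uv, pagetable):
    """Translate virtual UV coordinates into main-memory UV coordinates."""
    u, v = uv
    page_x = int(u * PAGES_PER_SIDE)      # page index on each axis
    page_y = int(v * PAGES_PER_SIDE)
    rest_u = u * PAGES_PER_SIDE - page_x  # offset inside the page, in [0, 1)
    rest_v = v * PAGES_PER_SIDE - page_y
    slot_x, slot_y = pagetable[(page_x, page_y)]  # "physical" page address
    return ((slot_x + rest_u) / PAGES_PER_SIDE,
            (slot_y + rest_v) / PAGES_PER_SIDE)

# Page (1, 1) stored in memory slot (0, 0):
print(lookup((0.375, 0.25), {(1, 1): (0, 0)}))  # (0.125, 0.0)
```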

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th style=&quot;text-align: center&quot;&gt;&lt;img src=&quot;/assets/posts/2022-svt-02.webp&quot; alt=&quot;uv coordinates&quot; /&gt;&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;&lt;em&gt;Showing UV coordinates&lt;/em&gt;&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th style=&quot;text-align: center&quot;&gt;&lt;img src=&quot;/assets/posts/2022-svt-03.webp&quot; alt=&quot;page aligned UVs&quot; /&gt;&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;&lt;em&gt;Showing page-aligned UV coordinates&lt;/em&gt;&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;h2 id=&quot;computing-visibility&quot;&gt;Computing visibility&lt;/h2&gt;

&lt;p&gt;The other advantage of pagination is the ability to load/unload parts of the
memory at runtime.
Instead of loading the whole file, the kernel only loads the required bits
(pages), and only fetches new pages when required.&lt;/p&gt;

&lt;p&gt;This is done using a &lt;strong&gt;pagefault&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;The user tries to access a not-yet-loaded address.&lt;/li&gt;
  &lt;li&gt;The CPU faults and sends a signal to the kernel (a page fault).&lt;/li&gt;
  &lt;li&gt;The kernel determines if this access is allowed, and loads the page.&lt;/li&gt;
  &lt;li&gt;Once loaded, the kernel can resume the user program.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This mechanism requires hardware support: the CPU knows what a pagetable is,
and has this interrupt system. In GLSL/OpenGL, we don’t have such a thing.
So what do we do when interrupts don’t exist? &lt;strong&gt;We poll!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For us, this means running an initial rendering pass, but instead of rendering
the final output with lights and materials, we output the page addresses.
(Similar to the illustration image seen above).&lt;/p&gt;

&lt;p&gt;This is done by binding a special framebuffer, and doing render-to-texture.
Once the pass has completed, the output texture can be read, and we can discover
which pages are visible.&lt;/p&gt;

&lt;p&gt;For this render pass, all materials are replaced with a simple shader:&lt;/p&gt;

&lt;div class=&quot;language-glsl highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;cp&quot;&gt;#version 420 core
&lt;/span&gt;
&lt;span class=&quot;cm&quot;&gt;/* material definition */&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;uniform&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;float&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;textureid&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
&lt;span class=&quot;cm&quot;&gt;/* Size of a page in pixels. */&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;uniform&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;float&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;page_size&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
&lt;span class=&quot;cm&quot;&gt;/* Size of the pagetable, in pixels (aka how many entries do we have). */&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;uniform&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;float&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pagetable_size&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
&lt;span class=&quot;cm&quot;&gt;/* Size in pixels of the final texture to load. */&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;uniform&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;float&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;texture_size&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
&lt;span class=&quot;cm&quot;&gt;/* Aspect ratio difference between this pass, and the final pass. */&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;uniform&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;float&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;svt_to_final_ratio_w&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;// svt_size / final_size&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;uniform&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;float&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;svt_to_final_ratio_h&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;// svt_size / final_size&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;vertex_data&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;kt&quot;&gt;vec2&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;uv&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;fs_in&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;out&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;vec4&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;result&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;

&lt;span class=&quot;cm&quot;&gt;/* Determines which mipmap level the texture should be visible at.
 * uv: uv coordinates to query.
 * texture_size: size in pixels of the texture to display.
 */&lt;/span&gt;
&lt;span class=&quot;kt&quot;&gt;float&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;mipmap_level&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;kt&quot;&gt;vec2&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;uv&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;float&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;texture_size&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;kt&quot;&gt;vec2&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;dx&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;dFdx&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;uv&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;texture_size&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;svt_to_final_ratio_w&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
    &lt;span class=&quot;kt&quot;&gt;vec2&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;dy&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;dFdy&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;uv&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;texture_size&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;svt_to_final_ratio_h&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;

    &lt;span class=&quot;kt&quot;&gt;float&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;d&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;max&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;dot&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;dx&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;dx&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;dot&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;dy&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;dy&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;));&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;f&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;log2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;d&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;main&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;cm&quot;&gt;/* how many mipmap level we have for the page-table */&lt;/span&gt;
    &lt;span class=&quot;kt&quot;&gt;float&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;max_miplevel&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;log2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;texture_size&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;/&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;page_size&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;

    &lt;span class=&quot;cm&quot;&gt;/* what mipmap level do we need */&lt;/span&gt;
    &lt;span class=&quot;kt&quot;&gt;float&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;mip&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;floor&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;mipmap_level&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;fs_in&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;uv&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;texture_size&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;));&lt;/span&gt;

    &lt;span class=&quot;cm&quot;&gt;/* clamp on the max we can store using the page-table */&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;mip&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;clamp&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;mip&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;f&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;max_miplevel&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;

    &lt;span class=&quot;kt&quot;&gt;vec2&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;requested_pixel&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;floor&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;fs_in&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;uv&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;texture_size&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;/&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;exp2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;mip&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
    &lt;span class=&quot;kt&quot;&gt;vec2&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;requested_page&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;floor&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;requested_pixel&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;/&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;page_size&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;

    &lt;span class=&quot;cm&quot;&gt;/* Move values back into a range supported by our framebuffer. */&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;result&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;rg&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;requested_page&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;/&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;255&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;f&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;result&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;b&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;mip&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;/&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;255&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;f&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;

    &lt;span class=&quot;cm&quot;&gt;/* I use the alpha channel to mark &quot;dirty&quot; pixels.
     * On the CPU side, I first check the alpha value for &amp;gt; 0.5,
     * and if yes, consider this a valid page request.
     * I could also use it to store a &quot;material&quot; ID and support
     * multi-material single-pass SVT. */&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;result&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;a&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;f&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Once the page request list is retrieved, I can load the textures into the
“main memory”.&lt;/p&gt;
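&lt;p&gt;Reading the visibility target back on the CPU is the mirror of the shader’s encoding. A Python sketch (names are mine; pixels are RGBA floats in [0, 1], as written by the shader above):&lt;/p&gt;

```python
def decode_requests(pixels):
    """Turn visibility-pass pixels back into (page_x, page_y, mip) requests."""
    requests = set()
    for r, g, b, a in pixels:
        if a > 0.5:  # alpha marks "dirty" pixels, i.e. real page requests
            requests.add((round(r * 255.0), round(g * 255.0), round(b * 255.0)))
    return requests

# One request for page (3, 1) at mip 0, plus one untouched background pixel.
print(decode_requests([(3 / 255.0, 1 / 255.0, 0.0, 1.0), (0.0, 0.0, 0.0, 0.0)]))
```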

&lt;p&gt;The main memory is a simple 2D texture, and page allocation is for now simple:
the first page requested gets the first slot, and so on until the memory is full.&lt;/p&gt;
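&lt;p&gt;This first-come, first-served allocation can be sketched as follows (Python, names are mine):&lt;/p&gt;

```python
class PageAllocator:
    """First requested page gets the first slot, and so on until full."""

    def __init__(self, slots_per_side):
        self.free = [(x, y) for y in range(slots_per_side)
                            for x in range(slots_per_side)]
        self.resident = {}  # (page_x, page_y, mip) -> slot in the memory texture

    def allocate(self, request):
        if request not in self.resident and self.free:
            self.resident[request] = self.free.pop(0)
        return self.resident.get(request)  # None when memory is full

allocator = PageAllocator(2)
print(allocator.allocate((3, 1, 0)))  # first request gets slot (0, 0)
```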

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th style=&quot;text-align: center&quot;&gt;&lt;img src=&quot;/assets/posts/2022-svt-04.webp&quot; alt=&quot;main memory texture&quot; /&gt;&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;&lt;em&gt;“Main memory” texture&lt;/em&gt;&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Once the page is allocated, I need to update the corresponding pagetable entry
to point to the correct physical address. This is done by updating the correct
pixel in the pagetable:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;R &amp;amp; G channels store the physical address.&lt;/li&gt;
  &lt;li&gt;B is unused.&lt;/li&gt;
  &lt;li&gt;A marks the entry as valid (loaded) or not.&lt;/li&gt;
&lt;/ul&gt;
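&lt;p&gt;Encoding one entry then boils down to writing a single RGBA pixel (a sketch, assuming 8-bit channels):&lt;/p&gt;

```python
def pagetable_entry(slot, valid):
    """RGBA bytes for one pagetable pixel: R/G hold the physical page
    address, B is unused, A flags the entry as loaded (valid) or not."""
    slot_x, slot_y = slot
    return bytes([slot_x, slot_y, 0, 255 if valid else 0])

print(list(pagetable_entry((5, 2), True)))  # [5, 2, 0, 255]
```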

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th style=&quot;text-align: center&quot;&gt;&lt;img src=&quot;/assets/posts/2022-svt-05.webp&quot; alt=&quot;pagetable&quot; /&gt;&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;&lt;em&gt;Pagetable texture&lt;/em&gt;&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;h2 id=&quot;rendering-pass&quot;&gt;Rendering pass&lt;/h2&gt;

&lt;p&gt;The final pass is quite similar to a classic pass, except instead of binding
one texture for diffuse, I bind two textures: the pagetable and the main memory.&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;bind the 3D model&lt;/li&gt;
  &lt;li&gt;bind the GLSL program&lt;/li&gt;
  &lt;li&gt;bind the pagetable and main-memory textures.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this stage, I can display a texture too big to fit in RAM &amp;amp; VRAM.&lt;/p&gt;

&lt;div class=&quot;vimeo mt-3 mb-3&quot;&gt;
  &lt;iframe src=&quot;https://player.vimeo.com/video/709491387&quot; loading=&quot;lazy&quot; frameborder=&quot;0&quot; allowvr=&quot;&quot; allowfullscreen=&quot;&quot; mozallowfullscreen=&quot;&quot; webkitallowfullscreen=&quot;&quot;&gt;
  &lt;/iframe&gt;
&lt;/div&gt;

&lt;h1 id=&quot;step-2-mipmapping&quot;&gt;Step 2: MipMapping&lt;/h1&gt;

&lt;p&gt;If you look at the previous video, you’ll notice two issues:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Red lines showing up near the screen edges.&lt;/li&gt;
  &lt;li&gt;Page loads increase when zooming out.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first issue is because texture loading doesn’t block the current pass.
This means I might request a page and not have it ready by the time the final
pass is run. I could render missing pages as black, but I wanted to make them visible.&lt;/p&gt;

&lt;p&gt;The second issue is because I have a 1:1 mapping between the virtual page size
and the texture page size. Zooming out to show the entire plane would require
loading the entire texture, a texture which doesn’t fit in my RAM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The solution to both of these issues is mipmaps&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A page at mipmap level 0 covers &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;page_size&lt;/code&gt; pixels.&lt;/li&gt;
  &lt;li&gt;A page at mipmap level 1 covers &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;page_size * 2&lt;/code&gt; pixels&lt;/li&gt;
  &lt;li&gt;…&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;A page at mipmap level N covers the whole texture&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now I can load mipmap level N by default, and if a requested page is
not available, walk up the mip levels until I find a valid page.&lt;/p&gt;
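&lt;p&gt;That fallback can be sketched as a simple loop (the &lt;code&gt;resident&lt;/code&gt; set and page-coordinate scheme are hypothetical, the top mip level being always loaded):&lt;/p&gt;

```python
def find_resident_page(x, y, level, resident, max_level):
    """Walk up the mip chain until a loaded page covers (x, y).

    `resident` is the set of (level, x, y) pages currently in memory;
    level `max_level` (a single page covering the whole texture) is
    assumed to always be loaded.
    """
    while level < max_level and (level, x, y) not in resident:
        # One parent page covers a 2x2 block of child pages.
        x //= 2
        y //= 2
        level += 1
    return (level, x, y)

resident = {(2, 0, 0), (1, 1, 1)}  # top-level page + one level-1 page
print(find_resident_page(3, 3, 0, resident, max_level=2))  # falls back to (1, 1, 1)
print(find_resident_page(0, 0, 0, resident, max_level=2))  # falls back to (2, 0, 0)
```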

&lt;p&gt;Adding mipmaps also allows me to implement a better memory eviction mechanism: &lt;br /&gt;
I can now replace 4 pages with one page a level above. &lt;br /&gt;
So if I’m low on memory, I can simply downgrade some areas and save 75% of
the memory they used.&lt;/p&gt;
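&lt;p&gt;A sketch of that eviction mechanism, reusing the same hypothetical (level, x, y) page naming: the four children are dropped and their single parent is kept instead.&lt;/p&gt;

```python
def downgrade(resident, level, x, y):
    """Replace the 2x2 block of pages at `level` whose parent is
    (level + 1, x, y) with that single parent page."""
    children = {(level, 2 * x + dx, 2 * y + dy)
                for dx in (0, 1) for dy in (0, 1)}
    resident -= children             # evict the four high-resolution pages
    resident.add((level + 1, x, y))  # keep one low-resolution replacement
    return resident

resident = {(0, 0, 0), (0, 1, 0), (0, 0, 1), (0, 1, 1), (0, 2, 0)}
downgrade(resident, level=0, x=0, y=0)
print(sorted(resident))  # 4 page slots freed, 1 reused: 75% saved for that area
```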

&lt;p&gt;Finally, mipmapping reduces the bandwidth requirements: if an object is far away,
why load its texture in high resolution? A low-resolution page is enough:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;less disk load.&lt;/li&gt;
  &lt;li&gt;less memory usage.&lt;/li&gt;
  &lt;li&gt;lower latency (since there are fewer pages to load).&lt;/li&gt;
&lt;/ul&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th style=&quot;text-align: center&quot;&gt;&lt;img src=&quot;/assets/posts/2022-svt-06.webp&quot; alt=&quot;physicaladdresses with MipMapping&quot; /&gt;&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;&lt;em&gt;Showing physical addresses with MipMapping&lt;/em&gt;&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;h1 id=&quot;step-3-complex-materials&quot;&gt;Step 3: Complex materials&lt;/h1&gt;

&lt;p&gt;The initial renderer used PBR materials. Such a material has not only an albedo
map, but also normal and roughness+metallic maps. To add these new textures, I had
several options:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;New memory textures, new pagetable texture, new pass.
    &lt;ul&gt;
      &lt;li&gt;Simple to implement.&lt;/li&gt;
      &lt;li&gt;Requires an additional pass. This is &lt;strong&gt;not&lt;/strong&gt; OK.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Same memory texture, same pagetable texture.
    &lt;ul&gt;
      &lt;li&gt;Each page in fact contains the N textures sequentially, so when one page is
loaded, N textures are queried and loaded.&lt;/li&gt;
      &lt;li&gt;Easy to implement, but I always have to load all N textures.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Same memory texture, multiple pagetable textures.
    &lt;ul&gt;
      &lt;li&gt;Pagetables are small (16x16 or 32x32), so the overhead is not huge.&lt;/li&gt;
      &lt;li&gt;I can unload some channels for distant objects (normal maps, for example).&lt;/li&gt;
      &lt;li&gt;The drawback is that I now have N*2 texture samples in the shader: one for each
texture and its associated pagetable.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because I like the flexibility of this last option, I chose to implement it.
In the final version, each object has 4 textures:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;memory (1 mip level)&lt;/li&gt;
  &lt;li&gt;albedo pagetable (N mip levels)&lt;/li&gt;
  &lt;li&gt;roughness/metallic pagetable (N mip levels)&lt;/li&gt;
  &lt;li&gt;normal pagetable (N mip levels)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the following demo, page loading is done in the main thread, but limited to
1 page per frame, making the loading process very visible.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Bottom-left graph shows the main memory.&lt;/li&gt;
  &lt;li&gt;Other graphs show the pagetables and their corresponding mip-levels.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&quot;vimeo mt-3 mb-3&quot;&gt;
  &lt;iframe src=&quot;https://player.vimeo.com/video/706176808&quot; loading=&quot;lazy&quot; frameborder=&quot;0&quot; allowvr=&quot;&quot; allowfullscreen=&quot;&quot; mozallowfullscreen=&quot;&quot; webkitallowfullscreen=&quot;&quot;&gt;
  &lt;/iframe&gt;
&lt;/div&gt;

&lt;h1 id=&quot;page-request--subsampling-random-and-frame-budget&quot;&gt;Page requests: subsampling, randomness and frame budget&lt;/h1&gt;

&lt;p&gt;Every frame, I need to run the initial pass to check texture visibility.
Reading this framebuffer back on the CPU between frames is quite slow, and for
a 4K output it is prohibitively expensive.&lt;/p&gt;

&lt;p&gt;The good news is: I don’t need a 4K framebuffer for this! Each page
covers N pixels, so we can simply shrink the framebuffer and hope our
pages will still be requested!&lt;/p&gt;

&lt;p&gt;The demo above uses a 32x32 framebuffer, which is &lt;strong&gt;very&lt;/strong&gt; small. Done
naïvely, this wouldn’t work: some pages would fall between 2 rendered
pixels and never be loaded.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th style=&quot;text-align: center&quot;&gt;&lt;img src=&quot;/assets/posts/2022-svt-07.webp&quot; alt=&quot;missing pages&quot; /&gt;&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;&lt;em&gt;8x8 framebuffer, no jitter.&lt;/em&gt;&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;One way to solve this is to add some jitter to the initial pass: the page-request
viewpoint is not exactly the camera’s position, but the camera’s position plus
some random noise.&lt;/p&gt;

&lt;p&gt;This way, we can increase coverage without increasing the framebuffer size.&lt;/p&gt;
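&lt;p&gt;The idea can be sketched in a simplified 1D setup: a low-resolution request pass alone only ever samples every Nth page, but jittering the sample positions makes the union over several frames touch (almost) all of them. The sizes and jitter amplitude below are made up for illustration.&lt;/p&gt;

```python
import random

def requested_pages(framebuffer_size, page_count, jitter):
    """Pages hit by one low-resolution request pass over a 1D texture."""
    pages = set()
    for i in range(framebuffer_size):
        # Sample position in [0, 1], optionally offset by random jitter.
        u = (i + 0.5 + random.uniform(-jitter, jitter)) / framebuffer_size
        pages.add(min(int(u * page_count), page_count - 1))
    return pages

random.seed(0)
page_count = 32
covered = set()
for frame in range(64):  # accumulate coverage across frames
    covered |= requested_pages(framebuffer_size=8, page_count=page_count, jitter=0.5)

no_jitter = requested_pages(8, page_count, jitter=0.0)
print(len(no_jitter), len(covered))  # 8 pages without jitter, far more with it
```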

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th style=&quot;text-align: center&quot;&gt;&lt;img src=&quot;/assets/posts/2022-svt-08.webp&quot; alt=&quot;missing pages&quot; /&gt;&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;&lt;em&gt;8x8 framebuffer, jitter.&lt;/em&gt;&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot;&gt;
      &lt;p&gt;See Intel Architectures Developer’s Manual: Vol. 3A, Chapter 3 &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;
</description>
				<link>https://www.studiopixl.com/2022-04-27/sparse-virtual-textures</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2022-04-27/sparse-virtual-textures</guid>
				<pubDate>Wed, 27 Apr 2022 00:00:00 +0000</pubDate>
				
					<category>graphics</category>
				
			</item>
		
			<item>
				<title>Noise, blur and neural networks</title>
<description>&lt;p&gt;I never experimented with machine learning or denoising. I guess
having obscure matrices combined together to produce some result scared me a
bit… Surprising for someone who loves computer graphics… 🙃&lt;br /&gt;
After failing an interview for an ML-related position (surprising?), I thought:
enough is enough, time to play catch-up!&lt;/p&gt;

&lt;p&gt;For this project, I started with the basics: &lt;a href=&quot;https://www.coursera.org/learn/machine-learning&quot;&gt;Andrew Ng’s ML course&lt;/a&gt;.
After a couple of days &lt;em&gt;— and obviously becoming the greatest ML expert in the
world —&lt;/em&gt; I decided to tackle the easiest problem ever: &lt;strong&gt;image denoising&lt;/strong&gt;!&lt;/p&gt;

&lt;h1 id=&quot;the-goal&quot;&gt;The goal&lt;/h1&gt;

&lt;p&gt;Denoising is a complex field, and some very bright people are making a career
out of it. Not my goal!&lt;/p&gt;

&lt;p&gt;Here I’ll explore some classic denoising techniques, implement them, and,
once familiar with some of the problems, build a custom model to improve the results.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The input:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2022-noise-00.webp&quot; alt=&quot;challenge image&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I believe this should be a good candidate:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;has a flat shape to check edge preservation.&lt;/li&gt;
  &lt;li&gt;has some “noise” to keep (foliage).&lt;/li&gt;
  &lt;li&gt;has some small structured details (steel beams).&lt;/li&gt;
  &lt;li&gt;has smooth gradients (sky).&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;step-1---sanity-check&quot;&gt;Step 1 - sanity check&lt;/h1&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2022-noise-01.webp&quot; alt=&quot;pixel line&quot; /&gt;&lt;/p&gt;

&lt;p&gt;From Wikipedia:&lt;/p&gt;
&lt;blockquote&gt;
  &lt;p&gt;noise is a general term for unwanted […] modifications that a signal may
suffer&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The graph above represents a line of pixels from a smooth gradient.
The 2 pixels in red are bad: they interrupt the smoothness of
the curve, and are thus perceived as noise.&lt;/p&gt;

&lt;p&gt;How can we remove such outliers? Averaging! Each pixel value is
averaged with its neighbors. In this case, this helps reduce the
perceptible noise.&lt;/p&gt;

&lt;div class=&quot;language-csharp highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  &lt;span class=&quot;k&quot;&gt;foreach&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;y&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;image&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;neighbors&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;extract_window_around&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;y&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;window_size&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;m&quot;&gt;10&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;res&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;average&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;neighbors&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;set&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;y&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;res&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2022-noise-02.webp&quot; alt=&quot;smooth, before &amp;amp; after&quot; /&gt;&lt;/p&gt;
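&lt;p&gt;The pseudocode above can be run as a small pure-Python sketch, in 1D to match the pixel-line example (the sample values are made up):&lt;/p&gt;

```python
def mean_filter(pixels, window_size):
    """Replace each pixel by the average of its neighborhood."""
    out = []
    half = window_size // 2
    for i in range(len(pixels)):
        window = pixels[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

# A smooth ramp with two "bad" pixels, as in the graph above.
line = [10, 12, 14, 80, 18, 20, 75, 24, 26, 28]
smoothed = mean_filter(line, window_size=3)
print([round(v) for v in smoothed])
```

The outlier peaks shrink, but every neighbor of an outlier is dragged up with it, which is exactly the edge-smearing problem discussed next.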

&lt;p&gt;But on a real image, that’s terrible…&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2022-noise-03.webp&quot; alt=&quot;real, before &amp;amp; after&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The reason for this poor performance is that we don’t discriminate valid details
from noise. We lose our edges, and all details are lost.&lt;/p&gt;

&lt;h2 id=&quot;step-3---better-average---yuv-vs-rgb&quot;&gt;Step 3 - Better average - YUV vs RGB&lt;/h2&gt;

&lt;p&gt;The previous image was generated by averaging RGB values using a 10-pixel
sliding window. Because it averaged RGB values, it mixed colors.
As a result, edges were blurred in a very perceptible way, leading to an
unpleasant result.&lt;/p&gt;

&lt;p&gt;YUV is another color representation: instead of splitting the signal into red, green,
and blue, it separates color from luminosity.
Color is represented by two chrominance channels, and luminosity is a single
linear value.&lt;/p&gt;

&lt;p&gt;If we look at the sky, the noise doesn’t seem to alter the color much, only
the brightness of the blue. Averaging with the same window, but only on
the luminance component, should therefore give better results:&lt;/p&gt;
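&lt;p&gt;A per-pixel sketch of that idea in pure Python, using the standard BT.601 luma weights as an approximation of the Y channel (the sample &quot;sky&quot; pixels are made up):&lt;/p&gt;

```python
def rgb_to_yuv(r, g, b):
    """BT.601 conversion: Y is luminance, U/V carry the color."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

def yuv_to_rgb(y, u, v):
    r = y + v / 0.877
    b = y + u / 0.492
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

def smooth_luma(pixels, window):
    """Average only the Y channel; keep each pixel's chroma untouched."""
    yuv = [rgb_to_yuv(*p) for p in pixels]
    half = window // 2
    out = []
    for i, (_, u, v) in enumerate(yuv):
        ys = [yy for yy, _, _ in yuv[max(0, i - half):i + half + 1]]
        out.append(yuv_to_rgb(sum(ys) / len(ys), u, v))
    return out

# Noisy sky: same blue hue, fluctuating brightness.
sky = [(90, 130, 200), (110, 150, 220), (70, 110, 180), (100, 140, 210)]
smoothed = smooth_luma(sky, window=3)
```

Only the brightness fluctuation is averaged out; the hue of each pixel stays untouched, which is why edges between differently colored regions survive better than with RGB averaging.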

&lt;p&gt;&lt;img src=&quot;/assets/posts/2022-noise-04.webp&quot; alt=&quot;yuv, smooth&quot; /&gt;
&lt;img src=&quot;/assets/posts/2022-noise-05.webp&quot; alt=&quot;yuv, real&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;step-4---selective-average&quot;&gt;Step 4 - selective average&lt;/h2&gt;

&lt;p&gt;Using YUV vs RGB helped: the sky looks fine, and the green edges look sharper.
Sadly, the rest of the image looks &lt;strong&gt;bad&lt;/strong&gt;.
The reason is that I still use the same window size for the sky and the tower.&lt;/p&gt;

&lt;p&gt;I can improve that solution using a new input: an edge-intensity map. Using the
well-known &lt;a href=&quot;https://en.wikipedia.org/wiki/Sobel_operator&quot;&gt;Sobel operator&lt;/a&gt; I can
generate a map of the areas to &lt;strong&gt;avoid&lt;/strong&gt;.&lt;/p&gt;

&lt;div class=&quot;language-csharp highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  &lt;span class=&quot;n&quot;&gt;edge_map&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;sobel&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;foreach&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;y&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;image&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;window_size&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;lerp&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;m&quot;&gt;10&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;edge_map&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;at&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;y&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;))&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;neighbors&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;extract_window_around&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;y&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;window_size&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;res&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;average&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;neighbors&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;set&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;y&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;res&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2022-noise-06.webp&quot; alt=&quot;edge, real&quot; /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;✅ The square edges are preserved.&lt;/li&gt;
  &lt;li&gt;✅ The sky blur is gone.&lt;/li&gt;
  &lt;li&gt;✅ The Eiffel Tower’s edges seem preserved.&lt;/li&gt;
  &lt;li&gt;❌ Artifacts are visible in the sky (top-right).&lt;/li&gt;
  &lt;li&gt;❌ The foliage texture is lost.&lt;/li&gt;
  &lt;li&gt;❌ The metallic structure lost precision.&lt;/li&gt;
  &lt;li&gt;❌ The grass mowing pattern is completely lost.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;step-5---ml-based-noise-detection&quot;&gt;Step 5 - ML-based noise detection&lt;/h2&gt;

&lt;p&gt;In the previous step, I tried to discriminate between areas to blur and areas to keep as-is.
The issue is my discrimination criterion: edges.
I was focusing on keeping edges, but lost good noise like the foliage.&lt;/p&gt;

&lt;p&gt;So now I wonder, can I split good noise from bad noise using a classification
model?&lt;/p&gt;

&lt;div class=&quot;language-csharp highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  &lt;span class=&quot;k&quot;&gt;foreach&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;y&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;image&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;window&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;extract_window_around&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;y&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;window_size&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;bad_noise_probability&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;run_model&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;window&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;blur_window_size&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;lerp&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;m&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;10&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;bad_noise_probability&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;res&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;average_pixels&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;y&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;blur_window_size&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;set&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;y&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;res&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;For this model, I tried to go with a naïve approach:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;select a set of clean images.&lt;/li&gt;
  &lt;li&gt;generate their noisy counterparts in an image editor.&lt;/li&gt;
  &lt;li&gt;split these images into 16x16-pixel chunks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2022-noise-07.webp&quot; alt=&quot;model training set extraction&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Those chunks form my training &amp;amp; test sets (6000 and 600 items). The
goal is now, from a 16x16-pixel window, to determine whether the pixel belongs
to noise or to some detail.&lt;/p&gt;

&lt;p&gt;Then I iterate over the pixels, extract the 16x16 window around each, run
the model on it, and use the resulting probability to select my blur window.
My guess is that we should now be able to differentiate foliage from sky noise.&lt;/p&gt;

&lt;p&gt;Here is the model output: in red the parts to clean, in black the parts to
keep.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2022-noise-08.webp&quot; alt=&quot;model output&quot; /&gt;&lt;/p&gt;

&lt;p&gt;And here is the output:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2022-noise-09.webp&quot; alt=&quot;final result&quot; /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;✅ Edges are preserved.&lt;/li&gt;
  &lt;li&gt;✅ Steel structure is clear in the middle.&lt;/li&gt;
  &lt;li&gt;✅ Left foliage looks textured.&lt;/li&gt;
  &lt;li&gt;❌ Right foliage shadows are still noisy.&lt;/li&gt;
  &lt;li&gt;❌ Some areas of the steel structure are blurred.&lt;/li&gt;
  &lt;li&gt;❌ Sky has artifacts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The model training set is composed of only ~6000 chunks extracted from 4 images
(2 good, 2 noisy). Training the same model on a better dataset might be a first
solution to improve the noise classification.&lt;/p&gt;

&lt;p&gt;This result seems better than the bilateral filtering, so I guess that’s enough
for a first step into the ML world.
I will stop there for now, and move on to the next project!&lt;/p&gt;
</description>
				<link>https://www.studiopixl.com/2022-04-13/denoising</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2022-04-13/denoising</guid>
				<pubDate>Wed, 13 Apr 2022 00:00:00 +0000</pubDate>
				
					<category>graphics</category>
				
			</item>
		
			<item>
				<title>CTF Write-up: Sogeti 2019 - BadVM</title>
<description>&lt;p&gt;Some friends were registered for this CTF, and since I had some days off, I
decided to work a bit on one of the RE exercises.&lt;/p&gt;

&lt;p&gt;The binary is called &lt;strong&gt;BadVM&lt;/strong&gt;:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;[nathan@Jyn badvm]$ ./badvm-original
### BadVM 0.1 ###

Veuillez entrer le mot de passe:
toto
Ca mouline ...
Plus qu&apos;un instant ... On avait la réponse depuis le début en faite :&amp;gt;
Perdu ...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It is a stripped, 64-bit PIE ELF binary. Time to start Binary Ninja.
This binary has no anti-debug or packing techniques, just some calls to
&lt;em&gt;sleep&lt;/em&gt;.
Once these calls are NOPed, we can start reversing the VM.&lt;/p&gt;

&lt;p&gt;The VM is initialized in the function I called &lt;strong&gt;load_vm&lt;/strong&gt; (0xde6).
Then, the function at 0xd5f is called, let’s call it &lt;strong&gt;vm_trampoline&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This function chooses the next instruction to execute, loads its address
in &lt;em&gt;rax&lt;/em&gt;, and calls it. &lt;strong&gt;vm_trampoline&lt;/strong&gt; is called at the end of each
instruction, so each instruction adds a new entry to the backtrace.&lt;/p&gt;

&lt;p&gt;This means that when returning from the first call to &lt;strong&gt;vm_trampoline&lt;/strong&gt;, we can
read the result and return it.
This takes us back to &lt;strong&gt;load_vm&lt;/strong&gt;, where the result is checked.&lt;/p&gt;

&lt;p&gt;In case of an invalid character in the password, we have an early exit. The input
is checked linearly, with no hashing or anything, so &lt;strong&gt;instruction counting&lt;/strong&gt; works
well.&lt;/p&gt;
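&lt;p&gt;Instruction counting can be sketched as follows. The &lt;code&gt;count_vm_instructions&lt;/code&gt; oracle and the costs inside it are a toy stand-in for the real breakpoint counter (which hits &lt;strong&gt;vm_trampoline&lt;/strong&gt; once per VM instruction), and &lt;code&gt;SECRET&lt;/code&gt; stands in for the real password:&lt;/p&gt;

```python
SECRET = "S0g3t1"  # stand-in for the real password, checked char by char

def count_vm_instructions(guess):
    """Toy oracle: the VM executes a few instructions per matching
    character, then exits early at the first mismatch."""
    matching = 0
    for a, b in zip(guess, SECRET):
        if a != b:
            break
        matching += 1
    return 5 + 3 * matching  # setup cost + per-character cost

ALPHABET = "abcdefghijklmnopqrstuvwxyz" \
           "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def recover_password(length):
    password = ""
    for _ in range(length):
        # The candidate that executes the most instructions got one
        # more character past the linear check.
        password += max(ALPHABET,
                        key=lambda c: count_vm_instructions(password + c))
    return password

print(recover_password(len(SECRET)))
```

Each position costs one pass over the alphabet instead of brute-forcing the whole password, which is why the early exit is fatal here.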

&lt;p&gt;Since I was on holiday, I decided to experiment a bit with &lt;strong&gt;lldb&lt;/strong&gt;, and
instrument this VM using its API.&lt;/p&gt;

&lt;h2 id=&quot;reversing-the-vm&quot;&gt;Reversing the VM&lt;/h2&gt;

&lt;p&gt;This VM runs on a 0x300-byte buffer. Some points of interest:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;0x4: register A (rip)&lt;/li&gt;
  &lt;li&gt;0x5: register B&lt;/li&gt;
  &lt;li&gt;0xFF: register C (result)&lt;/li&gt;
  &lt;li&gt;0x2fc: register D&lt;/li&gt;
  &lt;li&gt;0x2fe: register E (instruction mask?)&lt;/li&gt;
  &lt;li&gt;0x32: password buffer (30 bytes)&lt;/li&gt;
  &lt;li&gt;0x2b: data buffer (xor data, 30 bytes)&lt;/li&gt;
  &lt;li&gt;0x200: data start (the binary’s .data is copied into this area)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instructions are encoded as follows:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2019-badvm-00.webp&quot; alt=&quot;opcode&quot; /&gt;&lt;/p&gt;

&lt;p&gt;To select the instruction, the VM contains a jump-table.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2019-badvm-01.webp&quot; alt=&quot;jump-table&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Here is one of the instructions (a ~GOTO):&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2019-badvm-02.webp&quot; alt=&quot;instruction&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Final note: each instruction/function has the following prototype:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2019-badvm-03.webp&quot; alt=&quot;prototype&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;instrumenting-using-lldb&quot;&gt;Instrumenting using LLDB&lt;/h2&gt;

&lt;p&gt;This VM does not check its own code, so we can freely use software
breakpoints. The code is not rewritten, so offsets are preserved.
This allows us to simply use LLDB’s Python API to instrument and analyse
the VM’s behavior.&lt;/p&gt;

&lt;p&gt;First step, create an lldb instance:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;init&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;():&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;dbg&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;lldb&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;SBDebugger&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;Create&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;dbg&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;SetAsync&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;True&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;console&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;dbg&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;GetCommandInterpreter&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;

    &lt;span class=&quot;n&quot;&gt;error&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;lldb&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;SBError&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;target&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;dbg&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;CreateTarget&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;./badvm&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;None&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;None&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;bp&quot;&gt;True&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;error&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;# check error
&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;info&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;lldb&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;SBLaunchInfo&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;bp&quot;&gt;None&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;process&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;target&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;Launch&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;info&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;error&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;nf&quot;&gt;print&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;[LLDB] process launched&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now we can register our breakpoints. Since vm_trampoline is called before
each instruction, we only need this one:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;    &lt;span class=&quot;n&quot;&gt;target&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;BreakpointCreateByAddress&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;p_offset&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;VM_LOAD_BRKP_OFFSET&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now we can run. To interact with the binary, we can use LLDB’s events:
by registering a listener, we are notified each time the process stops
or a breakpoint is hit.&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;listener&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;dbg&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;GetListener&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;event&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;lldb&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;SBEvent&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;not&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;listener&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;WaitForEvent&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;event&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;continue&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;event&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;GetType&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;EVENT_STATE_CHANGED&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;# handle_event(process, program_offset, vm_memory, event)
&lt;/span&gt;    &lt;span class=&quot;k&quot;&gt;continue&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;regs&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;get_gprs&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;get_frame&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;process&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;))&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;regs&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;rip&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;program_offset&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;address&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;nf&quot;&gt;print&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;break location: 0x{:x} (0x{:x})&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;format&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
          &lt;span class=&quot;n&quot;&gt;regs&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;rip&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;program_offset&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;regs&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;rip&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]))&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Reading memory or registers is just as simple:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;process&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;ReadUnsignedFromMemory&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;vm_memory&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;err&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;process&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;selected_thread&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;frame&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;frame_number&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;].&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;registers&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;# registers[0] contains general purpose registers
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now we can implement a pretty-printer to get “readable” instructions.
With everything put together, we can dump the execution trace:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;mov [0x00], 0xff
mov [0x01], 0x01
mov tmp, [0x00]  	# tmp=0xff
mov [tmp], [0x01]	# src=0x1
mov [0x00], 0x0b
mov [0x01], 0x1d
mov tmp, [0x00]  	# tmp=0xb
mov [tmp], [0x01]	# src=0x1d
mov [0x01], 0x0b
mov tmp, [0x01]  	# tmp=0xb
mov [0x00], [tmp]	# [tmp]=0x1d
mov r5, [0x00]
sub r5, [0x0a]   	# 0x1d - 0x0 = 0x1d
if r5 == 0:
    mov rip, 0x2d
mov [0x01], 0x0a
[...]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now, we can reverse the program running in the VM:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;validate&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;password&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;xor_data&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;len&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;password&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;len&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;xor_data&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;

    &lt;span class=&quot;n&quot;&gt;D&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;i&lt;/span&gt; &lt;span class=&quot;ow&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;range&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;len&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;xor_data&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)):&lt;/span&gt;
        &lt;span class=&quot;n&quot;&gt;tmp&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;D&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;mh&quot;&gt;0xAC&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;%&lt;/span&gt; &lt;span class=&quot;mh&quot;&gt;0x2D&lt;/span&gt;
        &lt;span class=&quot;n&quot;&gt;D&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tmp&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;xor_data&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;i&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;chr&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;ord&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;password&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;i&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;])&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;^&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tmp&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
            &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;i&lt;/span&gt;

    &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;len&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;xor_data&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
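
&lt;p&gt;Since XOR is an involution, knowing the keystream is enough to invert the check and
recover the expected password. A minimal sketch (here, xor_data stands for the
string dumped from the VM’s memory):&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def keystream(n):
    # Reproduce the VM&apos;s additive sequence: D = (D + 0xAC) % 0x2D
    d = 0
    for _ in range(n):
        d = (d + 0xAC) % 0x2D
        yield d

def recover(xor_data):
    # validate() accepts password[i] when xor_data[i] == chr(ord(password[i]) ^ t)
    return &quot;&quot;.join(chr(ord(c) ^ t) for c, t in zip(xor_data, keystream(len(xor_data))))
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;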

&lt;p&gt;And we get the flag:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;SCE{1_4m_not_4n_is4_d3s1yn3r}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This VM has no anti-debug, no packing, nothing special. But it was a fun
binary to reverse.
To instrument the VM, LLDB is useful, but using &lt;strong&gt;DynamoRIO&lt;/strong&gt; would be a more
elegant method.&lt;/p&gt;
</description>
				<link>https://www.studiopixl.com/2019-03-01/ctf-writeup-badvm</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2019-03-01/ctf-writeup-badvm</guid>
				<pubDate>Fri, 01 Mar 2019 00:00:00 +0000</pubDate>
				
					<category>ctf</category>
				
					<category>lse</category>
				
			</item>
		
			<item>
				<title>3D engine - Parallax Mapping with self-shadows</title>
				<description>&lt;p&gt;Working on my 3D game engine is the perfect occasion to reimplement classic
algorithms. On today’s menu: &lt;strong&gt;self-shadowed steep parrallax-mapping&lt;/strong&gt;
First step, get the classic steep parrallax-mapping.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2019-parrallax-01.webp&quot; alt=&quot;parrallax final result&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Here are two good links for implementing this algorithm:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://learnopengl.com/Advanced-Lighting/Parallax-Mapping&quot;&gt;Learn OpenGL - Parallax Mapping&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://graphics.cs.brown.edu/games/SteepParallax/&quot;&gt;Morgan McGuire’s article&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Steep parallax mapping already gives a pretty good result (10 samples):&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2019-parrallax-03.webp&quot; alt=&quot;parrallax closeup 1&quot; /&gt;
&lt;img src=&quot;/assets/posts/2019-parrallax-02.webp&quot; alt=&quot;parrallax closeup 2&quot; /&gt;&lt;/p&gt;

&lt;p&gt;But something is missing. Let’s implement self-shadows.&lt;/p&gt;

&lt;p&gt;Self-shadows are only computed for directional lights. The algorithm is very
simple:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;convert the light direction to tangent space&lt;/li&gt;
  &lt;li&gt;compute steep parallax mapping&lt;/li&gt;
  &lt;li&gt;from the resulting coordinate, ray-march towards the light&lt;/li&gt;
  &lt;li&gt;if there is an intersection, reduce exposure&lt;/li&gt;
&lt;/ul&gt;
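
&lt;p&gt;The ray-march above can be sketched in Python over a height field. This is only an
illustration of the idea: the function name, step count, scale and the 0.3 attenuation
are arbitrary choices here, not the actual shader’s constants.&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def shadow_factor(height, uv, light_ts, steps=4, height_scale=0.4):
    # height: callable (u, v) -&gt; surface height in [0, 1]
    # light_ts: light direction in tangent space, z pointing away from the surface
    u, v = uv
    h = height(u, v)
    lx, ly, lz = light_ts
    du, dv, dh = lx * height_scale / steps, ly * height_scale / steps, lz / steps
    for _ in range(steps):
        u, v, h = u + du, v + dv, h + dh
        if h &gt;= 1.0:          # left the height field: nothing can occlude us
            break
        if height(u, v) &gt; h:  # the field is above the ray: self-shadowed
            return 0.3        # reduce exposure
    return 1.0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;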

&lt;p&gt;And then, &lt;em&gt;TADAA&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://gist.github.com/Keenuts/969a1d412a00c0d044693add2355dff1&quot;&gt;Shader code available here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(&lt;strong&gt;2&lt;/strong&gt; steps are more than enough for this part.)&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2019-parrallax-01.webp&quot; alt=&quot;parrallax final result&quot; /&gt;&lt;/p&gt;
</description>
				<link>https://www.studiopixl.com/2019-02-28/game-engine-parrallax</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2019-02-28/game-engine-parrallax</guid>
				<pubDate>Thu, 28 Feb 2019 00:00:00 +0000</pubDate>
				
					<category>graphics</category>
				
					<category>lse</category>
				
			</item>
		
			<item>
				<title>Vulkan-ize Virglrenderer - experiment</title>
				<description>&lt;p&gt;&lt;a href=&quot;https://virgil3d.github.io/&quot;&gt;Virglrenderer&lt;/a&gt; provides OpenGL acceleration to a guest running on QEMU.&lt;/p&gt;

&lt;p&gt;My current GSoC project is to add support for the Vulkan API.&lt;/p&gt;

&lt;p&gt;Vulkan is drastically different from OpenGL, so this addition is not straightforward.
My current idea is to add an alternative path for Vulkan.
Currently, two different states are kept: one for OpenGL, and one for Vulkan.
Commands go either to the OpenGL or to the Vulkan front-end.&lt;/p&gt;

&lt;p&gt;For now, only compute shaders are supported.
The work is divided into two parts: a Vulkan ICD in MESA, and a new front-end for Virgl and
vtest.&lt;/p&gt;

&lt;p&gt;If you have any feedback, do not hesitate!&lt;/p&gt;

&lt;p&gt;This experiment can be tested using this &lt;a href=&quot;https://github.com/Keenuts/vulkan-virgl&quot;&gt;repository&lt;/a&gt;.
If you have an Intel driver in use, you might be able to use the Dockerfile provided.&lt;/p&gt;

&lt;p&gt;Each part is also available independently:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/Keenuts/mesa&quot;&gt;MESA&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/Keenuts/virglrenderer&quot;&gt;VirglRenderer&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/Keenuts/vulkan-compute&quot;&gt;Vulkan compute sample&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
				<link>https://www.studiopixl.com/2018-07-24/vulkan-ize-virgl-experiment</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2018-07-24/vulkan-ize-virgl-experiment</guid>
				<pubDate>Tue, 24 Jul 2018 00:00:00 +0000</pubDate>
				
					<category>libvirt</category>
				
					<category>gsoc</category>
				
					<category>graphics</category>
				
					<category>lse</category>
				
			</item>
		
			<item>
				<title>GSoC 2018 - Vulkan-ize Virglrenderer</title>
				<description>&lt;p&gt;Several months ago started the GSoC 2018. Once again, I found a project which
got my attention.&lt;/p&gt;

&lt;center&gt; Vulkan-ize VirglRenderer &lt;/center&gt;
&lt;hr /&gt;

&lt;p&gt;&lt;a href=&quot;https://virgil3d.github.io/&quot;&gt;Virglrenderer&lt;/a&gt; is a library designed to provide
QEMU guests with OpenGL acceleration. It is composed of several components:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;a MESA driver, on the guest, which generates Virgl commands&lt;/li&gt;
  &lt;li&gt;a library, on the host, which takes Virgl commands and generates OpenGL calls
from them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to read more: &lt;a href=&quot;/2017-08-27/3d-acceleration-using-virtio.html&quot;&gt;3D Acceleration using VirtIO&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This library was built with OpenGL in mind. Today, Vulkan is properly
supported, and is becoming the new standard. It might be time to bring Vulkan to
QEMU’s guests!&lt;/p&gt;

&lt;p&gt;To do so, we will need to work on two components:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;A Vulkan ICD. Writing one for MESA sounds like a good idea.&lt;/li&gt;
  &lt;li&gt;A Vulkan back-end for Virglrenderer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now we face the first issue: Vulkan is not designed with abstraction in
mind. The days of the old glBegin/glVertex are pretty much over.&lt;/p&gt;

&lt;p&gt;If we want to avoid any unnecessary abstraction, we cannot easily reduce the
number of calls made to the API.
Thus, the vast majority of the Vulkan calls will be forwarded to the host.
However, there are some areas in which we can bend the rules a bit.&lt;/p&gt;

&lt;hr /&gt;
&lt;h3 id=&quot;the-rest-of-this-post-contains-the-same-content-as-the-announce-email-virgl-ml&quot;&gt;The rest of this post contains the same content as the announce email (virgl ML).&lt;/h3&gt;

&lt;h1 id=&quot;project-status&quot;&gt;Project status&lt;/h1&gt;

&lt;ul&gt;
  &lt;li&gt;Several Vulkan objects can be created.&lt;/li&gt;
  &lt;li&gt;Memory can be mapped and altered on the client.&lt;/li&gt;
  &lt;li&gt;Changes are written/read to/from the server on flush/invalidation.&lt;/li&gt;
  &lt;li&gt;Basic features for command buffers are supported.&lt;/li&gt;

&lt;p&gt;As a result, a sample compute shader can be run, and the results can
be read back.&lt;/p&gt;

&lt;p&gt;I only use vtest for now. The client part lives in mesa/src/virgl.&lt;/p&gt;

&lt;h1 id=&quot;current-behavior&quot;&gt;Current behavior&lt;/h1&gt;

&lt;p&gt;To compile virglrenderer with Vulkan, the --with-vulkan option is needed.
Running the server as-is does not enable Vulkan, and for now, Vulkan
cannot be used in parallel with OpenGL (Issue #1).
To enable Vulkan, the environment variable VTEST_USE_VULKAN must be set.&lt;/p&gt;

&lt;h2 id=&quot;initialization&quot;&gt;Initialization:&lt;/h2&gt;

&lt;p&gt;The client driver is registered as a classic Vulkan ICD.
When the loader calls icdNegotiateLoaderICDInterfaceVersion, the driver
connects to the server.
On failure, the driver reports itself as an invalid driver.&lt;/p&gt;

&lt;p&gt;Once connected, the ICD will fetch and cache all physical devices.
It will also fetch information about queues, memory, and so on.
Physical devices are then exposed as virtual GPUs.
Memory areas are shown as-is, except for the
&lt;em&gt;VK_MEMORY_PROPERTY_HOST_COHERENT&lt;/em&gt; bit, which is disabled.
This forces the application to notify us of every modification made to a
mapped memory range.&lt;/p&gt;

&lt;p&gt;The object creation part relies heavily on API-Forwarding. For now, I
don’t see how I could avoid that.&lt;/p&gt;

&lt;h2 id=&quot;memory-transfers&quot;&gt;Memory transfers&lt;/h2&gt;

&lt;p&gt;Once basic objects are created, the client will ask to map some
memory. For now, nothing clever is done.
The ICD provides a buffer. On flush, a transfer command is issued.
Virglrenderer will then map the corresponding memory region,
write/read, and unmap it.
A memory manager could be used on the server in the future to avoid
mapping/unmapping regions each time a transfer occurs.&lt;/p&gt;
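
&lt;p&gt;The shape of such a flush-time transfer command can be sketched as a small
header followed by the dirty bytes. The opcode value and field layout below are
made up for illustration; they are not the actual vtest wire format.&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;import struct

VCMD_TRANSFER_WRITE = 0x20  # hypothetical opcode

def encode_transfer_write(resource_id, offset, data):
    # Header: opcode, resource handle, offset in the mapped region, payload size.
    header = struct.pack(&quot;&lt;IIII&quot;, VCMD_TRANSFER_WRITE, resource_id, offset, len(data))
    return header + data
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;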

&lt;h2 id=&quot;commands-and-execution&quot;&gt;Commands and execution&lt;/h2&gt;

&lt;p&gt;Command pool creation is forwarded to the server. For now, a command
buffer is attached to its pool.
To retrieve a command buffer from a handle, I need to know which pool
it came from (Issue #2).
Command buffer creation is also forwarded to the server.&lt;/p&gt;

&lt;p&gt;Command buffer state is managed on the client. Each vkCmd* call
modifies an internal state.
Once vkEndCommandBuffer is called, the state is sent to the server.
The server then calls the corresponding vkCmd* functions to match
the retrieved state.&lt;/p&gt;
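
&lt;p&gt;The record/replay idea can be sketched like this. The class and function names
are illustrative, not the actual Virglrenderer code:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;class CommandBufferState:
    # Client side: each vkCmd* call appends a record instead of doing work.
    def __init__(self):
        self.commands = []

    def record(self, name, *args):
        self.commands.append((name, args))

    def end(self):
        # vkEndCommandBuffer: this list is what gets serialized to the server.
        return list(self.commands)

def replay(commands, dispatch):
    # Server side: call the matching vkCmd* entry point for each record.
    for name, args in commands:
        dispatch[name](*args)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;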

&lt;h2 id=&quot;code-generation&quot;&gt;Code generation&lt;/h2&gt;

&lt;p&gt;Vulkan entry points are generated at compile time, heavily inspired
by Intel’s entry-point generation.
However, since object creation relies on API-forwarding, I started to
work on a code generator for these functions.&lt;/p&gt;

&lt;p&gt;Using a JSON description, the interesting information is outlined. A Python
script then generates the functions used to forward object creation to the
vtest pipe.
Even though the Vulkan API seems pretty consistent, some specific
cases and time constraints forced me to abandon it.&lt;/p&gt;

&lt;p&gt;This script is still available in the mesa/src/virgl/tools and
virglrenderer/tools folder, but is lacking features.
Also, since I had different needs on both sides of vtest, scripts
diverge a lot.
The most recent version is the Virglrenderer one. It’s a second
iteration, and it might be easier to work with.&lt;/p&gt;

&lt;p&gt;In the current state, I use it to generate a skeleton for vtest
functions, and then fix the implementation by hand.
In the future, it could save us some time, especially if we use the
same protocol for VirtIO commands.&lt;/p&gt;

&lt;h2 id=&quot;issues&quot;&gt;Issues&lt;/h2&gt;

&lt;h4 id=&quot;1-virglrenderer-vulkan-cannot-be-used-next-to-opengl&quot;&gt;1: (Virglrenderer) Vulkan cannot be used next to OpenGL.&lt;/h4&gt;

&lt;p&gt;There is no reason for it except a badly thought-out integration of the Vulkan
initialization part into virglrenderer.&lt;/p&gt;

&lt;h4 id=&quot;2-virglrenderer-command-buffers-are-scattered-into-several-pools&quot;&gt;2: (Virglrenderer) Command buffers are scattered into several pools&lt;/h4&gt;

&lt;p&gt;Command buffers are scattered across the several pools the client created.
To fetch a command buffer’s Vulkan handle, I need to
first fetch the corresponding pool from a logical device, then fetch
the command buffer.
Since VirtIO and vtest provide a FIFO, maybe we could drop the
command pool creation forwarding, use only one pool per instance, and
thus simplify command buffer lookups.&lt;/p&gt;

&lt;h4 id=&quot;3-mesa-vtest-and-virtio-switch-is-not-straightforward-right-now&quot;&gt;3: (MESA) Vtest and VirtIO switch is not straightforward right now.&lt;/h4&gt;
&lt;p&gt;An
idea could be to add a layer between the vgl_vk* functions and vtest.
The vgl_vk* functions would still manage the state of the ICD.
The mid-layer would convert handles and payloads to a common protocol
for both VirtIO and vtest (both could use vgl handles and some metadata).
Then, a back-end function would choose between vtest and VirtIO.&lt;/p&gt;

&lt;p&gt;The handles could either be forwarded as-is (the vtest case),
or translated to real Virgl handles in the case of a kernel driver,
which could do the translation, or check them. But the metadata should
not change.&lt;/p&gt;

&lt;h4 id=&quot;4-virglrenderermesa-vtest-error-handling-is-bad&quot;&gt;4: (Virglrenderer/MESA) vtest error handling is bad.&lt;/h4&gt;

&lt;p&gt;Each command sends a result payload, and optionally, data. This result payload
contains two pieces of information: an error code, and a numerical value used as a
handle or a size. On server failure, error codes should be used.&lt;/p&gt;

&lt;h4 id=&quot;5-bugs-bugs-and-bugs&quot;&gt;5: bugs, bugs and bugs.&lt;/h4&gt;

&lt;p&gt;This project is absolutely NOT usable right now.&lt;/p&gt;

&lt;h1 id=&quot;next-steps&quot;&gt;Next steps&lt;/h1&gt;

&lt;p&gt;My first step should be to rebase this project onto the current
virglrenderer version, and rewrite the history. In the meantime,
rewrite the initialization part to allow both OpenGL and Vulkan to run.
Then, fix the vtest/VirtIO architecture and add this new mid-layer.
Once refactored, I should work on the error handling for
client-server interactions.&lt;/p&gt;

&lt;p&gt;Once in a sane state, other issues will have to be addressed.&lt;/p&gt;

&lt;h1 id=&quot;how-to-test-it&quot;&gt;How to test it&lt;/h1&gt;

&lt;p&gt;There is a main repo used to build and test everything rapidly.
It contains a Bash script and a Dockerfile (plus a README and a TODO).&lt;/p&gt;

&lt;p&gt;The Bash script by itself should be enough. But if the compilation
fails for any reason, the Dockerfile can be used instead.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/Keenuts/vulkan-virgl&quot;&gt;repository&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The README provided should be enough to make the sample app run.&lt;/p&gt;

&lt;h1 id=&quot;repositories&quot;&gt;Repositories&lt;/h1&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/Keenuts/mesa&quot;&gt;MESA&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/Keenuts/virglrenderer&quot;&gt;VirglRenderer&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/Keenuts/vulkan-compute&quot;&gt;Vulkan compute sample&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/Keenuts/vulkan-virgl&quot;&gt;helper repository&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
				<link>https://www.studiopixl.com/2018-07-12/vulkan-ize-virgl</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2018-07-12/vulkan-ize-virgl</guid>
				<pubDate>Thu, 12 Jul 2018 00:00:00 +0000</pubDate>
				
					<category>graphics</category>
				
			</item>
		
			<item>
				<title>Vulkan API search engine</title>
				<description>&lt;p&gt;Currently working on a Vulkan extension for &lt;a href=&quot;https://virgil3d.github.io/&quot;&gt;VirglRenderer&lt;/a&gt;,
I need to grep the API all the time.
The official documentation gives me two options:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;search the Vulkan spec (huge PDF)&lt;/li&gt;
  &lt;li&gt;use my browser’s custom search engine feature and play with Khronos’ registry URLs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first is painful, and the second too strict (case-sensitive).&lt;/p&gt;

&lt;p&gt;Recently, I also went to an &lt;a href=&quot;https://www.algolia.com/&quot;&gt;Algolia&lt;/a&gt;-hosted meeting.
Their search engine API looked good, and in my case, it’s free!&lt;/p&gt;

&lt;p&gt;Thus, I took a couple hours off from my GSoC, and crafted this thing:
a dirty Vulkan API search engine.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Edit 2024-04-02:&lt;/p&gt;
  &lt;ul&gt;
    &lt;li&gt;This utility had no users for a year. Algolia has scheduled the index
deletion.&lt;/li&gt;
    &lt;li&gt;Its index is very outdated (Vulkan 1.0, very few KHR/vendor extensions).&lt;/li&gt;
    &lt;li&gt;The official Vulkan documentation has improved, making this utility
obsolete.&lt;/li&gt;
  &lt;/ul&gt;

  &lt;p&gt;For those 3 reasons, I am sunsetting this utility.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2018-vulkan-search-engine.webp&quot; alt=&quot;screenshot&quot; /&gt;&lt;/p&gt;
</description>
				<link>https://www.studiopixl.com/2018-06-16/vulkan-search</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2018-06-16/vulkan-search</guid>
				<pubDate>Sat, 16 Jun 2018 00:00:00 +0000</pubDate>
				
					<category>graphics</category>
				
			</item>
		
			<item>
				<title>Raytracing 2 - KD-Tree and Photons</title>
				<description>&lt;p&gt;Last ray/path-tracers I did were simple. No acceleration datastructure, no complex
lighting methods. And, I never tried &lt;strong&gt;GO&lt;/strong&gt;.&lt;/p&gt;

&lt;h2 id=&quot;raytracing--kd-trees&quot;&gt;Raytracing &amp;amp; KD-Trees&lt;/h2&gt;

&lt;p&gt;This tracer only supports triangles. I wanted to keep it simple.&lt;br /&gt;
Drawback: rendering a sphere was slow. Thus, instead of storing my triangles in an array,
I stored them in a &lt;a href=&quot;https://en.wikipedia.org/wiki/K-d_tree&quot;&gt;KD-tree&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A KD-tree is a tree-based data structure.
Each node has a bounding box, and every triangle contained in the node or its children lies in
that bounding box.&lt;br /&gt;
This enables our tracer to quickly discard branches of a model which won’t cross our ray.&lt;/p&gt;
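
&lt;p&gt;The discard test behind this is the classic ray/AABB slab test. A minimal Python
sketch of the idea:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;def ray_hits_aabb(origin, direction, bmin, bmax):
    # Slab test: intersect the ray with each pair of axis-aligned planes
    # and keep the running [tmin, tmax] interval.
    tmin, tmax = 0.0, float(&quot;inf&quot;)
    for o, d, lo, hi in zip(origin, direction, bmin, bmax):
        if abs(d) &lt; 1e-12:
            if o &lt; lo or o &gt; hi:   # parallel ray outside the slab
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 &gt; t2:
            t1, t2 = t2, t1
        tmin, tmax = max(tmin, t1), min(tmax, t2)
        if tmin &gt; tmax:
            return False
    return True
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;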

&lt;p&gt;Since I wanted to visualize my tree, I implemented a
CPU &lt;a href=&quot;https://en.wikipedia.org/wiki/Rasterisation&quot;&gt;rasterizer&lt;/a&gt; in this tracer.
Here is a rendering showing the bounding boxes:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2018-raytracer-go-2.webp&quot; alt=&quot;Depth of field demo&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;clean-output&quot;&gt;Clean output&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2018-raytracer-go-1.webp&quot; alt=&quot;KD-Tree demo&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This project is far from finished. I still need to support indirect lighting and maybe
use another shading model.&lt;/p&gt;

&lt;p&gt;Code available &lt;a href=&quot;https://github.com/Keenuts/rt&quot;&gt;on GitHub&lt;/a&gt;&lt;/p&gt;
</description>
				<link>https://www.studiopixl.com/2018-05-02/raytracer-go</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2018-05-02/raytracer-go</guid>
				<pubDate>Wed, 02 May 2018 00:00:00 +0000</pubDate>
				
					<category>graphics</category>
				
			</item>
		
			<item>
				<title>CAN bus reverse on a Toyota Yaris</title>
				<description>&lt;h1 id=&quot;talking-with-cars&quot;&gt;Talking with Cars&lt;/h1&gt;

&lt;p&gt;During my last internship, a coworker had a Toyota Yaris (2007).
This car has an OBD-2 plug, and the owner was curious about what we could do with it.
We had access to a simple CAN-bus probe, and some spare time.&lt;/p&gt;

&lt;h2 id=&quot;press--seek&quot;&gt;Press &amp;amp; Seek&lt;/h2&gt;

&lt;p&gt;The first step is to understand which parts are linked to which packets.
Our approach was to sort packets by ID, and highlight changing bytes.
Then, touch everything in the car we could think of.
Once some basic information was figured out, we could plot some graphs.&lt;/p&gt;
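
&lt;p&gt;That highlighting step can be sketched in a few lines of Python. The
(id, payload) frame format below is a simplification of what the probe gives
us, picked for illustration:&lt;/p&gt;

&lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;from collections import defaultdict

def changed_bytes(frames):
    # frames: iterable of (can_id, payload-bytes) pairs, in capture order.
    last = {}
    changes = defaultdict(set)
    for can_id, payload in frames:
        prev = last.get(can_id)
        if prev is not None:
            for i, (a, b) in enumerate(zip(prev, payload)):
                if a != b:
                    changes[can_id].add(i)  # byte i moved for this ID
        last[can_id] = payload
    return dict(changes)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;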

&lt;p&gt;&lt;img src=&quot;/assets/posts/2018-can-terminal.webp&quot; alt=&quot;CAN terminal&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;talking-with-cars-1&quot;&gt;Talking with Cars&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/P1kachu&quot;&gt;Stanislas&lt;/a&gt;, another student, worked on a fake gamepad driven by his Fiat 500’s CAN packets.
The code base was in Python, and all values were hardcoded.
We could easily improve the architecture by implementing a src-&gt;sink model inspired by GStreamer’s.&lt;/p&gt;

&lt;p&gt;All done. The PR has been merged, and here is the repo:
&lt;a href=&quot;https://github.com/P1kachu/talking-with-cars&quot;&gt;Repository&lt;/a&gt;&lt;/p&gt;
</description>
				<link>https://www.studiopixl.com/2018-02-28/talking-with-cars</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2018-02-28/talking-with-cars</guid>
				<pubDate>Wed, 28 Feb 2018 00:00:00 +0000</pubDate>
				
					<category>misc</category>
				
			</item>
		
			<item>
				<title>Raytracing &amp; Pathtracing</title>
				<description>&lt;p&gt;Another project I’ve been working on during my daily commute.
A raytracer(left) and a pathtracer(right).&lt;/p&gt;

&lt;p&gt;Both available &lt;a href=&quot;https://github.com/Keenuts/things-to-render-things&quot;&gt;on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2018-raytracer-1.webp&quot; alt=&quot;Raytracing cornell box&quot; /&gt;
&lt;img src=&quot;/assets/posts/2018-pathtracer-1.webp&quot; alt=&quot;Pathtracer cornell box&quot; /&gt;&lt;/p&gt;

&lt;p&gt;(Not the same Cornell box)&lt;/p&gt;
</description>
				<link>https://www.studiopixl.com/2018-01-04/rendering-algorithms</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2018-01-04/rendering-algorithms</guid>
				<pubDate>Thu, 04 Jan 2018 00:00:00 +0000</pubDate>
				
					<category>graphics</category>
				
			</item>
		
			<item>
				<title>GSoC 2017 - 3D acceleration using VirtIOGPU</title>
				<description>&lt;p&gt;Several months ago started the GSoC 2017. Among all the projects available one got my attention:&lt;/p&gt;

&lt;center&gt; Add OpenGL support on a Windows guest using VirGL &lt;/center&gt;

&lt;hr /&gt;

&lt;p&gt;In a VM, to access real hardware, we have two methods: passthrough, and virtualization extensions (Intel VT-x, AMD-V…).
When it comes to GPUs, the possibilities drop down to one: passthrough.
Intel has a virtualization extension (GVT), but we want to support every device.
Thus, we need to fall back to a software-based method.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Emulation? Since we want 3D acceleration, better forget it.&lt;/li&gt;
  &lt;li&gt;API-forwarding? This would require the same OpenGL API on both guest and host, so no as well.&lt;/li&gt;
  &lt;li&gt;Paravirtualization? Yes!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Over the last couple of years, VirtIO devices have become a good standard on QEMU.
Then, Dave Airlie started to work on &lt;a href=&quot;https://virgil3d.github.io/&quot;&gt;VirGL&lt;/a&gt; and a VirtIO-gpu.
Both help provide a decent virtual GPU which relies on the host graphics stack.&lt;/p&gt;

&lt;p&gt;This article will present VirtIO devices, and what kind of operations a guest can do using VirGL.&lt;/p&gt;

&lt;p&gt;I also invite you to read a &lt;a href=&quot;/2017-05-13/linux-graphic-stack-an-overview&quot;&gt;previous article I wrote about Linux’s graphic stack&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;virtio-devices&quot;&gt;VirtIO devices&lt;/h2&gt;

&lt;p&gt;Since we will use a VirtIO-based device, let’s see how it works.
First, these devices behave as regular PCI devices: we have a config space, some dedicated memory, and interrupts.
Second, a very important point: VirtIO devices communicate through ring buffers used as FIFO queues.
The device is entirely emulated in QEMU, and can perform DMA transfers by sharing common pages between the guest and the host.&lt;/p&gt;

&lt;h3 id=&quot;communication-queues&quot;&gt;Communication queues&lt;/h3&gt;

&lt;p&gt;On our virtual GPU, we have two queues: one dedicated to the hardware cursor, and another for everything else.
To send a command to the queue, it goes like this:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;guest: allocate pages for the command data&lt;/li&gt;
  &lt;li&gt;guest: write a header and pointers to these physical pages (guest POV) into the ring buffer&lt;/li&gt;
  &lt;li&gt;guest: raise an interrupt&lt;/li&gt;
  &lt;li&gt;VMExit&lt;/li&gt;
  &lt;li&gt;host: QEMU reads the header and pointers, and translates the addresses into its local virtual address range&lt;/li&gt;
  &lt;li&gt;host: read the command and execute it&lt;/li&gt;
  &lt;li&gt;host: write the response back to the ring buffer&lt;/li&gt;
  &lt;li&gt;host: send an interrupt&lt;/li&gt;
  &lt;li&gt;guest: handle the interrupt, read the ring buffer and process the answer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2017-virtio-communication.webp&quot; alt=&quot;virtio_device_communication&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;virgl&quot;&gt;VirGL&lt;/h3&gt;

&lt;p&gt;VirGL can be summed up as a simple state machine that keeps track of resources and translates command buffers into sequences of OpenGL calls.
It exposes two kinds of commands: let’s call them 2D and 3D.&lt;/p&gt;

&lt;p&gt;2D commands are mainly focused on resource management. We can allocate memory on the host by creating a 2D resource, then initialize a DMA transfer by linking this resource’s memory areas to guest physical pages.
To ease resource management between applications on the guest, VirGL also adds a simple context feature: resource creation is global, but to use a resource, you must attach it to a context.&lt;/p&gt;

&lt;p&gt;Then come the 3D commands. These are close to what you find in an API like Vulkan: we can set up a viewport and scissor state, create a VBO, and draw it.
Shaders are also supported, but we first need to translate them to TGSI, an assembly-like representation. Once on the host, they are re-translated to GLSL and handed to OpenGL.&lt;/p&gt;

&lt;p&gt;You can find part of the spec in this &lt;a href=&quot;https://github.com/Keenuts/virtio-gpu-documentation&quot;&gt;repository&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;opengl-on-windows&quot;&gt;OpenGL on Windows&lt;/h2&gt;

&lt;p&gt;The Windows graphics stack can be decomposed as follows:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2017-windows-stack.webp&quot; alt=&quot;windows graphic stack&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The interesting parts are:&lt;/p&gt;

&lt;p&gt;OpenGL ICD (Installable client driver):&lt;/p&gt;
&lt;blockquote&gt;
  &lt;p&gt;This is our OpenGL implementation: the state machine, which talks to our kernel driver.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;GDI.dll:&lt;/p&gt;
&lt;blockquote&gt;
  &lt;p&gt;A simple syscall wrapper for us.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;D3D Subsystem:&lt;/p&gt;
&lt;blockquote&gt;
  &lt;p&gt;The first part of the kernel graphics stack. It exposes a 2D and 3D API. Since we are not a licensed developer, let’s try to avoid it.
From the documentation, a few functions can bypass it: &lt;a href=&quot;https://msdn.microsoft.com/en-us/library/windows/hardware/ff559653(v=vs.85).aspx&quot;&gt;DxgkDdiEscape&lt;/a&gt; is one.
This function takes a buffer and a size, and lets them pass through this subsystem, directly to the underlying driver.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;DOD (Display Only Driver):&lt;/p&gt;
&lt;blockquote&gt;
  &lt;p&gt;Our kernel driver. This part has to communicate with both the kernel/ICD and the VirtIO-gpu.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&quot;opengl-state-tracker&quot;&gt;OpenGL State-Tracker&lt;/h2&gt;

&lt;p&gt;OpenGL relies on a state machine we have to implement. Let’s start by drawing on the frame-buffer.&lt;/p&gt;

&lt;p&gt;We start a new application and want to isolate it from the rest, so we begin by creating a VirGL context.
Then we create a 2D resource (800x600 RGBA seems great), and attach it to our VGL context.&lt;/p&gt;

&lt;p&gt;We might want to draw something now. We have two options: either use the 3D command INLINE_WRITE, or DMA.
Using INLINE_WRITE means sending all our pixels through a VirtIO queue, so let’s use DMA!&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;We start by allocating memory pages on the guest.&lt;/li&gt;
  &lt;li&gt;Then, we send the physical addresses (guest POV) to VirGL.&lt;/li&gt;
  &lt;li&gt;VirGL translates these physical addresses to local virtual addresses, and links the pages to our resource.&lt;/li&gt;
  &lt;li&gt;Back on the guest, we can write our pixels to the frame-buffer.&lt;/li&gt;
  &lt;li&gt;To notify the v-gpu, we use the TRANSFER_TO_HOST_2D command, which tells QEMU to sync the resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, let’s render something onto this frame-buffer.
We will need to:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;create an OpenGL context&lt;/li&gt;
  &lt;li&gt;set up our viewport and scissor settings (i.e. screen limits)&lt;/li&gt;
  &lt;li&gt;create a VBO&lt;/li&gt;
  &lt;li&gt;link the VBO to vertex/normal/color buffers&lt;/li&gt;
  &lt;li&gt;create vertex and fragment shaders&lt;/li&gt;
  &lt;li&gt;set up a rasterizer&lt;/li&gt;
  &lt;li&gt;set the frame-buffer to the one we created earlier&lt;/li&gt;
  &lt;li&gt;create a constant buffer&lt;/li&gt;
  &lt;li&gt;send the draw call&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A 3D command is a sequence of UINT32 values. The first one is used as a header, followed by N arguments.
A command buffer can contain several commands stacked together in one big UINT32 array.&lt;/p&gt;

&lt;p&gt;Earlier, we created resources in VGL contexts. Now we need 3D objects.
These are created by sending 3D commands, and are not shared between VGL contexts.
Once created, we have to bind them to the current OpenGL context.&lt;/p&gt;

&lt;p&gt;Now, if everything goes well, we should be able to display something like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2017-opengl-virtio.webp&quot; alt=&quot;opengl in windows with qemu&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Once more, explaining all the commands would be tedious, but there is a spec for that!&lt;/p&gt;

&lt;p&gt;If you are still interested, here are a couple of links:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b&quot;&gt;GIST to present the project&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://gitlab.com/spice/virtio-gpu-wddm/virtio-gpu-wddm-dod&quot;&gt;DOD Driver&lt;/a&gt;: The kernel driver needed on the Windows guest to communicate with the VirtIO-gpu&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/Keenuts/virtio-gpu-win-icd&quot;&gt;ICD Driver&lt;/a&gt;: opengl32.dll, the userland driver including a basic state-tracker&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/Keenuts/virtio-gpu-documentation&quot;&gt;VirGL Reference&lt;/a&gt; : partial reference of VirGL 2D and 3D commands&lt;/li&gt;
&lt;/ul&gt;
</description>
				<link>https://www.studiopixl.com/2017-08-27/3d-acceleration-using-virtio</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2017-08-27/3d-acceleration-using-virtio</guid>
				<pubDate>Sun, 27 Aug 2017 00:00:00 +0000</pubDate>
				
					<category>libvirt</category>
				
					<category>gsoc</category>
				
					<category>lse</category>
				
			</item>
		
			<item>
				<title>Talk LSE-Week - QEMU and Virgl3D</title>
<description>&lt;p&gt;This talk presented my project in progress at the time: &lt;br /&gt;
implementing an OpenGL driver for Windows, working with VirtIO-gpu.&lt;/p&gt;

&lt;p&gt;The slides are available &lt;a href=&quot;/assets/slides/2017-lseweek-qemu.pdf&quot;&gt;HERE&lt;/a&gt;&lt;/p&gt;

&lt;div class=&quot;youtube&quot;&gt;
  &lt;iframe src=&quot;https://www.youtube.com/embed/_8z7gMPRm2Q&quot; loading=&quot;lazy&quot; frameborder=&quot;0&quot; allowvr=&quot;&quot; allowfullscreen=&quot;&quot; mozallowfullscreen=&quot;&quot; allow=&quot;autoplay; encrypted-media&quot; webkitallowfullscreen=&quot;&quot;&gt;
  &lt;/iframe&gt;
&lt;/div&gt;
</description>
				<link>https://www.studiopixl.com/2017-07-20/talk-lseweek</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2017-07-20/talk-lseweek</guid>
				<pubDate>Thu, 20 Jul 2017 00:00:00 +0000</pubDate>
				
					<category>graphics</category>
				
			</item>
		
			<item>
				<title>Linux graphic stack</title>
<description>&lt;p&gt;In January 2017, the results arrived: I was accepted at the LSE, a systems laboratory at my school. We were four, and each had to find a new project to work on. One wanted to work on Linux kernel security, another on Valgrind, and then there was me. I didn’t know where to start, but I wanted to work on something related to GPUs.&lt;/p&gt;

&lt;p&gt;My teacher arrived and explained the current problem with Windows and QEMU: we don’t have any hardware acceleration. Might be useful to do something about it! I was not ready…&lt;/p&gt;

&lt;p&gt;The first step was to understand the Linux graphics stack, and then find out how Windows might have done it.
Finally, how we can bring all this together using Virgl3D and VirtIO queues.&lt;/p&gt;

&lt;p&gt;This article will give you a quick overview of the graphics stack on Linux. There are already some pretty good articles about the userland part, so I won’t focus on that, and will link to them instead.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;opengl-101&quot;&gt;OpenGL 101&lt;/h2&gt;
&lt;p&gt;Let’s begin with a simple OpenGL application:&lt;/p&gt;

&lt;div class=&quot;language-c highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;kt&quot;&gt;int&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;main&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;kt&quot;&gt;int&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;argc&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;char&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;**&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;argv&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;glutInit&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;amp;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;argc&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;argv&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;glutInitWindowSize&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;300&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;300&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;glutCreateWindow&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&quot;Hello world :D&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;

    &lt;span class=&quot;n&quot;&gt;glClear&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;GL_COLOR_BUFFER_BIT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;glBegin&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;GL_TRIANGLE&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
        &lt;span class=&quot;n&quot;&gt;glVertex3f&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
        &lt;span class=&quot;n&quot;&gt;glVertex3f&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
        &lt;span class=&quot;n&quot;&gt;glVertex3f&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;glEnd&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;();&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;glFlush&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;();&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;&lt;em&gt;This is a non-working dummy sample, just to give the idea.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As we can see, there are three main steps:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Get a window&lt;/li&gt;
  &lt;li&gt;Prepare our vertices, data…&lt;/li&gt;
  &lt;li&gt;Render&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But how can we do that?&lt;/p&gt;

&lt;h2 id=&quot;linux-graphic-stack&quot;&gt;Linux graphic stack&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2017-linux-stack.webp&quot; alt=&quot;linux graphic stack&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;level-1-userland-x-and-libgl&quot;&gt;Level 1: Userland, X and libGL&lt;/h3&gt;

&lt;p&gt;The first part of our code looked like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-C&quot;&gt;glutInit(&amp;amp;argc, argv);
glutInitDisplayMode(GLUT_SINGLE);
glutInitWindowSize(300, 300);
glutInitWindowPosition(100, 100);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But in fact, these actions boil down to something like this:&lt;/p&gt;

&lt;div class=&quot;language-c highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;CTX&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;glCreateContext&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;CONNECTION&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;xcb_connect&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;xcb_create_window&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;CONNECTION&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;PARAMS&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;SURFACE&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;WINDOW&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;What? A connection? A context?
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;To manage our display, Linux can use several programs. A well-known one is the X server. Since it’s a real server, we first have to connect to it before being able to request anything. To ease our work, we will use the XCB library.
Once a window is created, any desktop manager compatible with X will be able to display it.
For more information about OpenGL contexts, see the &lt;a href=&quot;https://www.khronos.org/opengl/wiki/OpenGL_Context&quot;&gt;Khronos wiki&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;meet-mesa&quot;&gt;Meet Mesa&lt;/h3&gt;

&lt;p&gt;Mesa is an implementation of OpenGL on Linux. Our entry point is libGL, just a dynamic library letting us interface with the OpenGL runtime.
The idea is the following:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;libGL: used by an OpenGL application to interact with Mesa.&lt;/li&gt;
  &lt;li&gt;OpenGL state tracker: from basic commands like glBegin, glVertex3f and so on, Mesa generates the real calls, creating command buffers, vertex buffers, etc. Shaders are compiled into an intermediate representation, TGSI, on which a first batch of optimizations is done.&lt;/li&gt;
  &lt;li&gt;GPU layer: a translation layer specific to our graphics chipset. TGSI shaders are translated into real instructions the GPU can understand, and the commands are shaped for that specific chipset.&lt;/li&gt;
  &lt;li&gt;libDRM and WinSys: a kernel-specific API used to send all this data to the DRM.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this architecture, if I want to add support for my own graphics card, I only have to replace one part: the GPU layer.&lt;/p&gt;

&lt;p&gt;For more information about Mesa and Gallium, see &lt;a href=&quot;https://en.wikipedia.org/wiki/Gallium3D&quot;&gt;Wikipedia&lt;/a&gt;.
Another good article on the userland part is on the &lt;a href=&quot;https://blogs.igalia.com/itoral/2014/07/29/a-brief-introduction-to-the-linux-graphics-stack/&quot;&gt;Igalia blog&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;welcome-to-kernelland-&quot;&gt;Welcome to KernelLand !&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2017-linux-stack.webp&quot; alt=&quot;linux graphic stack&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;meet-the-drm&quot;&gt;Meet the DRM&lt;/h3&gt;

&lt;p&gt;DRM: Direct Rendering Manager. This is more or less an IOCTL API composed of several modules. Each driver can add specific entry points, but there is a common API designed to provide minimal support.
Two modules will be described here: KMS, and the infamous couple TTM &amp;amp; GEM.&lt;/p&gt;

&lt;h3 id=&quot;meet-kms&quot;&gt;Meet KMS&lt;/h3&gt;

&lt;p&gt;Remember the first step of our OpenGL application? Asking for a window, getting a place to put some fancy pixels? That’s the job of KMS: Kernel Mode Setting.&lt;/p&gt;

&lt;p&gt;A long time ago, we used UMS: User Mode Setting. The idea was to manage our hardware directly from userland. The problem: every application needed to support all the devices, which meant a lot of code written again and again. And what if two applications wanted to access the same resources?
So, KMS. But why?&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2017-KMS.webp&quot; alt=&quot;KMS&quot; /&gt;&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Framebuffer: a buffer in memory, designed to store pixels
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The story begins with a &lt;strong&gt;plane&lt;/strong&gt;. Picture it as a group of resources used to create an image. A plane can contain several framebuffers:
a big one to store the full picture, and maybe a small one, something like 64x64, for a hardware cursor.
These framebuffers can be mixed together by the hardware to generate the final framebuffer.&lt;/p&gt;

&lt;p&gt;Now we have a buffer storing our picture, and we assign it to a &lt;strong&gt;CRTC&lt;/strong&gt; (Cathode Ray Tube Controller). A CRTC is directly linked to an output; if your card has two CRTCs, you can drive two different outputs.
Final step: getting something onto the screen. A screen is connected through a standard port (HDMI, DVI, VGA…), which means encoding our stream into a well-defined protocol.
That’s it, we have some pixels on our screen!&lt;/p&gt;

&lt;h3 id=&quot;ttm--gem&quot;&gt;TTM &amp;amp; GEM&lt;/h3&gt;

&lt;p&gt;We can put some pixels on the screen, great! But how can we do some fancy 3D stuff? Our GL calls go through some mumbo-jumbo, and then what? How can I actually write something to my GPU’s memory?&lt;/p&gt;

&lt;p&gt;There are globally two kinds of memory architecture: &lt;strong&gt;UMA&lt;/strong&gt; and &lt;strong&gt;dedicated&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;UMA, for Unified Memory Access, is used by Intel graphics and on some Android devices. All the memory is accessible from one memory space.&lt;/li&gt;
  &lt;li&gt;Dedicated memory: you can’t directly access the memory from the CPU. If you want to write to it, you have to map a CPU-addressable area, write your data, and then use specific mechanisms to send it to the dedicated memory.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;TTM and GEM are two different APIs designed to manage this. TTM is the older one, designed to cover every possible case. The result is a big and complex interface no sane developer would want to use.
Around 2008, GEM was introduced: a newer and lighter API, designed to manage UMA architectures.
Nowadays, GEM is often used as the frontend, and when dedicated memory management is needed, TTM serves as the backend.&lt;/p&gt;

&lt;h3 id=&quot;gem-for-dummies&quot;&gt;GEM for dummies&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2017-GEM.webp&quot; alt=&quot;GEM&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The main idea is to link a resource to a GEM handle. Now you &lt;em&gt;only&lt;/em&gt; need to tell when a GEM is needed, and memory will be moved in and out of our VRAM. But there is a small problem: to share resources, GEM uses global identifiers.
A GEM is linked to a unique, &lt;strong&gt;global&lt;/strong&gt; identifier. This means &lt;strong&gt;any&lt;/strong&gt; program could ask for a specific GEM and get access to the resource.&lt;/p&gt;

&lt;p&gt;Thankfully, we have &lt;em&gt;DMA-BUF&lt;/em&gt;. The idea is to link a buffer to a file descriptor. We add some functions to convert a local GEM identifier into an fd, and can then safely share our resources.&lt;/p&gt;

&lt;p&gt;I’ll stop here for now, but I invite you to check out some articles on DMA (Direct Memory Access) and to read &lt;a href=&quot;https://lwn.net/Articles/283793/&quot;&gt;this article about TTM &amp;amp; GEM&lt;/a&gt;.&lt;/p&gt;
</description>
				<link>https://www.studiopixl.com/2017-05-13/linux-graphic-stack-an-overview</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2017-05-13/linux-graphic-stack-an-overview</guid>
				<pubDate>Sat, 13 May 2017 00:00:00 +0000</pubDate>
				
					<category>linux</category>
				
					<category>graphics</category>
				
					<category>lse</category>
				
			</item>
		
			<item>
				<title>GSoC 2017 - API Forwarding</title>
<description>&lt;p&gt;Writing an ICD is a problem in itself. Add to this the Windows kernel interfaces, VirtIO queue management, and resource transfers between host and guest, and BOOM, you are lost.
This brings us to our first step: something less efficient but simpler, API forwarding.&lt;/p&gt;

&lt;h2 id=&quot;tasks&quot;&gt;Tasks&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;Hook the OpenGL calls&lt;/li&gt;
  &lt;li&gt;Serialize the function calls&lt;/li&gt;
  &lt;li&gt;Send them to the miniport driver, then to the host&lt;/li&gt;
  &lt;li&gt;De-serialize the calls and execute them on the host&lt;/li&gt;
  &lt;li&gt;Send some data back to the guest&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;realization&quot;&gt;Realization&lt;/h2&gt;

&lt;h3 id=&quot;icd&quot;&gt;ICD&lt;/h3&gt;
&lt;p&gt;The ICD part (userland) is pretty straightforward: build your own opengl32.dll and serialize the calls. Then find a sweet function in gdi32.dll to throw your mumbo-jumbo at the kernel side.
Fortunately, we have this:&lt;/p&gt;

&lt;div class=&quot;language-c highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;NTSTATUS&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;APIENTRY&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;DxgkDdiEscape&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;_In_&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;const&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;HANDLE&lt;/span&gt;         &lt;span class=&quot;n&quot;&gt;hAdapter&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;_In_&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;const&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;DXGKARG_ESCAPE&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;pEscape&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;...&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;A beautiful function, available in both DOD and full display drivers. It takes a pointer to a userland buffer and sends it to our display driver.
Wait… userland buffer, no checks, kernel side? Mmmmm… &lt;a href=&quot;https://googleprojectzero.blogspot.fr/2017/02/attacking-windows-nvidia-driver.html&quot;&gt;What could go wrong?&lt;/a&gt;&lt;/p&gt;

&lt;h3 id=&quot;kernel-part&quot;&gt;Kernel part&lt;/h3&gt;

&lt;p&gt;To initialize a display driver, you must call a function: &lt;a href=&quot;https://msdn.microsoft.com/en-us/library/windows/hardware/ff560824(v=vs.85).aspx&quot;&gt;DxgkInitialize&lt;/a&gt;.
This function takes a big structure containing function pointers into your driver.
For a display-only driver, there is a reduced set of functions to implement. And for a full-featured driver, well…&lt;/p&gt;

&lt;p&gt;Anyway, the game now is to run the driver and see where it crashes. Sadly, we cannot just add some functions and hope to run on the working DOD code base alone. Windows wants something more, and the game is to find out what. Yay!
Since we have a working DOD driver, let’s see how we could trick it.&lt;/p&gt;

&lt;h3 id=&quot;icd--kernel-communication&quot;&gt;ICD &amp;lt;=&amp;gt; Kernel communication&lt;/h3&gt;

&lt;p&gt;We can register two types of drivers: a DOD driver using &lt;strong&gt;DxgkInitializeDisplayOnlyDriver&lt;/strong&gt;, or a full driver using &lt;strong&gt;DxgkInitialize&lt;/strong&gt;.
Windows then knows which kind of features each driver can support (fine-tuning is done through query callbacks).
Both drivers can implement &lt;strong&gt;DxgkDdiEscape&lt;/strong&gt;. Great, we will fool Windows and use this DOD as a fully featured 3D driver! &lt;strong&gt;WRONG!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Setting up the ICD part and sending everything through our escape function? Check. But the return values seemed off.
After investigating every function taking a userland buffer, I came to a conclusion: the OpenGL ICD part cannot communicate with a DOD driver. Windows knows we are display-only, and falls our ICD calls back to its own driver.&lt;/p&gt;

&lt;p&gt;So now, what’s the plan?
Let’s put this problem aside, and focus on the real part: creating proper commands for the host.&lt;/p&gt;

</description>
				<link>https://www.studiopixl.com/2017-05-13/gsoc-log2-api-forwarding</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2017-05-13/gsoc-log2-api-forwarding</guid>
				<pubDate>Sat, 13 May 2017 00:00:00 +0000</pubDate>
				
					<category>libvirt</category>
				
					<category>gsoc</category>
				
					<category>lse</category>
				
			</item>
		
			<item>
				<title>GSoC 2017 - Project presentation</title>
<description>&lt;p&gt;On my arrival at the lab, I started a &lt;em&gt;little&lt;/em&gt; project: working on a display-only driver for Windows. A good way to start learning what was hidden under the hood of an OpenGL application.
&lt;a href=&quot;https://developers.google.com/open-source/gsoc/&quot;&gt;Google Summer of Code 2017&lt;/a&gt; arrived, and the subjects were published. Among them, QEMU’s ‘Windows Virgl driver’.
Great! Let’s apply!&lt;/p&gt;

&lt;p&gt;Applications closed in early April. I took a look at the already existing DOD driver &lt;a href=&quot;https://github.com/vrozenfe/virtio-gpu-win&quot;&gt;(non-official repo)&lt;/a&gt;,
and also decided to learn a bit more about &lt;a href=&quot;/2017-05-05/first-steps-with-vulkan&quot;&gt;Vulkan&lt;/a&gt;.
The results came, and I was selected. Excellent!&lt;/p&gt;

&lt;h2 id=&quot;mission&quot;&gt;Mission&lt;/h2&gt;

&lt;p&gt;The idea is to bring 3D acceleration to Windows guests running on QEMU, using VirtIO devices and &lt;a href=&quot;https://virgil3d.github.io&quot;&gt;Virgl3d&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;context&quot;&gt;Context&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2017-windows-stack.webp&quot; alt=&quot;windows stack&quot; /&gt;&lt;/p&gt;

&lt;p&gt;In this stack, we can work on three parts: opengl32.dll, the ICD, and the miniport driver.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;OpenGL32.dll is just a dynamic library used to communicate with our runtime driver.&lt;/li&gt;
  &lt;li&gt;ICD: this is the OpenGL implementation. This part is the equivalent of Mesa on Linux.&lt;/li&gt;
  &lt;li&gt;Miniport driver: this is the kernel driver. It is hardware-specific; this is where we will do our hypercalls.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;problems&quot;&gt;Problems&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;Windows is not open source. We have some basic ideas about the D3DKrnl subsystem’s behaviour, but nothing is certain.&lt;/li&gt;
  &lt;li&gt;Developing a complete OpenGL state tracker is a lot of work.&lt;/li&gt;
  &lt;li&gt;Virgl3D takes calls and shader bytecode, re-translates them to GLSL, and calls OpenGL again. This means we do the same work twice: once on the guest, once on the host.&lt;/li&gt;
&lt;/ul&gt;
</description>
				<link>https://www.studiopixl.com/2017-05-11/gsoc-log1-project-presentation</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2017-05-11/gsoc-log1-project-presentation</guid>
				<pubDate>Thu, 11 May 2017 00:00:00 +0000</pubDate>
				
					<category>libvirt</category>
				
					<category>gsoc</category>
				
			</item>
		
			<item>
				<title>First steps with Vulkan</title>
				<description>&lt;p&gt;Vulkan is great, vulkan is love, vulkan is $(COMPLIMENT)&lt;/p&gt;

&lt;p&gt;I had heard a lot about this API, but never took the time to try it… So why not
now?
The goal was to build a simple OBJ file viewer, then try to improve performance,
and of course, since it’s Vulkan, go multithreaded! The API is pretty simple to
use: we fill out some structs, and call a vkSomething function.&lt;/p&gt;

&lt;p&gt;Some samples are available in the SDK, and of course, there is this
&lt;a href=&quot;https://vulkan-tutorial.com&quot;&gt; Vulkan-tutorial website&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So, commit after commit, the code grew, until the first “Hello, world!”:
displaying a white triangle. 1000 lines of code vs 30, that’s quite steep.
After an additional 7641 lines, here it is, a simple OBJ viewer!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Keenuts/VulkanBasics&quot;&gt;GitHub link&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2017-vulkan-r5d4.webp&quot; alt=&quot;R5D4 on Vulkan app under i3&quot; /&gt;&lt;/p&gt;

&lt;p&gt;But today is May the 5th, and &lt;a href=&quot;https://summerofcode.withgoogle.com/&quot;&gt;Google Summer of Code&lt;/a&gt;
results are out! 🥳 Time to focus on the next big project: an &lt;strong&gt;OpenGL driver for Windows VMs on QEMU&lt;/strong&gt;.&lt;/p&gt;
</description>
				<link>https://www.studiopixl.com/2017-05-05/first-steps-with-vulkan</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2017-05-05/first-steps-with-vulkan</guid>
				<pubDate>Fri, 05 May 2017 00:00:00 +0000</pubDate>
				
					<category>graphics</category>
				
			</item>
		
			<item>
				<title>Model time - R5D4</title>
				<description>&lt;p&gt;For my upcoming Vulkan project, I needed a model. So here it is !
R5D4, the &lt;em&gt;unchosen&lt;/em&gt; droid from Star Wars.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tools: 3ds Max, Substance Painter, Photoshop&lt;/em&gt;&lt;/p&gt;

&lt;div class=&quot;sketchfab&quot;&gt;
  &lt;iframe src=&quot;https://sketchfab.com/models/1e3bdecc5e0945beb5625848ab588b04/embed?camera=0&quot; loading=&quot;lazy&quot; frameborder=&quot;0&quot; allowvr=&quot;&quot; allowfullscreen=&quot;&quot; mozallowfullscreen=&quot;true&quot; webkitallowfullscreen=&quot;true&quot; onmousewheel=&quot;&quot; alt=&quot;R5D4 model (3dsMax, Substance Painter, Photoshop)&quot;&gt;
  &lt;/iframe&gt;
&lt;/div&gt;
</description>
				<link>https://www.studiopixl.com/2017-02-06/model-time-r5d4</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2017-02-06/model-time-r5d4</guid>
				<pubDate>Mon, 06 Feb 2017 00:00:00 +0000</pubDate>
				
					<category>graphics</category>
				
			</item>
		
			<item>
				<title>Realtime ocean rendering and buoyancy</title>
				<description>&lt;p&gt;My Erasmus took place in Spain, in a city near the coast. Each morning,
seeing the sea reminded me that I had never tried to work on water rendering.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.gamedeveloper.com/programming/water-interaction-model-for-boats-in-video-games&quot;&gt;This article&lt;/a&gt;
made me want to also work on the buoyancy aspect. And because I was using Unity
a lot at that time I decided to create a usable Asset.&lt;/p&gt;
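&lt;p&gt;The physics behind such an asset is Archimedes’ principle: the upward force equals the weight of the displaced water. A minimal sketch for an axis-aligned box (illustrative Python; the actual asset is C# inside Unity, and a real implementation would sample the submerged volume against the animated wave height):&lt;/p&gt;

```python
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravity, m/s^2

def buoyant_force(base_area, height, bottom_y, water_level=0.0):
    """Upward force (N) on an axis-aligned box, per Archimedes' principle:
    rho * g * submerged volume."""
    submerged_height = min(max(water_level - bottom_y, 0.0), height)
    return RHO_WATER * G * base_area * submerged_height

# A 1 m^2 box floating half-submerged displaces 0.5 m^3 of water:
print(buoyant_force(base_area=1.0, height=1.0, bottom_y=-0.5))  # 4905.0
```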

&lt;p&gt;Two constraints:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;The asset had to work with every model.&lt;/li&gt;
  &lt;li&gt;It had to be easy to use with any ocean model.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is the asset (free): &lt;a href=&quot;https://assetstore.unity.com/packages/tools/physics/fast-buoyancy-61079&quot;&gt;ASSET-STORE LINK&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here are some previews:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2016-buoyancy-screenshot.webp&quot; alt=&quot;screenshot in Unity 5&quot; /&gt;&lt;/p&gt;
&lt;div class=&quot;vimeo w-80&quot;&gt;
  &lt;iframe src=&quot;https://player.vimeo.com/video/199221175?title=0&amp;amp;byline=0&amp;amp;portrait=0&quot; loading=&quot;lazy&quot; frameborder=&quot;0&quot; webkitallowfullscreen=&quot;&quot; mozallowfullscreen=&quot;&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;
&lt;/div&gt;
&lt;div class=&quot;sketchfab w-80 mt-3&quot;&gt;
  &lt;iframe src=&quot;https://sketchfab.com/models/8f0cc747598e496295135acb3ebc14ed/embed?camera=0&quot; loading=&quot;lazy&quot; frameborder=&quot;0&quot; allowvr=&quot;&quot; allowfullscreen=&quot;&quot; mozallowfullscreen=&quot;true&quot; webkitallowfullscreen=&quot;true&quot; onmousewheel=&quot;&quot;&gt;
  &lt;/iframe&gt;
&lt;/div&gt;
</description>
				<link>https://www.studiopixl.com/2016-06-20/buyoancy-and-ocean-rendering</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2016-06-20/buyoancy-and-ocean-rendering</guid>
				<pubDate>Mon, 20 Jun 2016 00:00:00 +0000</pubDate>
				
					<category>graphics</category>
				
			</item>
		
			<item>
				<title>Model time - Wall-E</title>
				<description>&lt;p&gt;The famous Wall-E for this week’s mini project.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tools: 3ds Max, Substance Painter, Photoshop&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2016-model-time-walle.webp&quot; alt=&quot;Wall-E model (3ds Max + Substance Painter)&quot; /&gt;&lt;/p&gt;

&lt;div class=&quot;sketchfab w-80&quot;&gt;
  &lt;iframe src=&quot;https://sketchfab.com/models/a425a376d0c74d44ac064faa167d07ac/embed?camera=0&quot; loading=&quot;lazy&quot; frameborder=&quot;0&quot; allowvr=&quot;&quot; allowfullscreen=&quot;&quot; mozallowfullscreen=&quot;true&quot; webkitallowfullscreen=&quot;true&quot; onmousewheel=&quot;&quot;&gt;
  &lt;/iframe&gt;
&lt;/div&gt;
</description>
				<link>https://www.studiopixl.com/2016-05-10/model-time-wall-e</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2016-05-10/model-time-wall-e</guid>
				<pubDate>Tue, 10 May 2016 00:00:00 +0000</pubDate>
				
					<category>graphics</category>
				
			</item>
		
			<item>
				<title>Model time - Snowspeeder</title>
				<description>&lt;p&gt;Being a teaching assistant this year, I needed to prepare a lab about procedural
level generation. Goal for the students: write an infinite “runner” (flyer?).
For this occasion, I decided to reproduce a T-47 Snowspeeder from Star Wars.&lt;/p&gt;
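&lt;p&gt;The core of that lab, spawning level chunks ahead of the player and dropping the ones left behind, can be sketched like this (illustrative Python; the students worked in Unity, and the chunk length and window sizes here are arbitrary):&lt;/p&gt;

```python
class ChunkStreamer:
    """Keeps a sliding window of level chunks around the player.

    Chunk i covers [i * length, (i + 1) * length) along the run axis.
    """

    def __init__(self, length=50.0, ahead=3, behind=1):
        self.length = length
        self.ahead = ahead    # chunks kept in front of the player
        self.behind = behind  # chunks kept behind the player
        self.live = set()     # indices of currently spawned chunks

    def update(self, player_x):
        current = int(player_x // self.length)
        wanted = set(range(current - self.behind, current + self.ahead + 1))
        for i in wanted - self.live:
            pass  # spawn chunk i here (instantiate a prefab, seed obstacles...)
        for i in self.live - wanted:
            pass  # despawn chunk i here (return it to an object pool)
        self.live = wanted

streamer = ChunkStreamer()
streamer.update(0.0)
print(sorted(streamer.live))  # [-1, 0, 1, 2, 3]
streamer.update(120.0)        # player moved forward: the window slides
print(sorted(streamer.live))  # [1, 2, 3, 4, 5]
```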

&lt;p&gt;&lt;em&gt;Tools: 3ds Max, Substance Painter, Photoshop&lt;/em&gt;&lt;/p&gt;

&lt;div class=&quot;sketchfab&quot;&gt;
  &lt;iframe src=&quot;https://sketchfab.com/models/fafea0517ad34ee485bcf97282985aa6/embed?camera=0&quot; loading=&quot;lazy&quot; frameborder=&quot;0&quot; allowvr=&quot;&quot; allowfullscreen=&quot;&quot; mozallowfullscreen=&quot;true&quot; webkitallowfullscreen=&quot;true&quot; onmousewheel=&quot;&quot; alt=&quot;T-47 model (3dsMax, Substance Painter, Photoshop)&quot;&gt;
  &lt;/iframe&gt;
&lt;/div&gt;
</description>
				<link>https://www.studiopixl.com/2015-11-07/model-time-snowspeeder</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2015-11-07/model-time-snowspeeder</guid>
				<pubDate>Sat, 07 Nov 2015 00:00:00 +0000</pubDate>
				
					<category>graphics</category>
				
			</item>
		
			<item>
				<title>Multiplayer game - Starfight</title>
				<description>&lt;p&gt;In my first year at EPITA, I had an assignment: create a video game using Unity.
I had 4 weeks, 3 missing teammates, and an idea.&lt;/p&gt;

&lt;p&gt;The idea was to combine both the &lt;a href=&quot;https://en.wikipedia.org/wiki/First-person_shooter&quot;&gt;FPS&lt;/a&gt;
and &lt;a href=&quot;https://en.wikipedia.org/wiki/Real-time_strategy&quot;&gt;RTS&lt;/a&gt;
style in a single multiplayer game.&lt;/p&gt;

&lt;p&gt;A game round required 5 players: &lt;br /&gt;
  The first, through RTS gameplay, had to protect a central orb. &lt;br /&gt;
  The other 4 had to seek and destroy the orb, Quake-style.&lt;/p&gt;

&lt;p&gt;The RTS side could spawn units and give movement orders; attacking and aiming were
controlled by the AI.
The FPS side could fight against these units, or spawn some fixed towers to
defend against them.&lt;/p&gt;

&lt;p&gt;The project was evaluated through presentations, so I decided to create
some trailers and give my teachers an E3-like presentation.
Here are these trailers and some screenshots of the game. &lt;br /&gt;
Outcome: top of the class.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Technologies: 3ds Max, Unity, Substance Painter, Adobe Premiere, Photoshop,
C#&lt;/em&gt;&lt;/p&gt;

&lt;div class=&quot;vimeo w-80&quot;&gt;
  &lt;iframe src=&quot;https://player.vimeo.com/video/271849184&quot; loading=&quot;lazy&quot; frameborder=&quot;0&quot; allowvr=&quot;&quot; allowfullscreen=&quot;&quot; mozallowfullscreen=&quot;&quot; webkitallowfullscreen=&quot;&quot;&gt;
  &lt;/iframe&gt;
&lt;/div&gt;
&lt;div class=&quot;vimeo mt-3 w-80 mb-3&quot;&gt;
  &lt;iframe src=&quot;https://player.vimeo.com/video/139079659&quot; loading=&quot;lazy&quot; frameborder=&quot;0&quot; allowvr=&quot;&quot; allowfullscreen=&quot;&quot; mozallowfullscreen=&quot;&quot; webkitallowfullscreen=&quot;&quot;&gt;
  &lt;/iframe&gt;
&lt;/div&gt;

&lt;p&gt;(If somebody at DICE is wondering, yes it is a rip-off of a Battlefront devlog 🙂 )&lt;/p&gt;
</description>
				<link>https://www.studiopixl.com/2015-07-15/starfight</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2015-07-15/starfight</guid>
				<pubDate>Wed, 15 Jul 2015 00:00:00 +0000</pubDate>
				
					<category>graphics</category>
				
					<category>games</category>
				
			</item>
		
			<item>
				<title>Rooftop experiment</title>
				<description>&lt;p&gt;I was able to use a drone for an afternoon: the perfect opportunity to
learn more about tracking and mixing real footage with 3D objects.
I decided to film a tower at my parents’ farm, then create a fictional observatory and
merge everything together.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tools: After Effect, Boujou, 3ds Max, Photoshop&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2015-rooftop-model.webp&quot; alt=&quot;Observatory model&quot; /&gt;
&lt;img src=&quot;/assets/posts/2015-rooftop-raw.webp&quot; alt=&quot;Raw footage&quot; /&gt;&lt;/p&gt;

&lt;div class=&quot;vimeo w-80&quot;&gt;
  &lt;iframe src=&quot;https://player.vimeo.com/video/146555300&quot; loading=&quot;lazy&quot; frameborder=&quot;0&quot; allowvr=&quot;&quot; allowfullscreen=&quot;&quot; mozallowfullscreen=&quot;&quot; webkitallowfullscreen=&quot;&quot;&gt;
  &lt;/iframe&gt;
&lt;/div&gt;
</description>
				<link>https://www.studiopixl.com/2015-02-05/rooftop</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2015-02-05/rooftop</guid>
				<pubDate>Thu, 05 Feb 2015 00:00:00 +0000</pubDate>
				
					<category>graphics</category>
				
			</item>
		
			<item>
				<title>Art project - Children &amp; Education</title>
				<description>&lt;blockquote&gt;
  &lt;p&gt;Children don’t have the experience a grown-up has.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This sentence was the baseline for an art project I worked on during my final
year of high school. This book I wrote was one of the three pieces presented
on this occasion.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tools: Adobe Photoshop, Illustrator&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2014-book.webp&quot; alt=&quot;Children&apos;s book&quot; /&gt;&lt;/p&gt;
</description>
				<link>https://www.studiopixl.com/2014-06-15/children-s-book</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2014-06-15/children-s-book</guid>
				<pubDate>Sun, 15 Jun 2014 00:00:00 +0000</pubDate>
				
					<category>misc</category>
				
			</item>
		
			<item>
				<title>Model time - Door</title>
				<description>&lt;p&gt;Let’s go back in time: 2008, when I was 13.&lt;br /&gt;
This was one of my first &lt;em&gt;serious&lt;/em&gt; projects: learning about volumetric lights.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tools: 3ds Max&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/posts/2008-model-door.webp&quot; alt=&quot;3D rendering of the project&quot; /&gt;&lt;/p&gt;
</description>
				<link>https://www.studiopixl.com/2008-05-05/model-time-door</link>
				<guid isPermaLink="true">https://www.studiopixl.com/2008-05-05/model-time-door</guid>
				<pubDate>Mon, 05 May 2008 00:00:00 +0000</pubDate>
				
					<category>graphics</category>
				
			</item>
		
	</channel>
</rss>
