<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[foobarflies]]></title><description><![CDATA[Code on the counter]]></description><link>https://blog.tchap.me/</link><image><url>https://blog.tchap.me/favicon.png</url><title>foobarflies</title><link>https://blog.tchap.me/</link></image><generator>Ghost 3.42</generator><lastBuildDate>Tue, 14 Oct 2025 12:27:46 GMT</lastBuildDate><atom:link href="https://blog.tchap.me/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[A simple Mattermost Docker installation]]></title><description><![CDATA[<p>I was trying to setup a full instance of <a href="https://mattermost.com/">Mattermost</a> recently and decided to use an existing Docker host so that I can test it out and easily rebuild / tinker with it.</p><p>The official documentation states that there is a community-driven repository that offers just that, Mattermost on Docker: </p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/mattermost/mattermost-docker"><div class="kg-bookmark-content"><div class="kg-bookmark-title">mattermost/</div></div></a></figure>]]></description><link>https://blog.tchap.me/a-simple-mattermost-docker-installation/</link><guid isPermaLink="false">60993dffda4743361fdbf4d2</guid><category><![CDATA[cloud]]></category><category><![CDATA[docker]]></category><dc:creator><![CDATA[tchap@tchap.me]]></dc:creator><pubDate>Mon, 10 May 2021 17:14:32 GMT</pubDate><media:content url="https://blog.tchap.me/content/images/2021/05/mattermost-2.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.tchap.me/content/images/2021/05/mattermost-2.png" alt="A simple Mattermost Docker installation"><p>I was trying to setup a full instance of <a 
href="https://mattermost.com/">Mattermost</a> recently and decided to use an existing Docker host so that I can test it out and easily rebuild / tinker with it.</p><p>The official documentation states that there is a community-driven repository that offers just that, Mattermost on Docker: </p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/mattermost/mattermost-docker"><div class="kg-bookmark-content"><div class="kg-bookmark-title">mattermost/mattermost-docker</div><div class="kg-bookmark-description">Dockerfile for mattermost in production. Contribute to mattermost/mattermost-docker development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="A simple Mattermost Docker installation"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">mattermost</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/2ea43ecc3627156a6ea1ba88d9c8af35d6942f26a63ee8ca9a92991e03d41947/mattermost/mattermost-docker" alt="A simple Mattermost Docker installation"></div></a></figure><p>... but the README greets us with a warning message: </p><blockquote>The current state of this repository doesn't work out-of-the box since Mattermost server v5.31+ requires PostgreSQL versions of 10 or higher.</blockquote><p>Mattermost being a Go application backed by a PostgreSQL server, I figured that this should not be overly difficult to achieve on its own, so I opened up a terminal and created a simple compose file to deploy it easily.</p><h3 id="structure">Structure</h3><!--kg-card-begin: markdown--><ul>
<li>We need a <code>Dockerfile</code> to build the actual server container</li>
<li>We need to create a <code>config.json</code> file</li>
<li>And a <code>docker-compose.yml</code> file to set up the database alongside the server, and bind them together</li>
</ul>
<!--kg-card-end: markdown--><p>The <code>Dockerfile</code> is pretty straightforward. We start from the relatively stable Debian Buster image:</p><!--kg-card-begin: markdown--><pre><code class="language-Dockerfile">FROM debian:buster

ENV version=5.34.2

# Install the base packages we need
RUN set -eux; \
	apt-get update; \
	apt-get install -y --no-install-recommends \
	ca-certificates \
	openssl \
	curl \
	gnupg

RUN apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; \
	rm -rf /var/lib/apt/lists/*

# Get release
ADD https://releases.mattermost.com/${version}/mattermost-${version}-linux-amd64.tar.gz /

RUN tar -xzf /mattermost-${version}-linux-amd64.tar.gz
RUN mv /mattermost /opt/.

RUN rm -rf /mattermost-${version}-linux-amd64.tar.gz

# Add correct configuration
COPY config.json /opt/mattermost/config/config.json

# Set up a system user and group called mattermost that will run this service, and set the ownership and permissions.
RUN useradd --system --user-group mattermost
RUN chown -R mattermost:mattermost /opt/mattermost
RUN chmod -R g+w /opt/mattermost

USER mattermost:mattermost

VOLUME /opt/mattermost/data

CMD [&quot;/opt/mattermost/bin/mattermost&quot;]
</code></pre>
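<p>As a side note, Docker expands <code>${version}</code> in the <code>ADD</code> instruction, so the release URL resolves like this (a quick shell sketch of the same expansion):</p>

```shell
# Same expansion that Docker performs for the ADD instruction above
version=5.34.2
url="https://releases.mattermost.com/${version}/mattermost-${version}-linux-amd64.tar.gz"
echo "$url"
```

<p>Bumping the <code>version</code> env is then enough to build a newer release.</p>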
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Key points here are:</p>
<ul>
<li>We need <code>ca-certificates</code> and <code>openssl</code> so that we can access https resources and APIs</li>
<li>We want the container to run with a non-root account (hence the <code>mattermost</code> user)</li>
<li>We want to expose the Mattermost data in a volume</li>
</ul>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>You need a <code>.env</code> file to hold the environment variables for the PostgreSQL database:</p>
<pre><code class="language-env">POSTGRES_USER=mmuser
POSTGRES_PASSWORD=a_strong_password
POSTGRES_DB=mattermost
</code></pre>
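<p>Docker Compose picks this file up automatically and substitutes the <code>${...}</code> placeholders in the compose file. As a quick illustration of that interpolation (a throwaway sketch, not part of the setup):</p>

```shell
# Write the example .env and source it, mimicking the substitution
# that docker compose performs on ${...} placeholders (illustration only).
cat > .env <<'EOF'
POSTGRES_USER=mmuser
POSTGRES_PASSWORD=a_strong_password
POSTGRES_DB=mattermost
EOF
. ./.env
echo "database: $POSTGRES_DB, user: $POSTGRES_USER"
```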
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>And a <code>docker-compose.yml</code> file to hold all this together:</p>
<pre><code class="language-docker-compose">version: '3.7'

services:
  mattermost-postgres:
    container_name: mattermost-postgres
    image: postgres:13.2-alpine
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
      PGDATA: /mattermost/postgres
    volumes:
       - postgres:/mattermost/postgres
    networks:
      - mattermost
    restart: unless-stopped

  mattermost-server:
    container_name: mattermost-server
    build:
      context: ./
    image: mattermost-server:latest
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - data:/opt/mattermost/data
    networks:
      - mattermost
    ports:
      - &quot;127.0.0.1:8066:8065&quot;
    restart: unless-stopped
    depends_on:
      - mattermost-postgres
    ulimits:
      nofile: 49152

networks:
  mattermost:

volumes:
  postgres:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /mattermost/postgres
  data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /mattermost/data
</code></pre>
<blockquote>
<p>On the host, you need to have the <code>/mattermost/data</code> and <code>/mattermost/postgres</code> directories available.</p>
</blockquote>
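<p>Something along these lines creates them (sketched here with a <code>ROOT</code> prefix so you can dry-run it anywhere; on the actual host, run it as root with <code>ROOT=</code> empty so the directories land in <code>/mattermost</code>):</p>

```shell
# Create the bind-mount targets expected by the volume definitions above.
# ROOT defaults to the current directory for a dry run; use ROOT= (empty)
# on the real host.
ROOT="${ROOT:-.}"
mkdir -p "${ROOT}/mattermost/data" "${ROOT}/mattermost/postgres"
```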
<!--kg-card-end: markdown--><p>The <code>config.json</code> file that is referenced in the <code>Dockerfile</code> is a copy/paste of the standard Mattermost configuration that you can find here:</p><!--kg-card-begin: markdown--><p><a href="https://github.com/mattermost/mattermost-docker/blob/master/contrib/aws/app/mattermost/config/config.json">https://github.com/mattermost/mattermost-docker/blob/master/contrib/aws/app/mattermost/config/config.json</a></p>
<!--kg-card-end: markdown--><p>The main keys to update are:</p><!--kg-card-begin: markdown--><ul>
<li><strong>ServiceSettings</strong> &gt; <strong>SiteURL</strong>: You want to put your domain name here, with the <code>https://</code> part, ex: <code>https://mattermost.my_domain.com</code></li>
<li><strong>SqlSettings</strong> &gt; <strong>DriverName</strong> and <strong>DataSource</strong>: <code>DriverName</code> should be <code>postgres</code> and <code>DataSource</code> is your full connection string which should be along those lines: <code>postgres://mmuser:[password]@[postgres_container]:5432/mattermost?sslmode=disable&amp;connect_timeout=10</code><br>
Don't forget to replace the password with the one you set up in your <code>.env</code> file, and <code>postgres_container</code> with the name you gave your container in the <code>docker-compose.yml</code> file (<code>mattermost-postgres</code> in my example)</li>
<li><strong>EmailSettings</strong>: If you want to have notifications, you need to put a working SMTP configuration here (in <code>EnableSMTPAuth</code>, <code>SMTPUsername</code>, <code>SMTPPassword</code>, <code>SMTPServer</code> and <code>SMTPPort</code>, and also set <code>SendEmailNotifications</code> to <code>true</code>, and fill in <code>FeedbackName</code>, <code>FeedbackEmail</code> and <code>ReplyToAddress</code>)</li>
</ul>
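<p>To avoid typos in the connection string, you can assemble the <code>DataSource</code> from the same values as the <code>.env</code> file. A small shell sketch (the host part, <code>mattermost-postgres</code>, is the container name from the <code>docker-compose.yml</code> above):</p>

```shell
# Build the DataSource string from the example .env values
POSTGRES_USER=mmuser
POSTGRES_PASSWORD=a_strong_password
POSTGRES_DB=mattermost
DATASOURCE="postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@mattermost-postgres:5432/${POSTGRES_DB}?sslmode=disable&connect_timeout=10"
echo "$DATASOURCE"
```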
<p>Protip for these keys:</p>
<ul>
<li><strong>FileSettings</strong> &gt; <strong>Directory</strong></li>
<li><strong>PluginSettings</strong> &gt; <strong>Directory</strong> and <strong>ClientDirectory</strong></li>
</ul>
<p>You want to put <em>absolute</em> paths in here instead of relative ones; it works more reliably (e.g. <code>/opt/mattermost/plugins</code> instead of <code>./plugins</code>)</p>
<!--kg-card-end: markdown--><h3 id="start-up">Start up</h3><p>Now that you're all set, you can:</p><h5 id="build-the-server-container">Build the server container</h5><p>With <code>docker compose build mattermost-server</code></p><h5 id="start-the-app">Start the app</h5><p>With <code>docker compose up -d</code></p><p>You should be able to visit <strong>127.0.0.1:8066</strong> on your Docker host to confirm that everything runs fine.</p><h3 id="reverse-proxy">Reverse proxy</h3><p>So far, the Mattermost instance is only available locally on your Docker host (we specifically requested this with <code>"127.0.0.1:8066:8065"</code> in the <code>ports</code> section of the <code>docker-compose.yml</code>). But we have <code>mattermost.my_domain.com</code> waiting for us, so let's reverse proxy it!</p><h4 id="on-the-host">On the host</h4><p>If you already have a web server on the host that serves some other containers to the public web, it's easy to use it to reverse proxy the Mattermost app.</p><p>I'm mainly using <a href="https://caddyserver.com/v2">Caddy</a> nowadays, and it's as easy as using this configuration:</p><!--kg-card-begin: markdown--><pre><code>mattermost.my_domain.com {
    reverse_proxy 127.0.0.1:8066
}
</code></pre>
<!--kg-card-end: markdown--><h4 id="via-another-container">Via another container</h4><p>We could also use <a href="https://doc.traefik.io/traefik/">Traefik</a> (See the docker images <a href="https://hub.docker.com/_/traefik">here</a>) to reverse-proxy the 8065 port of the <code>mattermost-server</code> container to the world.</p><p><em>This is quite easy to do and I'll leave that as an exercise for the reader.</em></p><hr><p>Once you have the server and the proxy up and running, you should be able to access the instance, create the system account if not done before, and start using Mattermost:</p><figure class="kg-card kg-image-card"><img src="https://blog.tchap.me/content/images/2021/05/mattermost.png" class="kg-image" alt="A simple Mattermost Docker installation" srcset="https://blog.tchap.me/content/images/size/w600/2021/05/mattermost.png 600w, https://blog.tchap.me/content/images/size/w1000/2021/05/mattermost.png 1000w, https://blog.tchap.me/content/images/2021/05/mattermost.png 1424w" sizes="(min-width: 720px) 720px"></figure><hr><p>🎉</p>]]></content:encoded></item><item><title><![CDATA[A year with my own private cloud]]></title><description><![CDATA[<p>More than a year ago, I decided to roll up my sleeves and create my own private cloud. 
I wrote extensively about it <a href="https://blog.tchap.me/your-own-public-cloud-why-not/">here</a>.</p><p>Now, it has become my whole personal infrastructure and it works very well (<em>to my standards, at least</em>), so let's look at the changes that it</p>]]></description><link>https://blog.tchap.me/a-year-with-my-own-public-cloud/</link><guid isPermaLink="false">60054a2f4ab4ae025351450a</guid><category><![CDATA[cloud]]></category><category><![CDATA[programming]]></category><category><![CDATA[docker]]></category><dc:creator><![CDATA[tchap@tchap.me]]></dc:creator><pubDate>Tue, 19 Jan 2021 17:19:40 GMT</pubDate><media:content url="https://blog.tchap.me/content/images/2021/01/Screenshot-2021-01-19-at-17.18.48-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.tchap.me/content/images/2021/01/Screenshot-2021-01-19-at-17.18.48-1.png" alt="A year with my own private cloud"><p>More than a year ago, I decided to roll up my sleeves and create my own private cloud. I wrote extensively about it <a href="https://blog.tchap.me/your-own-public-cloud-why-not/">here</a>.</p><p>Now, it has become my whole personal infrastructure and it works very well (<em>to my standards, at least</em>), so let's look at the changes that it underwent in that time and how I improved it to be more resilient, robust, and fit my needs.</p><h2 id="new-services-replacements">New services &amp; replacements</h2><p>Over the course of the year, I got to know better the services I used and some of them I was not perfectly fond of.</p><p>I've updated the repository continuously, check out <a href="https://github.com/tchapi/own-private-cloud">https://github.com/tchapi/own-private-cloud</a>.</p><h3 id="-spinning-my-own-caldav-backend">🗓 Spinning my own CalDAV backend</h3><p>I had chosen Baikal in my first instance and it worked OK, but it proved quite cumbersome to configure and hard to maintain or evolve due to the legacy codebase.</p><p>I thus created my own backend, <a 
href="https://github.com/tchapi/davis"><strong>Davis</strong></a>, based on <a href="https://symfony.com/" rel="nofollow">Symfony 5</a> and <a href="https://getbootstrap.com/" rel="nofollow">Bootstrap 4</a>, that follows best practices, is easy to configure and dockerize thanks to a pretty standard <code>.env</code> workflow.</p><figure class="kg-card kg-image-card"><img src="https://blog.tchap.me/content/images/2021/01/dashboard.png" class="kg-image" alt="A year with my own private cloud"></figure><p>I also added some nice features such as delegation and sharing (<em>directly in the management section</em>), IMAP authentication, etc.</p><p>It's completely backward-compatible with Baïkal so you can use the exact same MySQL database you used before if you're migrating from it (that's what I did).</p><p>Grab the latest release <a href="https://github.com/tchapi/davis/releases/latest">here</a>.</p><h3 id="-cryptpad">📝 Cryptpad</h3><p>What was missing in the infrastructure was a document editor, along the lines of <strong>Google Docs</strong>.
I had tested <a href="https://www.onlyoffice.com/">OnlyOffice</a> but it seemed a bit <em>too</em> complete and not suited for a simpler, personal use.</p><p>I settled on <strong>Cryptpad</strong>: <a href="https://cryptpad.fr/what-is-cryptpad.html">https://cryptpad.fr/what-is-cryptpad.html</a></p><figure class="kg-card kg-image-card"><img src="https://blog.tchap.me/content/images/2021/01/Screenshot-2021-01-18-at-10.01.21.png" class="kg-image" alt="A year with my own private cloud"></figure><p>The team behind it strongly advocates security and privacy, and although the interface has rough edges, it works quite well.</p><p>I have adapted a quite simple and robust Dockerfile for it (you can find more details in the repository itself), that does not allow new registrations and configures a few things:</p><!--kg-card-begin: html--><script src="https://emgithub.com/embed.js?target=https%3A%2F%2Fgithub.com%2Ftchapi%2Fown-private-cloud%2Fblob%2Fmaster%2Fbuild%2FDockerfile-cryptpad&style=github&showBorder=on&showLineNumbers=on&showFileMeta=on"></script><!--kg-card-end: html--><h3 id="-wekan-planka">👋 Wekan → Planka</h3><p>Even though <strong>Wekan</strong> looked promising on the surface, I never got to enjoy using it. The UI is functional but not <em>on par</em> with what Trello, for instance, can offer.</p><p>I went on a quest to find another "task" / kanban software and found <strong>Planka</strong>, which is still in heavy development, but looks really promising:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.tchap.me/content/images/2021/01/Screenshot-2021-01-18-at-10.03.20.png" class="kg-image" alt="A year with my own private cloud"><figcaption>https://github.com/plankanban/planka</figcaption></figure><p>I installed it and I was right away convinced by how snappy it felt.
It still lacks a lot of things but I decided to try it anyway (<em>and contribute if I can</em>).</p><p>It's actually a work in progress that you can find in the <code>services/planka</code> branch on the repo <a href="https://github.com/tchapi/own-private-cloud/compare/services/planka">here</a>.</p><figure class="kg-card kg-image-card"><img src="https://blog.tchap.me/content/images/2021/01/Screenshot-2021-01-18-at-10.07.56.png" class="kg-image" alt="A year with my own private cloud"></figure><h3 id="-reverse-proxy">🔀 Reverse proxy</h3><p>I started with an <strong>Nginx</strong> + <strong>Certbot</strong> combo, only to realize that it was quite cumbersome and hard to maintain (<em>especially the certificates</em>).</p><p>I chose to replace both of them with <strong>Traefik</strong>: <a href="https://doc.traefik.io/traefik/">https://doc.traefik.io/traefik/</a></p><p>It's an edge router that is widely used in the Docker world, and is easy to configure (<em>via labels</em>), while being quite opinionated.</p><p>It has its own certificate resolvers / endpoints compliant with ACME, so you can use Let's Encrypt and getting certificates is a breeze.</p><p>Moreover, you can have a very nice dashboard listing all your routes and endpoints, quite practical for debugging purposes:</p><figure class="kg-card kg-image-card"><img src="https://blog.tchap.me/content/images/2021/01/Screenshot-2021-01-18-at-17.07.35.png" class="kg-image" alt="A year with my own private cloud"></figure><h2 id="-mails">📬 Mails</h2><p>I had been meaning to switch to a more privacy-respectful email provider, but I never actually did it and kept using my <strong>{insert any mainstream provider here}</strong> address.</p><p>But then I read this: <a href="https://poolp.org/posts/2019-08-30/you-should-not-run-your-mail-server-because-mail-is-hard/">https://poolp.org/posts/2019-08-30/you-should-not-run-your-mail-server-because-mail-is-hard</a> — a blog post from <strong>Gilles Chehade</strong> who is an OpenBSD
contributor and creator of OpenSMTPD, and it convinced me right away that <em>I had</em> to try to set up my own mail server.</p><p>And so I did.</p><p>I grabbed a supplementary IPv4 address for my mail server that I attached to the same OVH cloud instance, and created two containers:</p><!--kg-card-begin: markdown--><ul>
<li>one for OpenSMTPd (SMTP service)</li>
<li>one for Dovecot (IMAP service)</li>
</ul>
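<p>To give a rough idea of the SMTP side, here is a minimal <code>smtpd.conf</code> sketch in the new-style OpenSMTPd grammar (the domain, certificate paths and rules below are illustrative placeholders, not my actual configuration):</p>

```
# Minimal smtpd.conf sketch (illustrative placeholders only)
pki mail cert "/etc/ssl/mail.crt"
pki mail key "/etc/ssl/private/mail.key"

listen on 0.0.0.0 port 25 tls pki mail
listen on 0.0.0.0 port 587 tls-require pki mail auth

# Hand incoming mail to Dovecot over LMTP
action "deliver" lmtp "/var/run/dovecot/lmtp"
# Send authenticated users' mail out directly
action "outbound" relay

match from any for domain "example.com" action "deliver"
match auth from any for any action "outbound"
```

<p>Dovecot then serves the delivered mail over IMAP.</p>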
<!--kg-card-end: markdown--><p>That would provide me with the bare minimum for my mail server to work. I followed all the security best practices and made sure that I had a clean, standard installation, and just as described in the blog post that I linked above, it worked quite well from the beginning!</p><p>I spent quite a lot of time improving the two Dockerfiles but my mail server has been running flawlessly for about 8 months now, never losing a single mail — and generally being more responsive than the other services that I used.</p><p>Dovecot, moreover, is robust and standard software and I can access my mails from any Apple or Android phone, and any macOS computer, which is all I need (<em>I guess it works well on Linux and Windows, too</em>).</p><h2 id="next-steps">Next steps</h2><p>I'm updating the containers quite regularly, and I'm always looking out for new types of services to self-host.</p><p>To date, I have not had another need but I might install a private git server sometime soon to host all my code.</p>]]></content:encoded></item><item><title><![CDATA[The need for sensible defaults]]></title><description><![CDATA[<h3 id="a-case-of-switching-from-nginx-to-caddy">A case of switching from Nginx to Caddy</h3><p>When installing a new VPS for a new project, I had to dive into my previous nginx boilerplate configuration files to create a new one, slightly different, to accommodate my various needs on the project.</p><p>While doing so, I remembered how</p>]]></description><link>https://blog.tchap.me/the-need-for-sensible-defaults/</link><guid isPermaLink="false">5eabff9e626c65364da266cc</guid><category><![CDATA[nginx]]></category><category><![CDATA[caddy]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[tchap@tchap.me]]></dc:creator><pubDate>Sun, 10 May 2020 15:09:44 GMT</pubDate><media:content url="https://blog.tchap.me/content/images/2020/05/a-.png" medium="image"/><content:encoded><![CDATA[<h3
id="a-case-of-switching-from-nginx-to-caddy">A case of switching from Nginx to Caddy</h3><img src="https://blog.tchap.me/content/images/2020/05/a-.png" alt="The need for sensible defaults"><p>When installing a new VPS for a new project, I had to dive into my previous nginx boilerplate configuration files to create a new one, slightly different, to accommodate my various needs on the project.</p><p>While doing so, I remembered how long and tedious these configuration files were — take, for example, a simple reverse proxy for a simple blog like <a href="https://ghost.org/">Ghost</a>, with redirects and SSL (this is not optimized - just a quick snippet):</p><!--kg-card-begin: markdown--><pre><code>server {
    listen 80;
    server_name www.myghost.blog myghost.blog;
    return 301 https://www.myghost.blog$request_uri;
}
server {
    server_name myghost.blog;

    ssl_certificate /etc/letsencrypt/live/myghost.blog/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myghost.blog/privkey.pem;

    return 301 https://www.myghost.blog$request_uri;
}
server {
    server_name www.myghost.blog;

    client_max_body_size 10m;

    listen 443 ssl;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;
    add_header Strict-Transport-Security max-age=15768000;

    ssl_certificate /etc/letsencrypt/live/myghost.blog/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myghost.blog/privkey.pem;

    location / {
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   Host      $http_host;
        proxy_set_header   X-Forwarded-Proto https;
        proxy_pass         http://127.0.0.1:4369;
    }
}
</code></pre>
<!--kg-card-end: markdown--><p>This is a lot (<em>and some parts are missing here still</em>), and it's mainly here to compensate for the fact that the defaults are not well suited.</p><p>I decided to look for an alternative web server, that would be production-grade, but that would be much easier to configure while enforcing safe standards <em>by default</em>.</p><p>... and I came across <a href="https://caddyserver.com/docs/">Caddy</a>.</p><p>The same configuration in Caddy looks like this:</p><!--kg-card-begin: markdown--><pre><code>myghost.blog, www.myghost.blog {
    encode zstd gzip
    reverse_proxy localhost:4369 {
        header_up X-Forwarded-Proto {scheme}
    }
    header {
        Strict-Transport-Security max-age=31536000
        X-Content-Type-Options nosniff
        Referrer-Policy no-referrer-when-downgrade
    }
}
</code></pre>
<!--kg-card-end: markdown--><p><strong>And that's it!</strong> No supplementary configuration, no need to specify the SSL certificates, the sensible ciphers or the stapling (<em>Caddy provisions and renews them automatically</em>), and every default is well-chosen and does not need tweaking (in this case).</p><p>This gives an A+ in SSL Labs with basically no effort:</p><figure class="kg-card kg-image-card"><img src="https://blog.tchap.me/content/images/2020/05/photo_2020-05-10_11-28-15.jpg" class="kg-image" alt="The need for sensible defaults"></figure><p>Performance-wise, I find it <em>on par</em> with what nginx does, delivering roughly the same number of requests per second. I haven't extensively tested it yet though — your mileage may vary.</p><p>If you read through the docs (<a href="https://caddyserver.com/docs/">https://caddyserver.com/docs/</a>), you can see all the options that are available for all the directives. It's pretty modular and you can change pretty much every aspect of the server.</p><p>But in most cases, <strong><em>you don't need to</em></strong>.</p><h3 id="the-need-for-sensible-defaults">The need for sensible defaults</h3><p>In fact, what this boils down to is that the extensive configuration in nginx is very powerful, but it lacks <em>consistent sensible default values</em> for all of its options.</p><p>I find that Caddy is quite opinionated in this regard, but this is quite a good thing in fact; when you read the Caddy configuration, you instantly get, in 10 lines, the full grasp of what it does.</p><p>You don't even need to scroll to see the full configuration file.</p><p>A lot of expert, production-grade software would clearly benefit from this approach. When using sensible (<em>and probably opinionated</em>) default values:</p><ol><li>the configuration is easier, simpler, shorter,</li><li>it promotes good practices,</li><li>you can configure it right, even if you don't know all the configuration specifics.</li></ol><p>It has cons, too.
Specifically, you can lose touch with all the under-the-hood technicalities of the software you use, where all the defaults are hidden at first sight.</p><p><em>But I think that this is a false problem</em>; the majority of nginx configuration files I encountered along my way were mere patchworks of copy/paste of StackOverflow answers. So a long, convoluted, very specific configuration does not necessarily mean that the person who uses it understands all the ins and outs of it. There's no relation between your expertise and the use of default values, if they are well-chosen.</p><p>And "opinionated" means that different <em>best practices</em> exist. In the case of Caddy vs. Nginx, I think we can all agree that enforcing HTTPS, and setting everything so that your website gets an A+ in the SSL Labs test, is enough of a consensus to be declared a universal best practice. But it may depend on the software.</p><hr><h4 id="bonus-some-more-advantages-of-caddy">Bonus: Some more advantages of Caddy</h4><ul><li>Automatic certificates and renewal via LE</li><li>Static binary, no dependencies</li><li>Zero-downtime configuration reload</li></ul><p>Give it a try!</p>]]></content:encoded></item><item><title><![CDATA[Caddy for Ghost]]></title><description><![CDATA[<p>If you're trying to serve your Ghost blog with <a href="https://caddyserver.com/">Caddy</a>, the configuration is pretty simple, but you <strong>must</strong> add the <code>X-Forwarded-Proto</code> header upstream for the <strong>https</strong> redirection to work:</p><!--kg-card-begin: markdown--><pre><code>myghostblog.com {
    reverse_proxy localhost:3000 {
        header_up X-Forwarded-Proto {scheme}
    }
}
</code></pre>
<!--kg-card-end: markdown--><p>Note that you don't need any <code>file_server</code> directive since everything</p>]]></description><link>https://blog.tchap.me/caddy-for-ghost/</link><guid isPermaLink="false">5eabff00626c65364da266bc</guid><category><![CDATA[caddy]]></category><category><![CDATA[code]]></category><category><![CDATA[node.js]]></category><category><![CDATA[programming]]></category><dc:creator><![CDATA[tchap@tchap.me]]></dc:creator><pubDate>Sun, 19 Apr 2020 22:00:00 GMT</pubDate><content:encoded><![CDATA[<p>If you're trying to serve your Ghost blog with <a href="https://caddyserver.com/">Caddy</a>, the configuration is pretty simple, but you <strong>must</strong> add the <code>X-Forwarded-Proto</code> header upstream for the <strong>https</strong> redirection to work:</p><!--kg-card-begin: markdown--><pre><code>myghostblog.com {
    reverse_proxy localhost:3000 {
        header_up X-Forwarded-Proto {scheme}
    }
}
</code></pre>
<!--kg-card-end: markdown--><p>Note that you don't need any <code>file_server</code> directive since everything is served by Express in the end, even the static files.</p><h4 id="addendum-x-frame-options">Addendum: X-Frame-Options</h4><p>If you want to be able to render your site in the "View site" panel of the administration, you must make sure that iframe embedding is allowed at least for the same origin (<em>in case you have deactivated it</em>):</p><!--kg-card-begin: markdown--><pre><code>header {
    X-Frame-Options SAMEORIGIN
}
</code></pre>
<!--kg-card-end: markdown--><p>🎉</p>]]></content:encoded></item><item><title><![CDATA[Your own public cloud — why not ?]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Alright, this is going to be quite a long article, about a subject that I have grown particularly fond of lately: online privacy.</p>
<p>We're going to deal with self-hosting too, a lot 😇. And we'll get technical, don't worry.</p>
<p><strong>In a nutshell, this article is my personal take on what I</strong></p>]]></description><link>https://blog.tchap.me/your-own-public-cloud-why-not/</link><guid isPermaLink="false">5ea824fb25e7cb317d1336d0</guid><category><![CDATA[code]]></category><category><![CDATA[programming]]></category><category><![CDATA[linux]]></category><category><![CDATA[cloud]]></category><dc:creator><![CDATA[tchap@tchap.me]]></dc:creator><pubDate>Fri, 04 Oct 2019 13:06:58 GMT</pubDate><media:content url="https://blog.tchap.me/content/images/2019/10/cloud.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.tchap.me/content/images/2019/10/cloud.png" alt="Your own public cloud — why not ?"><p>Alright, this is going to be quite a long article, about a subject that I have grown particularly fond of lately: online privacy.</p>
<p>We're going to deal with self-hosting too, a lot 😇. And we'll get technical, don't worry.</p>
<p><strong>In a nutshell, this article is my personal take on what I <em>value</em> when I say online privacy, and how I decided to try to mitigate the risks by using open-source tools to create my own public cloud ☁️.</strong></p>
<p><img src="https://blog.tchap.me/content/images/2019/09/self-hosting.jpg" alt="Your own public cloud — why not ?"></p>
<p>Your mileage may vary on the definition of privacy, on the risks taken, and on the legitimacy of the tools I will describe and implement, but I think this article can give you a good head start if you're looking into this kind of stuff at the moment.</p>
<p><strong>⎈ Summary</strong></p>
<pre><code>    
  1. What is the problem, to begin with?
  2. A personal cloud?
  3. Technical implications
    a. Self-hosted to the extreme?
    b. How is it going to work exactly?
    c. Docker / Compose
    d. Risks and mitigations
  4. Apps
    a. Drive
    b. Bookmarks
    c. Passwords
    d. Tasks / Boards
    e. Notes
    f. Calendar / Contacts
    g. Raw syncing
    h. Others
  5. Go
    a. Get docker and compose up and running
    b. Retrieve the repo
    c. Create your cloud infrastructure
    d. Add or remove services
    e. Tweak the configuration
    f. Build the custom images
  6. Launch
  7. Final words
    a. A note on E2E encryption
    b. Backups
    c. Donate and contribute

</code></pre>
<br>
<h1 id="1whatistheproblemtobeginwith">1. 💁🏻 What is the problem, to begin with?</h1>
<p>Privacy is becoming a mainstream subject right now, and that is a good thing. The immense amount of data that one generates while using various internet services like social networks, online services or software is more and more a source of personal stress; for techies of course, but increasingly for laypeople too.</p>
<div style="text-align: center;margin: 30px 0;">* * *</div>
<p>Because of data breaches, first. Not a day goes by without a new security breach (<em>Equifax, Option Way, CircleCI, CapitalOne, Movie Pass, LinkedIn, Twitter ...</em>) exposing thousands of people's personal data, including, in the worst cases, passwords in plain text.</p>
<p>Passwords in <strong>plain text</strong>? We're in 2019 for Christ's sake — it is just <strong>mind-boggling</strong> that platforms or SaaS providers with thousands or millions of accounts to manage do not have the slightest notion of basic security / encryption / good practice.</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/Screenshot-2019-09-16-at-10.50.28.png" alt="Your own public cloud — why not ?"></p>
<div style="font-size: 0.75em; text-align: center;margin-bottom: 20px">This type of answer is beyond understanding</div>
<p>You can also browse the 453+ pages of <a href="https://plaintextoffenders.com">https://plaintextoffenders.com</a> to see if a service you want to use has questionable practices.</p>
<div style="text-align: center;margin: 30px 0;">* * *</div>
<p>But also, and maybe more importantly (<em>and more concerning, to be honest</em>), because of the marketing and targeting mayhem this data is subject to. What does Google do with my emails? Does it read my Drive documents? And those Facebook searches? Why do I keep seeing ads that are directly related to my previous internet searches? Why do I have to load more than 80 KB of trackers on each and every website I browse? etc.</p>
<p>This is not a problem <em>per se</em>, but it could well become one when the companies / countries that control these targeting and marketing efforts decide, or are forced, to hand over <em>our</em> data to malicious third parties.</p>
<p>I don't want to be too alarming, but I'm myself concerned about the data I create and how it may or may not be used by the providers I trust it with.</p>
<p>That's why I went (<em>and am still</em>) on a quest to mitigate this risk.</p>
<div style="text-align: center;margin: 30px 0;">* * *</div>
<p>I have quite a bit of history with what I would call &quot;incumbent&quot; service providers, since I've been mainly active on the Internet at the time these providers were gaining a lot of traction: I have Gmail emails and associated Google accounts. I use it for contacts and calendar, too. I have a Dropbox account. I use Trello for todos and whatnots. I use various sync services for my browsers (Firefox, Safari) ... the list goes on.</p>
<p>Now I'm not saying that these providers do a bad job regarding privacy, but given all the recent heat a few of them (all of them?) have taken, I no longer trust any &quot;big-enough&quot; actor with my personal data.</p>
<p>Now that means I could find and use smaller, local providers, but how can I be sure that they are well-funded and can provide a full-featured service in the long run (<em>I'm happy to pay for a service of course, but even then</em>)? And what if they get bought later on? How can I assess their security measures and processes? At least I'm sure Google has a team (<em>even many teams</em>) dedicated to securing and backing up its servers and putting them back online in a snap.</p>
<p>So a small provider is not really a satisfying answer.</p>
<p>Of course I could revoke all these accounts and do everything offline; but we're in the 21st century and, like a lot of people out there, I'm not ready to trade the practicality of online tools such as mail, calendar, contacts and file storage, readily available on any device anywhere, for an extreme privacy posture.</p>
<p><strong>TL;DR:</strong> <a href="https://www.stallman.org/google.html">Richard Stallman</a> is right on many points, but his position is generally too absolute and radical, and not compatible with a fair and reasonable use of today's tools (I think).</p>
<h1 id="2apersonalcloud">2. 🌥 A personal cloud?</h1>
<p>So a personal cloud could be a good solution, in this regard. A small provider, <em>that happens to be yourself</em>. Not perfect, but better than other options.</p>
<p>To me, here are some pros and cons of this solution:</p>
<div style="color: green; font-weight: bolder">Pros:</div>
<ul>
<li>The data is on your servers / you own your data <em>for real</em></li>
<li>You control the applications you put on these servers</li>
<li>You can modify them if needed, to suit your needs</li>
<li>You can choose solutions that are open-source and have auditable security</li>
<li>You can find feature-rich solutions that are the <em>almost</em> exact equivalent of Google Drive, Trello, etc.</li>
</ul>
<div style="color: red; font-weight: bolder">Cons:</div>
<ul>
<li>You are the admin of your cloud apps. This means you are the <strong>technical</strong> admin of your cloud apps.</li>
<li>In case of a problem, it's up to you to fix it</li>
<li>It needs a bit of work to set up correctly</li>
<li>It needs a bit of work to maintain and to update</li>
</ul>
<p>So yes, you need to get a bit under the hood to be able to create your own cloud; that's the catch. But the amount of work needed is not overwhelming, and we're going to use standard technologies here, with extensive documentation, active communities, and easy-to-find help on all the different stacks.</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/youcan.gif" alt="Your own public cloud — why not ?"></p>
<p><strong>You've got this</strong>.</p>
<h3 id="whatexactlydoesapersonalcloudcoverintermsoffunctionality">What <em>exactly</em> does a personal cloud cover in terms of functionality?</h3>
<p>It depends. You might need a feed reader, a webmail, a calendar and a file storage solution, or you might just need a todo list and a task manager with a VPN ... that's up to you to decide.</p>
<p>In this article, I will focus on some standard needs (<em>that happen to be my needs 😁</em>): a drive, a calendar and contacts, a bookmarks manager, a password manager, a notes app, a kanban board (<em>à la</em> Trello) and a sync/backup solution.</p>
<p>This should cover a wide range of needs for a personal user, and could very well work for small associations or companies that want to go the open-source / privacy-focused way (and that have a technical contact to manage it, of course).</p>
<h1 id="3technicalimplications">3. 🛠 Technical implications</h1>
<p>Ok, now let's dive into the <em>how</em>.</p>
<h2 id="aselfhostedtotheextreme">a. Self-hosted to the extreme?</h2>
<p>The first technical issue we're facing is obviously that this cloud will need to be hosted <em>somewhere</em> on the Internet.</p>
<p><strong>Two options are available</strong>: either you push the paradigm to the extreme and use a real, physical server that you <em>physically own</em>, and that will sit behind your home broadband connection, or you have to rely on the &quot;cloud&quot;, that is, a third party that will provide you with a virtual or physical server, or even an abstracted infrastructure you can deploy things on.</p>
<p>If you are really worried about the whereabouts of your data, your own server is the straightforward solution; personally, I think it is over-complicated and has various drawbacks:</p>
<ul>
<li>Your home IP might change</li>
<li>Your home ADSL / fibre may have connectivity problems from time to time</li>
<li>You could have a power cut / brownout etc</li>
<li>You need to maintain <em>hardware</em></li>
<li>Your cat could trip over the network cable of the server</li>
</ul>
<p>(<em>On a side note, you could also have your own server, but on someone else's Internet, like what <a href="https://blog.codinghorror.com/the-cloud-is-just-someone-elses-computer/">Jeff Atwood did for Discourse</a> — but I think that is not ideal, since apart from the Internet connection, you still have the drawbacks of having to manage hardware, i.e. replace disks, etc.</em>)</p>
<p>So putting your infrastructure somewhere else, where it will likely be monitored round the clock, seems reasonable to me. You need to check that your provider is somewhat respectful of what you do on your instances, but you can mitigate that quite easily too.</p>
<p>I'm already using <a href="https://www.ovh.com/world/public-cloud/">OVH</a> and I'm pretty happy with their pricing, so I would recommend them, but there are a lot of other cloud infrastructure providers out there: <a href="https://aws.amazon.com/">Amazon</a> of course, <a href="https://azure.microsoft.com/en-us/">Microsoft Azure</a>, Oracle, <a href="https://www.vultr.com/products/cloud-compute/#pricing">Vultr</a>, <a href="https://www.rackspace.com/openstack/public/pricing">Rackspace</a>, etc.</p>
<blockquote>
<p>For the setup that I'm going to describe here, you should expect to pay about 20€ per month at OVH, which is very reasonable.</p>
</blockquote>
<h2 id="bhowisitgoingtoworkexactly">b. How is it going to work exactly?</h2>
<p>The plan is the following:</p>
<p>First, we'll create a relatively powerful instance on your chosen infrastructure provider. This will serve as a host for an automation / containerization tool (see below), that will do the heavy lifting and provide isolation between services.</p>
<p>You could also spin up an instance (aka virtual server) for each service of your cloud, but this is not ideal since most of them would sit doing nothing most of the time (<em>you are the sole user of your cloud</em>); moreover, you would end up with a lot of instances to take care of and to pay for.</p>
<p>A powerful host, containerized, should be a better use of the resources.</p>
<p>Next, we are going to create a container for every service that we wish to use: one for our drive, one for our calendars, etc. We'll see that in detail later on.</p>
<p>Of course, we'll need to expose all these services to the Internet through the host, while limiting the attack surface. Our containerization tool will help us do that.</p>
<p>Finally, if you want a clean, namespaced cloud, you will certainly need to create some DNS entries to match all your container services / ports on the host.</p>
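<p>For instance, with a domain you own (the domain, names and IP below are purely illustrative, not from this setup), the entries could look like this:</p>
<pre><code>; Illustrative records: replace example.com and the IP with your own
cloud.example.com.   IN  A      203.0.113.10
drive.example.com.   IN  CNAME  cloud.example.com.
notes.example.com.   IN  CNAME  cloud.example.com.
board.example.com.   IN  CNAME  cloud.example.com.
</code></pre>
<p>All the subdomains then resolve to the same host, and the published container ports (or a reverse proxy) route the traffic to the right service.</p>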
<h2 id="bdockercompose">c. Docker / Compose</h2>
<p>Alright. Now, with this plan, the obvious containerization tool that comes to (<em>my</em>) mind is <strong>Docker</strong>.</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/Screenshot-2019-09-20-at-13.54.15.png" alt="Your own public cloud — why not ?"></p>
<p>I'm saying obvious here because it's relatively stable, has a lot of documentation, a great community and a lot of Stack Overflow <s>questions</s> answers.</p>
<p>We're clearly not going to create a handful of cloud apps from scratch with our bare hands; there are fantastic automation tools out there that are just right for the job. Docker is my choice, but feel free to use the one you like or are comfortable with.</p>
<p>Docker itself would be sufficient to instantiate all we need, but it would be tedious — we're going to rely heavily on <code>docker-compose</code> to do the job.</p>
<p>In fact, our entire configuration for our cloud apps will likely reside in a <em>single docker-compose configuration file</em>.</p>
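<p>To make the idea concrete, here is a sketch of what such a file can look like (a hypothetical extract with a kanban board and its database; the images, ports and volume names are illustrative, and this is not the repository's actual file):</p>
<pre><code>version: &quot;3&quot;

services:
  wekan-db:
    image: mongo:4
    volumes:
      - wekan-db-data:/data/db

  wekan:
    image: wekanteam/wekan
    depends_on:
      - wekan-db
    environment:
      - MONGO_URL=mongodb://wekan-db:27017/wekan
    ports:
      - &quot;8081:8080&quot;

volumes:
  wekan-db-data:
</code></pre>
<p>Each service becomes a container, and named volumes keep the data out of the containers themselves.</p>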
<h2 id="crisksandmitigations">d. Risks and mitigations</h2>
<p>So what's the catch?</p>
<h4 id="ourinfrastructureproviderhaveaccesstoourinstanceandstorage">Our infrastructure provider has access to our instance and storage</h4>
<p>Yes, it does. I see two risks here:</p>
<ol>
<li>what if the provider gets p*wned?</li>
<li>what if the provider decides to use this data (<em>sell it to third parties, etc.</em>)?</li>
</ol>
<p>Well, point 1 is legit: if someone infiltrates their DC or network and gets hold of your block storage, they will have access to your data. Point 2 is irrelevant: your data is a big blob of disk storage, and is not valuable <em>as-is</em>. You would really have to be someone of interest for a third party to buy your raw data and try to extract something from it. The only data your provider could sell is your account on their platform, with your billing info / personal info, and that's it.</p>
<p>For point 1 though, it's up to you to store your data <em>securely</em>, for instance by choosing apps that store your data encrypted. Of the apps we're going to install here, some do and some don't. But for the important data (passwords and notes, for example), I chose ones that do.</p>
<h4 id="whatifsomehowstealsmydatabase">What if someone steals my database?</h4>
<p>That could happen if you have a misconfigured service or an exploit that has not been patched yet.</p>
<p>Same as point 1 above: if your data is encrypted, who cares?</p>
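<p>To see why, here is a quick hypothetical illustration with openssl (the apps above use their own schemes, e.g. GnuPG for Passbolt; the passphrase and filenames are made up):</p>
<pre><code># Hypothetical sketch: encrypt a file at rest, then recover it
echo 'super secret note' > note.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -in note.txt -out note.enc -pass pass:changeme

# The ciphertext is useless without the passphrase, but trivially
# recovered with it:
openssl enc -d -aes-256-cbc -pbkdf2 -in note.enc -out recovered.txt -pass pass:changeme
cat recovered.txt
</code></pre>
<p>An attacker walking away with <code>note.enc</code> alone learns nothing useful, because the passphrase (or key) never sits next to the data.</p>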
<h4 id="ifmyinfrastructureproviderfails">What if my infrastructure provider fails?</h4>
<p>Well, that could happen too. If it's temporary, you just had some downtime. If the <em>physical</em> server on which your data (apps or storage) was stored crashes, then you've lost it all.</p>
<p>It's not as likely as you might think, since the &quot;cloud&quot; is not just someone else's computer; it's a bit more complicated than that, involving clusters of CPUs and storage and a few layers of software, generally running on cheap hardware that is expected to fail at some point. &quot;Hardware failure is the norm&quot; (it's Hadoop's motto); your data is not really just spinning on one disk somewhere, and is therefore less prone to hardware failures.</p>
<p>But anyway, providers generally have several ways to deal with that:</p>
<ul>
<li>data is replicated on their side: in case of a crash, they will restore your instance and data from hot backups, or from cold backups they keep at another location</li>
<li>they provide tools to make block storage snapshots on your own: either via a web interface or an API, or in an automated fashion</li>
<li>you can also make backups (a little different from snapshots) so you can restore to any previous state, especially for your instance</li>
</ul>
<h4 id="whatifoneofmycontainerfailsstopscrashes">What if one of my containers fails / stops / crashes?</h4>
<p>Well, it's generally not that bad. Maybe a glitch. As we're going to see, your apps are going to be separated from your data, so it's possible to restart / stop / rebuild a container from scratch without losing anything.</p>
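<p>As a sketch of that separation (the paths and names here are made up, not taken from the repository): the container is disposable because the state lives in a directory you can archive independently.</p>
<pre><code># Simulate some application state living in a bind-mounted directory
mkdir -p data/wekan
echo 'some app state' > data/wekan/state.txt

# Archive the data directory; the container itself can be rebuilt from
# its image at any time without touching this
tar -czf wekan-backup.tar.gz data/wekan
tar -tzf wekan-backup.tar.gz
</code></pre>
<p>The archive can later be restored into a fresh bind mount after rebuilding the container from its image.</p>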
<p>If you're away on vacation while it crashes, and you don't have access to a computer to restart it, chances are you don't need your cloud anyway ;)</p>
<h1 id="4apps">4. 🖥 Apps</h1>
<p>Ok, enough talking (<em>sorry about that</em>), let's get to the point: what apps are we going to use?</p>
<blockquote>
<p>This list and my choices are totally personal — if you want more (a lot more) options, check out <a href="https://github.com/Kickball/awesome-selfhosted">Kickball's GitHub repository</a>, which has an extensive list for all kinds of needs.</p>
</blockquote>
<h2 id="adrive">a. Drive</h2>
<p>Ok let's start with the obvious solution you need: storing documents in the cloud.</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/proxy.duckduckgo.com.jpeg" alt="Your own public cloud — why not ?"></p>
<p>I've settled on Cozy (<a href="https://cozy.io">https://cozy.io</a>), a French solution. It's still a bit young and sometimes buggy, but its interface is very intuitive and clear, and it does the job well. They have a mobile app and a desktop app that are updated almost daily.</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/photo-timeline-en.jpg" alt="Your own public cloud — why not ?"></p>
<p>They take security and privacy quite seriously, even on their hosted plan (very reasonably priced, by the way). Their CPO was Tristan Nitot (now heading the Qwant search engine), founder and former president of Mozilla Europe, and they are likely to take over some cloud solutions in the French government and administrations.</p>
<p>Unfortunately, their docs for the self-hosted version are not on par yet, but I've already done that work for you with the Dockerfile, so you won't have to deal with outdated instructions.</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/cozy-drive-home.png" alt="Your own public cloud — why not ?"></p>
<p>The beauty of Cozy is its simple set of features — it's clearly aimed at single users and not at companies (at least in my opinion); you can still share folders quite easily with people, which is very practical.</p>
<p>Extra modules are available (photos, contacts, banks to name a few), and they share the same &quot;simple, functional&quot; philosophy as the drive counterpart.</p>
<h3 id="alternatives">Alternatives</h3>
<ul>
<li><a href="https://github.com/owncloud-docker/server">ownCloud</a>: I've been using it in a work context and it's pretty powerful, with a lot of extensions and modules. The desktop clients are robust, and they have a strong community if you run into issues.</li>
<li><a href="https://www.seafile.com/en/download/">Seafile</a>: quite powerful too; the team is Chinese, so you might want to check that it complies with your needs and principles. Otherwise, it has everything you need.</li>
<li><a href="https://github.com/nextcloud">Nextcloud</a>: I haven't tested this one, but it's the go-to solution that a lot of experts advocate. Might be worth it, even if a bit complicated, I think, for a single-user setup.</li>
</ul>
<h2 id="bbookmarks">b. Bookmarks</h2>
<p>I use bookmarks a lot when researching things on the web, so I needed a solution to store them and access them easily anywhere.</p>
<p>I obviously didn't want to use a vendor solution such as Firefox Sync or iCloud, so that I could use any browser, and of course so I could self-host this too.</p>
<p>xBrowserSync (<a href="https://www.xbrowsersync.org/">https://www.xbrowsersync.org/</a>) seemed like the perfect solution: simple, encrypted, robust.</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/512x512bb.jpg" alt="Your own public cloud — why not ?"></p>
<p>And it has Docker instructions, too.</p>
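<p>A compose service for it could look like the following sketch (the image name, port and MongoDB pairing are assumptions of mine to verify against the xBrowserSync documentation, not taken from it):</p>
<pre><code>services:
  xbrowsersync-db:
    image: mongo:4
    volumes:
      - xbs-db-data:/data/db

  xbrowsersync:
    image: xbrowsersync/api    # assumed image name
    depends_on:
      - xbrowsersync-db
    ports:
      - &quot;8080:8080&quot;      # assumed default port

volumes:
  xbs-db-data:
</code></pre>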
<h3 id="alternatives">Alternatives</h3>
<ul>
<li>If you use Firefox, you could spin up your <a href="https://github.com/mozilla-services/syncserver">own Firefox Sync server</a> that works with the browser</li>
</ul>
<h2 id="cpasswordmanager">c. Password manager</h2>
<p>This is a must-have for your own private cloud.</p>
<p>I've looked into strong and robust solutions, and I found <a href="https://www.passbolt.com/">Passbolt</a> to be a very good contender.</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/604283-original.jpg" alt="Your own public cloud — why not ?"></p>
<p>It uses <a href="https://gnupg.org/">GnuPG</a>, is open-source, has a clear <a href="https://www.passbolt.com/roadmap">roadmap</a>, and its development takes place in the heart of Europe (Luxembourg).</p>
<p>Moreover, they answer quite quickly (on GitHub at least), which is very nice.</p>
<p>They have a plugin for Firefox / Chrome / Edge (with the Chromium engine), but not Safari yet.</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/passbolt.png" alt="Your own public cloud — why not ?"></p>
<h3 id="alternatives">Alternatives</h3>
<ul>
<li><a href="https://bitwarden.com/">Bitwarden</a> looks great, but it threw me off a bit that one needs to request an ID and key (<a href="https://bitwarden.com/host/">https://bitwarden.com/host/</a>) to self-host. It seems quite powerful and has strong documentation.</li>
</ul>
<h2 id="dtasksboards">d. Tasks / Boards</h2>
<p>Everybody loves a good <a href="https://en.m.wikipedia.org/wiki/Kanban_board">Kanban</a> board to sort things out; it deserves a space in your private cloud.</p>
<p>The only worthy Trello opponent I found is <a href="https://wekan.github.io/">Wekan</a>.</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/Screenshot-2019-09-26-at-15.55.32.png" alt="Your own public cloud — why not ?"></p>
<p>It looks very much like Trello, you can import boards (<em>but beware — if the boards are too big, the import will fail silently!</em>), and you can tweak settings for every part of the software.</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/wekan-markdown.png" alt="Your own public cloud — why not ?"></p>
<p>It has a docker image already, which is a plus.</p>
<h3 id="alternatives">Alternatives</h3>
<ul>
<li><a href="https://taskboard.matthewross.me/">TaskBoard</a> — nice, but lacking the &quot;Trello&quot; look and feel. Also, on the technical side, it was not as easy to install properly and maintain</li>
<li><a href="https://restya.com/board">Restyaboard</a> — same story: very interesting, but the interface did not suit me at all.</li>
</ul>
<h2 id="enotes">e. Notes</h2>
<p>This was an easy choice. There is one player out there that is really ahead of the competition: <a href="https://standardnotes.org/">Standard Notes</a>.</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/standard-notes-logo-image.png" alt="Your own public cloud — why not ?"></p>
<p>It's simple and encrypted, has a robust set of features, and you can add extensions that are simple JS apps (a few are already available, such as themes, markdown editors, spreadsheets ...).</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/sn.png" alt="Your own public cloud — why not ?"></p>
<p>The apps themselves are beautifully designed and available on macOS, Android, iOS ... and there is a web app too, which you can use even if you self-host your instance.</p>
<p>I use it daily and it's a pleasure to take notes with it.</p>
<h3 id="alternatives">Alternatives</h3>
<blockquote>
<p>To be really honest, there are a lot, but none of them come close to SN. Check the 'Others' section below for a list (not curated by me)</p>
</blockquote>
<h2 id="fcalendarcontacts">f. Calendar / Contacts</h2>
<p>The <em>de facto</em> standard for calendar and contacts exchange on the Internet is CalDAV / CardDAV, but it's a complicated protocol / standard.</p>
<p>There are a few options out there, and I decided to go with one built on a stack I'm familiar with (PHP and MySQL), so I could easily tweak the code if needed.</p>
<p>So here comes <a href="http://sabre.io/baikal/">Baikal</a>.</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/baikal.png" alt="Your own public cloud — why not ?"></p>
<p>Baikal has a dockerized installation, is pretty easy to install and configure, and has multi-user support. It works out of the box with all my macOS apps (Calendar, Contacts) and also with all my Android apps (see below).</p>
<p>There is a web dashboard that gives you a lot of information at a glance:</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/Baikal-Dashboard.jpg" alt="Your own public cloud — why not ?"></p>
<p>The only problem I'm still trying to mitigate is that answers to invitations do not update the invitee's RSVP status in my calendar: for instance, if I create an event and invite a few people on Gmail addresses, and they respond to the invitation, I will receive a mail, but my calendar event will not be updated.</p>
<p>This may be a configuration problem, but I think it's a deeper issue; I'll try to investigate later on to see if it can be fixed, but I have to up my knowledge of the CalDAV standard first...</p>
<h3 id="ohyoureonandroid">Oh, you're on Android?</h3>
<p>Unfortunately, getting a CalDAV / CardDAV account to work on a vanilla Android phone is tricky. I would recommend using the very nice <a href="https://www.davx5.com/">DavX</a> application. It's open-source too (<a href="https://gitlab.com/bitfireAT/davx5-ose">https://gitlab.com/bitfireAT/davx5-ose</a>), but at 4€, it's worth just paying for it.</p>
<h3 id="alternatives">Alternatives</h3>
<ul>
<li><a href="https://sogo.nu/">SOGo</a> — more complete and extensive, but I needed something simpler since I plan to use it as a single user</li>
<li><a href="https://radicale.org/">Radicale</a> — a more bare-bones solution, but not updated recently</li>
<li><a href="https://www.calendarserver.org/">Calendar Server</a> (Apple) — seems very robust and well thought out, but I'm wary of the amount of effort Apple has put into the documentation and the open-source community around this</li>
</ul>
<h2 id="grawsyncing">g. Raw syncing</h2>
<p>So we already have a drive solution, with a web interface, so we can access our documents anywhere, anytime, with just an Internet connection.</p>
<p>But not all our documents <em>need</em> to be accessed this way.</p>
<p>You might also need to sync a large collection of photos / files to the cloud as a kind of backup, or to be able to retrieve it on another machine (work / home, for instance).</p>
<p>I chose <a href="https://syncthing.net/">Syncthing</a> for that.</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/syncthing-logo.png" alt="Your own public cloud — why not ?"></p>
<p>Syncthing is purely decentralized, so you can easily add &quot;nodes&quot; to improve your data's resilience. It is easily dockerized.</p>
<p>It has a neat web interface to see how things get replicated across your network of devices, but it's really &quot;fire and forget&quot; software, which is pretty convenient.</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/syncth.jpeg" alt="Your own public cloud — why not ?"></p>
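<p>For reference, a minimal compose service for it might look like this (the ports are Syncthing's documented defaults; the image name and host path are assumptions to check against the Syncthing docs):</p>
<pre><code>services:
  syncthing:
    image: syncthing/syncthing
    ports:
      - &quot;8384:8384&quot;      # web GUI
      - &quot;22000:22000&quot;    # sync protocol
    volumes:
      - ./data/syncthing:/var/syncthing
</code></pre>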
<h3 id="alternatives">Alternatives</h3>
<ul>
<li><strong>Seafile</strong>, <strong>ownCloud</strong> and <strong>Nextcloud</strong> (<em>that we presented above</em>) are solutions that could work for this too</li>
</ul>
<h2 id="hothers">h. Others ?</h2>
<p>Maybe you need another type of web app that I didn't include here — rejoice! There is almost surely an open-source and easily dockerizable solution out there waiting for you.</p>
<p>As a start, Edward has compiled a very comprehensive list on GitHub at <a href="https://github.com/Kickball/awesome-selfhosted">https://github.com/Kickball/awesome-selfhosted</a>, where you can find your dream package to install in your cloud.</p>
<p>Some of them already have a Dockerfile / Docker Hub repository; some don't, but it's generally not a big deal to create one.</p>
<h1 id="5go">5. ▶️ Go!</h1>
<p>Now for the technical part, at last!</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/finally.gif" alt="Your own public cloud — why not ?"></p>
<p>We'll need a few tools to do this.</p>
<p>On your local machine, you'll need <code>docker</code>, <code>docker-machine</code> and <code>docker-compose</code>. At some point, you'll need to clone the repository where I have put all the configuration / scripts, so <code>git</code> is more or less mandatory.</p>
<p>You won't need to install a VM on your local machine, since we'll use Docker to remote-control another Docker instance (<em>that will be in the cloud</em>). So hopefully you don't need to install VirtualBox or HyperKit. Nevertheless, if you want to test your config locally when making changes, I'd recommend having a minimal VM locally.</p>
<h2 id="agetdockerandcomposeupandrunningonyourlocalmachine">a. Get docker and compose up and running on your local machine</h2>
<p>I won't dwell on this too much. There are many tutorials on the web if you are not an expert, and the Docker documentation is very comprehensive.</p>
<ul>
<li>For macOS: <a href="https://docs.docker.com/docker-for-mac/install/">https://docs.docker.com/docker-for-mac/install/</a></li>
<li>For Windows: <a href="https://docs.docker.com/docker-for-windows/">https://docs.docker.com/docker-for-windows/</a></li>
</ul>
<h2 id="bretrievetherepo">b. Retrieve the repo</h2>
<p>Head to <a href="https://github.com/tchapi/own-private-cloud">https://github.com/tchapi/own-private-cloud</a> and clone it locally.</p>
<p>The repository uses submodules; don't forget to init them (from inside the cloned directory).</p>
<pre><code>git clone https://github.com/tchapi/own-private-cloud
cd own-private-cloud
git submodule update --init
</code></pre>
<p>I have tried to organize it properly so it is understandable. The main part is of course the top-level <code>docker-compose.yml</code> file.</p>
<p>In the <code>build</code> folder reside all the needed Dockerfiles.<br>
All the configuration files (that will ultimately be copied to the containers) live in <code>configurations</code>.<br>
And finally, locally-executed scripts are in the <code>scripts</code> folder.</p>
<h2 id="ccreateyourcloudinfrastructure">c. Create your cloud infrastructure</h2>
<p>In this example I will use <a href="https://www.ovh.com/world/public-cloud/">OVH Public Cloud</a> as my infrastructure provider.</p>
<p>For my cloud, I use the smallest possible production instance, a b2-7 (<em>see screenshot below for specs</em>), and it's more than sufficient in terms of CPU and RAM, but you can adapt accordingly.</p>
<blockquote>
<p><strong>Disclaimer</strong>: these instructions / this tutorial only work with OVH Public Cloud. It should be relatively straightforward to adapt them to another cloud provider, since most of the steps are just Docker CLI commands.</p>
</blockquote>
<h4 id="0signupforapubliccloudaccount">0. Sign up for a public cloud account</h4>
<p>Obviously.</p>
<h4 id="1createanewinstance">1. Create a new instance</h4>
<p>This will be your main machine and Docker host.</p>
<p>You might be tempted to use the web interface like below, but <strong>don't</strong>:</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/Screenshot-2019-09-27-at-15.01.37.png" alt="Your own public cloud — why not ?"></p>
<p>Why? Because if you do, you won't be able to manage your Docker host with your local <code>docker-machine</code> utility, since it will have no record of having created it.</p>
<p>You're better off doing this with the command line.</p>
<p>But first, we need to do two things:</p>
<p>First, you need to retrieve your credentials for the underlying OpenStack service.</p>
<p>To do so, go to the <strong>Users</strong> panel, create a user if you haven't done so yet, and then select &quot;Download OpenStack's RC file&quot; from the menu:</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/Screenshot-2019-09-27-at-15.14.37.png" alt="Your own public cloud — why not ?"></p>
<p>Select the datacenter you want to use primarily (<em>it should be the same one your instance will live in</em>) — here, I chose <strong>GRA5</strong> — and check &quot;V3&quot;, then click <strong>Download</strong>:</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/Screenshot-2019-09-27-at-15.14.48.png" alt="Your own public cloud — why not ?"></p>
<p>The resulting <code>openrc.sh</code> file will need to be sourced whenever you want to use Docker to control your machines, so that the correct environment variables are set beforehand.</p>
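<p>Roughly speaking, sourcing it just exports a set of <code>OS_*</code> environment variables into your current shell, something like this (with placeholder values, not real credentials):</p>
<pre><code># Roughly what sourcing openrc.sh does: it exports the OS_* variables
# that the openstack driver reads (values below are placeholders)
export OS_AUTH_URL='https://auth.cloud.ovh.net/v3/'
export OS_PROJECT_ID='0123456789abcdef0123456789abcdef'
export OS_USERNAME='user-aBcDeFgH'
export OS_PASSWORD='xxxxxxxx'
export OS_REGION_NAME='GRA5'

# Quick sanity check that the variables are set in the current shell
env | grep '^OS_' | sort
</code></pre>
<p>That's also why it has to be <em>sourced</em> rather than executed as a script: the variables must end up in the shell from which you'll invoke docker-machine.</p>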
<p>Second, you need to add at least one SSH key to your account so that you can log in to your machine (and so that Docker can).</p>
<p>In the <strong>SSH Keys</strong> menu, add a new key and give it a name (for instance, HOME):</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/Screenshot-2019-09-27-at-15.02.30.png" alt="Your own public cloud — why not ?"></p>
<p>(PS : You may need to create the SSH key in the Horizon web interface too — it's OpenStack's own web GUI, accessible in the menu on the left)</p>
<p>Once you know the model you want to create (like <code>b2-7</code>), you're ready to create the instance via a cli. On your local machine, source <code>openrc.sh</code>:</p>
<pre><code>source openrc.sh
</code></pre>
<p>Then, use <code>docker-machine</code> to create the instance :</p>
<pre><code>docker-machine create -d openstack \
  --openstack-flavor-name=&quot;b2-7&quot; \
  --openstack-region=&quot;GRA5&quot; \
  --openstack-image-name=&quot;Debian 9&quot; \
  --openstack-net-name=&quot;Ext-Net&quot; \
  --openstack-ssh-user=&quot;debian&quot; \
  --openstack-keypair-name=&quot;HOME&quot; \
  --openstack-private-key-file=&quot;path_to/.ssh/id_rsa&quot; \
  default
</code></pre>
<p>Some notes :</p>
<ul>
<li>OVH uses OpenStack as the backend for its cloud, so we use the <code>openstack</code> driver to talk to it</li>
<li>use the same datacenter code as in your RC file</li>
<li>use the same SSH key as the one you uploaded</li>
<li>name the machine <code>default</code> — it's not mandatory, but if you do, you won't have to type the machine name every time you invoke docker-machine</li>
</ul>
<p>The rest should be straightforward.</p>
<p>Wait a few seconds and 💥 ! You have a docker host living on a Debian 9 instance in the cloud.</p>
<p>You can check that everything is correct by going to the <strong>Instances</strong> web interface — your <code>default</code> instance should be there, with your brand new IP (<em>make a note of it !</em>) :</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/Screenshot-2019-09-27-at-14.59.17.png" alt="Your own public cloud — why not ?"></p>
<h4 id="2configureyourinstance">2. Configure your instance</h4>
<p>Now, we need to tweak the host a little bit before we can start working with it.</p>
<p><strong>Entropy</strong></p>
<p>For a cloud instance to be able to generate entropy at a correct rate, we need the <a href="https://issihosts.com/haveged/"><code>haveged</code></a> package — this will be necessary for the password manager for instance :</p>
<pre><code>docker-machine ssh default 'sudo apt update &amp;&amp; sudo apt install -y -f haveged'
</code></pre>
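<p>Once installed, you can verify that the entropy pool is filling up correctly (<em>just a sanity check : anything comfortably above ~1000 is fine</em>) :</p>

```shell
# Sanity check: read the size of the kernel's available entropy pool
docker-machine ssh default 'cat /proc/sys/kernel/random/entropy_avail'
```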
<p><strong>Paths and mounted volumes</strong></p>
<p>If you want to do things correctly, you want to store your data (databases, files, etc) not <strong>directly</strong> on the instance, but rather on an attached volume.</p>
<p>Why? Because that will allow you to :</p>
<ul>
<li>Rebuild your instance without losing any data</li>
<li>Backup your data and your data only when you want</li>
<li>Generally speaking, &quot;separate concerns&quot; — your data and your apps should not live in the same space</li>
</ul>
<p>How to do this ? It's quite easy : you need to create a block storage device (or several, in my case), that you will then mount on your instance.</p>
<p>To create them, simply head to the <strong>Storage</strong> menu, and create two blocks : one named &quot;databases&quot; that will store all database data files (MySQL, Mongo, Couch, ...), and one named &quot;files&quot; that will store all &quot;real&quot; files (the Cozy cloud files or Syncthing files, for instance).</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/Screenshot-2019-09-27-at-14.59.26.png" alt="Your own public cloud — why not ?"></p>
<p>Attach them to your <code>default</code> instance via the web interface too; they will be made available as disks.</p>
<p>Once it's done, you need to initialize these disks and mount them. Behold (<em>using <code>fdisk</code> is beyond the scope of this article — see <a href="https://www.howtogeek.com/106873/how-to-use-fdisk-to-manage-partitions-on-linux/">here</a></em>) :</p>
<blockquote>
<p>In the following lines, I assume that the <code>databases</code> volume is at <code>/dev/sdb</code> and the <code>files</code> volume at <code>/dev/sdc</code>.</p>
</blockquote>
<p><strong>The databases volume :</strong></p>
<pre><code>docker-machine ssh default 'sudo fdisk /dev/sdb # n, p, w'
docker-machine ssh default 'sudo mkfs.ext4 /dev/sdb1'
docker-machine ssh default 'sudo mkdir /mnt/databases &amp;&amp; sudo mount /dev/sdb1 /mnt/databases'
docker-machine ssh default 'sudo mkdir /mnt/databases/mysql /mnt/databases/couch /mnt/databases/mongo'
</code></pre>
<p><strong>The files volume :</strong></p>
<pre><code>docker-machine ssh default 'sudo fdisk /dev/sdc # n, p, w'
docker-machine ssh default 'sudo mkfs.ext4 /dev/sdc1'
docker-machine ssh default 'sudo mkdir /mnt/files &amp;&amp; sudo mount /dev/sdc1 /mnt/files'
docker-machine ssh default 'sudo mkdir /mnt/files/cozy /mnt/files/sync'
</code></pre>
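<p>One thing these commands don't do is make the mounts permanent. A hypothetical way to persist them is via <code>/etc/fstab</code> (the UUIDs below are placeholders : look them up with <code>blkid</code> first, and prefer UUIDs over <code>/dev/sdX</code> names, which can change between reboots) :</p>

```shell
# Hypothetical: persist the two mounts across reboots via /etc/fstab.
# Replace the UUID placeholders with the real values printed by blkid.
docker-machine ssh default 'sudo blkid /dev/sdb1 /dev/sdc1'
docker-machine ssh default "echo 'UUID=<databases-uuid> /mnt/databases ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab"
docker-machine ssh default "echo 'UUID=<files-uuid> /mnt/files ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab"
```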
<h4 id="3testthatdockerworkswellonthehost">3. Test that Docker works well on the host</h4>
<p>Now that your machine is up to date, let's try to see if Docker works correctly on the host.</p>
<p>First, we need to tell your <em>local</em> Docker client what to control. For that, docker-machine gives us the correct environment variables that we need to set :</p>
<pre><code>docker-machine env default
</code></pre>
<p>To eval those vars automatically, just do :</p>
<pre><code>eval $(docker-machine env default)
</code></pre>
<p>Once it's done, your local docker utility should directly control your remote Docker host. Try :</p>
<pre><code>docker info
</code></pre>
<p>... to see if it works ! This should return information from your instance (the operating system should be Debian 9, etc).</p>
<p>If it's all good, you're set up to create containers on your host, from the comfort of your local terminal 🎉.</p>
<p>You can now use all Docker commands to manage your host and containers.</p>
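<p>A few other docker-machine commands may come in handy at this point (assuming the machine is named <code>default</code>) :</p>

```shell
docker-machine active          # which machine the local client points at
docker-machine ip default      # the public IP of the instance
docker-machine status default  # Running, Stopped, ...
docker run --rm hello-world    # end-to-end sanity check on the remote host
```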
<h2 id="daddorremoveservices">d. Add or remove services</h2>
<p>Now, before you deploy, you might not need all of these services I have described.</p>
<p>Here is the dependencies graph so you can change the docker-compose file easily — each block is a container:</p>
<p><img src="https://blog.tchap.me/content/images/2019/09/depgraph.png" alt="Your own public cloud — why not ?"></p>
<p>Note that if you remove a service (<em>let's say for instance, Calendar</em>), you must remove its network from the networks list, and from the reverse-proxy networks too.</p>
<p>The certbot container is standalone, <strong>but</strong> uses a shared volume with the reverse-proxy container, so that the certificates are available for the two of them.</p>
<p>The only container that is exposed to the web is the reverse-proxy (through ports 80 and 443 of the host). This container depends on all the others simply because it needs to know the backend for every virtual host.</p>
<p>For instance, in the nginx config of this container (just an excerpt) :</p>
<pre><code>upstream docker-passbolt {
    server passbolt;
}

...

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name passbolt.mydomain.com;

    ssl_certificate /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;

    location / {
        proxy_pass  http://docker-passbolt;
        proxy_redirect     off;
        proxy_set_header   Host $host;
    }
}
</code></pre>
<p>If the <code>passbolt</code> container is not up when nginx starts, it will refuse to start because it can't resolve the upstream.</p>
<p>I could have done one file per virtual host, or a different setup that would allow the reverse-proxy container to <em>not</em> depend on the other containers, but I guess modifying an nginx configuration file is relatively easy.</p>
<h2 id="etweaktheconfiguration">e. Tweak the configuration</h2>
<p>You will notice a <code>.env.dist</code> file in the root of the repository. You must duplicate this file to a <code>.env</code> file before building or creating containers.</p>
<p>This <code>.env</code> file is very important : it will allow you to tweak the configuration for your entire cloud, namely :</p>
<ul>
<li>the reverse DNS for each service</li>
<li>the password for each database, service, web interface</li>
<li>your mail configuration</li>
<li>some other container-specific configuration</li>
</ul>
<p>This file is a bit cumbersome to fill in, because you must duplicate information in different places : different containers do not use the same env vars.</p>
<p>This would be easy if the file was a regular env file (in a shell way of speaking), but it's not, and Docker doesn't like variable replacement in this file, so 🤷.</p>
<p><strong>Beware of shell expansions</strong> too, if you're using special characters in the passwords. I've put an example in the dist file.</p>
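<p>As a hypothetical illustration of that warning (the variable names are made up) : if a password contains characters like <code>$</code> or <code>!</code>, quote it so that a shell reading the file doesn't expand them :</p>

```shell
# Hypothetical .env excerpt: single quotes keep "$" and "!" literal
# if the file ever gets sourced by a shell
DB_PASSWORD='pa$$word!with-special-chars'
SMTP_PASSWORD='another$literal'
```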
<h2 id="fbuildthecustomimages">f. Build the custom images</h2>
<p>The docker-compose file is based upon official images and custom ones that I modified. Before launching, and to check that your configuration tweaks did not break anything, it's a good idea to build them.</p>
<p>Beforehand though, you need to build the configuration files. Configuration files are copied onto the various containers and I created them so that the environment variables are written directly in the configuration, to make the whole repository agnostic of my <em>own</em> implementation.</p>
<p>I use a very simple templating system where configuration files include bash-style env vars, like so (see the <code>$NOTES_DOMAIN</code> var):</p>
<pre><code>{
  &quot;url&quot;: &quot;https://$NOTES_DOMAIN/extensions/secure-spreadsheets/dist/index.html&quot;,
  &quot;download_url&quot;: &quot;https://github.com/sn-extensions/secure-spreadsheets/archive/1.3.2.zip&quot;,
  &quot;latest_url&quot;: &quot;https://$NOTES_DOMAIN/extensions/secure-spreadsheets.json&quot;
}
</code></pre>
<p>These vars are simply replaced when playing the script :</p>
<pre><code>./scripts/build-configuration-files.sh
</code></pre>
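<p>Under the hood, that kind of substitution can be sketched with plain <code>sed</code> (a hypothetical illustration, not necessarily what the script does internally) :</p>

```shell
# Sketch: replace $NOTES_DOMAIN placeholders in a template with the
# value from the environment, writing the rendered file next to it
NOTES_DOMAIN="notes.mydomain.com"
printf '{"latest_url": "https://$NOTES_DOMAIN/extensions/secure-spreadsheets.json"}\n' > template.json
sed "s|\$NOTES_DOMAIN|$NOTES_DOMAIN|g" template.json > rendered.json
cat rendered.json
```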
<blockquote>
<p>The only drawback of this system is that you need to escape <code>$</code>'s with <code>\$</code></p>
</blockquote>
<p>Once it's done, you can build the different custom containers :</p>
<pre><code>docker-compose build
</code></pre>
<h4 id="configuringthingsbeforelaunch">Configuring things before launch</h4>
<p>Some images need to be prepared before they can be started, or else they will fail and quit.</p>
<p>This is especially true for the reverse-proxy that needs SSL certificates (we only serve https, you should too).</p>
<p>To set dummy SSL certificates, then launch nginx and retrieve the real certificates from Let's encrypt, you should run this script :</p>
<pre><code>./scripts/certbot/init-letsencrypt.sh
</code></pre>
<p>It relies heavily on the configuration of the domains that is in the <code>.env</code> file, so make sure that everything is ok in there before running it.</p>
<blockquote>
<p>Of course, before that, you must have created the correct DNS entries pointing to your instance (the IP you noted earlier)</p>
</blockquote>
<p>As for the Cozy container, you must create what they call an <em>instance</em> before being able to connect to your cloud. I guess that this is related to multi-users installations. Just run the below script once :</p>
<pre><code>./scripts/cozy/init-cozycloud.sh
</code></pre>
<blockquote>
<p>NB : you can change the quota (10GB) in the script directly if you need more, but be careful not to set a quota above your real disk capacity (it's part of the <code>files</code> block storage we created earlier via the OVH web interface).</p>
</blockquote>
<h4 id="anoteaboutthesecustomimagesandscripts">A note about these custom images and scripts</h4>
<p>So, as I was saying, I mainly use custom images derived from the official ones, with my own tweaks added. Here are some details for those of interest.</p>
<h5 id="standardnotes">Standard notes</h5>
<p>I've included a few extensions I find useful in the container.</p>
<p>To add an extension to your desktop Standard Notes app, go to &quot;Extensions&quot; in the bottom bar, click &quot;Import extension&quot; and paste the link to the JSON description file of the extension you want :</p>
<p>On your custom domain : <a href="https://notes.mydomain.com/extensions/">https://notes.mydomain.com/extensions/</a></p>
<ul>
<li>Advanced markdown editor : <code>advanced-markdown-editor.json</code></li>
<li>Plus editor : <code>plus-editor.json</code></li>
<li>Secure spreadsheets : <code>secure-spreadsheets.json</code></li>
<li>Simple task editor : <code>simple-task-editor.json</code></li>
<li>Autocomplete tags : <code>autocomplete-tags.json</code></li>
</ul>
<p><img src="https://blog.tchap.me/content/images/2019/10/extensions.png" alt="Your own public cloud — why not ?"></p>
<blockquote>
<p>NB : Side note on this; Standard notes says that the extensions are &quot;public source&quot; but not &quot;open source&quot;, which I don't quite understand fully, to be honest (<em>see <a href="https://standardnotes.org/help/48/can-i-use-extensions-with-a-self-hosted-server">here</a>, where there is no clear answer to &quot;Can I self host <strong>existing</strong> extensions?&quot;</em>). The source code is published on Github, without license — In my repository I only link to it via submodule, which I think is pretty much in line with <a href="https://help.github.com/en/articles/github-terms-of-service">Github's Terms</a>, but if you're from Standard Notes and want me to remove this, just contact me and I will abide.</p>
</blockquote>
<h5 id="nginxreverseproxy">Nginx — reverse proxy</h5>
<p>I've created a quite reasonable configuration file by following the best practices for SSL parameters and HTTPS.</p>
<p>All services are only served in HTTPS and if possible, via HTTP/2.</p>
<p>All insecure requests are redirected to their secure counterpart, except for the certificate challenges.</p>
<p>With all these settings, all the front websites should achieve at least an <strong>A</strong> rating on <a href="https://www.ssllabs.com/ssltest/">SSL Labs</a></p>
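<p>For reference, the redirect described above can be sketched like this in nginx (a hypothetical excerpt, with placeholder paths; the actual configuration is in the repository) :</p>

```nginx
# Redirect everything to https, except the ACME http-01 challenges
server {
    listen 80;
    listen [::]:80;
    server_name _;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
```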
<h5 id="baikalcalendarsandcontacts">Baikal — calendars and contacts</h5>
<p>The official Dockerfile was not really up to date and included too many things (postfix, for instance), so I revamped it quite a lot.</p>
<p>Following <a href="https://github.com/binfalse">binfalse</a>'s <a href="https://binfalse.de/2016/11/25/mail-support-for-docker-s-php-fpm/">article</a>, I also used <code>ssmtp</code> <em>in lieu</em> of sendmail, to be able to use an external SMTP server to route the emails.</p>
<p>Check out the <a href="https://github.com/tchapi/own-private-cloud/blob/master/build/Dockerfile-baikal">Dockerfile</a> to see all the steps in detail.</p>
<h5 id="xbrowsersync">xBrowser sync</h5>
<p>The configuration I created allows only one sync (<em>this is a personal cloud</em>), and by default, does not allow new syncs.</p>
<p>That means that you need to create the container with a different configuration allowing new syncs, create your sync id (<em>via the browser extension</em>), and then if you wish, recreate the container with the initial configuration (not allowing syncs). This is just a security measure.</p>
<h5 id="passbolt">Passbolt</h5>
<p>By default on the new versions, you are disconnected every 24 minutes because of the way the sessions are managed by PHP.</p>
<p>So I extended the garbage collector session lifetime to allow to have 3 days without having to reconnect.</p>
<p>(see <a href="https://github.com/passbolt/passbolt_docker/issues/129">this Github issue</a> for more context)</p>
<h1 id="6launch">6. 🚀 Launch !</h1>
<p>It's time.</p>
<pre><code>docker-compose up -d
</code></pre>
<p>NB : <code>-d</code> runs the containers in detached mode.</p>
<p>After having done that, you still need to :</p>
<p><strong>Init the Baikal instance</strong> if needed (<em>if the tables do not already exist in the database</em>)</p>
<pre><code>./scripts/baikal/init-mysql-tables.sh
</code></pre>
<p><strong>Create the Passbolt admin user</strong></p>
<pre><code>./scripts/passbolt/init-admin-user.sh
</code></pre>
<p>It will give you a URL that you need to visit to set up your account, your key, etc ...</p>
<p><strong>You now should have secured working services behind your custom subdomains or domains. Congrats 🎉 !</strong></p>
<h1 id="7finalwords">7. 💭 Final words</h1>
<h2 id="anoteone2eencryption">A note on E2E encryption</h2>
<p>Back to privacy. As you have seen, not all solutions I use here offer end-to-end encryption.</p>
<p><strong>Only the notes, passwords and bookmarks are fully E2E encrypted.</strong></p>
<p>The other solutions often provide transport encryption (like Syncthing) and of course we have https, but if someone gets hold of the table containing your Wekan boards and cards, they will be able to read it for sure.</p>
<p>So it's up to you to decide on which services you want to create data. For now, I'm pretty happy with the non-encrypted drive, sync and kanban boards because my main focus is privacy, and not a total security from a hacker that would explicitly decide to extract my data and files.</p>
<p>If you're concerned about this, <a href="https://cryptpad.fr/">Cryptpad</a> apparently has a neat solution for an encrypted Drive + cloud apps, with a dockerfile on their Github. I haven't tested it though.</p>
<p>As for the calendar / contacts, I haven't found an open-source encrypted calendar solution <em>yet</em>. Some providers offer it as a service (for instance, <a href="https://tutanota.com/calendar">tutanota</a>), but that seems to be quite a <em>niche</em>.</p>
<p>I guess the best solution here would be to dive into the <a href="https://github.com/sabre-io/Baikal">Baikal source code</a>, fork it and add encryption on persistence. Feasible, but could be quite a challenge (see <a href="https://github.com/sabre-io/Baikal/issues/250">this</a>).</p>
<blockquote>
<p>NB : The CalDav / CardDav protocols are plain-text in essence, hence the importance of https.</p>
</blockquote>
<p><em>If you find good encrypted alternatives, do not hesitate to put them in the comments below this article.</em></p>
<h2 id="backup">Backup</h2>
<p>It's more probable that your storage will fail before someone tries to steal your data. So backup is an important part of your private personal cloud.</p>
<p>I touched on the subject before, but to recap, it's a good idea to backup your block storage from time to time.</p>
<p>On OVH (<em>my provider of choice</em>), it's relatively easy with the web interface — it's called <strong>Volume snapshot</strong> : just select your storage, click &quot;create a snapshot&quot; and you're done.</p>
<p><img src="https://blog.tchap.me/content/images/2019/10/Screenshot-2019-10-01-at-17.42.07.png" alt="Your own public cloud — why not ?"></p>
<p><img src="https://blog.tchap.me/content/images/2019/10/Screenshot-2019-10-01-at-17.41.59.png" alt="Your own public cloud — why not ?"></p>
<blockquote>
<p>NB : Of course, you can also do this automatically with the OpenStack API / the Horizon interface, but this is out of the scope of this article.</p>
</blockquote>
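<p>For reference, a one-off snapshot via the OpenStack CLI would look roughly like this (assuming the <code>openstack</code> client is installed and that the volume is named &quot;databases&quot;) :</p>

```shell
# Hypothetical: snapshot the "databases" volume from the command line
source openrc.sh
openstack volume snapshot create --volume databases "databases-$(date +%F)"
openstack volume snapshot list
```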
<p>As for the instance, I think it's less important, since your instance is basically a configuration file : if it crashes, you can just spin up a new one with a single command line, without losing any data (only uptime).</p>
<h2 id="donateorcontribute">💰 Donate or contribute</h2>
<p>Ok, so now you have your own private cloud and all these apps working well. Time to thank all the people that made this possible!</p>
<p>I would encourage you to donate to all these projects, for instance the amount that you would have paid for a month or two of service (for those that provide a cloud-based solution), or a small reasonable fee (<em>like 10€ ?</em>) for the others.</p>
<p>We've worked with free software here, free as in speech. They're also free (<em>as in beer</em>) to download and self-host, but if we want the maintainers of these tools to continue to make releases, patch bugs and add features, we need to acknowledge that these people need to earn a living, too. A little gesture is always welcome.</p>
<p>You can also get involved with the development of these tools if you have the ability and time to do so : by writing clear and precise bug reports, by submitting pull requests, or helping in any other way possible. That's the beauty of open-source.</p>
<hr>
<h2 id="extraliterature">Extra literature</h2>
<p>Some articles I found while researching my self-hosting mania with Docker:</p>
<ul>
<li>
<p>Docker best practices : <a href="https://blog.docker.com/2019/07/intro-guide-to-dockerfile-best-practices/">https://blog.docker.com/2019/07/intro-guide-to-dockerfile-best-practices/</a></p>
</li>
<li>
<p>Nginx Reverse proxy : <a href="https://www.thepolyglotdeveloper.com/2017/03/nginx-reverse-proxy-containerized-docker-applications/">https://www.thepolyglotdeveloper.com/2017/03/nginx-reverse-proxy-containerized-docker-applications/</a></p>
</li>
<li>
<p>Lets Encrypt with Docker : <a href="https://devsidestory.com/lets-encrypt-with-docker/">https://devsidestory.com/lets-encrypt-with-docker/</a></p>
</li>
<li>
<p>Lets Encrypt with Docker (alt) : <a href="https://medium.com/@pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71">https://medium.com/@pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71</a></p>
</li>
<li>
<p>Shell command  / Entrypoint in Docker : <a href="https://stackoverflow.com/questions/41512237/how-to-execute-a-shell-command-before-the-entrypoint-via-the-dockerfile">https://stackoverflow.com/questions/41512237/how-to-execute-a-shell-command-before-the-entrypoint-via-the-dockerfile</a></p>
</li>
<li>
<p>Ignore files for Cozy drive : <a href="https://github.com/cozy-labs/cozy-desktop/blob/master/doc/usage/ignore_files.md">https://github.com/cozy-labs/cozy-desktop/blob/master/doc/usage/ignore_files.md</a></p>
</li>
<li>
<p>About privacy and DNS, a wonderful talk : <a href="https://www.youtube.com/watch?v=pjin3nv8jAo">https://www.youtube.com/watch?v=pjin3nv8jAo</a></p>
</li>
</ul>
<blockquote>
<p>If you want to go further and also self-host your emails, Gilles Chehade (poolp) made a very nice article about this very topic: <a href="https://poolp.org/posts/2019-09-14/setting-up-a-mail-server-with-opensmtpd-dovecot-and-rspamd/">https://poolp.org/posts/2019-09-14/setting-up-a-mail-server-with-opensmtpd-dovecot-and-rspamd/</a></p>
</blockquote>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[[Recycling Thursday] The μLCD-32PT (gen1)]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>The μLCD 32PT is a nice TFT display from the Australians at <a href="https://www.4dsystems.com.au">4DSystems</a>. I have found two in an old box - perfect for Recycling Thursday !</p>
<p><img src="https://blog.tchap.me/content/images/2019/04/ulcd.png" alt="ulcd"></p>
<p>The legacy documentation is <a href="http://old.4dsystems.com.au/prod.php?id=210">here</a> (GFX flavour).</p>
<p>So it's time to spin up a Windows VM and try to make something useful out of</p>]]></description><link>https://blog.tchap.me/throwback-thursday-ulcd-32pt-gen1/</link><guid isPermaLink="false">5ea824fb25e7cb317d1336ce</guid><category><![CDATA[game]]></category><category><![CDATA[code]]></category><category><![CDATA[programming]]></category><category><![CDATA[embedded]]></category><dc:creator><![CDATA[tchap@tchap.me]]></dc:creator><pubDate>Thu, 06 Jun 2019 13:23:00 GMT</pubDate><media:content url="https://blog.tchap.me/content/images/2019/06/flappicaso.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.tchap.me/content/images/2019/06/flappicaso.png" alt="[Recycling Thursday] The μLCD-32PT (gen1)"><p>The μLCD 32PT is a nice TFT display from the Australians at <a href="https://www.4dsystems.com.au">4DSystems</a>. I have found two in an old box - perfect for Recycling Thursday !</p>
<p><img src="https://blog.tchap.me/content/images/2019/04/ulcd.png" alt="[Recycling Thursday] The μLCD-32PT (gen1)"></p>
<p>The legacy documentation is <a href="http://old.4dsystems.com.au/prod.php?id=210">here</a> (GFX flavour).</p>
<p>So it's time to spin up a Windows VM and try to make something useful out of this !</p>
<p>The <strong>μLCD 32PT</strong> has two modes: <strong>SGC</strong> and <strong>GFX</strong>. From 4D directly : <em>&quot;The architecture of the base PICASO chip is such that it can be reconfigured  to operate in 2 distinctively different ways. To configure the device, a PmmC (Personality Module Micro-Code) is downloaded via its serial port. There are 2 types of PmmC available for PICASO.&quot;</em></p>
<h4 id="sgcslavegraphicscontroller">SGC (Slave Graphics Controller)</h4>
<p>In this mode, the module is 'ready to go' by simply connecting it to the serial port of your favourite micro-controller, and sending serial commands to it.</p>
<h4 id="gfxstandalonegraphicscontroller">GFX (Stand-Alone Graphics Controller)</h4>
<p>In this mode, the module is then like a microprocessor which you program, using the 4DGL language (very similar to C), to control the internal graphics and external interfaces. It does not need an external microprocessor, just power.</p>
<p>We know SGC works (done that before), so let's try GFX (standalone) and see if we can do something nice and fun with that.</p>
<p>Let's create a flappy-bird-like game !</p>
<h3 id="settingthecorrectmode">Setting the correct mode</h3>
<p>Since the SGC mode is the <em>default mode</em>, we need to reprogram the microcode to use the GFX mode.</p>
<p>To do this, we will use the <a href="http://old.4dsystems.com.au/prod.php?id=46">PmmC Loader</a> (Windows application) and obviously, we'll need a programming cable like <a href="http://old.4dsystems.com.au/prod.php?id=138">this one from 4DSystems</a>.</p>
<p>The GFX microcode can be downloaded directly from the legacy product page of the <a href="http://old.4dsystems.com.au/prod.php?id=210">uLCD (GFX mode)</a>, on the <strong>Downloads</strong> tab (you're looking for the <code>uLCD-32PT-I-GFX-R32.pmmc</code> file).</p>
<p>Just in case, here are the direct links to the two microcode flavours :</p>
<ul>
<li>SGC : <a href="http://old.4dsystems.com.au/downloads/Serial-Display-Modules/uLCD-32PT(SGC)/PmmC/uLCD-32PT-I-SGC-R22.PmmC">uLCD-32PT-I-SGC-R22.PmmC</a></li>
<li>GFX (the one we want) : <a href="http://old.4dsystems.com.au/downloads/4DGL-Display-Modules/uLCD-32-PT(GFX)/PmmC/uLCD-32PT-I-GFX-R32.PmmC">uLCD-32PT-I-GFX-R32.PmmC</a></li>
</ul>
<p>Once you have all that installed, launch the PmmC Loader and choose the correct file :</p>
<p><img src="https://blog.tchap.me/content/images/2019/04/Screenshot-2019-04-12-at-17.33.44.png" alt="[Recycling Thursday] The μLCD-32PT (gen1)"></p>
<p>Click Load, and it's done in under a minute.</p>
<p><img src="https://blog.tchap.me/content/images/2019/04/Screenshot-2019-04-12-at-17.33.53.png" alt="[Recycling Thursday] The μLCD-32PT (gen1)"></p>
<h3 id="compilinganexample">Compiling an example</h3>
<p>We'll first assess that the module is correctly working by compiling and writing an example.</p>
<p>You have to install the legacy <a href="http://old.4dsystems.com.au/prod.php?id=111">4D Workshop IDE</a></p>
<p><img src="https://blog.tchap.me/content/images/2019/04/Screenshot-2019-04-12-at-17.25.19.png" alt="[Recycling Thursday] The μLCD-32PT (gen1)"></p>
<p>This should work as expected (I tested on a Windows 7 virtual machine).</p>
<p>When you run the Workshop software, you have to select the correct platform : uLCD-32PT_GFX2 in the &quot;Platform&quot; select, and choose the correct COM port (here, COM3).</p>
<p><img src="https://blog.tchap.me/content/images/2019/05/Screenshot-2019-04-15-at-11.04.17.png" alt="[Recycling Thursday] The μLCD-32PT (gen1)"></p>
<p>You can now compile and load any sample to see if it is working correctly.</p>
<h3 id="compilinga4dvisiexample">Compiling a 4DVisi example</h3>
<p>The IDE has a visual editor that allows you to create richer interfaces using the 4DVisi format. This is quite powerful for creating graphical programs, but it requires a bit more work to compile and run.</p>
<p>You need a micro SD card to copy support files onto; these are then used by the program when it's running on the screen. The micro SD card must be formatted in <strong>FAT16</strong> (so, a maximum of 4GB).</p>
<p>There is a tool provided by the 4D IDE (called <strong>RMPET</strong>) that can achieve that on Windows, and if you're on a Mac, I suggest you use the <code>newfs_msdos</code> utility.</p>
<p>When it's done, you need to copy the compiled assets (<code>.cgi</code> and <code>.dat</code> files) to the SD card. When you build a solution that uses the 4DVisi format, you will be prompted to do so at the end of the compilation. Just select the drive your SD card is attached to, and click OK.</p>
<p>If you are on a VM like me, it gets a bit more complicated, since it's quite impractical to mount an SD card from the host system. In this case, you have to locate the two files and manually copy them to the SD card.</p>
<p>Once it's done, insert the SD card into your 4D screen, then recompile and click on <em>'No, thanks'</em> when it asks to copy the file. <strong>Tada</strong> !</p>
<p>Your code should now work and the assets should be correctly used.</p>
<h2 id="ontosomethinginteresting">On to something interesting</h2>
<p>Now that everything works as expected, time to create a game !</p>
<p>We are not going to use the 4DVisi mode because it's not really needed for a simple program like a flappy bird game, I think, and it's a bit more hassle than needed for the assets. So we'll recreate the assets with code and simplify them heavily instead.</p>
<p><strong>(TL,DR; The code is <a href="https://github.com/tchapi/flappicaso">here</a> and open source, as usual)</strong></p>
<h4 id="the4dgl">The 4DGL</h4>
<p>The 4D Graphics Language is a bit like C, but not quite. It's not really practical, to be honest, but it's usable.</p>
<p>There is a reference manual <a href="https://www.4dsystems.com.au/productpages/4DGL/downloads/4DGL_progmanual_R_6_0.pdf">here</a> so you can dive into the language and its subtleties (<code>if</code> → <code>endif</code>, <code>switch</code> → <code>endswitch</code> and <code>for</code> → <code>next</code> .. Wait, what ?).</p>
<h5 id="picasointernalfunctions">PICASO Internal functions</h5>
<p>Of course we'll use the internal functions of the PICASO chip to access the display, the storage, etc ...</p>
<p>The API is documented <a href="https://www.4dsystems.com.au/productpages/PICASO/downloads/PICASO_internalfunctions_R_7_0.pdf">here</a>.</p>
<h5 id="thebasics">The basics</h5>
<p>We'll take a very <em>naive</em> approach for this game with a very simple loop :</p>
<ul>
<li>handle touch events</li>
<li>detect eventual collisions</li>
<li>draw the background</li>
<li>draw the obstacles (that we will move along the X axis)</li>
<li>draw the bird in the correct position</li>
</ul>
<p>Beforehand, we'll display a very simple splash screen, and when the user loses, we'll just show the score.</p>
<h5 id="theassets">The assets</h5>
<p>I won't use any assets (ie bitmaps) but rather redraw everything in a very simplistic way.</p>
<p>I have copied the bird pixel art and translated that to GFX functions (in <code>calcBirdPosition()</code>). This is a really simple approach. I've simplified the animation and created only two frames : one with the wing up, one down.</p>
<p>As for the &quot;Mario tubes&quot;, they are simple rectangles, very naively lit from the left of the screen. One light and one dark column of pixels do the trick.</p>
<blockquote>
<p>The display has a 16-bit depth so a tool like this will come in handy to convert 24 bit colors to 16 bit colors :<br>
<a href="http://www.barth-dev.de/online/rgb565-color-picker/">http://www.barth-dev.de/online/rgb565-color-picker/</a></p>
</blockquote>
<h5 id="thegameplay">The gameplay</h5>
<p>It's basic. The bird falls at a (roughly) constant acceleration, and a touch makes it &quot;jump&quot; a few pixels above its current position.</p>
<p>The obstacles (3 maximum at the same time) appear from the right of the screen and scroll to the left.</p>
<p>The collision detection is very simple: I just verify that the bird's coordinates do not overlap the tubes, or the bottom of the screen.</p>
<blockquote>
<p>By the way, the Y+ axis points downward when the FTDI plug is at the top of the display.</p>
</blockquote>
<h5 id="savingthescoreonthesdcard">Saving the score on the µSD card</h5>
<p>There is no EEPROM on the µLCD-32PT as far as I can tell, so to keep the score across power cycles, we need to write it down on the SD card.</p>
<p>We'll keep it simple and save the score in a TXT file at the root of the disk (<code>score.txt</code>).</p>
<p>On the device I have here, the SD card interface is somewhat unreliable: sometimes the file system mounts fine, sometimes it refuses to. I suspect this is because the board sat unprotected in a cupboard above my desk for quite a long time, and the SD cards themselves are quite old too.</p>
<hr>
<p><strong>So that's it !</strong></p>
<p>Grab the code on my GitHub <a href="https://github.com/tchapi/flappicaso">here</a> and enjoy!</p>
<p><img src="https://blog.tchap.me/content/images/2019/06/IMG_20190606_152735.jpg" alt="[Recycling Thursday] The μLCD-32PT (gen1)"></p>
<p><img src="https://blog.tchap.me/content/images/2019/06/IMG_20190606_152653.jpg" alt="[Recycling Thursday] The μLCD-32PT (gen1)"></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[About the Grove O₂ gas sensor]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>For less than 50€, you can get your hands on this tiny and easy to use sensor to monitor O₂ concentration up to 25%.</p>
<p><a href="https://www.seeedstudio.com/Grove-Oxygen-Sensor-ME2-O2-2-p-1541.html">https://www.seeedstudio.com/Grove-Oxygen-Sensor-ME2-O2-2-p-1541.html</a></p>
<p>The problem is, the documentation is really crappy.<br>
After hours of searching and testing to find the actual working example</p>]]></description><link>https://blog.tchap.me/about-the-grove-o2-gas-sensor/</link><guid isPermaLink="false">5ea824fb25e7cb317d1336cf</guid><category><![CDATA[arduino]]></category><category><![CDATA[iot]]></category><category><![CDATA[embedded]]></category><category><![CDATA[programming]]></category><dc:creator><![CDATA[tchap@tchap.me]]></dc:creator><pubDate>Thu, 07 Mar 2019 16:30:09 GMT</pubDate><media:content url="https://blog.tchap.me/content/images/2019/03/grovesensor.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.tchap.me/content/images/2019/03/grovesensor.png" alt="About the Grove O₂ gas sensor"><p>For less than 50€, you can get your hands on this tiny and easy to use sensor to monitor O₂ concentration up to 25%.</p>
<p><a href="https://www.seeedstudio.com/Grove-Oxygen-Sensor-ME2-O2-2-p-1541.html">https://www.seeedstudio.com/Grove-Oxygen-Sensor-ME2-O2-2-p-1541.html</a></p>
<p>The problem is, the documentation is really crappy.<br>
After hours of searching and testing to find actual working example code, I decided to get calibrated values for this sensor myself and interpolate to get the correct formula.</p>
<h3 id="compatibility">Compatibility</h3>
<p>This sensor is compatible with 5V and 3.3V boards.</p>
<h3 id="measurementrange">Measurement range</h3>
<p>This sensor will provide reliable measurements for an O₂ concentration between 0% and 25%.</p>
<p>This sensor is an electrochemical cell, thus its lifespan will be finite. The datasheet says about 2 years but that highly depends on your environment and the concentration you're trying to detect.</p>
<h3 id="preheatingtime">Preheating time</h3>
<p>Some pages will say that you need 48 hours before reading data. <strong>This is not true</strong>. Generally, in normal conditions (say, between 10°C and 30°C), the sensor will give a very reasonable output without any preheating time.</p>
<p>That being said, it is a good thing to let it heat for a few minutes before actually reading data, so the output is more <em>stable</em>. Give it 20 minutes max.</p>
<h3 id="examplecodes">Example codes</h3>
<p>There is a documentation page available at <a href="https://seeeddoc.github.io/Grove-Gas_Sensor-O2/">https://seeeddoc.github.io/Grove-Gas_Sensor-O2/</a></p>
<p>The code is <strong>completely wrong</strong>, and will give you wrong values. Just by looking at the expected values for <em>Vout</em> (186 V? Hmm), you can tell that it's clearly not adapted. The values are in mA, and relate to the current output of the actual sensor cell (the ME2-O2-Ф20 on the board). Somehow the folks at Seeed Studio messed up their example code.</p>
<p>So forget this code.</p>
<p>There is also some documentation on this page: <a href="http://wiki.seeedstudio.com/Grove-Gas_Sensor-O2/">http://wiki.seeedstudio.com/Grove-Gas_Sensor-O2/</a>, where they provide an alternate example code.</p>
<p>Forget it also.</p>
<p>The calculation they make is as follows:</p>
<pre><code>Concentration_O₂ = (Vout * 0.21 / 2) * 100;
</code></pre>
<p>While they do not explain <em>why</em> this would work, it actually gives <em>wrong</em> values most of the time.</p>
<p>(FYI: the 0.21 comes from the on-board amplifier, which has a ratio of 210, but that's it)</p>
<p>A last example I found in a forum is a <a href="https://github.com/SeeedDocument/forum_doc/raw/master/reg/Read_O2_value.zip">zip file</a> (containing C code for Arduino), discussed in the related thread on the RobotShop forums: <a href="https://www.robotshop.com/community/forum/t/grove-o2-gas-sensor/24167/16">https://www.robotshop.com/community/forum/t/grove-o2-gas-sensor/24167/16</a></p>
<p>What this code does is basically assume that you will calibrate it in an open-air environment (20.8% O₂), and that <code>0V</code> = <code>0ppm</code>. This is not really accurate, see below.</p>
<h3 id="linearity">Linearity</h3>
<p>While electrochemical cell sensors are supposed to be strictly linear, there are always some differences in real life. This sensor is no exception; and if you assume its linearity and that <code>0V</code> = <code>0ppm</code>, you might end up with values that are quite far off when they are far from your calibration point (<em>that will likely be ambient air, so 20.8% or so</em>).</p>
<p>If you plan to use it in the full range of its capabilities, you <strong>have</strong> to calibrate it fully. That's what we'll try to do next.</p>
<h3 id="calibration">Calibration</h3>
<p>It's not really a calibration <em>per se</em>: we're going to find data points where we know the dioxygen concentration for sure, plot the whole thing, and hopefully find an accurate-enough linear approximation that we can use in real-world situations.</p>
<p>For this, I used a <strong>MAP Mix Provectus</strong> from <strong>Dansensor</strong> that can output a gas that has a calibrated concentration. With a N₂ and a O₂ bottle, I could expose the sensor to different concentrated gas from 2% to 25% of O₂.</p>
<p>Here are the data points I could get:</p>
<p><img src="https://blog.tchap.me/content/images/2019/03/diagram.png" alt="About the Grove O₂ gas sensor"></p>
<p>As we can see, it's not perfectly linear: there's a little offset (about 0.5%, still significant).</p>
<p>The calculated linear regression gives an R² of 0.9986, which is not bad. The slope is 14.581.</p>
<p>So the formula would be:</p>
<pre><code>// For Vout in volts
Concentration_O₂ = Vout * 14.581 + 0.5483; // in percent
</code></pre>
<p>You could also use the multimap function proposed in one of the examples, with updated values:</p>
<pre><code>float VoutArray[] = { 0, 0.13, 0.19, 0.24, 0.30, 0.35, 0.43, 0.49, 0.57, 0.64, 1.02, 1.31, 1.42, 1.68 };
float O2ConArray[] = { 0.5, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 21, 25 };
unsigned int ConArraySize = 14;

// This code uses MultiMap implementation from http://playground.arduino.cc/Main/MultiMap
float FmultiMap(float val, float * _in, float * _out, uint8_t size)
{
    // Take care the value is within range
    if (val &lt;= _in[0]) return _out[0];
    if (val &gt;= _in[size-1]) return _out[size-1];

    // Search right interval
    uint8_t pos = 1;  // _in[0] already tested
    while(val &gt; _in[pos]) pos++;

    // This will handle all exact &quot;points&quot; in the _in array
    if (val == _in[pos]) return _out[pos];

    // Interpolate in the right segment for the rest
    return (val - _in[pos-1]) * (_out[pos] - _out[pos-1]) / (_in[pos] - _in[pos-1]) + _out[pos-1];
}

// For Vout in volts
Concentration_O₂ = FmultiMap(Vout, VoutArray, O2ConArray, ConArraySize);
</code></pre>
<p>All in all, a complete solution would be along these lines (<em><code>VRef</code> = <code>5V</code> or <code>3.3V</code> depending on your board</em>):</p>
<pre><code>// Read Vout on average
unsigned long sum = 0;
for (int i=0; i&lt;32; i++) {
    sum += analogRead(O2_SENSOR_PIN);
    delay(10); // So we sample on a larger time interval
}

// Measured Vout (&gt;&gt; 5 = quick division by 2^5 = 32)
float Vout = (sum &gt;&gt; 5) * (VRef / 1023.0);
// Either one of those two - your preference :
o2_concentration_in_percent = Vout * 14.581 + 0.5483;
o2_concentration_in_percent = FmultiMap(Vout, VoutArray, O2ConArray, ConArraySize);
</code></pre>
<p><strong>And 🎉! It should give you something quite accurate (for a sensor this inexpensive).</strong></p>
<blockquote>
<h3 id="disclaimer">Disclaimer</h3>
<p>On some forums, some users claim that the on-board circuitry for this gas sensor has changed over time, and that different versions exist under the same denomination. That may well be true, in which case the calibration I've done above could be off with your hardware, so be advised ;)</p>
</blockquote>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[A service file and a tip for ReSTUNd]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I've recently worked on a quite important WebRTC project including live streams, recording, multiroom chat and broadcasting (more on that maybe in following articles), and in the process, I needed a proper STUN server (<em>Session Traversal Utilities for NAT</em>) to be able to ... well, traverse a NAT.</p>
<p>A few options</p>]]></description><link>https://blog.tchap.me/a-service-file-for-restund/</link><guid isPermaLink="false">5ea824fb25e7cb317d1336cd</guid><category><![CDATA[stun]]></category><category><![CDATA[webrtc]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[tchap@tchap.me]]></dc:creator><pubDate>Mon, 21 Jan 2019 14:19:18 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I've recently worked on a quite important WebRTC project including live streams, recording, multiroom chat and broadcasting (more on that maybe in following articles), and in the process, I needed a proper STUN server (<em>Session Traversal Utilities for NAT</em>) to be able to ... well, traverse a NAT.</p>
<p>A few options exist, but I settled for restund (<a href="http://www.creytiv.com/restund.html">http://www.creytiv.com/restund.html</a>) since it includes a TURN implementation as well (<em>which provides a fallback for when the WebRTC stream cannot be exchanged in a pure peer-to-peer manner</em>).</p>
<h3 id="1compilation">1.   Compilation</h3>
<p>The installation is straightforward, but you might encounter a problem when compiling on Ubuntu or Debian:</p>
<pre><code>error: storage size of 'ts' isn't known
</code></pre>
<p>It's mainly due to unavailable POSIX features, and is very easily fixed by editing the <code>Makefile</code> and adding <code>-D_POSIX_C_SOURCE=199309L</code> to the <code>CFLAGS</code> like so:</p>
<pre><code>CFLAGS += -D_POSIX_C_SOURCE=199309L
</code></pre>
<p>And then compile normally:</p>
<pre><code>make &amp;&amp; sudo make install
</code></pre>
<h3 id="2servicefile">2.   Service file</h3>
<p>While setting up my production environment, I figured I needed a service file to properly launch and restart the <strong>restund</strong> binary. The catch is that we need a <code>Type=forking</code> service file to be able to do so:</p>
<pre><code>[Unit]
Description=ReSTUNd Service

[Service]
ExecStart=/usr/local/sbin/restund -f /etc/restund.conf
Type=forking
Restart=on-failure
RestartSec=4
User=root

[Install]
WantedBy=multi-user.target
</code></pre>
<blockquote>
<p>The configuration file is generally <code>/etc/restund.conf</code>; adapt to your needs.</p>
</blockquote>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Isolating I2C slaves]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>While developing for the IoT2040 from Siemens (see <a href="https://w3.siemens.com/mcms/pc-based-automation/en/industrial-iot/pages/default.aspx">the product page</a>), I was having difficulties with the I2C bus; specifically, some peripherals were slow to respond or somehow buggy and the bus was becoming unresponsive.</p>
<p><img src="https://blog.tchap.me/content/images/2018/07/Screen-Shot-2018-07-18-at-11.26.08.png" alt="Screen-Shot-2018-07-18-at-11.26.08"></p>
<p>On this device, the I2C bus is behind an extender, and it seems not very</p>]]></description><link>https://blog.tchap.me/isolating-i2c-slaves/</link><guid isPermaLink="false">5ea824fb25e7cb317d1336cc</guid><category><![CDATA[i2c]]></category><category><![CDATA[iot]]></category><category><![CDATA[gpio]]></category><category><![CDATA[embedded]]></category><dc:creator><![CDATA[tchap@tchap.me]]></dc:creator><pubDate>Thu, 19 Jul 2018 08:00:00 GMT</pubDate><media:content url="https://blog.tchap.me/content/images/2018/07/i2c.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.tchap.me/content/images/2018/07/i2c.png" alt="Isolating I2C slaves"><p>While developing for the IoT2040 from Siemens (see <a href="https://w3.siemens.com/mcms/pc-based-automation/en/industrial-iot/pages/default.aspx">the product page</a>), I was having difficulties with the I2C bus; specifically, some peripherals were slow to respond or somehow buggy and the bus was becoming unresponsive.</p>
<p><img src="https://blog.tchap.me/content/images/2018/07/Screen-Shot-2018-07-18-at-11.26.08.png" alt="Isolating I2C slaves"></p>
<p>On this device, the I2C bus is behind an extender, and it seems not very robust when devices go berserk.</p>
<p>So I was tasked with finding a way to &quot;reboot&quot; the I2C bus if it found itself in an undefined state, or if a peripheral was hogging the bus.</p>
<p>What came to mind was to do a power cycle, so that the misbehaving devices could be powered off and rebooted fresh, generally freeing the data lines and prompting the bus to reset itself properly.</p>
<p>The naive solution I came up with was a simple dual MOSFET circuit that could allow for a software power cycle to be done via a control pin of the microcontroller that I was using (in this case, a Galileo Gen 2 — this is what is on the Siemens IoT20XX series).</p>
<h3 id="asimpledualmosfetsolution">A simple dual MOSFET solution</h3>
<p>I had been advised to use a dual MOSFET, specifically one N-channel and one P-channel, to act as a switch to power my I2C slaves on and off.</p>
<p>I settled on the <strong>FDS8958A</strong> Dual N&amp;P-Channel PowerTrench MOSFET from ON Semiconductor (Fairchild) (<a href="http://www.farnell.com/datasheets/80249.pdf">datasheet</a>). There is no application schematic in the datasheet, but something along these lines (<em>adapt the R1/R2 values accordingly</em>) should work as expected:</p>
<p><img src="https://blog.tchap.me/content/images/2018/07/mosfet.png" alt="Isolating I2C slaves"><br>
(<em>the schematic is quite messy since I only had a standard 8-pin package chip, and not differentiated P and N components, in Fritzing</em>)</p>
<p><code>5V</code> is my power source (we are using the Galileo Gen2 in 5V), <code>I2C_BUS_CTRL</code> is the control pin (HIGH means MOSFET is active), and <code>I2C_VCC</code> is the 5V output (all have the same GND) that is driven by the control pin.</p>
<blockquote>
<p>The two middle pins D1 and D2 of the chip are not used as they are in fact redundant.</p>
</blockquote>
<p>This works quite well: the circuit is very fast (~10 ns) and the output very steady. Setting the pin HIGH powers up the I2C slaves, and LOW shuts them down.</p>
<p><strong>BUT</strong></p>
<p>We then tried this circuit in a real-world situation where the bus was in use. And it turned out quite problematic, for various reasons:</p>
<ul>
<li>when powered down, some I2C slaves pull the SDA and SCL lines LOW (~0.7V), which disables all traffic on the whole bus. I don't know if it's standard behaviour, but it certainly is not consistent across the peripherals that I have</li>
<li>the expander did not seem able to recover correctly from a powered-down I2C slave (and the driver was not happy about it). This may be because the lines were driven low</li>
<li>some low-power devices actually act as if they were still powered from the SCL line, which is unsettling. It defeats the whole purpose of the MOSFET, since those peripherals do not really go through a power cycle</li>
</ul>
<p>At the end of the day, power cycling the I2C slaves was causing more problems than before: <strong>the bus was unusable</strong>, and the only option was a hard reboot of the IoT20XX.</p>
<h3 id="ani2crepeatertotherescue">An I2C repeater to the rescue</h3>
<p>So I had this idea of cutting the I2C SDA/SCL lines too when powering down the slaves so as to be sure that they were really off the bus.</p>
<p>This would ensure that a powered-off slave would not prevent the bus from operating correctly.</p>
<p>I first tried the <strong>TCA4311A</strong> (<a href="http://www.ti.com/lit/ds/symlink/tca4311a.pdf">datasheet</a>). It's a neat little bus buffer that is specifically designed for hot-swapping I2C devices; I figured this was exactly what I wanted.</p>
<p>The application diagram was pretty straightforward:</p>
<p><img src="https://blog.tchap.me/content/images/2018/07/Screen-Shot-2018-07-18-at-14.49.20.png" alt="Isolating I2C slaves"><br>
<em>(Taken from the datasheet)</em></p>
<p>Unfortunately, this chip seemed to have a problem with the SMBus protocol. Specifically, word (two-byte) transfers were not transmitted correctly (<em>whereas single bytes were properly received by the slave</em>). I had no access to a scope at the time, so it was difficult to know exactly why and to debug, but I suspect this could be linked to the pre-charge circuitry (see §11 of the datasheet).</p>
<p>So I unsoldered my TCA4311A and (horribly) replaced it (please don't judge me) with a different kind of bird — an I2C repeater — to test if this would be better :</p>
<p><img src="https://blog.tchap.me/content/images/2018/07/dont.png" alt="Isolating I2C slaves"></p>
<p>It's the <strong>TCA9517</strong> (<a href="http://www.ti.com/lit/ds/symlink/tca9517.pdf">datasheet</a>), that acts like a buffer too but in a different fashion.</p>
<p>The schematic to include it is pretty standard, as for the previous I2C bus buffer:</p>
<p><img src="https://blog.tchap.me/content/images/2018/07/9517.png" alt="Isolating I2C slaves"></p>
<p><code>I2C_SCL_IOT</code> and <code>I2C_SDA_IOT</code> are on the master side, whereas <code>I2C_SDA_OUT</code> and <code>I2C_SCL_OUT</code> are on the slave side. The <code>I2C_BUS_CTRL</code> is the same net as for the Dual MOSFET.</p>
<p><strong>And this worked !</strong></p>
<p>Of course, some pull-ups are required on each side of the buffer (I used 4.7 kΩ but YMMV):</p>
<p><img src="https://blog.tchap.me/content/images/2018/07/Bus.png" alt="Isolating I2C slaves"></p>
<p>With these two chips, the dual MOSFET and the bus repeater, I can now completely disconnect the offending I2C slaves when they behave badly. By setting one pin of my microcontroller LOW then HIGH, the VCC line of the slaves is pulled down to GND and the slave side of the bus is deactivated, then reactivated as if it were connected for the first time.</p>
<hr>
<p><em>Comments welcome as usual if you find an error or if you want to add something to this.</em></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Starting with Yocto]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I recently got to dive into the embedded Linux build systems world while working on a project where I needed to create a distribution for an embedded device with limited resources.</p>
<p>Amongst the different options, I had identified two major tools : Buildroot and Yocto (<em>See this fantastic talk by Alexandre</em></p>]]></description><link>https://blog.tchap.me/starting-with-yocto/</link><guid isPermaLink="false">5ea824fb25e7cb317d1336c8</guid><category><![CDATA[yocto]]></category><category><![CDATA[linux]]></category><category><![CDATA[embedded]]></category><category><![CDATA[digital signage]]></category><category><![CDATA[minnowboard]]></category><dc:creator><![CDATA[tchap@tchap.me]]></dc:creator><pubDate>Tue, 22 May 2018 07:48:07 GMT</pubDate><media:content url="https://blog.tchap.me/content/images/2018/04/code.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.tchap.me/content/images/2018/04/code.png" alt="Starting with Yocto"><p>I recently got to dive into the embedded Linux build systems world while working on a project where I needed to create a distribution for an embedded device with limited resources.</p>
<p>Amongst the different options, I had identified two major tools : Buildroot and Yocto (<em>See this fantastic talk by Alexandre Belloni and Thomas Petazzoni from <a href="https://bootlin.com">Bootlin</a> : <a href="https://bootlin.com/pub/conferences/2016/elc/belloni-petazzoni-buildroot-oe/belloni-petazzoni-buildroot-oe.pdf">Buildroot vs. OpenEmbedded/Yocto Project</a></em>).</p>
<p>I decided to go for Yocto even if the learning curve was supposedly a bit steeper. In this article I'll try to describe my experience and give some tips along the way.</p>
<p><strong>In a nutshell, what I wanted to do</strong></p>
<p>So, I want to build an image for an Intel-based board, the Minnowboard Turbot. My goal is an image that boots silently, runs an X environment, and can launch a fullscreen browser on boot. Pretty much a kiosk.</p>
<p>The specs of the board are here : <a href="https://minnowboard.org/minnowboard-turbot/technical-specs">https://minnowboard.org/minnowboard-turbot/technical-specs</a></p>
<p><img src="https://blog.tchap.me/content/images/2018/04/turbot.png" alt="Starting with Yocto"></p>
<h1 id="gettingstarted">Getting started</h1>
<p>Building embedded Linux images is not really tricky <em>per se</em>, but it needs a lot of resources, and a lot of time. I mean <em>a lot</em>: you may be reading this article on your quad-core, 8 GB MacBook Pro and probably think &quot;ahah&quot;. Don't you dare. You need at least 100 GB of disk and a good 16 GB of RAM to get correct build times with Yocto. And when they say 100 GB, be sure that it will fill up quite fast, and that the RAM/CPU usage will be close to 100% all the time, so it's a pain to work on the same machine at the same time.</p>
<p>Of course, I won't copy and paste the whole <strong>Getting started</strong> section of the Yocto project website since it's not really useful, but let's comment on a few steps here.</p>
<h3 id="getthecorrectdocs">Get the correct docs</h3>
<p>Here is the up-to-date documentation first:<br>
<a href="https://www.yoctoproject.org/docs/latest/yocto-project-qs/yocto-project-qs.html">https://www.yoctoproject.org/docs/latest/yocto-project-qs/yocto-project-qs.html</a></p>
<blockquote>
<p>Pro tip™: if you search for <code>&quot;yocto getting started&quot;</code> on Google, you will most likely land on an outdated version of the same documentation, which is... unsettling.</p>
</blockquote>
<h3 id="decidewhichmachineyouwilluseforthebuild">Decide which machine you will use for the build</h3>
<p><small><em>→ refers to the <a href="https://www.yoctoproject.org/docs/latest/yocto-project-qs/yocto-project-qs.html#yp-resources">resources</a> in the docs</em></small></p>
<p>I had a medium-range Linux machine (16 GB, quad-core) that was not used, so I decided to wipe it clean and install a brand new Debian 9 on it.</p>
<p>When they say you need at least 50 GB, double that number to be sure. My current folder is actually 64 GB, and I have already cleaned most of the previous builds and images to free up space:</p>
<pre><code>root@builder:~$ du -h -s yocto
64G	  yocto
</code></pre>
<p>You can use Docker containers (via <a href="http://www.yoctoproject.org/docs/2.5/dev-manual/dev-manual.html#setting-up-to-use-crops">CROPS</a>), but I would not recommend it since you will likely need a lot of resources.</p>
<p>The full <code>uname -a</code> for the machine I used is:<br>
<code>Linux 4.9.0-5-amd64 #1 SMP Debian 4.9.65-3+deb9u2 (2018-01-04) x86_64 GNU/Linux</code></p>
<p>Having a <em>lot</em> of RAM on the build machine certainly helps, but having more than 4 cores shouldn't change your build time that much: the packages that take the longest sit on the critical path, since many other packages depend on them, so they have to be built <em>before</em> the rest and extra cores sit idle in the meantime.</p>
<p>Processor speed, on the other hand, can give you shorter build times.</p>
<h3 id="packages">Packages</h3>
<p><small><em>→ refers to the <a href="https://www.yoctoproject.org/docs/latest/yocto-project-qs/yocto-project-qs.html#packages">packages</a> in the docs</em></small></p>
<p>You need a few packages to build everything correctly. Apart from the standard packages that the doc tells you to install...:</p>
<pre><code>apt install gawk wget git diffstat unzip texinfo gcc-multilib \
 build-essential chrpath socat cpio python python3 python3-pip python3-pexpect \
 xz-utils debianutils iputils-ping libsdl1.2-dev xterm
</code></pre>
<p><em>(Note that I installed <code>git</code> and not <code>git-core</code> since the latter is obsolete)</em></p>
<p>... I would recommend these as well:</p>
<ul>
<li>
<p>Tools, always handy to have to modify a local conf easily or get something from the web</p>
<p><code>vim htop wget curl screen tree</code></p>
</li>
<li>
<p>Packages I needed to install at some point to resolve an error in the build process (<em>no clear explanation as why, but hey, the more packages the merrier</em>)</p>
<p><code>libssl-dev libev-dev asciidoc bc</code></p>
</li>
</ul>
<h3 id="buildingandrunninginqemu">Building and running in qemu</h3>
<p>Cloning the repo is straightforward: you can source the <code>oe-init-build-env</code> file<br>
and start the build from the build folder with, for example, <code>bitbake core-image-sato</code>.</p>
<p>The getting started tutorial is quite well-written in that regard.</p>
<p>Four hours (minimum) later, you can emulate your machine with:</p>
<pre><code>runqemu qemux86
</code></pre>
<p><em>(that is, if you have not changed any file, or option, for now)</em></p>
<blockquote>
<p>Pro tip™: if you have a headless machine : <code>runqemu qemux86 nographic</code></p>
</blockquote>
<h3 id="copyingtoasdcardfortestingonthetargethardware">Copying to a SD card for testing on the target hardware</h3>
<p>When compiling for Intel targets, a <code>.wic</code> image is created, and it is pretty straightforward to use: just <code>dd</code> it to your SD card:</p>
<pre><code>sudo dd of=/dev/diskX if=yourimage.rootfs.wic
</code></pre>
<p>Note: by default, meta-intel <code>.wic</code> images only have an EFI bootloader.</p>
<blockquote>
<p>Pro tip™: when you want to copy, use <code>bs=512k</code>. It's the standard bootloader size, and I've been told numerous times that it's always better to use this size. I stick with it and I think it's a good idea. And moreover I've had cases where using a different size would make the resulting SD card unable to boot <em>at all</em>.</p>
</blockquote>
<h1 id="customizeeverything">Customize everything</h1>
<p>So far, it's an easy ride. But we don't want a standard Sato GUI. So let's start customizing things. That's where things get a little harder.</p>
<h3 id="inanutshell">In a nutshell</h3>
<p>There is a quite complex image that gives you a general overview of the architecture of the Yocto environment:</p>
<p><img src="https://blog.tchap.me/content/images/2018/04/yocto-environment.png" alt="Starting with Yocto"></p>
<p>... but I find it too abstract, so I'll simplify the concept and try to give you a more basic view of how it works <em>in practice</em>:</p>
<p><img src="https://blog.tchap.me/content/images/2018/04/Screen-Shot-2018-04-24-at-12.00.51.png" alt="Starting with Yocto"></p>
<p>To customize your build, what you want to do is create a layer to hold everything in it. This layer (in green above) will reside alongside the other layers of your build.</p>
<p><strong>A layer</strong> (named along the lines of <code>meta-my-layer</code>, for example) contains <strong>recipes</strong>, which can be <strong>image recipes</strong> (<em>a special kind of recipe</em>). Technically, a layer is a folder with some conventional files in it.</p>
<p>You create your layer, and you create <strong>all your recipes in it</strong>, organized if you wish in <strong>recipe folders</strong> (<code>recipes-common</code>, <code>recipes-my-app</code>, etc., in grey on the schema). A 'recipe folder' is just a folder. A <strong>recipe</strong> is also just a folder that contains at least one mandatory recipe file (<em>a priori</em>, a <code>.bb</code> file).</p>
<p>Your <strong>recipes</strong> will possibly be only <strong><em>appending</em></strong> stuff to an already-existing recipe (a conf file, some bash scripts, etc.). In this case, the recipe file is a <code>.bbappend</code> file.</p>
<p>Finally, your <strong>image recipes</strong> will possibly call the packages from the recipes you created in your own layer, and packages from other layers too.</p>
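<p>To make this concrete, a minimal recipe file for a hypothetical custom app could look like this (all names and paths are placeholders, not taken from a real project):</p>

```
SUMMARY = "My custom kiosk app"
LICENSE = "CLOSED"

# A prebuilt binary shipped next to the recipe, fetched into WORKDIR
SRC_URI = "file://my-app"

do_install() {
    install -d ${D}${bindir}
    install -m 0755 ${WORKDIR}/my-app ${D}${bindir}/my-app
}
```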
<h3 id="theprocessforcustomizing">The process for customizing</h3>
<p>Basically, you want to start with the smallest image that suits your needs, and add the packages you need to it (let's say Chromium, X, and a custom app of yours, for instance). Three steps to take:</p>
<ul>
<li>Clone the different layers that contain the recipes for the packages that you want to install</li>
<li>Create your own layer and recipes for your custom app, and your modifications to existing packages if needed</li>
<li>Finally, create an image layer that will hold the reference to all these packages : the ones that exist and your custom ones.</li>
</ul>
<h2 id="thefilesyoumustcareabout">The files you must care about</h2>
<p>Apart from your layer folder (<em>we'll come to that in a bit</em>), there are only two files that matter for the build:</p>
<pre><code>build/conf/local.conf
build/conf/bblayers.conf
</code></pre>
<p>These will be modified quite often while you create your perfect image, so be sure to keep them at hand.</p>
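<p>For instance, <code>local.conf</code> is where you select the target machine. An illustrative fragment (the exact <code>MACHINE</code> name to use comes from the BSP layer you picked; the one below is an assumption for an Intel 64-bit target):</p>

```
# build/conf/local.conf (excerpt)
MACHINE ?= "intel-corei7-64"
```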
<h2 id="addingexistinglayerstoyourbuild">Adding existing layers to your build</h2>
<p>Now, before you can install <em>any</em> package into your build, you have to add the layer that contains this package. This is the role of the <code>build/conf/bblayers.conf</code> file, which lets you add layers to your build.</p>
<p>First, you have to download the layer in a folder somewhere. I use the root folder of poky so that all the layers are in the same place, but it's up to you.</p>
<p>If you are looking for layers or recipes, this is the place to go:</p>
<ul>
<li>Layers : <a href="https://layers.openembedded.org/layerindex/branch/master/layers/">https://layers.openembedded.org/layerindex/branch/master/layers/</a></li>
<li>Recipes : <a href="https://layers.openembedded.org/layerindex/branch/master/recipes/">https://layers.openembedded.org/layerindex/branch/master/recipes/</a></li>
</ul>
<p>You will then have to define the path of your layer in the <code>build/conf/bblayers.conf</code> file.</p>
<p>Suppose you want to use the <code>meta-intel</code> layer. First, clone it with :</p>
<pre><code>git clone git://git.yoctoproject.org/meta-intel
</code></pre>
<p>Then, add its path in the file :</p>
<pre><code>POKY_BBLAYERS_CONF_VERSION = &quot;3&quot;

BBPATH = &quot;${TOPDIR}&quot;
BBFILES ?= &quot;&quot;

BBLAYERS ?= &quot; \
  ${TOPDIR}/../meta \
  ${TOPDIR}/../meta-poky \
  ${TOPDIR}/../meta-yocto-bsp \
  ${TOPDIR}/../meta-intel \         # &lt;----- HERE
  &quot;

BBLAYERS_NON_REMOVABLE ?= &quot; \
  ${TOPDIR}/../meta \
  ${TOPDIR}/../meta-yocto \
  &quot;
</code></pre>
<p>Once you rebuild your image or any package, this layer will be taken into account and the necessary recipes will be found.</p>
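<p>As an alternative to editing <code>bblayers.conf</code> by hand, the <code>bitbake-layers</code> utility (available once the build environment is sourced) can do the bookkeeping for you — a sketch, assuming the layer was cloned next to poky :</p>
<pre><code>bitbake-layers add-layer ../meta-intel
bitbake-layers show-layers
</code></pre>
<p>The second command lists the registered layers along with their paths and priorities, so you can check the result.</p>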
<h2 id="creatingyourlayer">Creating your layer</h2>
<p>Now, let's create our own layer and recipes.</p>
<p>We'll name it <strong>meta-tchap</strong>. It will reside in the poky folder, where all the other meta-folders reside too.</p>
<p>We'll just create the basic structure here, with an <code>images</code> folder that will contain an image recipe.</p>
<pre><code>root@builder:~/yocto/poky$ tree meta-tchap
meta-tchap
├── conf
│   └── layer.conf
└── recipes-core
    └── images
        └── core-image-iot.bb
</code></pre>
<blockquote>
<p>Pro tip™: There is a tool for that if you don't want to dive into the files : <code>yocto-layer</code>, which you can access once you have sourced everything in your build folder. Something simple like <code>yocto-layer create meta-tchap</code> should do the trick.</p>
</blockquote>
<h4 id="thelayerconffile">The layer.conf file</h4>
<p>It's a relatively standard file that is not likely to change; it just imports all the recipes of the layer :</p>
<pre><code># We have a conf and classes directory, add to BBPATH
BBPATH .= &quot;:${LAYERDIR}&quot;

# We have recipes-* directories, add to BBFILES
BBFILES += &quot;${LAYERDIR}/recipes-*/*/*.bb \
    ${LAYERDIR}/recipes-*/*/*.bbappend&quot;

BBFILE_COLLECTIONS += &quot;meta-tchap&quot;
BBFILE_PATTERN_meta-tchap = &quot;^${LAYERDIR}/&quot;
BBFILE_PRIORITY_meta-tchap = &quot;100&quot;
</code></pre>
<p>Since it's my own layer, I usually bump the priority up. <em>(Each layer has a priority, which bitbake uses to decide which layer takes precedence when recipe files with the same name exist in multiple layers; a higher numeric value means a higher priority.)</em></p>
<h2 id="creatingtheimagerecipe">Creating the image recipe</h2>
<p>An image recipe <em>is</em> a recipe. You just have to define what will be installed on it as a basis, with features and packages.</p>
<p>Let's create a simple image with X11, Chromium, Node and some other packages. We'll call it <code>core-image-iot</code> and it will reside in <code>...meta-tchap/recipes-core/images/core-image-iot.bb</code> :</p>
<pre><code>SUMMARY = &quot;A full-featured + basic X11 / Chromium / NodeJS image.&quot;
LICENSE = &quot;MIT&quot;

IMAGE_FEATURES_append = &quot; splash x11-base hwcodecs ssh-server-openssh post-install-logging&quot;
IMAGE_FEATURES_remove = &quot;allow-empty-password&quot;
IMAGE_FEATURES_remove = &quot;empty-root-password&quot;

IMAGE_INSTALL = &quot;\
    packagegroup-core-boot \
    packagegroup-core-full-cmdline \
    psplash \
    chromium-x11 sudo vim git curl htop wget unzip usbutils \
    nodejs nodejs-npm \
    ${CORE_IMAGE_EXTRA_INSTALL} \
    &quot;

inherit core-image extrausers

# Here, we set the root password for the image
# password = test
EXTRA_USERS_PARAMS = &quot;usermod -p m7or76bu6AEY6 root;&quot;
</code></pre>
<p>This image inherits the <code>core-image</code> recipe, that has a lot already (see the <a href="https://github.com/openembedded/openembedded-core/blob/master/meta/classes/core-image.bbclass">file</a>).</p>
<p>In this recipe, we use:</p>
<ul>
<li><strong>IMAGE_FEATURES</strong> : allows us to add features to the image, such as post-install-logging or x11. Features are, in fact, just wrappers around sets of packages</li>
<li><strong>IMAGE_INSTALL</strong> : adds individual packages to the image, such as git, chromium-x11, etc.</li>
<li><strong>EXTRA_USERS_PARAMS</strong> : comes from the <code>extrausers</code> class and lets you manage users. Here, we use it to set the root password easily. You have to <code>inherit</code> the class first.</li>
</ul>
<p>Once we have this file, we can build this image very easily with :</p>
<pre><code>bitbake core-image-iot
</code></pre>
<blockquote>
<p>You need to clone a few layers before being able to build this image, since it relies on packages that are not in the base layer : <a href="https://github.com/OSSystems/meta-browser">https://github.com/OSSystems/meta-browser</a> and <a href="https://github.com/imyller/meta-nodejs">https://github.com/imyller/meta-nodejs</a></p>
</blockquote>
<h2 id="addingexistingpackagestoyourbuild">Adding existing packages to your build</h2>
<p>There are some packages that we don't want to put into our image recipe, but rather in the configuration file, so that it's easier to switch them on/off between different builds or for testing.</p>
<p>To simply add a package to your build, you can add it to your <code>build/conf/local.conf</code> file :</p>
<pre><code>IMAGE_INSTALL_append = &quot; htop&quot;
</code></pre>
<blockquote>
<p>Pro tip™ : note the &quot; &quot; (space) after the opening quote, it's important, the string will be concatenated.</p>
</blockquote>
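<p>The reason is that <code>_append</code> concatenates literally, with no separator, while the <code>+=</code> operator inserts a space for you — but <code>+=</code> is applied immediately at parse time, whereas <code>_append</code> is applied late, which is usually what you want in <code>local.conf</code>. A quick comparison (illustrative) :</p>
<pre><code>IMAGE_INSTALL += &quot;htop&quot;          # space added automatically, applied immediately
IMAGE_INSTALL_append = &quot; htop&quot;   # literal concatenation, applied last
</code></pre>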
<p>In this example, we add the <code>htop</code> package (which exists <a href="https://layers.openembedded.org/layerindex/recipe/995/">here</a>).</p>
<p>As we have seen before, you have to have the layer somewhere beforehand. In the <code>htop</code> case, it's the <code>meta-oe</code> layer. You thus have to clone the repository (see info <a href="https://layers.openembedded.org/layerindex/branch/master/layer/meta-oe/">here</a>):</p>
<pre><code>git clone git://git.openembedded.org/meta-openembedded
</code></pre>
<p>The <code>meta-oe</code> layer itself is in a <em>subfolder</em> of that git repo (as indicated on the openembedded.org page), so you should add the relevant path to your <code>build/conf/bblayers.conf</code> file like this :</p>
<pre><code>${TOPDIR}/../meta-openembedded/meta-oe \
</code></pre>
<p>which gives us a <code>bblayers.conf</code> file like this :</p>
<pre><code>BBLAYERS ?= &quot; \
  ${TOPDIR}/../meta \
  ${TOPDIR}/../meta-poky \
  ${TOPDIR}/../meta-yocto-bsp \
  ${TOPDIR}/../meta-openembedded/meta-oe \ # &lt;----- HERE
  &quot;
</code></pre>
<p>Now, you're ready to rebuild your image and tada, the <code>htop</code> binary should be available once you boot it.</p>
<pre><code>root@iotboard:~/$ htop --version
htop 2.0.2 - (C) 2004-2016 Hisham Muhammad
Released under the GNU GPL.
</code></pre>
<div style="border: 1px solid grey; padding: 10px;">
Congrats ! You now have a functioning build with your own image, your own recipes, and probably a few standard packages factored in too. That's great.
</div>
<h2 id="appendanexistingrecipe">Append an existing recipe</h2>
<p>Now you might have a recipe that <em>exists</em> but that you wish to modify somehow. You could create a whole new recipe, copy and paste the content of the recipe you wish to amend, and call it a day (provided your layer has a higher priority, of course). But that's not really the Yocto way.</p>
<p>If you want to <em>append</em> something to a recipe, you can easily create a <code>bbappend</code> file and make your changes there.</p>
<p>Let's say you want to change the Chromium X11 recipe to add some flags to the build. Create the recipe folder as if you would create your own recipe and create a <code>chromium-x11_%.bbappend</code> file in it, with the following content :</p>
<pre><code>FILESEXTRAPATHS_prepend := &quot;${THISDIR}/${PN}:&quot;
PACKAGECONFIG = &quot;use-egl kiosk-mode proprietary-codecs&quot;
</code></pre>
<p>Here, I'm adding these flags to the build (second line). The first line is a boilerplate instruction telling bitbake where it should look for extra files, if any; just put it there in all your append recipes.</p>
<p>In my layer folder (<code>meta-tchap</code>), I now have one more folder, with the following structure :</p>
<pre><code>recipes-browser/
└── chromium
    └── chromium-x11_%.bbappend
</code></pre>
<p><strong>And that's it</strong> ! (don't forget to add your custom layer in the <code>conf/bblayers.conf</code> if you haven't done so yet).</p>
<p>PS : the name <code>recipes-browser</code> is completely up to you. <code>chromium</code> is not : it's the name of the recipe you are appending.</p>
<p>If you want to check that your append is taken into account correctly, you can use the <code>bitbake-layers</code> utility :</p>
<pre><code>bitbake-layers show-appends
</code></pre>
<p>It will show you which recipe is appended and by <em>whom</em>. Very practical.</p>
<h2 id="quietbootwithstandardkernels">Quiet boot with standard kernels</h2>
<p>It's as easy as adding :</p>
<pre><code>APPEND += &quot;quiet vt.global_cursor_default=0&quot;
</code></pre>
<p>.. to your image recipe, or your <code>local.conf</code> file.</p>
<h2 id="quietbootwithmetaintel">Quiet boot with meta-intel</h2>
<p>Well, not that easy; so far I have not managed to pass boot options when using the <code>meta-intel</code> layer.</p>
<p>See : <a href="https://stackoverflow.com/questions/49033507/amend-boot-cmdline-in-custom-image-build">https://stackoverflow.com/questions/49033507/amend-boot-cmdline-in-custom-image-build</a></p>
<p>It seems <em>complicated</em>. As far as I understand, the kernel does not take the <code>APPEND</code> variables into account. This seems to work on non-Intel builds, but not with the <code>meta-intel</code> layer.</p>
<p>To circumvent this limitation, I created a postinst script (<em>see below</em>) with the following content :</p>
<pre><code>pkg_postinst_${PN} () {
    #!/bin/sh -e
    if [ x&quot;$D&quot; = &quot;x&quot; ]; then
        echo &quot;default boot&quot; &gt; /boot/loader/loader.conf
        sed -i 's/console=tty0/quiet splash vt.global_cursor_default=0 vt.cur_default=0/' /boot/loader/entries/boot.conf
    else
        exit 1
    fi
}
</code></pre>
<p>This will modify options directly in the boot loader entries under <code>/boot</code>. This works well.</p>
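<p>If you want to check what the <code>sed</code> substitution does before shipping it, you can dry-run it on a throwaway copy of a boot entry. A minimal simulation — the file content below is made up for the example :</p>
<pre><code>printf 'title boot\noptions console=tty0\n' | tee /tmp/boot-test.conf
sed -i 's/console=tty0/quiet splash vt.global_cursor_default=0 vt.cur_default=0/' /tmp/boot-test.conf
cat /tmp/boot-test.conf
</code></pre>
<p>The <code>options</code> line should now carry the quiet boot flags instead of <code>console=tty0</code>.</p>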
<h2 id="exploringpsplash">Exploring psplash</h2>
<p>psplash is the <em>de facto</em> standard on poky for displaying a splash image while the system is booting.</p>
<blockquote>
<p>Disclaimer : it's tied to <strong>sysvinit</strong> by default, so if you want to use <strong>systemd</strong> as your init system of choice, psplash won't work. Patches exist, though (I have not tested them) — one I found is <a href="https://patchwork.ozlabs.org/patch/351283/">https://patchwork.ozlabs.org/patch/351283/</a>.</p>
</blockquote>
<h3 id="createacustompsplashimage">Create a custom psplash image</h3>
<p>Two options here. Either you create it yourself and add the image file (as a <code>.h</code> file) to your layer, or you let bitbake recompile the image file into a <code>.h</code> file each time you build.</p>
<p>In both cases, you need to install psplash :</p>
<pre><code>IMAGE_INSTALL_append = &quot; psplash&quot;
</code></pre>
<p>And create the bbappend file (I suggest <code>psplash_git.bbappend</code>) along with the <code>files</code> directory that will hold the image :</p>
<pre><code>vi ...meta-tchap/recipes-core/psplash/psplash_git.bbappend
mkdir ...meta-tchap/recipes-core/psplash/files/
</code></pre>
<p><em>(do not forget to adapt for your own path)</em></p>
<h4 id="1firstoptioncreatetheimageyourself">1. First option : Create the image yourself</h4>
<p>Clone the repo somewhere :</p>
<pre><code>git clone git://git.yoctoproject.org/psplash &amp;&amp; cd psplash
</code></pre>
<p>Considering your image is at <code>./psplash-poky.png</code> :</p>
<pre><code>./make-image-header.sh ./psplash-poky.png POKY
mv psplash-poky-img.h ...meta-tchap/recipes-core/psplash/files/.
</code></pre>
<p><em>(change the paths to fit your config obviously here too)</em></p>
<p>Your bbappend file will look like this :</p>
<pre><code>FILESEXTRAPATHS_prepend := &quot;${THISDIR}/files:&quot;
SPLASH_IMAGES = &quot;file://psplash-poky-img.h;outsuffix=default&quot;
</code></pre>
<blockquote>
<p>If you want to change the image, you have to redo this process (except the git clone of course)</p>
</blockquote>
<h4 id="2secondoptionletbitbakedoit">2. Second option : Let bitbake do it</h4>
<p>Well, pretty straightforward. No need to clone anything; create the bbappend file directly and put your PNG image in <code>...meta-tchap/recipes-core/psplash/files/</code> :</p>
<pre><code>FILESEXTRAPATHS_prepend := &quot;${THISDIR}/files:&quot;
DEPENDS += &quot;gdk-pixbuf-native&quot;
SRC_URI += &quot;file://psplash-poky.png&quot;
SPLASH_IMAGES = &quot;file://psplash-poky-img.h;outsuffix=default&quot;

do_configure_append () {
    cd ${S}
    # will create psplash-poky-img.h for you (the PNG is unpacked into ${WORKDIR}) :
    ./make-image-header.sh ${WORKDIR}/psplash-poky.png POKY
}
</code></pre>
<p>This will compile the image file at configure time.</p>
<h3 id="psplashrotation">psplash rotation</h3>
<p>Because sometimes you need it, add this to your <code>psplash_git.bbappend</code> (if you want 90°) :</p>
<pre><code>do_install_append() {
    echo 90 &gt; ${D}/etc/rotation
}
</code></pre>
<h2 id="chromiumonabasicx11image">Chromium on a basic X11 image</h2>
<p>Working on my kiosk image, adding nodejs and npm was a walk in the park (<em>just add the layer and append the recipes to the image</em>). But adding Chromium and making it start at boot required a little more work.</p>
<p>You need a working X environment to display Chromium. Fortunately the <code>x11-base</code> image feature (<code>IMAGE_FEATURES_append = &quot; x11-base&quot;</code>) provides just that, and no more.</p>
<p>You could then go for a full-fledged window manager, but I personally think this is overkill if you're working on a device with limited resources, such as an embedded system.</p>
<p>The X11 base feature contains just the bare minimum for this use case :</p>
<ul>
<li>A very simple and lightweight window manager (Matchbox)</li>
<li>the Mini-X-Session session manager</li>
</ul>
<p>Matchbox is a relatively old piece of code (~2012), mainly intended for embedded systems; it differs from most other window managers in that it only shows one window at a time. That's exactly what we need, in fact.</p>
<p>Mini-X-Session is a very simple session manager for X, that provides just the right boilerplate for us to create our session and launch the browser (see <a href="http://cgit.openembedded.org/openembedded-core/tree/meta/recipes-graphics/mini-x-session/mini-x-session_0.1.bb">http://cgit.openembedded.org/openembedded-core/tree/meta/recipes-graphics/mini-x-session/mini-x-session_0.1.bb</a>)</p>
<p><strong>The mini_x session file</strong></p>
<p>So, how do we glue all this together ? Well, first I'd recommend you create a user to run your X session; let's call it <code>myUnprivilegedUser</code>. You don't really want to run X as root on a probably user-facing Linux box.</p>
<p>Let's suppose I have an app running at <a href="http://localhost:3001">http://localhost:3001</a> (the app you want to run in your browser).</p>
<p>You can then create a session file in <code>/etc/mini_x/session.d/</code> (or, quicker, replace <code>/etc/mini_x/session</code> altogether if you want to lock things a bit) — I've commented the file below so you know what it does exactly :</p>
<pre><code>#!/bin/sh

# This script will be called via mini X session on behalf of the file owner, after
# being installed in /etc/mini_x/session.d/.

xset s off  &gt; /dev/null 2&gt;&amp;1        # don't activate screensaver
xset -dpms   &gt; /dev/null 2&gt;&amp;1       # disable DPMS (Energy Star) features.
xset s noblank   &gt; /dev/null 2&gt;&amp;1   # don't blank the video device

# Set a resolution
xrandr -s 1920x1080

# Takes care of rotating the screen based on the content of /etc/rotation
# I've only implemented a single case (90 CW) but it is trivial to amend.
if grep -Fxq &quot;90&quot; /etc/rotation
then
    xrandr -o left
fi

# Run the wm first, so that our Chromium window is &quot;aware&quot; of the screen, and can resize correctly (if you don't run a wm first, your Chromium window will likely be half the size of the screen)
matchbox-window-manager -use_titlebar no -use_cursor no &amp;

# Now, run chromium 'as' your user
su -l -c &quot;/usr/bin/chromium --user-data-dir=/tmp --disable-session-crashed-bubble --disable-infobars --noerrdialogs --disable-restore-background-contents --disable-translate --disable-new-tab-first-run http://localhost:3001&quot; myUnprivilegedUser
</code></pre>
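<p>To get this session file onto the image the Yocto way, you can install it from a small recipe in your own layer. A sketch — the file name <code>kiosk-session</code> is hypothetical, adapt it to your setup :</p>
<pre><code>SRC_URI += &quot;file://kiosk-session&quot;

do_install_append () {
    install -d ${D}${sysconfdir}/mini_x/session.d
    install -m 0755 ${WORKDIR}/kiosk-session ${D}${sysconfdir}/mini_x/session.d/kiosk-session
}
</code></pre>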
<p><strong>All the flags</strong></p>
<p>All the flags are carefully crafted here in an attempt to minimize UI junk like popups and infobars. Customise for your own needs.</p>
<p>But you might notice that some flags are missing for a proper kiosk experience. This is because they have been incorporated in the chromium build itself with Yocto, as I'm using a recipe that allows for it : <a href="https://github.com/OSSystems/meta-browser/tree/master/recipes-browser/chromium">https://github.com/OSSystems/meta-browser/tree/master/recipes-browser/chromium</a></p>
<p>For instance, to build Chromium with the kiosk mode already in, it's as easy as appending the Chromium recipe (<em>we have covered that previously, but I'll just re-detail it here quickly</em>)</p>
<p>Create a <code>chromium-x11_%.bbappend</code> file in <code>meta-tchap/recipes-browser/chromium</code> with the following content :</p>
<pre><code>FILESEXTRAPATHS_prepend := &quot;${THISDIR}/${PN}:&quot;
PACKAGECONFIG = &quot;kiosk-mode&quot;
</code></pre>
<p>Rebuild your image (<em>don't forget that building Chromium is a loooong process</em>), and the flag will already be present (i.e., when running <code>/usr/bin/chromium</code>, it will already be in kiosk mode by default).</p>
<blockquote>
<p><code>Trace/Breakpoint trap</code> ? Oh, I see, it can be due to Chromium not being able to write to the user data directory. This is the reason why I added the <code>--user-data-dir=/tmp</code> option when launching the browser, to make sure it doesn't panic on start (took me a while to figure this out, in fact)</p>
</blockquote>
<h2 id="howtorunascriptonfirstbootandonlyonfirstboot">How to run a script on first boot (<em>and only on first boot</em>)</h2>
<p>As we have seen, there are a few things that seem complex to do on specific kernels / under specific conditions, or that need hardware to complete (and thus cannot be done before the image is booted on the <em>actual</em> hardware). For these, a good old script that launches at first boot and then erases itself is clearly the best option. It's fairly transparent, and chances are good that your hardware will boot at least once before being put in production.</p>
<p>Fortunately, this is easily done in Poky. <strong>Enter <code>pkg_postinst</code></strong>.</p>
<p><code>pkg_postinst_${PN}</code> is a function that runs just after the <em>package</em> in which it resides is installed in the image. If the script exits with success (<code>exit 0</code>), it is removed (since it has already run). If not, it is kept and run again at first boot. With this, you can easily defer execution until the first boot.</p>
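<p>The <code>$D</code> test is what distinguishes the two contexts : at rootfs-creation time, <code>D</code> points at the image root being assembled; on the target, it is empty. A quick stand-alone illustration of that branch logic (plain shell, outside of any recipe) :</p>
<pre><code>D=/tmp/fake-rootfs   # simulate rootfs-creation time; leave D empty to simulate first boot
if [ x$D = x ]; then
    echo running on target
else
    echo deferring to first boot
fi
# prints: deferring to first boot
</code></pre>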
<p>Let's take an example. I will create a specific recipe in my <code>meta-tchap</code> layer, just for that. Let's call it &quot;startup&quot; (any name will do). I have put this recipe under <code>recipes-common</code> but again here, any recipes folder is fine.</p>
<pre><code>root@builder:~/yocto/poky$ tree meta-tchap
meta-tchap
├── conf
│   └── layer.conf
└── recipes-common
    └── startup
        └── startup_1.0.bb
</code></pre>
<p>In this <code>startup_1.0.bb</code> file, I will just run a very simple script on first boot that creates the file <code>/tmp/i_was_here</code>. The file <code>/tmp/i_was_installed</code> will be created earlier, when the package's <code>do_install</code> task runs :</p>
<pre><code>SUMMARY = &quot;Just a simple recipe to test postinst&quot;
LICENSE = &quot;MIT&quot;
PR = &quot;r1&quot;

S = &quot;${WORKDIR}&quot;

do_install () {
    # You have to install something here ! Else, the package will likely not be installed
    install -d ${D}/tmp
    touch ${D}/tmp/i_was_installed
}

pkg_postinst_${PN} () {
    #!/bin/sh -e
    if [ x&quot;$D&quot; = &quot;x&quot; ]; then
        # This will run on first boot
        touch /tmp/i_was_here
    else
        # This will run after package installation
        echo &quot;Skipping postinst script, will do on first boot&quot;
        exit 1
    fi
}
</code></pre>
<p>Do not forget to add the package to your image installs in your <code>local.conf</code> :</p>
<pre><code>IMAGE_INSTALL_append = &quot; startup&quot;
</code></pre>
<p>If you build the image and open a <strong>devshell</strong> (<em>see next section</em>) afterwards to check which files are here, you will notice that only the <code>/tmp/i_was_installed</code> file will have been created.</p>
<p>Once you boot this image on the actual hardware, the <code>/tmp/i_was_here</code> file will be created, too, and the script will be deleted from the system automatically.</p>
<blockquote>
<p>Pro tip™ : the list of all poky standard target filesystem paths is in the source <a href="https://git.yoctoproject.org/cgit.cgi/poky/plain/meta/conf/bitbake.conf">here</a></p>
</blockquote>
<h2 id="thedevshell">The devshell</h2>
<p>A devshell is a shell that opens in the recipe's target directory so you can check what has really been done / installed, and what will be copied onto the image.</p>
<pre><code>bitbake -c devshell &lt;recipename&gt;
</code></pre>
<blockquote>
<p>Pro tip™ : you have to fully build your recipe first (<code>bitbake &lt;recipename&gt;</code>) to access all the folders that I go through below. Doing so can help you troubleshoot specific problems with your build. If you don't, only the source folder will be available (the <code>git</code> folder, for instance)</p>
</blockquote>
<p>This will get you in the working dir for your recipe and source a shell with bitbake’s environment set up. Several folders there :</p>
<ul>
<li><code>git</code> : where the git sources are fetched and built (in the case where you use git to fetch your sources)</li>
<li><code>package</code> : where the files are copied for creating the ipk (or rpm) package that will be installed at the end. This is technically a replica of the <code>/</code> folder of your image, with just the folders and files that your recipe owns and will install</li>
<li><code>deploy-ipks</code> (<em>could be named <code>deploy</code>, or <code>deploy-*</code> depending on the package manager that you chose in your <code>local.conf</code> — I chose <strong>ipk</strong></em>). It's what will be installed via the package manager onto the image. Generally, you get three packages : one for production, one for development and one for debug.</li>
</ul>
<p>You'll likely explore the <code>package</code> folder to see what your recipe created and check for potential problems.</p>
<blockquote>
<p>Pro tip™ : <code>exit</code> to go back to your shell</p>
</blockquote>
<h2 id="wheredoesthatvariablevaluecomesfrom">Where does that variable value come from ?</h2>
<p>You will often wonder where a specific configuration comes from, or what its current value is.</p>
<p>For instance, you might want to know which provider is preferred for the kernel, hidden behind the somewhat obscure <code>virtual/kernel</code> variable.</p>
<p>In this case, <code>bitbake -e</code> comes in handy :</p>
<pre><code>$ bitbake -e &lt;your_image_name_here&gt; | grep &quot;^PREFERRED_PROVIDER_virtual/kernel&quot;
   PREFERRED_PROVIDER_virtual/kernel=&quot;linux-intel&quot;
   PREFERRED_PROVIDER_virtual/kernel_poky-tiny=&quot;linux-intel&quot;
</code></pre>
<p>Great ! We now know the value !</p>
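<p>The same trick works for any variable; for instance, to check what actually ends up in <code>IMAGE_INSTALL</code> for our image (the output will depend on your configuration) :</p>
<pre><code>bitbake -e core-image-iot | grep &quot;^IMAGE_INSTALL=&quot;
</code></pre>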
<h2 id="reconfigurethekernelyousay">Reconfigure the kernel you say ?</h2>
<p>If you want to reconfigure the kernel, you can use the <code>menuconfig</code> utility. <a href="https://software.intel.com/en-us/node/593593">This page</a> has relevant information on how to do it, and especially on how to <em>save</em> your modifications to a <code>.config</code> file and then find this <strong>bloody</strong> file in the Yocto tree.</p>
<pre><code>bitbake -c menuconfig virtual/kernel
</code></pre>
<blockquote>
<p>Pro tip™ : for an intel corei7 build the path you are looking for is something similar to :</p>
</blockquote>
<pre><code>    build
     └ tmp
       └ work
         └ corei7-64-intel-common-poky-linux
           └ linux-intel
             └ 4.9.81+gitAUTOINC+*
               └ linux-corei7-64-intel-common-standard-build
                  └ .config
</code></pre>
<p>For the Minnowboard, one configuration item that needs to be changed is <code>CONFIG_IGB=y</code>. It builds the <strong>Intel Gigabit Ethernet</strong> driver directly into the kernel instead of as a module, and restores the eth0 interface, which otherwise is not created at boot.</p>
<div style="text-align: center; margin: 40px auto;">∗ ∗ ∗</div>
<p>Alright. This was a bit long, but I hope it can help anyone fiddling with Yocto to not bang their head over an obscure bug / behaviour. If you spot a mistake or think I have missed something, do not hesitate to drop me a line or comment below.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[HTTPS and no www with Nginx]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I have tried to find the most efficient way to redirect all traffic for a specific domain to its https counterpart, and also to redirect to the domain without the <code>www</code> subdomain.</p>
<p>I'm using Nginx in production and after combining various different solutions I found, I settled on this simple</p>]]></description><link>https://blog.tchap.me/properly-redirect-to-https/</link><guid isPermaLink="false">5ea824fb25e7cb317d1336cb</guid><category><![CDATA[nginx]]></category><dc:creator><![CDATA[tchap@tchap.me]]></dc:creator><pubDate>Thu, 26 Apr 2018 08:40:00 GMT</pubDate><media:content url="https://blog.tchap.me/content/images/2018/04/Nginx.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.tchap.me/content/images/2018/04/Nginx.png" alt="HTTPS and no www with Nginx"><p>I have tried to find the most efficient way to redirect all traffic for a specific domain to its https counterpart, and also to redirect to the domain without the <code>www</code> subdomain.</p>
<p>I'm using Nginx in production, and after combining various solutions I found, I settled on this simple configuration that I now use; I think it is quite straightforward and efficient. Thought I'd share, for what it's worth.</p>
<p>(In this example the backend is a PHP application located in <code>/var/www/my-website/</code> and I'm using letsencrypt for the certificates)</p>
<pre><code># no SSL
# Redirect both urls to the https server block
server {
        server_name my-website.fr www.my-website.fr;
        return 301 https://my-website.fr$request_uri;
}

# The main block : SSL
server {

        server_name my-website.fr www.my-website.fr;

        # If it has a www, rewrite.
        # The 'last' here is important
        # because we are cautious with the 'if's
        # nginx.com/resources/wiki/start/topics/depth/ifisevil
        if ($host ~* ^www\.){
            rewrite ^(.*)$ https://my-website.fr$1 last;
        }

        include ssl.conf;

        ssl_certificate /etc/letsencrypt/live/my-website.fr/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/my-website.fr/privkey.pem;

        # Application-specific stuff, just for illustrating
        index index.php;
        root /var/www/my-website;

        location / {
          # try to serve file directly, fallback to app.php
          try_files $uri /index.php$is_args$args;
        }

        # Pass on to FPM
        location ~ \.php$ {
           include php-fpm.conf;
        }

        # Deny access to .ht* files
        location ~ /\.ht {
          deny all;
        }
}
</code></pre>
<p><code>ssl.conf</code> is as follows (of course, you might need to create the <code>/etc/ssl/certs/dhparam.pem</code> file beforehand, with <code>sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048</code>):</p>
<pre><code>listen 443 ssl;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security max-age=15768000;
</code></pre>
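<p>Generating the <code>dhparam.pem</code> file at 2048 bits can take several minutes. If you just want to smoke-test the command itself first, you can generate a throwaway file with a small size (never use such a small size in production) :</p>
<pre><code>openssl dhparam -out /tmp/dhparam-test.pem 512
openssl dhparam -in /tmp/dhparam-test.pem -text -noout | head -n 1
</code></pre>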
<p>And, for reference, <code>php-fpm.conf</code> is available <a href="https://github.com/tchapi/ansible-playbooks/blob/master/roles/php-fpm/files/php-fpm.nginx.conf">here</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[A sensible DeployerPHP config for Sf4]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In my recent PHP projects, I tend to use Symfony 4, and <a href="https://www.npmjs.com/package/@symfony/webpack-encore">Webpack Encore</a> to build my assets. I find it to be a strong set of base tools for PHP apps.</p>
<p><a href="https://webpack.js.org/">Webpack</a> is a quite convenient way to build assets, widely used in the NodeJS world and for static</p>]]></description><link>https://blog.tchap.me/a-sensible-deployerphp-config-for-sf4/</link><guid isPermaLink="false">5ea824fb25e7cb317d1336ca</guid><category><![CDATA[php]]></category><category><![CDATA[webpack]]></category><category><![CDATA[symfony]]></category><category><![CDATA[deployer]]></category><dc:creator><![CDATA[tchap@tchap.me]]></dc:creator><pubDate>Fri, 13 Apr 2018 16:08:06 GMT</pubDate><media:content url="https://blog.tchap.me/content/images/2018/04/sf.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.tchap.me/content/images/2018/04/sf.png" alt="A sensible DeployerPHP config for Sf4"><p>In my recent PHP projects, I tend to use Symfony 4, and <a href="https://www.npmjs.com/package/@symfony/webpack-encore">Webpack Encore</a> to build my assets. I find it to be a strong set of base tools for PHP apps.</p>
<p><a href="https://webpack.js.org/">Webpack</a> is a quite convenient way to build assets, widely used in the NodeJS world and for static sites, even though it can be <em>slow</em> sometimes. Webpack Encore is a wrapper around Webpack that simplifies its API, a bit like Webpacker.</p>
<p><a href="http://deployer.org">Deployer</a> is an atomic deployment tool that is quite easy to use, robust, and highly configurable.</p>
<p>But fiddling with the conf to get something ready for Symfony 4 + Webpack took me a bit of time, so here is my <code>deploy.php</code> file, with comments inside, so you can hopefully benefit from it. There are two options for building assets : either locally or remotely. I prefer local builds since they are faster, and you don't need your build toolchain on the production server, but sometimes it's not possible (when building changes the names of the assets because you version them, for instance). Use what suits you best.</p>
<p>It's also available as a gist <a href="https://gist.github.com/tchapi/354630fadaa5bb6d54fc940801bdaa02">here</a>.</p>
<pre><code>&lt;?php

namespace Deployer;

require 'recipe/symfony4.php';

set('ssh_type', 'native');
set('ssh_multiplexing', true);

// Configuration

// For the repository, master is implied
set('repository', 'git@github.com:user/repo.git');

// Set shared dirs and dirs for Symfony 4
// I share sessions so that atomic deploys do not &quot;log out&quot;
// all the users I have. This may be a problem if your deployment
// modifies your session system somehow, so be careful.
// public/uploads is my standard upload directory for files
// and must be shared between deploys obviously.
set('shared_dirs', ['var/log', 'var/sessions', 'public/uploads']);
set('writable_dirs', ['var', 'public/uploads']);

// Paths to clear
// To avoid leaving unwanted access to these files in production,
// I simply clear what I don't need to run the app, and I run the 
// clear:path _after_ everything has been built.
set('clear_paths', [
  './README.md',
  './.gitignore',
  './.git',
  './.php_cs',
  './.env.dist',
  './.env',
  './.eslintrc',
  './.babelrc',
  '/assets',
  '/tests',
  './package.json',
  './package-lock.json',
  './symfony.lock',
  './webpack.config.js',
  './postcss.config.js',
  './phpunit.xml',
  './phpunit.xml.dist',
  './deploy.php',
  './psalm.xml',
  './composer.phar',
  './composer.lock',
  // We keep composer.json as it's needed by 
  // the Kernel now in Symfony 4
]);

// Set env, else composer will fail
// This is new since Sf3.4 I think, where we use a .env
// file instead of the parameters.yml file. Without these
// parameters, deployer will choke on deploy:vendors
set('env', function () {
    return [
        'APP_ENV' =&gt; 'prod',
        'MAILER_URL' =&gt; 'null://localhost',
        // Add more if you have other parameters in your .env
    ];
});

// Servers
// This is easy, just the server with a stage name so you can call
// `deploy production`
host('production')
    -&gt;hostname('myhost.com')
    -&gt;user('my_user')
    -&gt;forwardAgent()
    -&gt;set('deploy_path', '/var/www/my_project');

set('default_stage', 'production');
set('http_user', 'www-data');

// Tasks
// If you can / want to build assets locally and then upload, 
// if for instance you don't have the build tools on your frontend
// server.
desc('Build CSS/JS and deploy local built files');
task('deploy:build_local_assets', function () {
    runLocally('npm install');
    runLocally('npm run build');
    upload('./public/build', '{{release_path}}/public/.');
});

// If you want to build remotely
// For remote assets build, we need to know which npm to use
set('bin/npm', function () {
    return (string)run('which npm');
});
desc('Build CSS/JS remotely');
task('deploy:build_remote_assets', function() {
  run(&quot;cd {{release_path}} &amp;&amp; {{bin/npm}} install &amp;&amp; {{bin/npm}} run build&quot;);
});

// A simple task to restart the PHP FPM service, 
// if you use it of course
desc('Restart PHP-FPM service');
task('php-fpm:restart', function () {
    // The user must have rights for restart service
    // Change with your exact service version
    run('sudo systemctl restart php7.1-fpm.service');
});
after('deploy:symlink', 'php-fpm:restart');

// If deploy fails, automatically unlock
after('deploy:failed', 'deploy:unlock');

/**
 * The main task - it's basically the same as the symfony4
 * one but with rearranged tasks (especially for clear_paths)
 * and added tasks (assets, cache, etc)
 */
task('deploy', [
    'deploy:info',
    'deploy:prepare',
    'deploy:lock',
    'deploy:release',
    'deploy:update_code',
    'deploy:shared',
    'deploy:vendors',
    'deploy:build_local_assets',  // Choose which version
    'deploy:build_remote_assets', // you prefer
    'deploy:cache:clear',
    'deploy:cache:warmup',
    'deploy:writable',
    'deploy:clear_paths',
    'deploy:symlink',
    'deploy:unlock',
    'cleanup',
])-&gt;desc('Deploy');

// Display success message on completion
after('deploy', 'success');
</code></pre>
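<p>To run a deploy with this file, assuming Deployer is installed as a project dev dependency (via <code>composer require deployer/deployer --dev</code>), the invocation would be along these lines :</p>

```shell
# Deploy to the "production" host defined in deploy.php
# (the vendor/bin path assumes a Composer installation)
vendor/bin/dep deploy production
```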
<hr>
<p>If you have some ideas to improve this or some comments, as usual, do not hesitate !</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Font Customiser for Adafruit's fontconvert]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><strong>We love Adafruit.</strong></p>
<p>We love their products, libraries, their tools, ... well basically everything they do and share with the public.</p>
<p>I've been using their <strong>Adafruit GFX library</strong> for a while now with every kind of display available on their shop : from the low-power <a href="https://www.adafruit.com/product/3502">Sharp</a> to the SPI-enabled <a href="https://www.adafruit.com/product/1480">color TFT</a>s,</p>]]></description><link>https://blog.tchap.me/adafruit-gfx-font-customiser/</link><guid isPermaLink="false">5ea824fb25e7cb317d1336c7</guid><category><![CDATA[C]]></category><category><![CDATA[arduino]]></category><category><![CDATA[code]]></category><category><![CDATA[adafruit]]></category><dc:creator><![CDATA[tchap@tchap.me]]></dc:creator><pubDate>Fri, 22 Dec 2017 12:36:09 GMT</pubDate><media:content url="https://blog.tchap.me/content/images/2017/12/code.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.tchap.me/content/images/2017/12/code.png" alt="Font Customiser for Adafruit's fontconvert"><p><strong>We love Adafruit.</strong></p>
<p>We love their products, libraries, their tools, ... well basically everything they do and share with the public.</p>
<p>I've been using their <strong>Adafruit GFX library</strong> for a while now with every kind of display available on their shop : from the low-power <a href="https://www.adafruit.com/product/3502">Sharp</a> to the SPI-enabled <a href="https://www.adafruit.com/product/1480">color TFT</a>s, and some led matrix panels too ...</p>
<p>The provided fonts are easy to use and provide a wide range of possibilities, but if you want to design specific screens, you clearly want to use your own font.</p>
<p>For a recent project, I needed to use <strong>Roboto Thin</strong> and <strong>Roboto Light</strong> but I ran into some problems when converting the fonts.</p>
<h3 id="convertingafontforusewiththelib">Converting a font for use with the lib</h3>
<p>It's pretty much explained in the <code>/fontconvert</code> part of the library repository <a href="https://github.com/adafruit/Adafruit-GFX-Library/blob/master/fontconvert/makefonts.sh">here</a>, but let me break it down again.</p>
<p>You need a C compiler, the TTF file of your font and a text editor to proceed.</p>
<p>First, set the DPI of the font in the converter source. This is important since it dictates the real size of the font relative to the dot pitch of the screen.</p>
<p>It's in the <code>fontconvert.c</code> file, line <a href="https://github.com/adafruit/Adafruit-GFX-Library/blob/master/fontconvert/fontconvert.c#L27">27</a> :</p>
<pre><code class="language-c">...
#include &quot;../gfxfont.h&quot; // Adafruit_GFX font structures

#define DPI 141 // Approximate res. of Adafruit 2.8&quot; TFT

// Accumulate bits for output, with periodic hexadecimal byte write
void enbit(uint8_t value) {
...
</code></pre>
<p>It's the <code>#define DPI 141</code> that you want to change. Calculating yours is quite simple. For instance, for the Sharp display, we have the following info :</p>
<ul>
<li>resolution is 168x144 pixels (height x width)</li>
<li>width is 1.3''</li>
</ul>
<p>So we will have (we'll use the width since pixels are square) :</p>
<pre><code>DPI = 144 / 1.3 = 110.7 ≃ 111 
</code></pre>
<p>So you just use</p>
<pre><code>#define DPI 111
</code></pre>
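<p>The arithmetic above can be double-checked in a shell, for instance with awk :</p>

```shell
# DPI = pixels along one edge / physical size in inches, rounded
# (pixels are square, so either edge works; here 144 px over the 1.3'' width)
awk 'BEGIN { printf "%.0f\n", 144 / 1.3 }'
# → 111
```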
<p>Now you can build the tool thanks to its makefile :</p>
<pre><code>cd fontconvert
make
</code></pre>
<p>Now you can use the fontconvert tool like this</p>
<pre><code>./fontconvert RobotoThin.ttf 12 &gt; RobotoThin_Fixed12pt7b.h
</code></pre>
<h3 id="customizingrepairingtheoutput">Customizing / repairing the output</h3>
<p>But sometimes, the created file is not &quot;pixel perfect&quot; in the sense that some pixels are obviously missing.</p>
<p>For example, for the RobotoThin variant in 7pt for a DPI of 141, I get a half-baked <strong>p</strong> and <strong>E</strong> :</p>
<p><img src="https://blog.tchap.me/content/images/2017/12/Screen-Shot-2017-12-19-at-09.40.02.png" alt="Font Customiser for Adafruit's fontconvert"></p>
<p>This is likely due to <strong>freetype</strong> making (bad) choices on how to approximate sub-pixels, but we end up with a font that's not usable.</p>
<p>The <em>real</em> problem is that the resulting header file is hard to grasp, and hard to modify to add a single missing pixel.</p>
<p><strong>Worry not, though ! :)</strong></p>
<p>I created a font customiser to do <em>just</em> that : &quot;decompile&quot; the header file, display it in a handy editable format where you can change pixels and add rows or columns to each glyph, and then &quot;compile&quot; it again to a header file.</p>
<p><a href="https://tchapi.github.io/Adafruit-GFX-Font-Customiser/">https://tchapi.github.io/Adafruit-GFX-Font-Customiser/</a></p>
<p><img src="https://blog.tchap.me/content/images/2017/12/Screen-Shot-2017-12-05-at-20.45.11.png" alt="Font Customiser for Adafruit's fontconvert"></p>
<p>It's pretty straightforward to operate :</p>
<p>Paste the content of a converted file in the input textarea, and click on <strong>Extract</strong>. It takes a few seconds as the font is parsed and rendered as glyphs on the page.</p>
<p>When it's done, you can edit each glyph :</p>
<ul>
<li>click on a pixel to toggle its state (lit / unlit)</li>
<li>click the buttons &quot;+Row&quot; or &quot;+Col&quot; to add a row or a column to the glyph. Rows are added at the bottom, and columns on the right</li>
</ul>
<p>Once you're happy with the result, click <strong>Process</strong> and you have the resulting font in the output textarea. 🎉</p>
<hr>
<p>As usual, it's all open-source on <a href="https://github.com/tchapi/Adafruit-GFX-Font-Customiser">Github</a>. The code (<em>a single HTML page with JS</em>) is clearly not perfect and a bit messy, but hey, it does the job. PRs welcome of course.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Tunnel all the things]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>When you need to take control over a remote server or device, problems start to arise. Whether the device is behind a corporate firewall, a restrictive set-top box, or is just not publicly available from the Internet, it's always difficult to gain and keep access over the lifespan of the</p>]]></description><link>https://blog.tchap.me/tunnel-all-the-things/</link><guid isPermaLink="false">5ea824fb25e7cb317d1336c6</guid><category><![CDATA[node.js]]></category><category><![CDATA[programming]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[tchap@tchap.me]]></dc:creator><pubDate>Fri, 10 Nov 2017 09:53:09 GMT</pubDate><media:content url="https://blog.tchap.me/content/images/2017/11/tunnel.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.tchap.me/content/images/2017/11/tunnel.png" alt="Tunnel all the things"><p>When you need to take control over a remote server or device, problems start to arise. Whether the device is behind a corporate firewall, a restrictive set-top box, or is just not publicly available from the Internet, it's always difficult to gain and keep access over the lifespan of the service the device is supposed to provide.</p>
<p><strong>SSH tunnelling</strong> is one option for this that is dead simple, super powerful, and yet not very well known.</p>
<p><a href="https://ngrok.com/">Ngrok</a> and <a href="https://localtunnel.github.io/www/">LocalTunnel</a> provide this functionality somehow, and allow global access to &quot;local&quot; devices through an SSL tunnel. <a href="https://ngrok.com/product/ngrok-link">Ngrok link</a> allows remote management of devices. All these services rely on a private central server that the SSH tunnel goes through.</p>
<p>What I'm presenting here is a much more bare-bones solution that only relies on a simple SSH tunnel to gain and keep access to a remote device through a central public server.</p>
<h3 id="whatdoweneed">What do we need</h3>
<p>For the SSH tunnel to work, the only requirement is a publicly accessible server that runs an SSH daemon, and a user that can access this machine. We'll use the user <code>foo</code>, and the server will be <code>central.example.org</code>.</p>
<blockquote>
<p>The user just needs to exist and be able to login.</p>
</blockquote>
<blockquote>
<p>To create it, a simple <code>sudo useradd foo</code> and <code>passwd foo</code> afterwards. For connecting, it's a good idea to add the connecting device's public key to the <code>~/.ssh/authorized_keys</code> file of this <code>foo</code> user so that the tunnel creation can be non-interactive.</p>
</blockquote>
<blockquote>
<p>It will be mandatory if you use that technique to access remote devices (<em>on which, by definition, you have no way to interactively input a password ...</em>)</p>
</blockquote>
<p>Of course, you also need some kind of device that we will access and manage remotely.</p>
<p>We're going to use basic Linux functionality and some NodeJS for easy creation of the tunnel.</p>
<h3 id="makinglocalglobal">Making local global</h3>
<p>The first easy use-case is to make a local port available to the world thanks to SSH. This is easily done.</p>
<p>Suppose you have a local machine on which you develop, and you have a node service running on port <code>3000</code>.</p>
<p>To make that port available to anyone (<em>for remote testing for instance</em>), just create a tunnel that forwards your local port to a random port on the central server :</p>
<pre><code>ssh foo@central.example.org \
  -o PasswordAuthentication=no \
  -o ServerAliveInterval=30 \
  -N -R 0:localhost:3000
</code></pre>
<p>The output will be something like :</p>
<pre><code>Allocated port 42671 for remote forward to localhost:3000
</code></pre>
<p>The port is allocated randomly. You can use a predefined port, but make sure it is not already used by something else :</p>
<pre><code>ssh foo@central.example.org \
  -o PasswordAuthentication=no \
  -o ServerAliveInterval=30 \
  -N -R 13245:localhost:3000
</code></pre>
<p>The output will be empty, don't expect anything — you already know the port you're forwarding to !</p>
<p>As long as the process is alive, the forwarding is effective. Just <kbd>CTRL</kbd> + <kbd>C</kbd> to stop the whole thing.</p>
<h3 id="accessingremotedevices">Accessing remote devices</h3>
<p>The exact same principle can be used to gain remote access to your devices in the wild. The tunnel must be initiated on the remote device, to your central server.</p>
<blockquote>
<p>If you choose to use a random port number, there is an extra step for you to retrieve the allocated port, but as your remote device is connected (<em>since it just opened an SSH connection</em>), you can either send the port via mail (quick and dirty) or use any other method to retrieve it (send a GET request to your backend for your devices for instance).</p>
</blockquote>
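<p>Whatever the retrieval method, the first step is extracting the allocated port from ssh's message. A minimal sketch (the sample line is the one shown above) :</p>

```shell
# ssh prints this when using -R 0:... ; keep only the digits of the port
line="Allocated port 42671 for remote forward to localhost:3000"
port=$(echo "$line" | sed -n 's/^Allocated port \([0-9]*\) .*/\1/p')
echo "$port"
# → 42671
```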
<p>To use that, I have created a simple Node wrapper that I now use when I need to create a tunnel between remote devices and a central 'command' center.</p>
<p>The full code is available on Github as usual : <a href="https://github.com/tchapi/node-simple-ssh-tunnel">tchapi/node-simple-ssh-tunnel</a></p>
<p>Using it is pretty simple. It's just a wrapper around the aforementioned SSH <em>stanza</em> :</p>
<pre><code>const tunnel = require('ssh_tunnel')

tunnel.setConfig({
    user: &quot;foo&quot;,
    server: &quot;central.example.org&quot;,
    port: 22,
    timeout: 10000, // in ms
})

tunnel.start(function() {
    console.log(tunnel.getState())
    
    // Later ...
    tunnel.stop()
}, function() {
    console.log(&quot;There has been an error ...&quot;)
})
</code></pre>
<p>And you would get something like that :</p>
<pre><code class="language-shell">$ node examples/index.js
[info] Starting tunnel to foo@central.example.org
[info]  Allocated port 43557 for remote forward to localhost:22
[success] Tunnel active at 43557
{ port: '43557' }
</code></pre>
<p>Now, up to you to send this <code>tunnel.getState()</code> info to somewhere you can retrieve it.</p>
<p><strong>Easy and robust</strong>.</p>
<p>Now, to connect to your remote device from anywhere, you just have to do :</p>
<pre><code>ssh device_user@central.example.org -p 43557
</code></pre>
<p>which is strictly equivalent to :</p>
<pre><code>ssh device_user@remote_device
</code></pre>
<p>... if you were on the same local network as the device.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[“Public display mode“ for embedded boards]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I've been working on information displays for quite a while and one major feature of these displays is that they are required to display as little as possible of the underlying system they are using.</p>
<p><img src="https://blog.tchap.me/content/images/2017/09/bsod-columns_3471429k.jpg" alt></p>
<center><small><em>oops</em></small></center>
<p>No one wants to see a BSOD, a boot sequence or any kind of</p>]]></description><link>https://blog.tchap.me/public-display-mode-embedded-boards/</link><guid isPermaLink="false">5ea824fb25e7cb317d1336c5</guid><category><![CDATA[raspberry]]></category><category><![CDATA[upboard]]></category><category><![CDATA[digital signage]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[tchap@tchap.me]]></dc:creator><pubDate>Fri, 15 Sep 2017 13:39:00 GMT</pubDate><media:content url="https://blog.tchap.me/content/images/2017/09/Screen-Shot-2017-09-15-at-16.59.59-1.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.tchap.me/content/images/2017/09/Screen-Shot-2017-09-15-at-16.59.59-1.png" alt="“Public display mode“ for embedded boards"><p>I've been working on information displays for quite a while and one major feature of these displays is that they are required to display as little as possible of the underlying system they are using.</p>
<p><img src="https://blog.tchap.me/content/images/2017/09/bsod-columns_3471429k.jpg" alt="“Public display mode“ for embedded boards"></p>
<center><small><em>oops</em></small></center>
<p>No one wants to see a BSOD, a boot sequence or any kind of error message pop up on a large public screen.</p>
<p>So how do we do that ?</p>
<p>As we rely on several software layers to display the content we really want to show (say, a webpage), we have to make sure that none of them is verbose, so the user experience stays controlled.</p>
<p>In the following article I will focus on two boards : the <a href="https://www.raspberrypi.org/products/raspberry-pi-3-model-b/">Raspberry Pi 3 Model B</a> (ARM) and the <a href="https://up-shop.org/up-boards/2-up-board-2gb-16-gb-emmc-memory.html">Upboard</a> (Intel-based)</p>
<p>The Pi runs a standard <a href="https://www.raspberrypi.org/downloads/raspbian/">Raspbian</a> Jessie Lite (it should work with Stretch, too). On the Upboard, I decided to go the <a href="http://releases.ubuntu.com/xenial/">Ubuntu Server</a> route, since ubilinux (<em>the distribution that Upboard proposes</em>) doesn't have the same community yet. I felt a simple 16.04 was more adequate.</p>
<blockquote>
<p>I didn't install the <a href="https://up-community.org/wiki/Ubuntu#Ubuntu_16.04.1_LTS_.28Xenial_Xerus.29">linux-upboard kernel flavour</a> on the upboard since I didn't need the GPIO support but I guess this should be harmless for our use case here</p>
</blockquote>
<h4 id="silentboot">Silent boot</h4>
<p><strong>First things first : the boot sequence.</strong></p>
<p>On recent systems, it's generally quite easy to get rid of all the output at boot.</p>
<h6 id="raspberrypi">Raspberry Pi</h6>
<p>A few steps are required to have a perfectly blank screen.</p>
<p>Mute locale warnings by setting the locale in <code>/etc/environment</code> (<em>you can choose any installed locale</em>):</p>
<pre><code>LC_ALL=en_GB.UTF-8
LANG=en_GB.UTF-8
</code></pre>
<p>Make sure date and time are correctly configured, to avoid timezone warnings :</p>
<pre><code>sudo dpkg-reconfigure tzdata
</code></pre>
<p>Add this in <code>/boot/config.txt</code>:</p>
<pre><code>disable_splash=1
avoid_warnings=1
</code></pre>
<p>And change your <code>/boot/cmdline.txt</code> to something along those lines:</p>
<pre><code>dwc_otg.lpm_enable=0 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait logo.nologo console=null quiet vt.global_cursor_default=0
</code></pre>
<blockquote>
<p>The main part is <code>logo.nologo</code> , <code>console=null</code> and <code>quiet</code>. You should remove the part about the serial console too</p>
</blockquote>
<h6 id="upboard">Upboard</h6>
<p>It's a bit simpler if you use GRUB as the bootloader.</p>
<p>Edit <code>/etc/default/grub</code> and add :</p>
<pre><code>GRUB_CMDLINE_LINUX_DEFAULT=&quot;quiet loglevel=0 logo.nologo console=tty12 --quiet&quot;
GRUB_CMDLINE_LINUX=&quot;quiet loglevel=0 logo.nologo console=tty12 --quiet&quot; #Don't show kernel text
GRUB_HIDDEN_TIMEOUT=0
</code></pre>
<p>And then run  <code>sudo update-grub2</code>.</p>
<p>This will disable the GRUB bootloader menu, and remove all text logs from the screen (hopefully). Some errors may still show, but I guess they can be specifically silenced by disabling the relevant modules.</p>
<h6 id="disableloginscreen">Disable login screen</h6>
<p>Next, you want to disable the login screen on <strong>tty1</strong> so no prompt appears after the boot :</p>
<pre><code>sudo systemctl disable getty@tty1
</code></pre>
<p>This is valid for both platforms.</p>
<p>That's it ! You should have a perfect black screen all the way, and no output whatsoever (<em>via HDMI, at least</em>).</p>
<h4 id="displayingthings">Displaying things</h4>
<p>What's the best way to get rid of any kind of error messages ? Well, <strong>how about having no window manager in the first place ?</strong></p>
<p>In this scenario, we want to display a single app : a web browser, and expect no user interaction whatsoever.</p>
<p>It makes sense to only use a display server for this application, make it fullscreen, and call it a day.</p>
<p>This allows for a clutter-free screen, no error messages (<em>at least from the X server</em>) and a minimal installation footprint.</p>
<p>And if everything crashes, you end up with a black screen <em>by construction</em>, which is quite a good fallback (<em>except if your web app crashes with a 500 or displays gibberish, but that's out of scope here</em>).</p>
<p>In the following installation, I will configure the display manager to automatically start our chosen browser fullscreen with the relevant options.</p>
<h6 id="installation">Installation</h6>
<p>First, some packages are necessary : <code>xorg</code>, <code>xserver-xorg-legacy</code> and <code>xinit</code> :</p>
<pre><code>sudo apt-get update
sudo apt-get install build-essential unclutter xorg xinit xserver-xorg-legacy -y
</code></pre>
<p><code>unclutter</code> hides the mouse cursor, it's a neat little utility.</p>
<h6 id="configuration">Configuration</h6>
<p>Then run this to allow anybody to start X (<em>you don't want to run it as root</em>):</p>
<pre><code>sudo dpkg-reconfigure xserver-xorg-legacy
</code></pre>
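<p>On Debian-based systems, this reconfigure step simply writes <code>/etc/X11/Xwrapper.config</code>. If you select &quot;Anybody&quot; at the prompt, the file should end up containing a line like :</p>

```
allowed_users=anybody
```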
<p>You may want to add the user to the <strong>tty</strong> group too :</p>
<pre><code>sudo usermod -a -G tty {yourUserName}
</code></pre>
<h6 id="creatingnecessaryxstartupfiles">Creating necessary X startup files</h6>
<p>Create <code>/home/{yourUserName}/.xserverrc</code> and add the below content:</p>
<pre><code class="language-bash">#!/bin/sh
# Start an X server with power management disabled so that the screen never goes blank
exec /usr/bin/X -s 0 -dpms -nolisten tcp &quot;$@&quot;
</code></pre>
<p>This starts X with display sleep management turned off, so your display is always on.</p>
<p>Then, to automatically start the browser when X launches, create <code>/home/{yourUserName}/.xsession</code> and add the below content:</p>
<pre><code class="language-bash">#!/bin/sh
exec /usr/bin/chromium-browser %u \
  --kiosk --start-fullscreen --incognito \
  --no-first-run \
  --disable-session-crashed-bubble \
  --disable-infobars \
  --disable-restore-background-contents  \
  --disable-translate  \
  --disable-new-tab-first-run \
  --noerrdialogs \
  --no-sandbox \
  --user-data-dir=/tmp \
  --window-size=1600,900 \
  http://localhost:3000 
</code></pre>
<p>Change the url to match yours, of course. Even though the &quot;start-fullscreen&quot; flag is set, it seems a bit buggy since we have no window manager, so I also pass the actual resolution of the screen to be sure the window takes the full width and height.</p>
<blockquote>
<p>There are a lot of options here that I carefully tested. In my opinion this is the best flag list for this use. Do not hesitate if you have any other input or improvement over this !</p>
</blockquote>
<blockquote>
<p>Also, we're assuming we're using Chromium here, but you can replace that with <code>/usr/bin/google-chrome</code>. See below for the two options.</p>
</blockquote>
<p>You can then run the whole thing directly with :</p>
<pre><code>/usr/bin/startx -- /home/{yourUserName}/.xserverrc -nocursor
</code></pre>
<p>or make a unit file to start on boot directly via <strong>systemd</strong> :</p>
<pre><code>[Unit]
Description=Public Display device
After=network-online.target
Before=multi-user.target
DefaultDependencies=no

[Service]
User={yourUserName}
ExecStartPre=/bin/sleep 5
ExecStart=/usr/bin/startx -- /home/{yourUserName}/.xserverrc -nocursor
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
</code></pre>
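<p>Save the unit somewhere like <code>/etc/systemd/system/kiosk.service</code> (<em>the name is hypothetical, pick your own</em>) and enable it so it starts on boot :</p>

```shell
# Reload unit files, then enable the service for the next boots
sudo systemctl daemon-reload
sudo systemctl enable kiosk.service
```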
<p>Now, for the choice of browser !</p>
<h6 id="usingchromium">Using Chromium</h6>
<p>There is apparently a bug in Chromium (somewhat related to <a href="https://github.com/RPi-Distro/repo/issues/58">https://github.com/RPi-Distro/repo/issues/58</a>, but not exactly) that makes the latest versions crash when running in kiosk mode on embedded devices. This seems to be linked to starting Chromium in Xorg directly.</p>
<p>The last known working version is <strong>48.0.2564.82</strong> afaik, so we need to install it manually.</p>
<p>First remove the previous packages if any :</p>
<pre><code>sudo apt-get remove chromium-codecs-ffmpeg-extra chromium-browser chromium-browser-l10n
</code></pre>
<p>And install the correct version. There are two scenarios : one for the Pi (ARM-based), and one for the Upboard.</p>
<p><strong>Raspberry Pi</strong></p>
<p>Get the packages :</p>
<pre><code>wget https://launchpad.net/~canonical-chromium-builds/+archive/ubuntu/stage/+build/8883797/+files/chromium-codecs-ffmpeg-extra_48.0.2564.82-0ubuntu0.15.04.1.1193_armhf.deb https://launchpad.net/~canonical-chromium-builds/+archive/ubuntu/stage/+build/8883797/+files/chromium-browser_48.0.2564.82-0ubuntu0.15.04.1.1193_armhf.deb http://launchpadlibrarian.net/234938396/chromium-browser-l10n_48.0.2564.82-0ubuntu0.15.04.1.1193_all.deb https://launchpad.net/~ubuntu-security/+archive/ubuntu/ppa/+build/8993250/+files/libgcrypt11_1.5.3-2ubuntu4.3_armhf.deb
</code></pre>
<p>Install them :</p>
<pre><code>sudo dpkg -i libgcrypt11_1.5.3-2ubuntu4.3_armhf.deb
sudo dpkg -i chromium-codecs-ffmpeg-extra_48.0.2564.82-0ubuntu0.15.04.1.1193_armhf.deb
sudo dpkg -i chromium-browser_48.0.2564.82-0ubuntu0.15.04.1.1193_armhf.deb
sudo dpkg -i chromium-browser-l10n_48.0.2564.82-0ubuntu0.15.04.1.1193_all.deb
</code></pre>
<p>BOOM. Done.</p>
<p><strong>Upboard</strong></p>
<p>Get the packages :</p>
<pre><code>wget http://launchpadlibrarian.net/234938404/chromium-browser_48.0.2564.82-0ubuntu0.15.04.1.1193_amd64.deb http://launchpadlibrarian.net/234938396/chromium-browser-l10n_48.0.2564.82-0ubuntu0.15.04.1.1193_all.deb http://launchpadlibrarian.net/234938406/chromium-codecs-ffmpeg-extra_48.0.2564.82-0ubuntu0.15.04.1.1193_amd64.deb
</code></pre>
<p>Install them :</p>
<pre><code>sudo dpkg -i chromium-codecs-ffmpeg-extra_48.0.2564.82-0ubuntu0.15.04.1.1193_amd64.deb
sudo dpkg -i chromium-browser_48.0.2564.82-0ubuntu0.15.04.1.1193_amd64.deb
</code></pre>
<p>At some point you might need to fix dependencies, because the <strong>amd64</strong> version apparently depends on a few more things :</p>
<pre><code>sudo apt install -f
</code></pre>
<p>BOOM. Done.</p>
<p><strong>Holding packages</strong></p>
<p>Don't forget to block updates on chromium packages :</p>
<pre><code>sudo apt-mark hold chromium-codecs-ffmpeg-extra chromium-browser chromium-browser-l10n
</code></pre>
<blockquote>
<p>To unhold the packages :</p>
<pre><code>sudo apt-mark unhold chromium-codecs-ffmpeg-extra chromium-browser chromium-browser-l10n
</code></pre>
</blockquote>
<h6 id="usinggooglechrome">Using Google Chrome</h6>
<p>Add a new repo list file: <code>sudo vi /etc/apt/sources.list.d/google-chrome.list</code> with the following content :</p>
<pre><code>deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main
</code></pre>
<p>Add the Google signing key :</p>
<pre><code>wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
</code></pre>
<p>And then install Chrome:</p>
<pre><code>sudo apt update &amp;&amp; sudo apt install google-chrome-stable
</code></pre>
<blockquote>
<p>It only works on Intel-based architectures as far as I could test, so Chromium is the only option for now on Raspberry Pi</p>
</blockquote>
<h4 id="allgood">All good</h4>
<p>So far, so good. I have tested this on a stock Raspberry Pi 3 Model B and an Upboard and it runs smoothly. Crashes have happened, and the screen simply stayed black, as expected.</p>
<p>But maybe I have missed some options / interesting additions ? Do not hesitate to tell me in the comments.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>