<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Savoir Blog]]></title><description><![CDATA[News, announcements, and other things the Savoir team wants to share with the community]]></description><link>https://blog.savoir.dev</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1704421533736/QApGdMEeq.png</url><title>Savoir Blog</title><link>https://blog.savoir.dev</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 22 Apr 2026 14:37:16 GMT</lastBuildDate><atom:link href="https://blog.savoir.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Rewrite, Rebrand, Release: the last year at Savoir]]></title><description><![CDATA[It's been a while since I last wrote a post on this blog. I love blogging, but being a solo founder means that any time taken to write blog posts is time I'm not investing into my product. I had to make the tough choice to put our blog on hold for so...]]></description><link>https://blog.savoir.dev/rewrite-rebrand-release-the-last-year-at-savoir</link><guid isPermaLink="true">https://blog.savoir.dev/rewrite-rebrand-release-the-last-year-at-savoir</guid><category><![CDATA[Startups]]></category><category><![CDATA[Design]]></category><category><![CDATA[engineering]]></category><dc:creator><![CDATA[Guillaume St-Pierre]]></dc:creator><pubDate>Fri, 05 Jan 2024 15:00:45 GMT</pubDate><content:encoded><![CDATA[<p>It's been a while since I last wrote a post on this blog. I love blogging, but being a solo founder means that any time taken to write blog posts is time I'm not investing into my product. I had to make the tough choice to put our blog on hold for some time. 
Lots of things changed for Savoir during that pause: I worked with two amazing people on our release plan and rebranding, and I completed a full rebuild of the core Savoir API.</p>
<p>Please join me as I write our obligatory new year blog post where we reminisce on what happened in the last year (and more). There are exciting things at the end you might not want to miss.</p>
<h2 id="heading-rewrite">Rewrite</h2>
<p>In July 2022, I shared my <a target="_blank" href="https://blog.savoir.dev/6-lessons-from-a-technical-founder">6 hard-learned lessons for technical founders</a>. The intro mentioned my decision to fully rewrite the Savoir API. For three months in the summer of 2022, I had the chance to welcome Savoir's first contractor hire. We outlined a roadmap for release and set our priorities. It turns out that a hacked-together system, written in two months more than a year before, wasn't cutting it for release. Getting it to production would have cost weeks of work in a framework and language I wasn't an expert in.</p>
<p>So I made the tough decision to move to something I know: Golang. I've been working as a Golang developer for more than half a decade, and I'm confident in my ability to build things with it. I had many other options in front of me, many of which played more to my interests than my strengths. I decided to focus on what would bring me the most velocity and confidence. I want to build this quickly, and I want to do it right this time.</p>
<p>To avoid the same release pitfalls I faced with the old Elixir version, I chose to move forward with the <a target="_blank" href="https://encore.dev/">Encore framework</a>. This was one of the best decisions I've made in a long time. I am biased as a contributor to the framework and this isn't an Encore blog post, so I won't go into too much detail. Instead, I want to share how valuable it is to not have to worry about compilation, release, or infra. Savoir is a small project and still in its unreleased stage. Playing to my strengths also means understanding where to invest my limited time. By being able to outsource everything release-related to a framework and focus on the features, I saved a huge amount of time and effort.</p>
<p>As I am writing this post, the rewrite has been complete for more than a month now. I've officially started building <em>new features</em>. All the core features I wanted to migrate have been migrated, and the result is a lot more stable, with many more tests. Testability was one big thing I couldn't get working right with the previous framework; it's now seamless. This means I can (again) focus on building features rather than worrying about testing, or the lack thereof.</p>
<p>The rewrite started in earnest in December 2022, which means it took about a year to fully rewrite Savoir. A good achievement given my limited resources, but it still could have gone better. Let me know if you'd like a more in-depth post on my lessons learned in the future!</p>
<h2 id="heading-rebrand">Rebrand</h2>
<p>If you've read our previous posts or been a reader for a few years, you might have noticed our blog and website got a makeover back in 2022. In fact, this makeover happened barely a few months after my post describing my journey to <a target="_blank" href="https://blog.savoir.dev/how-i-redesigned-the-savoir-website-3-times">redesigning the website three times</a>. After writing that blog post, I discussed the content with the contractor. I still wasn't satisfied with what I could achieve by myself. We had money saved up for some development help, and we ended up deciding to invest it into hiring a designer instead.</p>
<p>We posted a contract description, made our limited budget clear from the get-go, and I started interviewing. Since we're fully bootstrapped, I think it's very important to let designers know how much we can pay them. The last thing I want to do is waste someone's time on a project that doesn't bring them enough. I prefer to reduce my expectations rather than ask someone to work more for less.</p>
<p>We interviewed three designers/agencies and ended up working with <a target="_blank" href="https://buenas.design/">Buenas Design</a>. I cannot recommend them enough: they took my rough designs and ideas, then created something truly unique and incredible. Our main objective, given our limited budget, was to define a concrete visual identity and create a new brand logo. What they delivered was beyond what I could have ever imagined.</p>
<p>Rather than randomly generated colors that mostly match one another, we now have a well-designed color palette with clear guidelines. Rather than an on-the-nose robot-in-a-book logo, we now have a studious suricate. I'm still very proud of the logos and websites I designed, especially given that I had to learn all those skills, but seeing what we created with the help of a designer was the highlight of my year. We officially switched to the new brand on the 16th of January 2023 with the introduction of <strong>Savant</strong>, our new logo, mascot, and product name. Sadly, this meant saying goodbye to SavBot. I'll never forget you.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704422180308/a00b9b33-0ad2-403c-969d-a4682f912647.png" alt="SavBot, the blue robot in a book saying goodbye on a yellow background next to Savant, the blue suricate, saying hello on a light blue background." class="image--center mx-auto" /></p>
<p>At the same time, we fully redesigned our website. The new version has a better flow of information and switches to a video format to explain how Savoir works. We've had great feedback on the video and we're hoping to expand on it in the future.</p>
<p><img src="https://lh7-us.googleusercontent.com/l6mP8Wq4k1SYK36OJe78sTa7fJXxSH87znLYxYojSjTe727R4dUSrHUv_3xKvpsL8zggZcfOsiR2K7zBBGTCzuImF9JwR8hwT1OXzz0641WirPuctqbRR-d5mMIP6KseF_CWNr_ZpsR2SYzDoNjUizk" alt="View of the Savoir website, with the navbar visible, the meet savant message, and Savant the blue suricate waving." /></p>
<p>In my opinion, this was the best time to invest in this level of rebranding. Our logo still isn't that well known, and releasing with our old brand would have meant losing that window of opportunity. If you're in the pre-release stage for your product and are considering investing in a designer, I can wholeheartedly recommend the experience: it's worth it.</p>
<h2 id="heading-release">Release</h2>
<p>This brings us to January 2024 and this post. Savant is fully deployed to our staging environment and we're preparing a "stealth" release to production. We've been using Savant for a few months internally and it has been a great experience, but also a learning opportunity as we realize (and fix) some of the drawbacks of old decisions.</p>
<p>Why a stealth production release, you may ask? One of the most difficult decisions I made for the rewrite was not to migrate the self-serve features of the Savoir API, at least not for the first release. We don't have the resources to commit to a high-visibility release, and we can't afford to fail and be forgotten. I learned early that it can be better to release silently than to have your loud release be ignored. You can only shout "RELEASE!!" so many times.</p>
<p>For that reason, we'll be working closely with beta testers to get them integrated with Savant. It should only take a couple of clicks, but that hands-on approach will allow us to focus on the features users want as we build our self-serve flow. Another learning opportunity to add to the list!</p>
<p>Savant is now open for beta registrations to anyone interested in working with us to bring it to life. We want to build it for real users, and we're ready to build features for you if you're ready to try Savant out. If that sounds like something you want to be a part of, please sign up on our website or <a target="_blank" href="https://forms.gle/Kqy7VRcubns1xHgp9">directly from here</a>.</p>
<h2 id="heading-whats-next">What's next?</h2>
<p>With our release readiness now official, my next focus is on building Savant into the best product it can be. I will probably not be as active on our blogs as I once was, but I want to take this opportunity to post more content. To make this possible, I'll be writing about the things we learn while building Savoir. Please let me know in the comments below if there's anything you'd like me to write about next!</p>
<p>See you all in my next post.</p>
<hr />
<p><strong>If what we are building looks interesting to you, please check our features and register for exclusive beta access on our website at</strong> <a target="_blank" href="https://savoir.dev"><strong>savoir.dev</strong></a><strong>. Feel free to also send me a message at info@savoir.dev, I'll be glad to answer any questions you have or give you a preview.</strong></p>
<blockquote>
<p>Savoir is the French word for knowledge, pronounced sɑvwɑɹ.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[How I redesigned the Savoir website 3 times]]></title><description><![CDATA[As my last #4articlesin4weeks writeathon submission on my personal blog, I posted a small list of tips on logo design for technical people. I was going through the process of redesigning the Savoir website to be more on brand and it felt like the rig...]]></description><link>https://blog.savoir.dev/how-i-redesigned-the-savoir-website-3-times</link><guid isPermaLink="true">https://blog.savoir.dev/how-i-redesigned-the-savoir-website-3-times</guid><category><![CDATA[Design]]></category><category><![CDATA[Story]]></category><category><![CDATA[Startups]]></category><dc:creator><![CDATA[Guillaume St-Pierre]]></dc:creator><pubDate>Thu, 29 Sep 2022 15:30:45 GMT</pubDate><content:encoded><![CDATA[<p>As my last #4articlesin4weeks writeathon submission on my personal blog, I posted a <a target="_blank" href="https://minivera1.hashnode.dev/logo-design-tips-for-developers">small list of tips on logo design for technical people</a>. I was going through the process of redesigning the Savoir website to be more on brand and it felt like the right time to share my own tips. I am not a designer, as the tips article may show, and I have to work with the skills that I have. I now have the chance to work with someone who can challenge my poor design decisions, but this also makes the design process even harder. I can't settle for what looks “good enough”, so getting feedback has been a very good tool for making improvements. Today, I'd like to share the story of how I redesigned the hero banner for the Savoir website and the many mishaps along the way.</p>
<h2 id="heading-the-original-design-wasnt-great">The original design wasn't great</h2>
<p>Imposter syndrome being what it is, I can't help but continuously question my design abilities. It's a fine line between looking at something you made from an objective point of view and unconsciously viewing that same thing more negatively than it really is. I find that line tends to get blurry the more I stare at something. In the process of reviewing my own work, I may find a few things that stand out now but don't matter in the long run. For example, did you know the Savoir logo's eyes are not exactly the same distance from the center of the logo? One is a few millimeters further to the right than the other. I wouldn't fault anyone for not noticing it; I only did with the help of <em>a ruler</em>. It's one of those situations where I had to seriously question whether I was looking at this objectively. Did this small mistake make me a bad logo designer?</p>
<p>The same thing goes for the Savoir landing page's design. The initial version, built back in 2020, wasn't ugly, but it wasn't great either. It suffered from a lack of consistency, poor contrast, a bad choice of words that made people think we were selling a newsletter, and a poorly positioned layout that made the site look unprofessional.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1664054433716/ETK4EtNtI.png" alt="The initial version of the Savoir landing page design" class="image--center mx-auto" />
<em>The first iteration of the Savoir landing page</em></p>
<p>The bad layout decision and the bad flow of information were the main reasons why I started thinking of a redesign as soon as the page was hosted on Netlify. The core issue was the centered text and logo in the hero banner, which was followed by a zig-zag "how it works" section and a grid-based features section. A viewer's eyes would have to jump around a lot, which isn't great for readability. There was also no clear explanation of what the product is about without scrolling to a later section, which isn't a great first impression. The flow issue becomes very apparent when you trace the line of the typical "user story".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1664055161465/rqevO4p1E.png" alt="Savoir User Story.png" class="image--center mx-auto" /></p>
<h2 id="heading-i-had-to-get-started-somewhere">I had to get started somewhere</h2>
<p>It took me more than a year to finally tackle the challenge of improving this design. During this time, I researched and designed. I drew a lot with Inkscape, made quite a few personal websites, and I bookmarked a <em>lot</em> of design inspiration. I also did a lot of work on the product's branding. The first version was mostly built without direction. I found some cool inspiration online after a few minutes of searching and got started. I hadn't considered what personality I wanted or what mood the page should create. I took some basic <a target="_blank" href="https://bulma.io/">Bulma</a> components and built something in a few days.</p>
<p>Going back to the drawing board, I now had a much better idea of the kind of personality and mood I wanted to see on the landing page. First and foremost, it should put a bigger emphasis on the <code>git</code> and chat bot nature of the product at the top of the page; that shouldn't require any scrolling. After searching for inspiration online, I stumbled upon a thread of GitHub pull request flows. I really liked them; they showed the product flow really well and screamed "Git", which was exactly the mood I wanted the product to convey.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1664056572811/MStxz5zVo.png" alt="The perfect GitHub flow" class="image--center mx-auto" /></p>
<p>I took these images as inspiration and immediately started designing. I soon realized that, even with years of using Inkscape, I had never learned how to make a good-looking curved line. Curves are not the easiest thing to do in HTML, and I was convinced they would be easier to make in SVG, which I had to learn from scratch.</p>
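<p>For reference, the kind of curve I struggled with boils down to a single cubic Bézier <code>path</code> in hand-written SVG. This is a minimal sketch, not the actual design; the coordinates and colors are illustrative:</p>
<pre><code>&lt;svg viewBox="0 0 160 100" width="160" height="100"&gt;
  &lt;!-- One cubic Bézier: move to a start point, then curve
       through two control points toward an end point --&gt;
  &lt;path d="M 10 90 C 60 90, 110 10, 150 10"
        fill="none" stroke="#4a90d9" stroke-width="4"
        stroke-linecap="round" /&gt;
&lt;/svg&gt;
</code></pre>
<p>Dragging the two control points after the <code>C</code> is what bends the line smoothly instead of kinking it, which is exactly the part that is hard to eyeball without practice.</p>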
<p>I strongly believe in failing forward. If I didn't have the ability to build this flow the first time, I would work on it until I got it right. After hours of work (yes, hours), I managed to build a custom pull request icon with some status checks. It wasn't great, but it taught me how to make nice-looking curved lines and how to play with gradients. It was a good start. I also decided to draw it vertically instead of horizontally, to break away from the centered hero and instead follow the zig-zag pattern of the rest of the page.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1664056943912/DfLChzf8a.png" alt="Version 2 of the Savoir landing page" class="image--center mx-auto" /></p>
<h2 id="heading-going-too-far-in-one-direction">Going too far in one direction</h2>
<p>Armed with my new skills, I started sketching the exact flow I imagined in my head and began designing. The great thing about this kind of iterative process is how it enables experimentation. I had a slightly better flow and messaging in place on the website already, there was nothing wrong with trying something bigger. I had no idea if my ideas would work in practice. I had to apply them to find out.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1664057557133/ra_BS6j5v.png" alt="The completed pull request flow" class="image--center mx-auto" />
<em>I tried, it took me 5 hours too</em></p>
<p>I don't think I'll make the news by saying that this version went way too far in the GitHub flow direction. With some adjustments, it could definitely look good, but I simply didn't have the skills to make it better (the chat boxes are especially bad). It also showed me that larger doesn't equal better: the entire image would need to be very big to show the information properly, which would create a lot of empty space in the left column. I could make it horizontal, but then we're back to the flow issues from the first version.</p>
<h2 id="heading-learning-from-my-mistakes">Learning from my mistakes</h2>
<p>As soon as she saw the SVG version of the flow, <a class="user-mention" href="https://hashnode.com/@wajma">Wajma Mohseni</a> asked me: "can we compress it?". I wasn't sure this was possible; could I convey the pull request flow without representing it as two branches? Should I stick with the top-to-bottom branch flow after all? I had a hard time wrapping my head around this, but I decided to trust the advice I got. This taught me two things: first, the GitHub-like flow is a good idea and shows what the product does at a glance; second, I'm not making the best use of my time by sticking with Inkscape. Don't get me wrong, Inkscape is an amazing tool, but I am a developer. If I want something to look good, I should build it in HTML and CSS, not SVG. Back on the internet I went to search for inspiration.</p>
<p>A constant in all my blog posts is that I really like sketching things out when I design. It helps clear my mind, and my inability to draw makes it very easy to focus on the information over the looks. So I did just that. I went to my whiteboard, aligned my monitor so it would show my inspiration board, and started sketching until I hit something I liked. It took some time, but shifting my perspective and having good inspiration really made things click.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1664058560294/UDhy7UgiI.jpg" alt="Whiteboard sketch of the final Savoir version" class="image--center mx-auto" />
<em>Crazy what you can do with 90-degree turns</em></p>
<p>With this sketch in hand, I decided not to follow my previous instincts and went on CodeSandbox rather than Inkscape. It was time to use raw HTML and CSS (using React) and drop SVGs (at least for now). It went a lot smoother and, after some back and forth, I finally ended up with this.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1664058702711/bHoZmNy1S.png" alt="Final version of the Savoir landing page" class="image--center mx-auto" /></p>
<p>For someone who wrote that you should <a target="_blank" href="https://blog.savoir.dev/6-lessons-from-a-technical-founder#heading-2-play-to-your-strengths">play to your strengths</a>, it sure took me some time to apply my own tips. Saying it went a lot smoother is an understatement. As a developer with a good amount of frontend development experience, working with web technologies allowed me to create something great very quickly, then iterate on it without hitting my own skill limit. I also had the knowledge required to fix my own mistakes, like only using the <code>px</code> unit for positioning (that <em>really</em> wasn't responsive).</p>
<p>As for how I built this compressed flow, it uses a CSS grid and lots of relative positioning to generate the arrows, lines, circles, and other components. It was initially built with absolute positioning only, but that made it very hard to scale and keep responsive. The grid works amazingly well.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1664067197977/2CCDmc7i-.png" alt="The CSS Grid in action" class="image--center mx-auto" /></p>
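<p>If you're curious, here is a minimal sketch of the idea (the class names and track sizes are illustrative, not the actual production code): each node of the flow sits in an explicit grid cell, and the connecting lines are borders on pseudo-elements positioned relative to their node, so everything scales with the grid instead of with hard-coded pixels.</p>
<pre><code>.flow {
  display: grid;
  /* Columns for the "branches", rows sized by their content */
  grid-template-columns: auto 1fr auto;
  gap: 1rem;
}

.flow .commit {
  /* Place each node in an explicit cell instead of
     absolutely positioning it in pixels */
  grid-column: 1;
  grid-row: 2;
  position: relative;
}

.flow .commit::after {
  /* The connecting line, drawn relative to its node */
  content: "";
  position: absolute;
  left: 50%;
  top: 100%;
  height: 1rem;
  border-left: 2px solid #4a90d9;
}
</code></pre>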
<h2 id="heading-conclusion">Conclusion</h2>
<p>Something I learned from this whole process is that I am definitely getting better at designing. Would I have a shot if I applied to be a junior designer? Probably not. It also taught me a lot about my own limits and design process. As we're now starting to look around for help from a designer, it is the right time to reflect on this month-long process, and think about what matters and what I can do better.</p>
<p>Do you also have some design stories you'd like to share? <strong>I'd love to read them in the comments below</strong>.</p>
<hr />
<p><strong>If what we are building looks interesting to you, please check our features and register for exclusive beta access on our website at <a target="_blank" href="https://savoir.dev">savoir.dev</a>. Feel free to also send me a message at info@savoir.dev, I'll be glad to answer any questions you have or give you a preview.</strong></p>
<blockquote>
<p>Savoir is the French word for knowledge, pronounced sɑvwɑɹ.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[6 lessons from a technical founder]]></title><description><![CDATA[A couple of days ago, I made the tough decision to plan a complete rewrite of the Savoir app, which was originally built with the Phoenix (Elixir) framework. I was faced with many issues whenever I needed to change the code. Increasingly difficult bu...]]></description><link>https://blog.savoir.dev/6-lessons-from-a-technical-founder</link><guid isPermaLink="true">https://blog.savoir.dev/6-lessons-from-a-technical-founder</guid><category><![CDATA[Startups]]></category><category><![CDATA[lessons]]></category><category><![CDATA[learning]]></category><category><![CDATA[Entrepreneurship]]></category><category><![CDATA[business]]></category><dc:creator><![CDATA[Guillaume St-Pierre]]></dc:creator><pubDate>Sun, 24 Jul 2022 21:00:01 GMT</pubDate><content:encoded><![CDATA[<p>A couple of days ago, I made the tough decision to plan a complete rewrite of the Savoir app, which was originally built with the Phoenix (Elixir) framework. I was faced with many issues whenever I needed to change the code: increasingly difficult bugs to fix, the steep learning curve of the more advanced parts of Elixir needed to power some features, unmaintained third-party libraries needed to connect to some of our providers, and increasingly complex and costly infrastructure, to name a few. Elixir is an amazing language, and the Phoenix framework even more so, but it's not the right tool for Savoir.</p>
<p>After weeks of struggle, I decided to aim for a rewrite. I was faced with a cobbled together infrastructure that came with high costs to host and update, plus high difficulty in scaling it. Elixir was a blast to work with and it taught me many lessons, but it is time to say goodbye. I'm very positive about the whole experience and I believe it is the right choice. At the end of the day, I am a technical founder. I want to write code, and I am sure I'm not the only founder who’s more hyped about their technical decisions than their business decisions.</p>
<p>Rather than dwell on the story, I decided to dedicate this post to sharing my lessons with the community. Here are 6 lessons, in no particular order, for technical founders like myself. <strong>Do you have any lessons or experiences you'd like to share? Drop them in the comments</strong>; I'd love to read them!</p>
<h1 id="heading-1-dont-get-too-excited-by-tech">1. Don't get too excited by tech</h1>
<p>I am a Golang and JavaScript developer, with sprinkles of other languages like PHP, Python, and C#. I had never used either the language or the framework, and they wouldn't necessarily be the first choice for a frontend-less, webhook-powered chatbot. I'm not saying it can't do it, but Phoenix is amazing at powering interactive web applications, and removing all those features is more work than keeping them. How did I end up choosing Phoenix and Elixir, then? </p>
<p>I had completed a functional programming course a few months prior and I was hyped for writing functional code with anything other than Lisp. I was in the mood for learning a new language at the time and the only "project" I had in mind was Savoir. To tell you how deep I went in my search for a “new shiny tool to use”, the other languages I was considering were <a target="_blank" href="https://ziglang.org/">Zig</a> and <a target="_blank" href="https://crystal-lang.org/">Crystal</a>.</p>
<p>I was bored of the tech I was using every day and decided to go with what hyped me at the time. I built a product I wanted to <em>sell</em> with a technology I barely knew and a stack I barely grasped. It could have worked out, but the result I ended up with taught me I should have gone with "the boring choice". Of course a new startup project should be <em>fun</em>, but building something for others means making tough choices for the sake of your prospective users. Building a product is a balance between making decisions for you and making decisions for your users, and I leaned far too much into the "me" camp. </p>
<h1 id="heading-2-play-to-your-strengths">2. Play to your strengths</h1>
<p>Many "startup starting guides" or "startup in zero steps" guides recommend using no-code or zero setup frameworks to build your product. They recommend getting started as fast as possible, acquiring users, then thinking about the technical implications of your choices down the road. These are really good tips. In fact, I strongly recommend looking at frameworks like <a target="_blank" href="https://getzero.dev/">getzero</a> to get started as fast as possible if you're a more technically oriented person. What most of these guides/frameworks omit is that you should probably already be proficient in the platform they recommend before you even start. Building an entire product on <a target="_blank" href="https://bubble.io/">Bubble</a> is more than possible, but in my case, I am a very technical person. My strength lies in building backends, APIs and DevOps workflows.</p>
<p>I think founders, especially solo founders like me, should focus on where they excel and use the tools that make <em>them</em> productive. I tried Bubble before settling on the rewrite, and as amazing as it is, it was clear that I would be faster building an entire backend from scratch in Golang than with Bubble. Creating a product alone is incredibly tough; creating it with technology that doesn't make you productive, even more so. We have never had so many options for building products, which makes it harder to choose between them. <strong>My recommendation: choose what you know.</strong></p>
<h1 id="heading-3-build-for-your-market">3. Build for your market</h1>
<p>When people come to me for tips on where to start with their business projects, my first piece of advice is to <strong>do your market research</strong>. When I started, I underestimated the value of knowing my market: of knowing who I am selling my product to and what they might want. For technical founders especially, I think good market research is one of the most powerful tools you will have when building your product. I didn't have that, and I ended up building features that were of no interest to my market: I had poorly defined my "minimum viable product" and that led to feature bloat. For example, one "hidden" feature of Savoir is its ability to load the security report from a GitHub organization (where it lists security issues from dependencies in multiple repositories) and generate a documentation website from it. Definitely useful for many people, but when I compared it to recent market research, I realized it had no selling point. My prospective users wouldn't need it.</p>
<p>I often see articles recommending that founders "define their minimum viable product early". This rings very true in my case, even more so when I consider that I did define what "minimum" meant for Savoir. But I did it with faulty data, which led to a faulty definition. <strong>A product should be built for the market you will be selling to</strong>.</p>
<h1 id="heading-4-dont-plan-like-youre-spotify">4. Don't plan like you're Spotify</h1>
<p>The first thing I did when I started building Savoir was not to write code, but to plan my sprints with a tool called <a target="_blank" href="https://codetree.com/">Codetree</a>. I highly recommend them, by the way, if you're looking for a good GitHub-powered project management tool. I planned my entire feature set through epics, and I would break things down into smaller issues on a bi-weekly basis. I personally really like working in more structured environments. There's something about this level of tracking that makes me feel like I'm part of something. Except that my team at Savoir was just me, and it would be just me for <em>two whole years</em>. In addition to paying for a tool I simply didn't need, it took hours of my limited time to plan my work rather than execute it.</p>
<p>Was this a waste of time? Not completely: I have traces of everything I did, and I can track the upcoming work very well given how well laid out my plan is. But it's also true that I didn't need epics, milestones, weekly reports of my throughput, multiple projects, or alerts when things weren't being shipped, you name it. And that's only the surface of everything I was doing in my planning sessions. It's my personal belief that people should plan even when working alone if that's what makes them productive (you should play to your strengths, after all), but it's also important to remember where your project is. Savoir wasn't at a stage where this planning would be useful in any way, and it was my role to know that and act accordingly.</p>
<h1 id="heading-5-know-your-limits">5. Know your limits</h1>
<p>A constant in all these lessons is that I like doing research before jumping on a project. I read a <em>lot</em> of articles and guides on how to run a company as a solo founder. I participated in an accelerator program and reached out to many mentor groups in my area. All good things, but it also led to all those people telling me things like "learn visual design" and "use a CRM", or even "learn how to manage your company's finances yourself". I took these tips to heart and tried to do everything myself, and I see too many founders doing the same thing. We all have limits, and it's very likely that technical founders like me can't manage an entire business, design a landing page, run the business, and hire people by themselves while building their product.</p>
<p>It's important to know where your limits are. There's help everywhere, a lot of it free. Use it. If people recommend using a CRM, it’s because it's a valuable tool to invest in early to track your deals. The point isn't to force you to learn it all by yourself, but to add the word to your vocabulary. Many CRMs also offer support and “courses” to get started in sales. In my case, I had to learn the hard way; <strong>if these tips were too hard to follow up on by myself, it was a good sign that I was reaching my limits.</strong></p>
<h1 id="heading-6-get-help">6. Get help</h1>
<p>This brings me to my last lesson. I strongly believe that a founder's job is not to work 80 hours a week in order to get <em>their</em> vision to market. Rather, I think a founder is someone with an idea and the ability to deliver it. Delivering it doesn't mean you have to do it all yourself, however. As a technical person, I had the chance to grow in an environment with a strong ownership culture. If I was in charge of a feature, I owned getting it out the door. Too often, I would take this to mean that my job was to do everything and get it shipped myself. Yet, ownership means you own getting it out, not necessarily doing it yourself. Your team is there to help you and, in the context of a new startup, your network is your team.</p>
<p>I always hear the phrase "your network is everything" when talking to other founders. It’s very true, but it's also true that you don't need to start with a strong network. Part of building a business is building that network. Like code, it can be built from scratch. Nothing happens in a vacuum and everyone has relied on others to get things done, even if they might not want to admit it. <strong>I learned the hard way how much harder it is to start without a network, but I also learned how many people are willing to help</strong>. I am a very introverted person and it took all I had to reach out to others, but once I did, I never looked back. </p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>I sincerely hope these lessons from my own startup story will help you in your own project. Please share your own lessons in the comments below, but also comment if you have questions or if you disagree with mine. I'm looking forward to our discussions!</p>
<p>Also, be on the lookout for future posts on the rewrite. I’ll have many things to share.</p>
<hr />
<p><strong>If what we are building looks interesting to you, please check our features and register for exclusive beta access on our website at <a target="_blank" href="https://savoir.dev">savoir.dev</a>. Feel free to also send me a message at info@savoir.dev, I'll be glad to answer any questions you have or give you a preview.</strong></p>
<blockquote>
<p>Savoir is the French word for knowledge, pronounced /sɑvwɑɹ/.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Building an offline-first app with React and CouchDB]]></title><description><![CDATA[About three years ago, I posted an article on the (now defunct) Manifold blog on creating an offline-first app using React and CouchDB. Besides the fact the post is not available anymore, it was also very outdated given how it was built on a very old...]]></description><link>https://blog.savoir.dev/building-an-offline-first-app-with-react-and-couchdb</link><guid isPermaLink="true">https://blog.savoir.dev/building-an-offline-first-app-with-react-and-couchdb</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[React]]></category><category><![CDATA[CouchDB]]></category><category><![CDATA[pouchdb]]></category><dc:creator><![CDATA[Guillaume St-Pierre]]></dc:creator><pubDate>Mon, 11 Jul 2022 23:57:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1657392317357/mGvDQ5nJP.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>About three years ago, I posted an article on the (now defunct) Manifold blog on creating an offline-first app using React and CouchDB. Besides the fact the post is not available anymore, it was also very outdated given how it was built on a very old version of React. Yet, I think the subject matter of the article is still very much a concern today.</p>
<p>A lot of applications require their users to have a constant network connection to avoid losing their work. There are various strategies, some better than others, to make sure users can keep working, even when offline, by syncing their work once they come back online. The technology has improved a lot in three years and I still think CouchDB is a tool worth considering when building an offline-first application.</p>
<p>Join me again as we explore CouchDB and its features while building a to-read list, which definitely isn't a to-do list in disguise.</p>
<h1 id="heading-what-is-couchdb">What is CouchDB?</h1>
<p>CouchDB is a NoSQL database built to sync. The CouchDB engine can support multiple replicas (think of each as a database server) of the same database and sync them in real-time with a process not dissimilar to git's. That allows us to distribute our applications all over the world without the database being the limiting factor. These replicas are also not limited to servers: CouchDB-compatible databases like PouchDB let you run synced databases in the browser or on mobile devices. That enables truly offline-first applications: users work against their own local database, which syncs with a server when possible and required. The sync depends on the exact <a target="_blank" href="https://docs.couchdb.org/en/3.2.0/replication/intro.html#:~:text=3.-,Replication%20Procedure,the%20documents%20to%20the%20destination.">replication protocol chosen</a>, and it can be triggered manually. With PouchDB, any change triggers a sync. Of course, a server has to be up for the sync to happen! The replication will pause while the replica is offline, which enables the <em>eventual</em> consistency we'll talk about below.</p>
<p>When you create or update a document in CouchDB, it generates a revision, which enables conflict detection between copies. When the databases sync, CouchDB compares the revision and change histories of each document, and flags a merge conflict whenever the same document was changed on both sides.</p>
<pre><code class="lang-json">{  
   <span class="hljs-attr">"_id"</span>:<span class="hljs-string">"SpaghettiWithMeatballs"</span>,
   <span class="hljs-attr">"_rev"</span>:<span class="hljs-string">"1–917fa2381192822767f010b95b45325b"</span>,
   <span class="hljs-attr">"_revisions"</span>:{  
      <span class="hljs-attr">"ids"</span>:[  
         <span class="hljs-string">"917fa2381192822767f010b95b45325b"</span>
      ],
      <span class="hljs-attr">"start"</span>:<span class="hljs-number">1</span>
   },
   <span class="hljs-attr">"description"</span>:<span class="hljs-string">"An Italian-American delicious dish"</span>,
   <span class="hljs-attr">"ingredients"</span>:[  
      <span class="hljs-string">"spaghetti"</span>,
      <span class="hljs-string">"tomato sauce"</span>,
      <span class="hljs-string">"meatballs"</span>
   ],
   <span class="hljs-attr">"name"</span>:<span class="hljs-string">"Spaghetti with meatballs"</span>
}
</code></pre>
<p>All this is handled through a built-in REST API and a web interface. The web interface can be used to manage all your databases and their documents, as well as user accounts, authentication, and even document attachments. If a merge conflict occurs when a database syncs, this interface gives you the ability to handle those merge conflicts manually. It also has a JavaScript engine for powering views and data validation.</p>
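<p>CouchDB doesn't merge conflicting document contents for you; per its documentation, it deterministically picks a winning revision (the revision with the longer edit history wins, with ties broken by comparing the revision strings) and keeps the losing revisions around so you can resolve the conflict yourself. Here is a simplified sketch of that winner-picking rule -- an illustration of the idea, not the engine's actual code:</p>

```javascript
// Simplified sketch of CouchDB's deterministic conflict-winner rule:
// the revision with the longer edit history wins, and ties are broken
// by comparing the revision strings. Illustrative only -- this is not
// the engine's implementation.
function pickWinner(revs) {
  // A revision id looks like "3-917fa238...": edit count, dash, hash.
  return revs
    .slice()
    .sort((a, b) => {
      const posA = Number(a.split('-')[0]);
      const posB = Number(b.split('-')[0]);
      if (posA !== posB) return posA - posB;
      return a < b ? -1 : 1; // tie-break on the full revision string
    })
    .pop(); // highest-sorted revision wins
}

console.log(pickWinner(['2-aaa', '2-bbb'])); // "2-bbb"
console.log(pickWinner(['1-zzz', '2-aaa'])); // "2-aaa"
```

<p>Because every replica applies the same rule, all replicas agree on the winner without coordinating, which is what makes offline conflict resolution tractable.</p>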
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1657393223447/A0Kt6Ds-i.png" alt="Fauxton" /></p>
<p>Back in 2019, CouchDB was used to power CouchApps. In short, you could build your entire backend using CouchDB and its JavaScript engine. I was a big fan of CouchApps, but the limitations of CouchDB -- and of database-only backends in general -- made CouchApps far less powerful than a more traditional database-plus-application-server setup. As we walk the road to v4 (at the time of writing this article), CouchDB has become closer to an alternative to Firebase or Hasura than an alternative to your backend.</p>
<h2 id="heading-so-should-i-switch-everything-to-couchdb-then">So, should I switch everything to CouchDB then?</h2>
<p>As with everything in software engineering, it <em>depends</em>.</p>
<p>CouchDB works wonders for applications where strict data consistency matters less than <em>eventual</em> consistency. CouchDB cannot promise all your instances will be consistently in sync. What it can promise is that data will <em>eventually</em> be consistent, and that at least one instance will always be available. It is, or has been, used by huge organizations like IBM, United Airlines, NPM, the BBC, and the LHC scientists at CERN (yes, <em>that</em> CERN) -- all places that care about availability and resilience.</p>
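<p>To make the tradeoff concrete, here is a toy illustration in plain JavaScript (not CouchDB code) of two replicas that accept writes independently and only converge once a sync runs -- the <em>eventual</em> part of eventual consistency:</p>

```javascript
// Toy illustration of eventual consistency: two replicas accept writes
// independently and only converge when a sync runs. Not CouchDB code.
const replicaA = new Map();
const replicaB = new Map();

// Each replica accepts writes while "offline" from the other.
replicaA.set('doc1', { name: 'Dune' });
replicaB.set('doc2', { name: 'Hyperion' });

// Before syncing, readers of A and B see different data.
console.log(replicaA.has('doc2')); // false

// A sync exchanges missing documents in both directions.
function sync(a, b) {
  for (const [id, doc] of a) if (!b.has(id)) b.set(id, doc);
  for (const [id, doc] of b) if (!a.has(id)) a.set(id, doc);
}
sync(replicaA, replicaB);

// After the sync, both replicas hold the same set of documents.
console.log(replicaA.size, replicaB.size); // 2 2
```

<p>Between syncs, the two replicas disagree; availability is preserved at the cost of momentary inconsistency.</p>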
<p>CouchDB can also work against you in many other cases. It does not care about making sure the data is consistent between instances outside of syncing, so different users may see different data. It is also a NoSQL database, with all the pros and cons that come with it. On top of that, third-party hosting options are somewhat limited; you have Cloudant and Couchbase, but outside of those, you are on your own.</p>
<p>There are a lot of things to consider before choosing a database system. If you feel like CouchDB is perfect for you, then it’s time to fasten your seat belt because you’re in for an awesome ride.</p>
<h2 id="heading-what-about-pouchdb">What about PouchDB?</h2>
<p><a target="_blank" href="https://pouchdb.com/">PouchDB</a> is a JavaScript database usable on both the browser and server, heavily inspired by CouchDB. It's a powerful database already thanks to a great API, but its ability to sync with one or more databases makes it a no-brainer for offline capable apps. By enabling PouchDB to sync with CouchDB, we can focus on writing data directly in PouchDB and it will take care of syncing that data with CouchDB, <em>eventually</em>. Our users will keep access to their data, whether the database is online or not.</p>
<h1 id="heading-building-an-offline-first-app">Building an offline-first app</h1>
<p>Now that we know what CouchDB is, let's build an offline-first app with CouchDB, PouchDB, and React. When searching CouchDB + React for the initial article, I found a lot of to-do apps. I thought I was very funny by making the joke that I was creating a to-read app, all while claiming that a list of books to read is <em>totally</em> different from a list of tasks to do. For consistency, let's keep the joke alive. Also, to-read apps are totally different from to-do apps.</p>
<p>All the code for this application is available on GitHub: <a target="_blank" href="https://github.com/SavoirBot/definitely-not-a-todo-list">https://github.com/SavoirBot/definitely-not-a-todo-list</a>. Feel free to follow along with the code.</p>
<p>The first thing we need is a JavaScript project for our app. We'll use <a target="_blank" href="https://www.snowpack.dev/">Snowpack</a> as our bundler. Open a terminal in a directory for the project and type <code>npx create-snowpack-app react-couchdb --template @snowpack/app-template-minimal</code>. Snowpack will create a skeleton for our React application and install all dependencies. Once it's done, type <code>cd react-couchdb</code> to get into the newly created project directory. <code>create-snowpack-app</code> is very similar to <code>create-react-app</code> in how it sets up your project, but it's a lot less intrusive (you never even need to eject).</p>
<p>To finish setting up the project, install all the dependencies with the following command:</p>
<pre><code class="lang-bash">npm install react react-dom pouchdb-browser
</code></pre>
<p>With our project in hand, we now need a CouchDB database. To keep things simple, let's start it in a <a target="_blank" href="https://docs.docker.com/get-docker/">docker container</a> using <code>docker-compose</code>, which will allow us to start and stop it very easily. Create a <code>docker-compose.yaml</code> file and copy this content into it:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># docker-compose.yaml</span>
<span class="hljs-attr">version:</span> <span class="hljs-string">'3'</span>
<span class="hljs-attr">services:</span>
  <span class="hljs-attr">couchserver:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">couchdb</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"5984:5984"</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">COUCHDB_USER=admin</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">COUCHDB_PASSWORD=secret</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">./dbdata:/opt/couchdb/data</span>
</code></pre>
<p>This file defines a CouchDB server with a few environment variables to set the admin username and password. We also define a volume that syncs the CouchDB data from inside the container to a local folder called <code>dbdata</code>. This preserves our data when the container is stopped.</p>
<p>Type <code>docker compose up -d</code> in a terminal opened in the same folder where you created this file. Once the image is pulled, the container will start and make your CouchDB database available at <code>http://localhost:5984</code>. Accessing this URL in your browser or with curl should return a JSON welcome message. To make our local application work, we have to configure CORS on our database. Access the CouchDB dashboard at <code>http://localhost:5984/_utils</code> in your browser and log in with the configured admin username and password, then click on the <strong>Settings</strong> tab, followed by the <strong>CORS</strong> tab, then click <strong>Enable CORS</strong> and select <strong>All domains ( * )</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1657396110869/VhH1CcUcl.png" alt="CORS configured" /></p>
<h2 id="heading-configuring-pouchdb-for-our-app">Configuring PouchDB for our app</h2>
<p>For this project, we'll be using a few hooks to configure PouchDB and fetch our to-read items. Let's start by configuring PouchDB itself. Create a directory called <code>hooks</code> and then create a file called <code>usePouchDB.js</code> in this directory, with this code.</p>
<pre><code class="lang-js"><span class="hljs-comment">// hooks/usePouchDB.js</span>
<span class="hljs-keyword">import</span> { useMemo } <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>;
<span class="hljs-keyword">import</span> PouchDB <span class="hljs-keyword">from</span> <span class="hljs-string">'pouchdb-browser'</span>;

<span class="hljs-keyword">const</span> remoteUrl = <span class="hljs-string">'http://localhost:5984/reading_lists'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> usePouchDB = <span class="hljs-function">() =&gt;</span> {
    <span class="hljs-comment">// Create the local and remote databases for syncing</span>
    <span class="hljs-keyword">const</span> [localDb, remoteDb] = useMemo(
        <span class="hljs-function">() =&gt;</span> [<span class="hljs-keyword">new</span> PouchDB(<span class="hljs-string">'reading_lists'</span>), <span class="hljs-keyword">new</span> PouchDB(remoteUrl)],
        []
    );

    <span class="hljs-keyword">return</span> {
        <span class="hljs-attr">db</span>: localDb,
    };
};
</code></pre>
<p>This hook uses the <code>useMemo</code> hook from React to create two new instances of PouchDB. The first instance is a local database, installed in the browser, called <code>reading_lists</code>. The second instance is a remote instance, which instead connects to our CouchDB container. Since we only need the local instance in our application, we return an object with that local database only.</p>
<p>Let's now configure the synchronization for those two databases. Go back to <code>usePouchDB.js</code> and update the code with these changes.</p>
<pre><code class="lang-js"><span class="hljs-comment">// hooks/usePouchDB.js</span>
<span class="hljs-keyword">import</span> { useMemo, useEffect } <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>;
<span class="hljs-keyword">import</span> PouchDB <span class="hljs-keyword">from</span> <span class="hljs-string">'pouchdb-browser'</span>;

<span class="hljs-keyword">const</span> remoteUrl = <span class="hljs-string">'http://localhost:5984/reading_lists'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> usePouchDB = <span class="hljs-function">() =&gt;</span> {
    <span class="hljs-comment">// Previous code omitted for brevity</span>
    <span class="hljs-keyword">const</span> [localDb, remoteDb] = useMemo(...);

    <span class="hljs-comment">// Start the sync in a separate effect, cancel on unmount</span>
    useEffect(<span class="hljs-function">() =&gt;</span> {
        <span class="hljs-keyword">const</span> canceller = localDb
            .sync(remoteDb, {
                <span class="hljs-attr">live</span>: <span class="hljs-literal">true</span>,
                <span class="hljs-attr">retry</span>: <span class="hljs-literal">true</span>,
            });

        <span class="hljs-keyword">return</span> <span class="hljs-function">() =&gt;</span> {
            canceller.cancel();
        };
    }, [localDb, remoteDb]);

    <span class="hljs-keyword">return</span> {
        <span class="hljs-attr">db</span>: localDb,
    };
};
</code></pre>
<p>We added a <code>useEffect</code> hook to start the two-way synchronization between the local and remote databases. The sync uses the <code>live</code> and <code>retry</code> options, which cause PouchDB to stay connected with the remote database rather than sync only once, and to retry if the sync could not happen. The effect returns a function that cancels the sync if the component happens to unmount while syncing.</p>
<p>It would be nice to show a small message to our users whenever the CouchDB database is disconnected or unavailable. PouchDB's sync provides events we can listen to, like <code>paused</code> and <code>active</code>, which the documentation mentions may trigger when the database is unavailable. However, these events only relate to the act of syncing data: if nothing needs to be synced, the sync will trigger the <code>paused</code> event regardless of the state of the remote database. Instead, we need to call the <code>info</code> method on the remote database at a regular interval to check its status.</p>
<pre><code class="lang-js"><span class="hljs-comment">// hooks/usePouchDB.js</span>
<span class="hljs-keyword">import</span> { useMemo, useEffect, useState } <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>;
<span class="hljs-keyword">import</span> PouchDB <span class="hljs-keyword">from</span> <span class="hljs-string">'pouchdb-browser'</span>;

<span class="hljs-keyword">const</span> remoteUrl = <span class="hljs-string">'http://localhost:5984/reading_lists'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> usePouchDB = <span class="hljs-function">() =&gt;</span> {
    <span class="hljs-keyword">const</span> [alive, setAlive] = useState(<span class="hljs-literal">false</span>);

    <span class="hljs-comment">// Previous code omitted for brevity</span>
    <span class="hljs-keyword">const</span> [localDb, remoteDb] = useMemo(...);
    useEffect(...);

    <span class="hljs-comment">// Create an interval after checking the status of the database for the</span>
    <span class="hljs-comment">// first time</span>
    useEffect(<span class="hljs-function">() =&gt;</span> {
        <span class="hljs-keyword">const</span> cancelInterval = <span class="hljs-built_in">setInterval</span>(<span class="hljs-function">() =&gt;</span> {
            remoteDb
                .info()
                .then(<span class="hljs-function">() =&gt;</span> {
                    setAlive(<span class="hljs-literal">true</span>);
                })
                .catch(<span class="hljs-function">() =&gt;</span> {
                    setAlive(<span class="hljs-literal">false</span>);
                });
            }, <span class="hljs-number">1000</span>)
        });

        <span class="hljs-keyword">return</span> <span class="hljs-function">() =&gt;</span> {
            <span class="hljs-built_in">clearTimeout</span>(cancelInterval);
        };
    }, [remoteDb]);

    <span class="hljs-keyword">return</span> {
        <span class="hljs-attr">db</span>: localDb,
        ready,
        alive,
    };
};
</code></pre>
<p>We added the state hook for the variable <code>alive</code>, which will track if the remote database is available. Next, we added another <code>useEffect</code> hook to set up an interval that will call the info method every second to check if the database is still alive. Like the previous <code>useEffect</code>, we need to make sure to cancel the interval when the component unmounts to avoid memory leaks.</p>
<h2 id="heading-fetching-all-the-documents">Fetching all the documents</h2>
<p>With our PouchDB hook ready, we can create our next hook for fetching all the to-read documents from the local database. Let's create another file in the <code>hooks</code> directory called <code>useReadingList.js</code> for the document-fetching logic.</p>
<pre><code class="lang-js"><span class="hljs-comment">// hooks/useReadingList.js</span>
<span class="hljs-keyword">import</span> { useEffect, useState } <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> useReadingList = <span class="hljs-function">(<span class="hljs-params">db, isReady</span>) =&gt;</span> {
    <span class="hljs-keyword">const</span> [loading, setLoading] = useState(<span class="hljs-literal">true</span>);
    <span class="hljs-keyword">const</span> [documents, setDocuments] = useState([]);

    <span class="hljs-comment">// Function to fetch the data from pouchDB with loading state</span>
    <span class="hljs-keyword">const</span> fetchData = <span class="hljs-function">() =&gt;</span> {
        setLoading(<span class="hljs-literal">true</span>);

        db.allDocs({
            <span class="hljs-attr">include_docs</span>: <span class="hljs-literal">true</span>,
        }).then(<span class="hljs-function"><span class="hljs-params">result</span> =&gt;</span> {
            setLoading(<span class="hljs-literal">false</span>);
            setDocuments(result.rows.map(<span class="hljs-function"><span class="hljs-params">row</span> =&gt;</span> row.doc));
        });
    };

    <span class="hljs-comment">// Fetch the data on the first mount, then listen for changes (Also listens to sync changes)</span>
    useEffect(<span class="hljs-function">() =&gt;</span> {
        fetchData();

        <span class="hljs-keyword">const</span> canceler = db
            .changes({
                <span class="hljs-attr">since</span>: <span class="hljs-string">'now'</span>,
                <span class="hljs-attr">live</span>: <span class="hljs-literal">true</span>,
            })
            .on(<span class="hljs-string">'change'</span>, <span class="hljs-function">() =&gt;</span> {
                fetchData();
            });

        <span class="hljs-keyword">return</span> <span class="hljs-function">() =&gt;</span> {
            canceler.cancel();
        };
    }, [db]);

    <span class="hljs-keyword">return</span> [loading, documents];
};
</code></pre>
<p>This hook does a few things. First, we create state variables to keep the loading state and our fetched documents. Next, we define a function that fetches the documents from the database using <code>allDocs</code>, then adds them to our state once loaded. We pass the <code>include_docs</code> option to <code>allDocs</code> to make sure we fetch the entire document: by default, <code>allDocs</code> only returns each document's ID and revision, and <code>include_docs</code> makes sure we get all the data.</p>
<p>We then create a <code>useEffect</code> hook which starts the data-fetching process, then listens for changes from the database. Whenever we change something through the app, or the synchronization changes data in the local database, the <code>change</code> event is triggered and we fetch the data again. The <code>live</code> option makes sure this keeps happening for the entire lifecycle of the application, or until the listener is cancelled when the component unmounts.</p>
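<p>To see why we map <code>row =&gt; row.doc</code>, it helps to look at the shape of an <code>allDocs({ include_docs: true })</code> response. The snippet below uses a mocked result object (the field values are made up, not a live query) that follows the documented response shape:</p>

```javascript
// Mocked shape of a PouchDB allDocs({ include_docs: true }) response.
// The values are invented for illustration; only the structure matters.
const result = {
  total_rows: 2,
  offset: 0,
  rows: [
    { id: 'a', key: 'a', value: { rev: '1-abc' }, doc: { _id: 'a', name: 'Dune', read: false } },
    { id: 'b', key: 'b', value: { rev: '1-def' }, doc: { _id: 'b', name: 'Hyperion', read: true } },
  ],
};

// Without include_docs, each row's `doc` field would be missing and we
// would only have the id/rev pairs. With it, mapping over `rows` gives
// us the full documents, which is exactly what fetchData does.
const documents = result.rows.map(row => row.doc);
console.log(documents.map(doc => doc.name)); // ["Dune", "Hyperion"]
```
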
<h1 id="heading-putting-it-all-together">Putting it all together</h1>
<p>With our hooks ready, we now need to build the React application. First, open the <code>index.html</code> file created by Snowpack and replace <code>&lt;h1&gt;Welcome to Snowpack!&lt;/h1&gt;</code> with <code>&lt;div id="root"&gt;&lt;/div&gt;</code>. Next, rename the <code>index.js</code> file to <code>index.jsx</code> and replace its content with this code:</p>
<pre><code class="lang-jsx"><span class="hljs-comment">// index.jsx</span>
<span class="hljs-keyword">import</span> React <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>;
<span class="hljs-keyword">import</span> { createRoot } <span class="hljs-keyword">from</span> <span class="hljs-string">'react-dom/client'</span>;

<span class="hljs-keyword">const</span> App = <span class="hljs-function">() =&gt;</span> <span class="hljs-literal">null</span>;

createRoot(<span class="hljs-built_in">document</span>.getElementById(<span class="hljs-string">'root'</span>)).render(<span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">App</span> /&gt;</span></span>);
</code></pre>
<p>You can now start the Snowpack app with <code>npm run start</code>. This should start the application, give you a URL to open in your browser, and show you a blank screen (expected, since we return <code>null</code> from our app!). Let's start building our <code>App</code> component.</p>
<pre><code class="lang-jsx"><span class="hljs-comment">// index.jsx</span>
<span class="hljs-comment">// rest of the code remove for brevity</span>
<span class="hljs-keyword">import</span> { usePouchDB } <span class="hljs-keyword">from</span> <span class="hljs-string">'../hooks/usePouchDB'</span>;
<span class="hljs-keyword">import</span> { useReadingList } <span class="hljs-keyword">from</span> <span class="hljs-string">'../hooks/useReadingList'</span>;

<span class="hljs-keyword">const</span> App = <span class="hljs-function">() =&gt;</span> {
    <span class="hljs-keyword">const</span> { db, ready, alive } = usePouchDB();
    <span class="hljs-keyword">const</span> [loading, documents] = useReadingList(db);

    <span class="hljs-keyword">return</span> (
        <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">h1</span>&gt;</span>Definitely not a todo list<span class="hljs-tag">&lt;/<span class="hljs-name">h1</span>&gt;</span>
            {!alive &amp;&amp; (
                <span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>
                    <span class="hljs-tag">&lt;<span class="hljs-name">h2</span>&gt;</span>Warning<span class="hljs-tag">&lt;/<span class="hljs-name">h2</span>&gt;</span>
                    The connection with the database has been lost, you can
                    still work on your documents, we will sync everything once
                    the connection is re-established.
                <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>
            )}
            {loading &amp;&amp; <span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>loading...<span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>}
            {documents.length ? (
                <span class="hljs-tag">&lt;<span class="hljs-name">ul</span>&gt;</span>
                    {documents.map(doc =&gt; (
                        <span class="hljs-tag">&lt;<span class="hljs-name">li</span> <span class="hljs-attr">key</span>=<span class="hljs-string">{doc._id}</span>&gt;</span>
                            {doc.name}
                        <span class="hljs-tag">&lt;/<span class="hljs-name">li</span>&gt;</span>
                    ))}
                <span class="hljs-tag">&lt;/<span class="hljs-name">ul</span>&gt;</span>
            ) : (
                <span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>No books to read added, yet<span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>
            )}
        <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>
    );
};
</code></pre>
<p>The application loads our PouchDB hook, followed by our hook loading all our to-read items. It then returns a basic HTML structure that shows a warning message if the database happens to disconnect, a loading message while we're fetching the documents, and finally the to-read items from the database. The <code>_id</code> property is the internal unique ID in CouchDB/PouchDB, which makes a perfect <code>key</code> for our list items.</p>
<p>Showing all the items is pretty nice, but to have any items to show, we need a way to add new to-read items to our database. Let's go back to our <code>index.jsx</code> file and add this code to it.</p>
<pre><code class="lang-jsx"><span class="hljs-comment">// index.jsx</span>
<span class="hljs-keyword">import</span> React, { useState } <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>;
<span class="hljs-comment">// rest of the code remove for brevity</span>

<span class="hljs-keyword">import</span> { usePouchDB } <span class="hljs-keyword">from</span> <span class="hljs-string">'../hooks/usePouchDB'</span>;
<span class="hljs-keyword">import</span> { useReadingList } <span class="hljs-keyword">from</span> <span class="hljs-string">'../hooks/useReadingList'</span>;

<span class="hljs-comment">// Component to add new books with a controlled input</span>
<span class="hljs-keyword">const</span> AddReadingElement = <span class="hljs-function">(<span class="hljs-params">{ handleAddElement }</span>) =&gt;</span> {
    <span class="hljs-keyword">const</span> [currentName, setCurrentName] = useState(<span class="hljs-string">''</span>);

    <span class="hljs-keyword">const</span> addBook = <span class="hljs-function">() =&gt;</span> {
        <span class="hljs-keyword">if</span> (currentName) {
            <span class="hljs-comment">// If the currentName has data, clear it and add a new element.</span>
            handleAddElement(currentName);
            setCurrentName(<span class="hljs-string">''</span>);
        }
    };

    <span class="hljs-keyword">return</span> (
        <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">h2</span>&gt;</span>Add a new book to read<span class="hljs-tag">&lt;/<span class="hljs-name">h2</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">label</span> <span class="hljs-attr">htmlFor</span>=<span class="hljs-string">"new_book"</span>&gt;</span>Book name<span class="hljs-tag">&lt;/<span class="hljs-name">label</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">input</span>
                <span class="hljs-attr">type</span>=<span class="hljs-string">"text"</span>
                <span class="hljs-attr">id</span>=<span class="hljs-string">"new_book"</span>
                <span class="hljs-attr">value</span>=<span class="hljs-string">{currentName}</span>
                <span class="hljs-attr">onChange</span>=<span class="hljs-string">{event</span> =&gt;</span> setCurrentName(event.target.value)}
            /&gt;
            <span class="hljs-tag">&lt;<span class="hljs-name">button</span> <span class="hljs-attr">onClick</span>=<span class="hljs-string">{addBook}</span>&gt;</span>Add<span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>
    );
};

<span class="hljs-keyword">const</span> App = <span class="hljs-function">() =&gt;</span> {
    <span class="hljs-keyword">const</span> { db, ready, alive } = usePouchDB();
    <span class="hljs-keyword">const</span> [loading, documents] = useReadingList(db);

    <span class="hljs-keyword">const</span> handleAddElement = <span class="hljs-function"><span class="hljs-params">name</span> =&gt;</span> {
        <span class="hljs-comment">// post sends a document to the database and generates the unique ID for us</span>
        db.post({
            name,
            <span class="hljs-attr">read</span>: <span class="hljs-literal">false</span>,
        });
    };

    <span class="hljs-keyword">return</span> (
        <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>
            {/* rest of the code removed for brevity */}
            <span class="hljs-tag">&lt;<span class="hljs-name">AddReadingElement</span> <span class="hljs-attr">handleAddElement</span>=<span class="hljs-string">{handleAddElement}</span> /&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>
    );
};
</code></pre>
<p>We added a new component to this file for adding new books to read. A separate component helps make the structure a bit clearer; feel free to extract it into its own file. This component uses a state hook to control an input, then triggers the <code>post</code> method on the local database when the <strong>Add</strong> button is clicked.</p>
<p>Go back to your browser and try adding a few books to read; they should show up in the list as soon as the button is clicked.</p>
<p>Finally, it would be great to be able to set books as read or delete some books we don't want in our list anymore. Open the <code>index.jsx</code> file again and add this code in there.</p>
<pre><code class="lang-jsx"><span class="hljs-comment">// index.jsx</span>
<span class="hljs-comment">// rest of the code removed for brevity</span>
<span class="hljs-keyword">const</span> App = <span class="hljs-function">() =&gt;</span> {
    <span class="hljs-keyword">const</span> { db, ready, alive } = usePouchDB();
    <span class="hljs-keyword">const</span> [loading, documents] = useReadingList(db);

    <span class="hljs-comment">// rest of the code removed for brevity</span>
    <span class="hljs-keyword">const</span> handleAddElement = <span class="hljs-function"><span class="hljs-params">name</span> =&gt;</span> ...;

    <span class="hljs-comment">// The remove method deletes a document by _id and _rev. The easiest way to send</span>
    <span class="hljs-comment">// both is to pass the whole document to the remove method</span>
    <span class="hljs-keyword">const</span> handleRemoveElement = <span class="hljs-function"><span class="hljs-params">element</span> =&gt;</span> {
        db.remove(element);
    };

    <span class="hljs-comment">// The put method updates a document, replacing all fields from that document.</span>
    <span class="hljs-comment">// Like remove, it needs both _id and _rev to find the document.</span>
    <span class="hljs-keyword">const</span> handleToggleRead = <span class="hljs-function"><span class="hljs-params">element</span> =&gt;</span> {
        db.put({
            ...element,
            <span class="hljs-attr">read</span>: !element.read,
        });
    };

    <span class="hljs-keyword">return</span> (
        <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>
            {/* rest of the code removed for brevity */}
            {documents.length ? (
                <span class="hljs-tag">&lt;<span class="hljs-name">ul</span>&gt;</span>
                    {documents.map(doc =&gt; (
                        <span class="hljs-tag">&lt;<span class="hljs-name">li</span> <span class="hljs-attr">key</span>=<span class="hljs-string">{doc._id}</span>&gt;</span>
                            <span class="hljs-tag">&lt;<span class="hljs-name">input</span>
                                <span class="hljs-attr">type</span>=<span class="hljs-string">"checkbox"</span>
                                <span class="hljs-attr">checked</span>=<span class="hljs-string">{doc.read}</span>
                                <span class="hljs-attr">onChange</span>=<span class="hljs-string">{()</span> =&gt;</span> handleToggleRead(doc)}
                                id={doc._id}
                            /&gt;
                            <span class="hljs-tag">&lt;<span class="hljs-name">label</span> <span class="hljs-attr">htmlFor</span>=<span class="hljs-string">{doc._id}</span>&gt;</span>{doc.name}<span class="hljs-tag">&lt;/<span class="hljs-name">label</span>&gt;</span>
                            <span class="hljs-tag">&lt;<span class="hljs-name">button</span>
                                <span class="hljs-attr">onClick</span>=<span class="hljs-string">{()</span> =&gt;</span> handleRemoveElement(doc)}
                            &gt;
                                Delete
                            <span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span>
                        <span class="hljs-tag">&lt;/<span class="hljs-name">li</span>&gt;</span>
                    ))}
                <span class="hljs-tag">&lt;/<span class="hljs-name">ul</span>&gt;</span>
            ) : (
                <span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>No books to read added, yet<span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>
            )}
            {/* rest of the code removed for brevity */}
        <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>
    );
};
</code></pre>
<p>We added two functions to our <code>App</code>. The first one uses the <code>put</code> method to update a document. Where <code>post</code> creates a document and generates a unique ID once the element is inserted, <code>put</code> can both update and insert, but it requires an ID (<code>_id</code>) and a revision (<code>_rev</code>) to select the document to write. In our case, we call it with the existing document, toggling the <code>read</code> property. The second function uses the <code>remove</code> method with the full document, which guarantees PouchDB has everything it needs to find the document and delete it.</p>
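<p>To make the <code>put</code>/<code>post</code>/<code>remove</code> semantics concrete, here is a small in-memory mock - a sketch, not PouchDB itself - of the revision check these methods rely on. The method and field names mirror PouchDB's API, but the implementation is purely illustrative:</p>

```javascript
// Illustrative in-memory mock of the PouchDB write semantics used above.
// post() generates the _id; put() and remove() need both _id and _rev,
// because the revision acts as an optimistic-concurrency check.
class MockDB {
    constructor() {
        this.docs = new Map();
        this.seq = 0;
    }

    // post: insert a document and generate a unique _id for it
    post(doc) {
        const _id = `doc_${++this.seq}`;
        const stored = { ...doc, _id, _rev: '1' };
        this.docs.set(_id, stored);
        return { id: _id, rev: stored._rev };
    }

    // put: replace the document, but only if the caller holds the latest _rev
    put(doc) {
        const existing = this.docs.get(doc._id);
        if (!existing || existing._rev !== doc._rev) {
            throw new Error('conflict'); // PouchDB answers with a 409 here
        }
        const stored = { ...doc, _rev: String(Number(doc._rev) + 1) };
        this.docs.set(doc._id, stored);
        return { id: doc._id, rev: stored._rev };
    }

    // remove: same revision check, then delete the document
    remove(doc) {
        this.put(doc); // throws on a stale _rev, like PouchDB would
        this.docs.delete(doc._id);
    }
}

const db = new MockDB();
const { id } = db.post({ name: 'Dune', read: false });

// Toggling read the way handleToggleRead does: spread the doc, flip the flag
const doc = db.docs.get(id);
db.put({ ...doc, read: !doc.read });
console.log(db.docs.get(id).read); // true
```

<p>This is also why PouchDB rejects a <code>put</code> carrying a stale revision with a conflict error: two offline clients editing the same document produce a conflict to resolve instead of silently overwriting each other.</p>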
<p>Finally, we updated the list of documents to add a checkbox and a button to each element. Toggling the checkbox fires the update handler, which flips the <code>read</code> property, and clicking the button fires the remove handler to delete the element.</p>
<p>Go back to your browser and try toggling the checkboxes or deleting elements. It should work without any issues.</p>
<h2 id="heading-testing-the-offline-first-capabilities">Testing the offline-first capabilities</h2>
<p>Now, it's time to test the app while the database is offline. Open a new terminal where your project is located (so as not to kill the <code>npm run start</code> command) and type <code>docker compose stop couchserver</code>. You should immediately see the warning message appear in the React app. Yet, you should still be able to interact with the app and add/change/delete documents. Type <code>docker compose start couchserver</code> to restart the database and reload the page once the warning message disappears. Every change you made should still be in the app, and you should be able to see the change in the CouchDB dashboard.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>We now have a functional app with an offline-first focus. Regardless of the state of the database, our users can keep adding books to read and toggling their read state. The warning message is an added bonus that lets our users know not to clear their cache until the app has properly synced.</p>
<p>Of course, acting on the database directly from the client may not be the best solution for most apps, especially if we sync that data without any validation from the database. Please let me know in the comments below if you'd like a second post in this series implementing a backend for validating and syncing data in an offline-first application.</p>
<hr />
<p><strong>I'd love to hear your thoughts - please comment, share or follow.</strong></p>
<p><strong>We are building up Savoir, so keep an eye out for features and updates on our <a target="_blank" href="https://www.savoir.dev/?utm_source=blog">website</a> at savoir.dev. If you'd like to subscribe for updates or beta testing, send me a message at info@savoir.dev!</strong></p>
<blockquote>
<p>Savoir is the French word for Knowledge, pronounced sɑvwɑɹ.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Functions for platforms with Kubernetes and Fission.io]]></title><description><![CDATA[Back in May, Cloudflare released a blog post announcing their new product: workers for platforms. It is a natural evolution of their existing edge workers product and opens the door to a new kind of developer experience for third-party integrations. ...]]></description><link>https://blog.savoir.dev/functions-for-platforms-with-kubernetes-and-fissionio</link><guid isPermaLink="true">https://blog.savoir.dev/functions-for-platforms-with-kubernetes-and-fissionio</guid><category><![CDATA[Developer Tools]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[serverless]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Guillaume St-Pierre]]></dc:creator><pubDate>Mon, 13 Jun 2022 14:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Back in May, Cloudflare released a blog post announcing their new product: <a target="_blank" href="https://blog.cloudflare.com/workers-for-platforms/">workers for platforms</a>. It is a natural evolution of their existing edge workers product and opens the door to a new kind of developer experience for third-party integrations. We only have to look at Slack and their Slack applications to see how valuable integrations have become. Entire suites of productivity tools are being sold as Slack apps. Yet, one limiting factor of integrations is the need for users to set up their own infrastructure and maintain it. If you want to build a Slack application, Slack doesn't give you a small part of their infrastructure to use, you have to build your own. With this announcement, Cloudflare tries to solve this problem by giving its users the ability to integrate with Cloudflare, through Cloudflare.</p>
<h1 id="heading-why-does-this-matter">Why does this matter?</h1>
<p>With products like Slack, it is clear that the investment to build and maintain a Slack application on your own cloud infrastructure is worth it, because their user base makes sure you’ll see the growth needed to justify the costs. Products like GitHub or Discord are in the same category; their integration platforms have been successful regardless of how many resources are needed to get one going. It's exactly why Savoir is a GitHub application and why we are considering creating a Slack application as well.</p>
<p>But what happens if you're a smaller product without that ability to ensure a return on investment? Using Savoir again as an example, we're a new company with a yet-to-be-successful product. For us, it is clear that integrations would be very valuable. What if we could give users ways to react to webhooks that trigger changes in content, and even update that content programmatically? What if you could sync content tracked with Savoir on platforms like GitBook, Readme, or GraphCMS? We know we do not have the resources to compete with these platforms, and it makes a lot more sense to focus on what makes us unique: <strong>code-level tracking of your documentation.</strong> Clearly, integrations are the way to go.</p>
<p>To build integrations prior to the Cloudflare post, we'd have two options: build the integrations individually ourselves (and hope we build them the way users want), or ask our users to bear the cost of hosting their own custom integration without being able to promise them the growth they need to make their money back. <strong>Workers for platforms creates a third option: we give our users a way to create integrations, and we execute them.</strong> We can then focus on giving our users the best DX possible, and they can focus on building an integration that matches their needs. It also leaves the door open for our own integrations - we have the perfect opportunity to dogfood our own integration platform.</p>
<h1 id="heading-functions-for-platforms">Functions for platforms</h1>
<p>Long story short, we're very excited about the potential of workers for platforms. So excited, in fact, that I decided to try building a prototype clone based <em>only</em> on the information contained in the announcement blog post. With this long-winded introduction behind us, let's now go through the process of building that prototype and learning about Fission.io and Kubernetes in the process.</p>
<p>Workers for platforms are described as isolated and secure JavaScript environments where users can upload and execute JavaScript functions in a <a target="_blank" href="https://v8.dev/">V8</a> environment. There is no Node.js - it runs in a pure JavaScript environment as if it was running in a browser, but without the DOM. These are not serverless functions in the traditional sense, where we'd answer HTTP triggers like in a more conventional serverless environment (commonly implemented as Express servers). Rather, Cloudflare gives us a set of functions we can use to listen to events, and it executes our function whenever an event we listen to happens. Let's try building that.</p>
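<p>That event model can be sketched in a few lines. Everything below is hypothetical - a minimal registry showing the shape of "the platform calls your handlers", not Cloudflare's actual implementation:</p>

```javascript
// Minimal sketch of an event-driven function platform: users register
// handlers, and the platform dispatches events to them. All names here
// are illustrative, not part of Cloudflare's API.
const handlers = new Map();

// What an uploaded integration would call to subscribe to an event
const addEventListener = (eventName, handler) => {
    if (!handlers.has(eventName)) {
        handlers.set(eventName, []);
    }
    handlers.get(eventName).push(handler);
};

// What the platform would do when the event actually happens
const dispatch = (eventName, payload) =>
    (handlers.get(eventName) || []).map(handler => handler(payload));

// A user's "worker" registers for an event...
addEventListener('document.updated', doc => `synced document ${doc.id}`);

// ...and the platform fires the handlers when the event occurs.
const results = dispatch('document.updated', { id: 42 });
console.log(results[0]); // "synced document 42"
```

<p>The interesting part for a platform like ours is that the platform owns <code>dispatch</code>: users never run servers or poll for events, they only supply the handlers.</p>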
<p>I uploaded a working version of this project on GitHub, feel free to follow along with the code there if anything doesn't work as described in this post: https://github.com/Minivera/functions-for-platforms.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>After doing a lot of research, I ended up settling on the <a target="_blank" href="https://fission.io/">Fission.io</a> framework to support this project. Fission is an open-source serverless framework running on Kubernetes. Think AWS Lambda, but <em>we</em> are in control of every part of the infrastructure. Kubernetes gives us the power to define the environments the containers will be executed in, and any other resources they need. This gives us the control we need to create our very own environment for executing arbitrary JavaScript through the V8 engine. Each function can be isolated as much as we need, and Fission makes it easy to quickly create multiple environments.</p>
<p>Since Fission is built on Kubernetes, it will take care of a lot of the heavy lifting and allow us to focus on what we want. I'll make sure to explain everything I'm doing, but this post won't go into too much detail about Kubernetes. You will need Node.js installed on your machine. I recommend going with the most recent LTS version (version 16 at the time of writing this article).</p>
<p>To be able to use Fission, we first need to set up a kubernetes cluster. A cluster is the "cloud" environment where all the resources are created and managed. It's like your very own specialized GCP or AWS running only containers. I'll be using <a target="_blank" href="https://minikube.sigs.k8s.io/docs/start/">Minikube</a> to manage a cluster locally in this post, as I've found it to be the most compatible with Fission. It is a great tool with lots of utilities, and it runs the entire cluster inside of another docker container, which makes it very easy to clean up. Let's get started with setting up Minikube on our machine.</p>
<ol>
<li>First, install <a target="_blank" href="https://docs.docker.com/get-docker/">Docker</a>, based on your OS. As mentioned earlier, Minikube runs the cluster inside a Docker container. Docker and Kubernetes are very complementary tools, so Docker will likely be useful even if you're not using Minikube.</li>
<li>Install <a target="_blank" href="https://kubernetes.io/docs/tasks/tools/">kubectl</a> and <a target="_blank" href="https://helm.sh/docs/intro/install/">helm</a> to be able to manipulate a kubernetes cluster. <code>kubectl</code> is the official Kubernetes CLI tool and <code>helm</code> is a utility deployment tool for creating and deploying kubernetes applications. We will not be using <code>helm</code> directly in this post, it is a dependency of Fission.</li>
<li>Install the <a target="_blank" href="https://fission.io/docs/installation/#install-fission-cli">fission CLI</a>, it will use <code>kubectl</code> and <code>helm</code> to set up Fission automatically for us.</li>
<li>Finally, install <a target="_blank" href="https://minikube.sigs.k8s.io/docs/start/">Minikube</a>.</li>
</ol>
<p>Once everything is installed, start the Minikube cluster using the command <code>minikube start</code> in any terminal. Minikube will download a few docker images and start the cluster. Once completed, run <code>eval $(minikube -p minikube docker-env)</code> in the same terminal. This tells that terminal session to run any docker command inside the Minikube cluster, allowing us to do things like pushing or pulling images inside of the cluster. Without this command, we wouldn't be able to use our custom V8 image locally, as the cluster cannot access our local docker registry (it runs inside a container). Note that this command only works for the current terminal session; if you close that terminal, you'll have to run it again.</p>
<p>The final step is to install Fission itself on our new cluster. There are a few ways to <a target="_blank" href="https://fission.io/docs/installation/">install Fission</a>; we'll be using <code>helm</code> and installing it on Minikube. Run the command below -- copied from the official docs -- to get Fission installed and ready to start.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> FISSION_NAMESPACE=<span class="hljs-string">"fission"</span>
kubectl create namespace <span class="hljs-variable">$FISSION_NAMESPACE</span>
kubectl create -k <span class="hljs-string">"github.com/fission/fission/crds/v1?ref=v1.16.0"</span>
helm repo add fission-charts https://fission.github.io/fission-charts/
helm repo update
helm install --version v1.16.0 --namespace <span class="hljs-variable">$FISSION_NAMESPACE</span> fission \
  --<span class="hljs-built_in">set</span> serviceType=NodePort,routerServiceType=NodePort \
  fission-charts/fission-all
</code></pre>
<p>We're now ready to get started!</p>
<h2 id="heading-uploading-functions">Uploading functions</h2>
<p>One important thing I noted from the Cloudflare blog post is how important speed is to their implementation. It's clear they wanted their integrations to be as fast as any other worker function running on their platform. We won't get into running this project at the edge or avoiding performance loss from Fission, but we do want to do as much as possible to improve performance.</p>
<p>For this reason, we'll be using a network disk over something like a CDN for uploading the JavaScript files. Executing these files will then only require direct file system access, which should be much faster than a round trip to a CDN server. We'll be using YAML specification files to manage our infrastructure and applying them with <code>kubectl</code>. While we could use CLI commands alone, I find that specification files are much more expressive and configurable. Looking at the official <a target="_blank" href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/">Kubernetes docs</a>, we find a special kind of resource called a "persistent volume". A persistent volume is like a docker volume, but the files created in that volume are persistent rather than ephemeral. With Fission, our containers will be started and stopped constantly, so this persistent volume is a great way to share files between the containers.</p>
<p>Since the only thing we need from Kubernetes is to manage this volume, we'll keep the specification files simple. Create a new directory called <code>kubernetes</code> and then create a file named <code>code-volume.yaml</code> in that directory. Copy this YAML into that file.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># kubernetes/code-volume.yaml</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">PersistentVolume</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">code-volume</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">fission-function</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">storageClassName:</span> <span class="hljs-string">manual</span>
  <span class="hljs-attr">accessModes:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">ReadWriteOnce</span>
  <span class="hljs-attr">capacity:</span>
    <span class="hljs-attr">storage:</span> <span class="hljs-string">5Gi</span>
  <span class="hljs-attr">hostPath:</span>
    <span class="hljs-attr">path:</span> <span class="hljs-string">/data/code-volume/</span>
<span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">PersistentVolumeClaim</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">code-volume-claim</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">fission-function</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">storageClassName:</span> <span class="hljs-string">manual</span>
  <span class="hljs-attr">accessModes:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">ReadWriteOnce</span>
  <span class="hljs-attr">resources:</span>
    <span class="hljs-attr">requests:</span>
      <span class="hljs-attr">storage:</span> <span class="hljs-string">3Gi</span>
</code></pre>
<p>This YAML specification file defines the volume itself - called <code>code-volume</code> - directly in the Kubernetes namespace for the Fission function containers. In short, namespaces allow us to isolate parts of the cluster for easier management, and Fission uses them a lot. We want our volume to be as close as possible to where the functions will be executed (since we'll have to connect this disk to the containers used by the functions), which is why we create it directly in that namespace. It's a very small disk at 5 gigabytes, but that's enough for testing things out.</p>
<p>The second element created is a persistent volume claim named <code>code-volume-claim</code>. Individual containers use this claim to request access to the persistent volume, and it allows us to define the base permissions and access. A persistent volume, in Kubernetes, is a resource in the cluster. A persistent volume claim consumes that resource and defines its access. In our case, we're telling Kubernetes to give us access to 3 gigabytes of the 5 available in read-write mode, and that only one node can read or write at a time through the <code>ReadWriteOnce</code> access mode. In a real-world situation, these constraints would likely lead to access locks and prevent concurrent access. This is fine for a prototype, but we'd have to manage access properly if we were to deploy this in production.</p>
<p>Let's create these resources now. Run the command <code>kubectl apply -f ./kubernetes/code-volume.yaml</code> in your terminal. This will tell <code>kubectl</code> to take the specification file we just created and apply its content on our cluster. If we ever change this file, running the same command will update the cluster by applying any changed properties without recreating everything. Pretty useful.</p>
<p>Specification files like these are very useful and make the commands much easier to run, since we don't have to hope for the best with command arguments. Fission also supports specification files; any Fission CLI command can be appended with <code>--spec</code> to create a specification file in the <code>specs</code> directory. We can then run <code>fission spec apply --wait</code> to apply the specification files on the cluster like we would with <code>kubectl</code>.</p>
<p>For the rest of this blog post, we'll be using Fission specification files over command lines, as it will make things a lot easier for us. Let's start by creating the spec folder itself. Run the command <code>fission spec init</code> to initialize that folder; Fission will add a few files in there. We can now start creating the environment and the function for uploading scripts. Create an <code>env-nodejs.yaml</code> file in this directory and copy this YAML into that new file.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># specs/env-nodejs.yaml</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">fission.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Environment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">creationTimestamp:</span> <span class="hljs-literal">null</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nodejs</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">builder:</span>
    <span class="hljs-attr">command:</span> <span class="hljs-string">build</span>
    <span class="hljs-attr">container:</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">""</span>
      <span class="hljs-attr">resources:</span> {}
    <span class="hljs-attr">image:</span> <span class="hljs-string">fission/node-builder</span>
  <span class="hljs-attr">imagepullsecret:</span> <span class="hljs-string">""</span>
  <span class="hljs-attr">keeparchive:</span> <span class="hljs-literal">false</span>
  <span class="hljs-attr">poolsize:</span> <span class="hljs-number">3</span>
  <span class="hljs-attr">resources:</span> {}
  <span class="hljs-attr">runtime:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">fission/node-env</span>
    <span class="hljs-attr">podspec:</span>
      <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nodejs</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">fission/node-env:latest</span>
          <span class="hljs-attr">volumeMounts:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">code-volume</span>
              <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/etc/code</span>
      <span class="hljs-attr">volumes:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">code-volume</span>
          <span class="hljs-attr">persistentVolumeClaim:</span>
            <span class="hljs-attr">claimName:</span> <span class="hljs-string">code-volume-claim</span>
  <span class="hljs-attr">version:</span> <span class="hljs-number">2</span>
</code></pre>
<p>This YAML defines a Node.js environment. In Fission, environments define the containers where each function will be executed. The containers run in a unit called a pod; Fission will create an arbitrary number of pods and scale them based on demand. You can think of pods as docker-compose files that run a few containers with shared resources. All pods run on a node, which is like a virtual machine with a bunch of docker-compose files (the pods) running on it. Fission takes care of creating the pods and balances requests across all the pods automatically.</p>
<p>This file was created by running the command <code>fission environment create --name nodejs --image fission/node-env --spec</code>, with a few modifications. We're giving the name <code>nodejs</code> to our environment on line 6, then we tell Fission to use the official Node.js builder image on line 14 to build the container and the official Node.js runtime image for the function on line 20. Finally, we define a custom <code>podspec</code> object on line 21 where we mount our persistent volume as a <a target="_blank" href="https://docs.docker.com/storage/volumes/">docker volume</a>, meaning each container will now have a directory named <code>/etc/code</code> where it can access the content of the persistent volume. <code>podspec</code> is a very powerful tool that gives us the ability to configure the containers in the pod. We could, for example, add a second container running Redis if we ever needed an ephemeral Redis database.</p>
<p>We'll be using Node.js to upload code to our persistent volume. Fission supports complete NPM projects with modules, but our code only needs the base Node.js modules, so we'll keep things simple by creating a single script. Create a <code>src</code> directory, add a <code>function-upload.js</code> file in that directory, and copy the following code into it.</p>
<pre><code class="lang-js"><span class="hljs-comment">// src/function-upload.js</span>
<span class="hljs-keyword">const</span> { promises } = <span class="hljs-built_in">require</span>(<span class="hljs-string">'fs'</span>);
<span class="hljs-keyword">const</span> path = <span class="hljs-built_in">require</span>(<span class="hljs-string">'path'</span>);

<span class="hljs-comment">// Path to the persistent volume</span>
<span class="hljs-keyword">const</span> diskPath = <span class="hljs-string">`/etc/code`</span>;

<span class="hljs-comment">// Function to hash a string into a short string.</span>
<span class="hljs-keyword">const</span> hashContent = <span class="hljs-function">(<span class="hljs-params">content</span>) =&gt;</span> {
    <span class="hljs-keyword">let</span> hash = <span class="hljs-number">0</span>;
    <span class="hljs-keyword">if</span> (content.length === <span class="hljs-number">0</span>) {
        <span class="hljs-keyword">return</span> hash;
    }

    <span class="hljs-keyword">let</span> chr;
    <span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> i = <span class="hljs-number">0</span>; i &lt; content.length; i++) {
        chr = content.charCodeAt(i);
        hash = (hash &lt;&lt; <span class="hljs-number">5</span>) - hash + chr;
        hash |= <span class="hljs-number">0</span>;
    }

    <span class="hljs-keyword">return</span> hash;
};

<span class="hljs-comment">// Fission will execute the function we export in the Node.js environment. </span>
<span class="hljs-comment">// Content contains things like the body and headers</span>
<span class="hljs-built_in">module</span>.exports = <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> (<span class="hljs-params">context</span>) </span>{
    <span class="hljs-comment">// This function expects a raw body (no JSON) for uploading a JavaScript script</span>
    <span class="hljs-keyword">const</span> fileContent = context.request.body;

    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Received function "<span class="hljs-subst">${fileContent}</span>", hashing.`</span>);
    <span class="hljs-comment">// Create a file name based on the file content. We hash that content so </span>
    <span class="hljs-comment">// the same content will always have the same name.</span>
    <span class="hljs-keyword">const</span> fileHash = hashContent(fileContent);

    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Writing file <span class="hljs-subst">${fileHash}</span>.js to the persistent volume`</span>);
    <span class="hljs-comment">// Write the file content onto the persistent volume.</span>
    <span class="hljs-keyword">await</span> promises.writeFile(path.join(diskPath, <span class="hljs-string">`<span class="hljs-subst">${fileHash}</span>.js`</span>), fileContent);

    <span class="hljs-keyword">return</span> {
        <span class="hljs-attr">status</span>: <span class="hljs-number">200</span>,
        <span class="hljs-attr">body</span>: {
            <span class="hljs-attr">message</span>: <span class="hljs-string">'function successfully uploaded'</span>,
            <span class="hljs-comment">// We return the filename so we can execute it later</span>
            <span class="hljs-attr">id</span>: fileHash,
        },
    };
}
</code></pre>
<p>This function defines the base API code for uploading a script. We receive that script as the raw string body of a POST call, then hash that content to generate a unique file name. Finally, we write the script to a file on the persistent volume and return the hashed name, which we'll use later to execute that function in our V8 environment.</p>
<p>Run <code>fission function create --name function-upload --env nodejs --code src/function-upload.js --spec</code> to create the function spec, followed by <code>fission httptrigger create --url /upload --method POST --name upload-js --function function-upload --spec</code> to create the HTTP trigger spec. Run <code>fission spec apply --wait</code> to apply the newly created spec files onto the cluster.</p>
<p>In Fission, a function is the definition for executing code in an environment. In this case, we tell Fission to run our code for uploading scripts inside the Node.js environment. A trigger is what causes a function to execute. There are multiple trigger types in Fission, but since this is an API, we'll be using HTTP triggers. This tells Fission to run the function whenever an HTTP call is sent to the URL specified, <code>/upload</code> in our case.</p>
<p>Feel free to test the function execution with <code>curl</code>. To do so, export the Fission router URL (the entrypoint to call HTTP triggers) as an environment variable in your terminal with this command:</p>
<pre><code>export FISSION_ROUTER<span class="hljs-operator">=</span>$(minikube ip):$(kubectl <span class="hljs-operator">-</span>n fission get svc router <span class="hljs-operator">-</span>o jsonpath<span class="hljs-operator">=</span><span class="hljs-string">'{...nodePort}'</span>)
</code></pre><p>In the same terminal, try running this command to send a POST request to our upload function. This should execute and return a JSON payload with the file id.</p>
<pre><code>curl <span class="hljs-operator">-</span>XPOST <span class="hljs-string">"http://$FISSION_ROUTER/upload"</span> <span class="hljs-operator">-</span>H <span class="hljs-string">"Content-Type: text/plain"</span> <span class="hljs-operator">-</span>d <span class="hljs-string">'console.log("Hello, World!")'</span>
</code></pre><p>On macOS or Windows, you may need a load balancer. Check the <a target="_blank" href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#loadbalancer-access">official docs from Minikube</a> to set it up. The repository for the project has the Kubernetes specification file ready in the <code>kubernetes</code> folder if needed.</p>
<h2 id="heading-executing-functions">Executing functions</h2>
<p>Now that we have everything ready to upload code, we need a way to execute that code in a secure V8 environment. The Cloudflare blog post is clear that V8 was the key to unlocking a secure and isolated environment, so we'll be following in their footsteps. This means we can't use the default Node.js environment from Fission; we'll have to build our own. Thankfully, Fission has a binary environment we can partially reuse for this.</p>
<p>To build our own environment, we need to create a container image to use as the runtime. Fission also has a concept of builders, images that build environments based on the code given (for example, the Node.js environment will install dependencies defined in a <code>package.json</code> file). Since we'll be using a bash file as our function script, we only need to define our own runtime and can reuse the binary builder. Let's start from the base binary image and change the Dockerfile to add V8 to that image.</p>
<p>Download the contents of the <a target="_blank" href="https://github.com/fission/environments/tree/master/binary">binary environment folder</a> in the official Fission environments repository and copy the two <code>.go</code> files and the <code>Dockerfile</code> into a newly created <code>image</code> directory. Open the <code>Dockerfile</code> and replace its content with this code:</p>
# image/Dockerfile
<pre><code class="lang-dockerfile"># image/Dockerfile
# First stage, to copy v8 into the cache. V8 is built for Debian
FROM andreburgaud/d8 as v8

RUN ls /v8

# Second stage, build the Fission server for Debian
FROM golang:buster as build

WORKDIR /binary
COPY *.go /binary/

RUN go mod init github.com/fission/environments/binary
RUN go mod tidy

RUN go build -o server .

# Third stage, copy everything into a slim Debian image
FROM debian:stable-slim

RUN mkdir /v8

COPY --from=v8 /v8/* /v8/

WORKDIR /app

RUN apt-get update -y &amp;&amp; \
    apt-get install coreutils binutils findutils grep -y &amp;&amp; \
    apt-get clean

COPY --from=build /binary/server /app/server

EXPOSE 8888
ENTRYPOINT ["./server"]
</code></pre>
<p>This Dockerfile has three stages. First, we download and test an image from the Docker registry called <code>andreburgaud/d8</code>. This image has V8 prebuilt for Debian (you might need to build V8 yourself on macOS), saving us the few hours it takes to build. In the second stage, we copy the official <code>.go</code> files from the Fission binary environment and build them for Debian. These Go files build a server that takes in commands from the triggers and executes a binary file in response; that's how Fission supports any binary. Finally, the third stage puts everything together by setting up the container as defined in the original binary <code>Dockerfile</code> and copying V8 into a dedicated directory so it's available at runtime.</p>
<p>Run <code>docker build --no-cache --tag=functions/v8-env .</code> from the <code>image</code> directory, in the same terminal where you previously ran <code>eval $(minikube -p minikube docker-env)</code>. This will build the image and tag it as <code>functions/v8-env</code> <em>inside</em> the Minikube cluster, so Fission can access it.</p>
<p>Time to create our environment! Go to the <code>specs</code> directory again and create a <code>env-v8.yaml</code> file. Copy this YAML into it.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># specs/env-v8.yaml</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">fission.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Environment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">creationTimestamp:</span> <span class="hljs-literal">null</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">v8</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">builder:</span>
    <span class="hljs-attr">command:</span> <span class="hljs-string">build</span>
    <span class="hljs-attr">container:</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">""</span>
      <span class="hljs-attr">resources:</span> {}
    <span class="hljs-attr">image:</span> <span class="hljs-string">fission/binary-builder:latest</span>
  <span class="hljs-attr">imagepullsecret:</span> <span class="hljs-string">""</span>
  <span class="hljs-attr">keeparchive:</span> <span class="hljs-literal">false</span>
  <span class="hljs-attr">poolsize:</span> <span class="hljs-number">3</span>
  <span class="hljs-attr">resources:</span> {}
  <span class="hljs-attr">runtime:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">functions/v8-env:latest</span>
    <span class="hljs-attr">podspec:</span>
      <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">v8</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">functions/v8-env:latest</span>
          <span class="hljs-attr">volumeMounts:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">code-volume</span>
              <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/etc/code</span>
              <span class="hljs-attr">readOnly:</span> <span class="hljs-literal">true</span>
      <span class="hljs-attr">volumes:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">code-volume</span>
          <span class="hljs-attr">persistentVolumeClaim:</span>
            <span class="hljs-attr">claimName:</span> <span class="hljs-string">code-volume-claim</span>
  <span class="hljs-attr">version:</span> <span class="hljs-number">2</span>
</code></pre>
<p>This environment is very similar to the Node.js environment, except we use the official <code>fission/binary-builder</code> image from Fission to build the container and our custom <code>functions/v8-env</code> image for the container runtime. Like with the Node.js environment, we also connect the persistent volume, but in <code>readOnly</code> mode this time. We don't want our users to be able to write to the volume from their own scripts.</p>
<p>Next, go to the <code>src</code> directory and create a <code>function.sh</code> file. Copy this code into that new file.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#!/bin/sh</span>
<span class="hljs-comment"># src/function.sh</span>

file_id=<span class="hljs-string">"<span class="hljs-subst">$(/bin/cat -)</span>"</span>

<span class="hljs-built_in">printf</span> <span class="hljs-string">"executing /etc/code/%s.js with /v8/d8\n\n"</span> <span class="hljs-string">"<span class="hljs-variable">$file_id</span>"</span>
<span class="hljs-built_in">printf</span> <span class="hljs-string">"output is: \n"</span>

<span class="hljs-comment"># Errors are not printed for now; they crash the process instead</span>
/v8/d8 <span class="hljs-string">"/etc/code/<span class="hljs-variable">$file_id</span>.js"</span>
</code></pre>
<p>When the Go server from the official Fission binary environment runs a script or binary, it provides the request body on the standard input stream (accessible by reading it with <code>/bin/cat -</code>). To avoid having to parse JSON or headers, we'll take the file name from the previous upload function as the raw body and execute the JS file from the persistent volume in V8 directly.</p>
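<p>Since the body ends up inside a file path, a hostile caller could send something like <code>../../etc/passwd</code> instead of an id. Our hash function only ever produces digits with an optional leading minus sign, so the script could reject anything else before touching the filesystem. A hypothetical hardening sketch (not part of the prototype above):</p>

```shell
#!/bin/sh
# Hypothetical validation helper: accept only ids made of digits and an
# optional minus sign (the only output hashContent produces), rejecting
# path traversal attempts before they reach /etc/code.
validate_id() {
  case "$1" in
    ''|*[!0-9-]*) return 1 ;;  # empty, or contains a non-digit, non-minus char
    *) return 0 ;;
  esac
}
```

The script would then call <code>validate_id "$file_id"</code> right after reading stdin and exit early on failure.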
<p>Let's create the function and trigger now. Run <code>fission function create --name run-js --env v8 --code src/function.sh --spec</code>, followed by <code>fission httptrigger create --url /execute --method POST --name run-isolated --function run-js --spec</code>, to create the two spec files. Run <code>fission spec apply --wait</code> to apply the newly created spec files onto the cluster.</p>
<p>We now have a function that can load a JS script uploaded through our Node.js function and execute it in an isolated and controllable V8 environment. We can control how many resources the function has through the environment specification file, but also how much time it is allowed to run. We have total control over how much power we give our users thanks to Fission and Kubernetes.</p>
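<p>As an illustration, resource limits could be added to the runtime <code>podspec</code> in <code>specs/env-v8.yaml</code> using standard Kubernetes container resources (the values below are made up for the example):</p>

```yaml
# Illustrative fragment of specs/env-v8.yaml: cap what each V8 runtime pod
# can consume. The cpu/memory values are examples, not recommendations.
runtime:
  image: functions/v8-env:latest
  podspec:
    containers:
      - name: v8
        image: functions/v8-env:latest
        resources:
          limits:
            cpu: 250m
            memory: 128Mi
```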
<h2 id="heading-testing-the-functions">Testing the functions</h2>
<p>The final stage is to test what we just built! If you haven't done so already, export the Fission router URL as an environment variable in your terminal with this command.</p>
<pre><code>export FISSION_ROUTER<span class="hljs-operator">=</span>$(minikube ip):$(kubectl <span class="hljs-operator">-</span>n fission get svc router <span class="hljs-operator">-</span>o jsonpath<span class="hljs-operator">=</span><span class="hljs-string">'{...nodePort}'</span>)
</code></pre><p>In the same terminal where you exported the router URL, run this command to send a POST request to the upload function. It should return a JSON payload with the file id under the <code>id</code> property.</p>
<pre><code>curl <span class="hljs-operator">-</span>XPOST <span class="hljs-string">"http://$FISSION_ROUTER/upload"</span> <span class="hljs-operator">-</span>H <span class="hljs-string">"Content-Type: text/plain"</span> <span class="hljs-operator">-</span>d <span class="hljs-string">'console.log("Hello, World!")'</span>
</code></pre><p>Copy the ID and run <code>curl -XPOST -k -d "&lt;ID&gt;" "http://$FISSION_ROUTER/execute"</code>, replacing <code>&lt;ID&gt;</code> with it. You should see the words <code>Hello, World!</code> appear in your terminal. That means it worked!</p>
<p>What happened here exactly? The Fission router sent the first request to our upload function running in a Node.js container, which wrote the file to our persistent volume. The second request was then routed to our execution function running in our custom V8 container, which loaded that same file by its id from the volume and ran it, printing the result to the standard output. The Fission binary environment is set up so that any output is sent back as the result of the HTTP call.</p>
<p>Feel free to test this with more complex code; it should print whatever you ask it to log. The next step for this prototype would be to provide environment variables and functions to our users so they can react to events triggered by our system, but that will be for another day!</p>
<h1 id="heading-where-to-go-from-here">Where to go from here?</h1>
<p>In this post, we built a prototype for cloning Cloudflare's Workers for Platforms product. Our implementation shows some promise, but it is also very limited. Between the frameworks and technologies we selected and the fact that the project is based entirely on a single blog post, this prototype has a few flaws worth talking about.</p>
<p>First, anyone can technically access anyone else's scripts. A user could write a script that loops through all the files in the persistent volume and prints the code of each file, leaking any secrets saved directly in there (even without Node.js' <code>fs</code> module). We'll have to make sure each function can only see its own script and nothing else.</p>
<p>The next issue is access. If we deployed this to the cloud as-is, anyone could reach the two endpoints and do pretty much anything they want. The first step towards deploying this is to make sure these two endpoints are secured and only accessible to other internal services. We'd have to create a separate service to route requests to our Fission cluster, or something similar to abstract the implementation.</p>
<p>Finally, there is the issue of performance. This prototype isn't configured to run at the edge, and it has performance issues that would need to be addressed to satisfy the requirements outlined in the Cloudflare blog post. The serverless nature of this project means we'll have to deal with cold starts and limited resources in our Kubernetes clusters.</p>
<p>These optimizations are far beyond the scope of this first post, but maybe we can continue exploring in a future post! In any case, I hope you enjoyed this long post and I'm very much looking forward to seeing where the community takes workers-for-platforms. </p>
<p>Please check out the repository, where I uploaded a working version of the prototype: <a target="_blank" href="https://github.com/Minivera/functions-for-platforms">https://github.com/Minivera/functions-for-platforms</a>. Contributions are welcome.</p>
<hr />
<p><strong>I'd love to hear your thoughts - please comment, share and follow.</strong></p>
<p><strong>We are building up Savoir, so keep an eye out for features and updates on our <a target="_blank" href="https://www.savoir.dev/?utm_source=blog">website</a> at savoir.dev. If you'd like to subscribe for updates or beta testing, send me a message at info@savoir.dev!</strong></p>
<blockquote>
<p>Savoir is the French word for Knowledge, pronounced sɑvwɑɹ.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[GraphQL API design: lessons from building a dashboard]]></title><description><![CDATA[One of the biggest challenges I've encountered when building a GraphQL API was how to best design the schema. Regardless of the language or framework, there's a resource somewhere to help me write the code to power that API. Yet, when it comes to the...]]></description><link>https://blog.savoir.dev/graphql-api-design-lessons-from-building-a-dashboard</link><guid isPermaLink="true">https://blog.savoir.dev/graphql-api-design-lessons-from-building-a-dashboard</guid><category><![CDATA[GraphQL]]></category><category><![CDATA[dashboard]]></category><category><![CDATA[APIs]]></category><category><![CDATA[Pagination]]></category><dc:creator><![CDATA[Guillaume St-Pierre]]></dc:creator><pubDate>Sun, 01 May 2022 21:02:02 GMT</pubDate><content:encoded><![CDATA[<p>One of the biggest challenges I've encountered when building a GraphQL API was how to best design the schema. Regardless of the language or framework, there's a resource somewhere to help me write the code to power that API. Yet, when it comes to the schema's structure, I draw a blank. Should I make an API that mirrors my data model? How should I structure my schema to make querying as simple and efficient as possible? Where do I draw the line between overly nesting fields and creating a flat schema with only root queries?</p>
<p>These questions came back to haunt me as I was designing the GraphQL API to power the Savoir dashboard (But also other clients in the future). I ended up going for a domain and consumer-oriented approach which I think works really well for dashboard-type applications. I want to share the story of how I designed this API and the lessons I learned. Hopefully, it may be useful for your own future projects.</p>
<h2 id="heading-define-the-consumer-needs-first">Define the consumer needs first</h2>
<p><a target="_blank" href="https://www.savoir.dev/?utm_source=blog">Savoir</a> is a GitHub application for tracking your code's documentation status: it fetches data from GitHub and associates that data with documentation content created by users. Commits and status checks are associated with content and activity entries. I knew the API should surface that data somehow, but not to what extent. It was very likely that a status check's annotations were not a field I would end up needing for this dashboard, in the same way that users owning organizations (rather than the other way around) was not a pattern I'd need to surface. The first thing I needed to define was "what data will this dashboard need?".</p>
<p>One of our core values at Savoir is "integrated". We designed our application to be as integrated as possible within GitHub. Our dashboard shouldn't be yet another way to write content. Instead, it should be a hub for everything outside the core experience of writing and tracking your documentation within GitHub, things like billing or a repository's settings. It should allow our users to know, at a glance, the status of their documentation and make decisions on where they have to increase or adjust their documentation efforts. The real product design process was far more in-depth than this, but this gives you a good idea of the product direction I wanted to take.</p>
<p>Knowing this, it became clear what this dashboard needed: access to the logged-in user data; access to GitHub organizations, their repositories, and the repository's settings; a way to edit content; and a way to track all the status checks handled by Savoir. All this data is hidden behind a user's permissions, and you wouldn't want your repository's settings to be visible to other users.</p>
<h2 id="heading-nested-schema-over-a-flat-structure">Nested schema over a flat structure</h2>
<p>Whenever I design a GraphQL API, I tend to fall into the trap of designing that API with REST endpoints in mind. For example, for this dashboard, my first reflex was to start designing a schema like this.</p>
<pre><code class="lang-graphql"><span class="hljs-comment"># Simplified schema</span>

<span class="hljs-keyword">type</span> User {
    <span class="hljs-comment"># An authenticated user's data</span>
}

<span class="hljs-keyword">type</span> Organization {
    <span class="hljs-comment"># A GitHub Organization</span>

    <span class="hljs-string">"A repository owned by this organization"</span>
    repository(<span class="hljs-symbol">name:</span> String!): Repository
}

<span class="hljs-keyword">type</span> Repository {
    <span class="hljs-comment"># A GitHub Repository</span>
}

<span class="hljs-keyword">type</span> Content {
    <span class="hljs-comment"># A content page for a documentation website</span>
}

<span class="hljs-keyword">type</span> Query {
    user: User
    organization(<span class="hljs-symbol">id:</span> ID!): Organization
    content(<span class="hljs-symbol">path:</span> String!): Content
}
</code></pre>
<p>As said earlier, all access to the dashboard is restricted behind a login. Since we don't want this API to allow fetching data a user doesn't have access to, we attach the authentication token to every query. At this point, I am pretty much creating a typed REST API, which has its benefits, but also a few major drawbacks. The biggest drawback of this type of schema is that we need to fetch the user's data for every query. If you request an organization, the API needs to check that the user authenticated with the token has access to that organization.</p>
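<p>To make that drawback concrete, here is a hypothetical sketch of resolvers for a flat schema like the one above (the helper shapes and data are made up for illustration): every root resolver must repeat the same authentication and lookup logic.</p>

```javascript
// Hypothetical flat-schema resolvers; requireUser and the data shapes are
// invented for this sketch, not taken from the real Savoir API.
const requireUser = (ctx) => {
  if (!ctx.currentUser) throw new Error('unauthenticated');
  return ctx.currentUser;
};

const flatResolvers = {
  Query: {
    user: (_root, _args, ctx) => requireUser(ctx),
    organization: (_root, { id }, ctx) => {
      const user = requireUser(ctx); // auth check, repeated
      // ...followed by a permission check against the user, also repeated
      return user.organizations.find((org) => org.id === id) ?? null;
    },
    content: (_root, { path }, ctx) => {
      const user = requireUser(ctx); // and repeated again
      return user.content.find((page) => page.path === path) ?? null;
    },
  },
};
```

Every new root query added to this design would need the same boilerplate, which is exactly what nesting avoids.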
<p>The main outlier here is the repository query, which is nested as a field on the organization. I could have made it a root query as well, but it would then have needed to take an organization's ID too, to make sure the API doesn't accidentally fetch the wrong repository by name. It seemed silly to have that second parameter on a root query when the parent organization implicitly provides it.</p>
<p>Nesting the repository inside the organization implies that the organization owns all its repositories: they cannot be fetched without first fetching the organization. Similarly, it implies that a repository cannot exist outside of an organization. To fetch a repository, the server needs to first resolve the organization. In REST, that would be represented by a nested route, like <code>/org/:id/repo/:name</code>.</p>
<p>This "natural" ownership pattern came from that clear relationship between the two, but also from a desire to reduce the number of parameters on each query. Looking at the schema more closely, there seems to be a "hidden" parameter in the user authentication token. If we weren't using authentication headers, I could almost rewrite the query schema like this.</p>
<pre><code class="lang-graphql"><span class="hljs-keyword">type</span> Query {
    user(<span class="hljs-symbol">authToken:</span> String!): User
    organization(<span class="hljs-symbol">authToken:</span> String!, <span class="hljs-symbol">id:</span> ID!): Organization
    content(<span class="hljs-symbol">authToken:</span> String!, <span class="hljs-symbol">path:</span> String!): Content
}
</code></pre>
<p>This tells me there is a clear relationship between users and every other type. I only want to allow a user to access organizations or content pages they have access to, and to do this I need to authenticate every request. Applying what we just learned with repositories, we can solve the drawbacks outlined earlier by having the user own those fields rather than exposing them as root queries. Rewriting the schema with that in mind, we come to this:</p>
<pre><code class="lang-graphql"><span class="hljs-comment"># Simplified schema</span>

<span class="hljs-keyword">type</span> User {
    <span class="hljs-comment"># An authenticated user's data</span>

    <span class="hljs-string">"Fetch an organization owned or accessible by this user"</span>
    organization(<span class="hljs-symbol">id:</span> ID!): Organization

    <span class="hljs-string">"Fetch content owned or accessible by this user"</span>
    content(<span class="hljs-symbol">path:</span> String!): Content
}

<span class="hljs-keyword">type</span> Organization {
    <span class="hljs-comment"># A GitHub Organization</span>

    <span class="hljs-string">"A repository owned by this organization"</span>
    repository(<span class="hljs-symbol">name:</span> String!): Repository
}

<span class="hljs-keyword">type</span> Repository {
    <span class="hljs-comment"># A GitHub Repository</span>
}

<span class="hljs-keyword">type</span> Content {
    <span class="hljs-comment"># A content page for a documentation website</span>
}

<span class="hljs-keyword">type</span> Query {
    user: User
}
</code></pre>
<p>We now have a single query that needs authentication, and once authenticated, we can reuse the auth context to fetch organizations and the content that the user has access to. In fact, the real Savoir API only has a single root query: <code>user</code>. Every other field is owned by other types, and the tree gets pretty deep. To fetch a status check, for example, I have to write a query that fetches the user, organization, repository, commit, and finally the status check.</p>
<p>This may look intense. Why design every root query as a nested field like this? What if I am fetching a pull request by number? Do I really need to get the organization in that chain? It all comes down to reusing context in my opinion. One problem I glossed over earlier was how complex it can be to check for permissions in a flat API design. How do I know the repository I am fetching can be accessed by the user? I need to validate that the user has access to the organization owning the repository in addition to the repository itself.</p>
<p>In a nested context, it's not something we have to worry about. Simply put, if I fetch a repository from an organization, I know that organization was accessed through the user query, and thus that the user can view it. I then only need to validate admin access to the repository itself (permissions can be very granular in GitHub), without worrying about the permissions on the organization.</p>
<p>To go back to our earlier example, when fetching a status check through a field on a commit, I do not have to check for access to that status check. I know from the context that the user has access to the commit because it's owned by a repository the user can access. In the context of a dashboard where we definitely don't want to accidentally leak status checks to other users, that guarantee makes things a lot simpler.</p>
<p>The guarantee extends to other checks, like existence. When fetching a repository, I do not need to check whether the organization that owns it still exists in GitHub; that was already checked in the parent's resolver. While the nested nature of the schema may add complexity to individual queries, it made the overall backend logic a lot simpler and gave a clear separation of concerns to every resolver.</p>
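<p>The same idea can be sketched as hypothetical resolvers (the helper shapes are invented for illustration, not taken from the real Savoir API): authentication happens once at the root, and every child resolver simply trusts the parent object it receives.</p>

```javascript
// Hypothetical nested-schema resolvers: only Query.user checks
// authentication; child resolvers trust their already-authorized parent.
const resolvers = {
  Query: {
    user: (_root, _args, ctx) => {
      if (!ctx.currentUser) throw new Error('unauthenticated');
      return ctx.currentUser;
    },
  },
  User: {
    // `parent` is the authenticated user, so the lookup itself is the check:
    // only that user's organizations are even searchable.
    organization: (parent, { id }) =>
      parent.organizations.find((org) => org.id === id) ?? null,
  },
  Organization: {
    // `parent` was reached through User.organization, so access to the
    // organization is already guaranteed; no extra permission logic needed.
    repository: (parent, { name }) =>
      parent.repositories.find((repo) => repo.name === name) ?? null,
  },
};
```

Each resolver only validates what is new at its own level, which is the separation of concerns described above.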
<h2 id="heading-paginate-everything">Paginate everything</h2>
<p>Another thing I glossed over was fetching lists of elements and pagination. That's because I also glossed over it when originally designing the API: I couldn't decide which criteria to use when choosing whether to paginate a list. Pagination can make queries a lot more complex (not to mention how painful connection types can be in TypeScript). Consider this schema, using Relay-style pagination:</p>
<pre><code class="lang-graphql"><span class="hljs-comment"># Simplified schema</span>

<span class="hljs-keyword">type</span> Repository {
    <span class="hljs-comment"># A GitHub Repository</span>
}

<span class="hljs-keyword">type</span> RepositoryEdge {
    <span class="hljs-symbol">cursor:</span> String
    <span class="hljs-symbol">node:</span> Repository
}

<span class="hljs-keyword">type</span> RepositoryConnection {
    <span class="hljs-symbol">edges:</span> [RepositoryEdge]
    <span class="hljs-symbol">pageInfo:</span> PageInfo!
}

<span class="hljs-keyword">type</span> User {
    <span class="hljs-comment"># An authenticated user's data</span>

    <span class="hljs-string">"Fetch all repositories owned or accessible by this user"</span>
    <span class="hljs-symbol">repositories(first:</span> Int, <span class="hljs-symbol">after:</span> String): RepositoryConnection
}

<span class="hljs-keyword">type</span> Query {
    user: User
}
</code></pre>
<p>To query all the repositories for a user, the query would have to look something like this, with the <code>$after</code> parameter used to fetch the next page if any.</p>
<pre><code class="lang-graphql"><span class="hljs-keyword">query</span> Repositories(<span class="hljs-variable">$after</span>: String) {
  user {
    repositories(<span class="hljs-symbol">first:</span> <span class="hljs-number">20</span>, <span class="hljs-symbol">after:</span> <span class="hljs-variable">$after</span>) {
      edges {
        node {
          <span class="hljs-comment"># ...</span>
        }
      }
      pageInfo {
        hasNextPage
        endCursor
      }
    }
  }
}
</code></pre>
<p>Accessing those repositories in JavaScript gets quite long (<code>user.repositories.edges.map(edge =&gt; edge.node)</code>). What happens if I want to loop over all the commits of all the repositories? Our API already follows a deeply nested structure, and adding connections to all lists makes each query <em>massive</em>. Whether to paginate a list or not is a reasonable question: is it worth investing in paginating a list that may have, on average, 20 elements?</p>
<p>To answer this question, I ended up relying on the wisdom of the Lead Backend Engineer from a few roles back. Whenever we asked if we should paginate or not, they always said "If you're thinking about not paginating a list, then paginate it". <strong>Translation: always paginate</strong>. In the context of a dashboard specifically, we want things to be responsive and reactive. Unless we know for certain that a list will <em>only</em> have 10 elements and remain unchanged, then a list should be paginated.</p>
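<p>On the client side, the cursor dance can be wrapped once and forgotten. Here's a hypothetical helper that walks every page of a Relay-style connection; <code>fetchPage</code> stands in for a real GraphQL client call running a query like the one above:</p>

```javascript
// Hypothetical client helper: follow pageInfo.endCursor until hasNextPage
// is false, collecting every node. fetchPage(after) is assumed to run the
// paginated query and return the connection object.
const allNodes = async (fetchPage) => {
  const nodes = [];
  let after = null;
  let hasNextPage = true;
  while (hasNextPage) {
    const connection = await fetchPage(after);
    nodes.push(...connection.edges.map((edge) => edge.node));
    hasNextPage = connection.pageInfo.hasNextPage;
    after = connection.pageInfo.endCursor;
  }
  return nodes;
};
```

With a helper like this, "always paginate" costs the consumer almost nothing.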
<p>I think it is also worth considering this kind of question from the perspective of the product. A dashboard is a product: users access it to get all the information they need to make good decisions about their usage. Going back to the definition I outlined for the dashboard, it's clear that pagination should be the standard for it to be a successful product. In the case of Savoir, the dashboard should be quick to load and mostly needs to give the user access to specific pieces of information. We do not have complex charts with thousands of data points (and even those could be paginated based on the selected time frame). In short, the UX cost of paginated fields is more than acceptable.</p>
<p>In the past, I often questioned the wisdom of my former colleague, but having designed this product and the API to power it, I now understand where they were coming from. Should you be as intense as they suggested? I think it depends on your specific product needs. In the case of the Savoir dashboard, the answer was yes.</p>
<h2 id="heading-onto-today">Onto today</h2>
<p>The Savoir dashboard is still being built as I write these lines, but these few lessons still guide the entire architecture and design of the API. What are the lessons we learned in this article? Here is a short summary:</p>
<ul>
<li><strong>Define who consumes an API early.</strong> Knowing the target audience of an API helps drive decisions and define the problem statement the API is for.</li>
<li><strong>Do not mirror the data or permission model in your GraphQL schema.</strong> The schema should represent the data the product needs and not the other way around.</li>
<li><strong>GraphQL works best when types are nested based on ownership.</strong> Nest queries within other types to reuse their context and simplify the backend logic.</li>
<li><strong>Always paginate,</strong> unless a list is strictly limited in size or your product demands unpaginated list fields.</li>
</ul>
<p>This post ended up being much more of a story than a tutorial, contrary to what I initially planned. Still, I think these lessons might prove useful in the design and decision-making process for your own GraphQL-powered dashboard. Please comment below or <a target="_blank" href="mailto:info@savoir.dev">drop me a line</a> to share your experience.</p>
<p>Stay tuned for the next part of this series where I'll provide an update on how we implemented mutations and when to combine both GraphQL and REST to power a single application. For those looking for a tutorial, we will also be releasing a post on GraphQL API documentation in the near future.</p>
<hr />
<p><strong>I'd love to hear your thoughts - please comment, share and follow.</strong></p>
<p><strong>We are building up Savoir, so keep an eye out for features and updates on our <a target="_blank" href="https://www.savoir.dev/?utm_source=blog">website</a> at savoir.dev. If you'd like to subscribe for updates or beta testing, send me a message at info@savoir.dev!</strong></p>
<blockquote>
<p>Savoir is the French word for Knowledge, pronounced <a target="_blank" href="https://en.wiktionary.org/wiki/savoir">sɑvwɑɹ</a>.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Savoir and documentation tracking]]></title><description><![CDATA[Documenting a piece of software is hard work, regardless of who uses it. We have to think about describing the features and functionality of the software, documenting any potential limitation or security issue, and keeping it all readable for everyon...]]></description><link>https://blog.savoir.dev/savoir-and-documentation-tracking</link><guid isPermaLink="true">https://blog.savoir.dev/savoir-and-documentation-tracking</guid><category><![CDATA[documentation]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[bot]]></category><dc:creator><![CDATA[Guillaume St-Pierre]]></dc:creator><pubDate>Mon, 18 Apr 2022 22:14:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1650314711081/IR76g7UH3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Documenting a piece of software is hard work, regardless of who uses it. We have to think about describing the features and functionality of the software, documenting any potential limitation or security issue, and keeping it all readable for everyone. Thankfully, lots of great tools exist to make this process easier for developers and writers alike. But what about tracking the status of your documentation? How do you know when your documentation is outdated or needs to be updated?</p>
<p>This problem is one I have personally experienced on multiple occasions, and I have seen many attempts at solving it through process. I have seen organizations add a required step to their agile process to force developers to write documentation or contact the documentation team. For each well-organized documentation process, there is another where individual developers are left to document everything on their own. Even with the best system in place, there is often a disconnect between writing documentation and a team's development process. Documentation is rarely, in my experience, a core part of the development process the way tests are. I have heard the phrase "we can't invest in documentation, it gets outdated too quickly" far too often. </p>
<p>Features or bug fixes require a developer to write the code, write tests, ensure the quality of the code, get a PR review, deploy it...the list goes on. Especially in smaller teams, it falls on the developer to fit writing the documentation for that feature or fix into that list. It's a lot, and it's hard for managers and team leads to track. While plenty of tools exist for writing documentation, the offering is not nearly as strong when it comes to documentation tracking or productivity.</p>
<p>Long introduction short, documentation is HARD. For developers commonly tasked with writing user documentation and other developers alike, tracking the work and surfacing it to the whole company becomes very important.</p>
<p>I created <strong><a target="_blank" href="https://www.savoir.dev?utm_source=blog">Savoir</a></strong> to resolve some of those issues.</p>
<h1 id="heading-tracking-your-documentation-status-from-your-code">Tracking your documentation status from your code</h1>
<p>When I founded <strong>Savoir</strong>, I knew my problem statement but I had yet to find the right solution. Even selling that solution was a far-off dream. After many prototypes and a lot of research (including validating multiple solutions with a researcher in Human-Computer Interaction), I proudly present to you our first product.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1650316847017/Og0nolQOU.png" alt="Savoir in Action.png" /></p>
<p><strong>Savoir</strong> is a GitHub application that acts as both a chatbot and a GitHub check on your pull requests. When a developer pushes code to a repository with <strong>Savoir</strong> installed, our bot (nicknamed <code>savoirbot</code>) reads which files were changed (but never the code itself; we never download your code) and compares that against the previous documentation status of your code. If a file has changed, <strong>Savoir</strong> tells you in that pull request and gives you the tools to update your documentation. This is all done through comments: if you tag <strong>Savoir</strong> in an issue or pull request, you can give it commands and it will update your documentation for you. <strong>Consider it like a member of your team working diligently in the background, keeping an eye out for outdated documentation and prompting you to update it when needed.</strong></p>
<p>I believe that code is the best source of truth for the status of a piece of software's documentation. Documentation describes features, limitations, security issues, and usage. Code is the building block of that software: if it changes, the software itself is very likely changing, and your documentation might be getting outdated. Tracking documentation status through code allows you to act on this early. <strong>Savoir takes the grunt work out of this process so you can focus on building your code.</strong></p>
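<p>The comparison described above can be sketched in miniature: given a map from documentation pages to the source files they cover, and the list of files changed in a pull request, flag the pages that may now be outdated. This is just an illustration of the idea, not Savoir's actual implementation, and the file names are made up:</p>

```javascript
// Hypothetical coverage map: which source files each doc page describes.
const docCoverage = {
  "docs/auth.md": ["src/auth/login.js", "src/auth/session.js"],
  "docs/billing.md": ["src/billing/invoice.js"],
};

// Return the doc pages whose covered source files appear in the change set.
function staleDocs(changedFiles, coverage = docCoverage) {
  const changed = new Set(changedFiles);
  return Object.keys(coverage).filter((doc) =>
    coverage[doc].some((file) => changed.has(file))
  );
}

console.log(staleDocs(["src/auth/session.js", "README.md"]));
// → ["docs/auth.md"]
```

<p>Note that this only needs file paths, never file contents, which is what makes it possible to track documentation status without ever downloading the code.</p>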
<h1 id="heading-another-github-check">Another GitHub check?</h1>
<p>One of our core values at <strong>Savoir</strong> is "integrated". A tool like ours should fit directly into our users' existing workflow, not disrupt it. Let's face it: most developers live in GitHub (or their git provider of choice), and that's where most of the magic happens. An integrated tool shouldn't ask developers to open a new tab or window; everything should happen right alongside pull requests and commits.</p>
<p>I explored many possible solutions while building <strong>Savoir</strong>, from a completely separate product to a browser extension. What finally made me settle on a GitHub application/chatbot was the experience of using Dependabot. When Dependabot is enabled on a repository, the bot creates new pull requests to update a project's dependencies. While it is now integrated directly into GitHub, the experience of triggering actions from a bot <em>without ever leaving GitHub</em> screamed "integrated". <strong>Savoir</strong> expands on that pattern to provide a documentation tracking tool that is truly integrated into GitHub.</p>
<p>GitHub checks may seem annoying on the surface, and they definitely can be. It's a challenge we are very aware of, and one that guides how we think of <strong>Savoir</strong> as a product. We will keep tweaking and improving <strong>Savoir</strong> to make it as useful, integrated, and seamless as possible. I hope to see you join us on this journey.</p>
<hr />
<p><strong>I'd love to hear your thoughts - please comment, share and follow.</strong></p>
<p><strong>We are building up Savoir, so keep an eye out for features and updates on our <a target="_blank" href="https://www.savoir.dev/?utm_source=blog">website</a> at savoir.dev. If you'd like to subscribe for updates or beta testing, send me a message at info@savoir.dev!</strong></p>
<blockquote>
<p>Savoir is the French word for Knowledge, pronounced <a target="_blank" href="https://en.wiktionary.org/wiki/savoir">sɑvwɑɹ</a>.</p>
</blockquote>
]]></content:encoded></item></channel></rss>