Hi, I'm Frederic.

Web Developer & Entrepreneur
blogging about life and programming in Node.js+React.


Text editor implementation as a programming étude

If you’re a programmer looking to improve your skills, I would like to propose that you “deliberately practice” your craft, in the sense of repeatedly attacking problems at the edge of your capabilities in an exercise context, not at work. And, as finding personal project ideas can be quite tricky, I would like to propose implementing a text editor as a really good project choice.

Now this proposition makes sense, in my opinion, because of the wide array of real, hard problems it offers across so many different areas of programming. Also, chances are good it’s quite different from what you work on daily (a majority of programmers are knee deep in web or mobile applications these days), so it will feel like a playful, new and exciting project.

Now hold your horses: implementing a complete text editor that can rival your current one (even if it’s Notepad or ed(1)) is a really big task. Start by setting your eyes on implementing a subpart of one first.

Now for ideas on sub-parts that you could aim for:

  • A line editor: You know how your shell allows you to press backspace, delete characters to the beginning of the line, move the cursor? All of those are really nice features but fairly recent novelties; most of them were not present in older shells. So, try your hand at implementing a program that asks for input but implements line editing features. Think of what a REPL does. Try for:
    • Typing characters
    • Backspace
    • Moving with arrows
    • Deleting to beginning/end of line
    • Moving to beginning/end of line
    • Moving word by word backward/forward
    • Entering in “replace mode” (like the insert key on keyboards)
  • Rendering a window tree of files: Try writing a terminal program that renders a tree of windows, each node being one of three types: horizontal split, vertical split, or an actual file/window. That one will get you thinking about recursing through a tree, caching information about locations in files, calculating what a line is, how wide a char, a tab or a Unicode char is, and how to make it fast enough that rendering wouldn’t block the editor’s main loop in a real editor. Try for:
    1. Reading files from disk
    2. Selecting a current position (so you get to implement scrolling)
    3. Creating a window tree so that all files have their own window split
    4. Rendering the window tree with nice window borders
    5. Framing and rendering the currently visible lines of the files
    6. Maybe have a status bar under each file showing stats like number of lines, current line, chars, file permissions, file size, …
    7. Make it fast by only rendering what’s needed when some file changes on disk
  • A file data structure: Holding a file in memory, a task most editors need to do, is not easy. Striking the right balance between size in memory, insertion speed, deletion speed and interface complexity is a real struggle. Another problem you can attack once you have the basics right is testing operations on a huge file, bigger than 100 MB, and making those operations run in decent time (< 100 ms). Try exploring the following prior art in the matter:
    • Rope
    • Gap Buffer
    • Circular buffer
    • Red-Black tree
    • or a simple Linked List
    • or try the venerable Array
      Try testing how fast each of the following operations is (and what’s its Big O?):
    • Inserting 1 character at the start/middle/end
    • Deleting 1 character at the start/middle/end
    • Inserting 10 000 characters at the start/middle/end
    • Deleting 10 000 characters at the start/middle/end
    • Inserting at the start, then right after at the end
    • Loading in memory
    • Writing to disk
  • A command set/language: This one is a fun one for language lovers as it involves implementing a set of commands the user can use to edit files. You need to be able to parse an input string into an abstract syntax tree, then interpret it and execute it against a file contents. Here are good examples of editors that implemented a command set:
    • Vi - covers a lot, succinct
    • Ed - not a visual editor, the grandfather of many others
    • Teco - really a language, also inspired a few others
    • Emacs - less on point, but think about its keybindings and how natural they are to hit
  • A plugin/configuration language: This one is all about implementing a full blown programming language (parser, interpreter, interface with the host implementation language). A lot of toy editor projects go without this one as it is a big chunk of work, but it’s a really crucial one in all popular editors these days. You’ll be designing a language with the direct goal of exposing editor features and configuration, and allowing the implementation of plugins that can change behaviour and call core editor methods. Take a look at the following languages that are used in popular editors:
    • ELisp
    • VimL
    • Lua
    • Python
    • Guile Scheme
    • Perl
  • A progressive rendering algorithm: This one is a bit smaller and needs quite a few pieces around it to make it work/visible. It consists in writing an algorithm that starts a rerender of the screen following a user’s input, but allows for stopping in the middle to handle new user input, then resumes from where the last rerender call left off, making sure to invalidate parts that were just changed by the user’s action. Try reading this book at this point; it’s really one of the only books going deep into many subjects related to text editor implementation:
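To make the line-editor idea above concrete, the core of it can be modeled as a pure function that folds one keypress into a (line, cursor) state; the terminal plumbing (raw mode, redrawing) then just wraps this. A minimal sketch (the key codes shown are common terminal conventions, but check your terminal's actual escape sequences):

```javascript
// Core of a line editor as a pure function: fold one keypress into the
// (line, cursor) state. The terminal plumbing (raw mode, rendering) wraps this.
function applyKey(state, key) {
  const {line, cursor} = state;
  switch (key) {
    case '\u007f': // Backspace: delete the char left of the cursor
      if (cursor === 0) return state;
      return {line: line.slice(0, cursor - 1) + line.slice(cursor), cursor: cursor - 1};
    case '\u001b[D': // Left arrow
      return {line, cursor: Math.max(0, cursor - 1)};
    case '\u001b[C': // Right arrow
      return {line, cursor: Math.min(line.length, cursor + 1)};
    case '\u0001': // Ctrl-A: move to beginning of line
      return {line, cursor: 0};
    case '\u0005': // Ctrl-E: move to end of line
      return {line, cursor: line.length};
    case '\u000b': // Ctrl-K: delete to end of line
      return {line: line.slice(0, cursor), cursor};
    default: // printable character: insert at the cursor
      return {line: line.slice(0, cursor) + key + line.slice(cursor), cursor: cursor + 1};
  }
}

// In a real program you'd put stdin in raw mode and redraw after each key.
let state = {line: '', cursor: 0};
for (const key of ['h', 'i', '\u0001', 'o', '\u0005', '!']) {
  state = applyKey(state, key);
}
console.log(state.line); // "ohi!"
```

Keeping the state transition pure like this makes every editing feature unit-testable without a terminal.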
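And for the file data structure bullet, here is a hedged, toy-sized gap buffer sketch: one array with a movable “gap” at the cursor, so edits near the cursor are cheap. This is deliberately minimal, not tuned for 100 MB files:

```javascript
// Minimal gap buffer: the text lives in one array with a gap [gapStart, gapEnd).
// Insertions/deletions at the cursor are O(1) amortized; moving the cursor
// costs O(distance moved).
class GapBuffer {
  constructor(capacity = 16) {
    this.buf = new Array(capacity);
    this.gapStart = 0;          // cursor position
    this.gapEnd = capacity;     // end of the gap (exclusive)
  }

  length() {
    return this.buf.length - (this.gapEnd - this.gapStart);
  }

  // Move the gap so it starts at `pos` (i.e. place the cursor there).
  moveTo(pos) {
    while (this.gapStart > pos) {   // shift chars rightward across the gap
      this.buf[--this.gapEnd] = this.buf[--this.gapStart];
    }
    while (this.gapStart < pos) {   // shift chars leftward across the gap
      this.buf[this.gapStart++] = this.buf[this.gapEnd++];
    }
  }

  insert(pos, str) {
    this.moveTo(pos);
    for (const ch of str) {
      if (this.gapStart === this.gapEnd) this.grow();
      this.buf[this.gapStart++] = ch;
    }
  }

  delete(pos, count) {              // delete `count` chars after `pos`
    this.moveTo(pos);
    this.gapEnd = Math.min(this.gapEnd + count, this.buf.length);
  }

  grow() {                          // double capacity, reopening the gap here
    const tail = this.buf.slice(this.gapEnd);
    this.buf.length *= 2;
    this.gapEnd = this.buf.length - tail.length;
    for (let i = 0; i < tail.length; i++) this.buf[this.gapEnd + i] = tail[i];
  }

  toString() {
    return this.buf.slice(0, this.gapStart).join('') +
           this.buf.slice(this.gapEnd).join('');
  }
}

const b = new GapBuffer();
b.insert(0, 'hello world');
b.insert(5, ',');          // "hello, world"
b.delete(0, 1);
b.insert(0, 'H');
console.log(b.toString()); // "Hello, world"
```

Benchmarking the operation list above against this, a rope and a plain array is exactly the kind of exercise this étude is about.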

There are more, smaller projects/parts that could be added to this list, but at this point I would advise starting small while trying to build a complete editor, adding all of those subparts together. The first big goal being to be able to work on the text editor with the new editor itself. :D

If you are interested in reading the implementation of a few toy editors with rather simple codebases you can look at:

Happy hacking & learning!

Testing web applications made fast and easy

A lot of existing content already discusses the advantages of testing. The advice almost always goes something like this: “Testing won’t add much to development time but will save your ass more than once on bugs, and that’s before they even reach your customers. Plus, it has the nice side effect of making you write cleaner code if you write tests before code”.

I am here to talk about the two main pain points developers have with writing tests: they are hard to write and they lengthen the feedback loop (especially with large codebases).

Now those two factors are probably the biggest detractors for, first, people looking to get into testing and TDD and, second, people writing tests but hating it because their test suite is so slow it can take an hour to run.

That often leads to you pushing to the CI and hoping what you wrote didn’t break anything elsewhere. It’s an hour’s wait, so you context switch to another task, come back to it later, see it failed, re-checkout the git branch, fix the little details, push again…

Now, I am no different and here is what I propose: ~80% of the tests you write are unit tests, and, in turn, they account for ~80% of the running time of your test suite. So why not forget about a real database, forget about HTTP, forget about all dependencies, mock them all, and only run (in Node.js’s case) pure JavaScript across just the few lines in the function you are currently testing, and no other part of the codebase?

Here, let’s say you have this controller with a method fetching a list of users:

class UsersController {
  constructor(userRepository) {
    this.userRepository = userRepository;
  }

  users(req, res, next) {
    const limit = req.query.limit || 20;
    const order = req.query.order || 'created';

    return this.userRepository.find({limit, order})
      .then(users => {
        res.json({data: users});
      }, next);
  }
}

Now, the route most often taken to test this part of the code is to reach for a library to make an HTTP request, make sure you have a test database set up, and create a few models so that you know what to look for in the controller’s response, all while making sure the database tables are cleared between tests, as you don’t want all those models from other tests showing up in the controller’s response…

A lot to think about, a lot of setup, quite slow because of all the moving components and really, looks more like integration testing (which you should still be doing here and there) than unit testing.

How about creating a fake request object, a fake response, and, just for this controller, a fake userRepository that will help us verify that, for a given input, the correct calls are made by the piece of code being tested?

// fake-request.js
class FakeRequest {
  constructor() {
    this.query = {};
    this.body = {};
    this.params = {};
  }
}

// fake-response.js
class FakeResponse {
  constructor() {
    this.statusCode = 200;
    this.jsonBody = null;
    this.endCalled = false;
  }

  json(value) {
    this.jsonBody = value;
  }

  status(code) {
    this.statusCode = code;
  }

  end() {
    this.endCalled = true;
  }

  // ...
}

Then with those two you can start writing a test specific to that controller’s users method:

// test/controllers/users.js
const assert = require('assert');
const FakeRequest = require('...');
const FakeResponse = require('...');
const UsersController = require('...');

describe('controllers:users', () => {
  let fakeUserRepository = {};
  let fakeRequest;
  let fakeResponse;
  let fakeNext;
  let usersController;

  beforeEach(() => {
    usersController = new UsersController(fakeUserRepository);
    fakeRequest = new FakeRequest();
    fakeResponse = new FakeResponse();

    fakeUserRepository.findOptions = null;
    fakeUserRepository.find = (options) => {
      fakeUserRepository.findOptions = options;
      return Promise.resolve([]);
    };

    fakeNext = (error) => {
      fakeNext.givenError = error;
    };
  });

  describe('users()', () => {
    it('calls userRepository defaulting to a limit of 20', () => {
      return usersController.users(fakeRequest, fakeResponse)
        .then(() => {
          assert.equal(fakeUserRepository.findOptions.limit, 20);
        });
    });

    it('calls userRepository defaulting to ordering by creation date', () => {
      return usersController.users(fakeRequest, fakeResponse)
        .then(() => {
          assert.equal(fakeUserRepository.findOptions.order, 'created');
        });
    });

    it('calls userRepository respecting query parameters', () => {
      fakeRequest.query.limit = 5;
      fakeRequest.query.order = 'name';

      return usersController.users(fakeRequest, fakeResponse)
        .then(() => {
          assert.equal(fakeUserRepository.findOptions.limit, 5);
          assert.equal(fakeUserRepository.findOptions.order, 'name');
        });
    });

    it('calls next on userRepository error', () => {
      fakeUserRepository.find = () => Promise.reject('repo error');

      return usersController.users(fakeRequest, fakeResponse, fakeNext)
        .then(() => {
          assert.equal(fakeNext.givenError, 'repo error');
        });
    });

    it('sends the right response', () => {
      const result = [{id: 89, name: 'Jack'}, {id: 41, name: 'Dooey'}];
      fakeUserRepository.find = () => Promise.resolve(result);

      return usersController.users(fakeRequest, fakeResponse)
        .then(() => {
          assert.equal(fakeResponse.statusCode, 200);
          assert.deepEqual(fakeResponse.jsonBody, {data: result});
        });
    });
  });
});
Sorry for the long file, but that’s all there is to it; running that test file takes no more than 2-3 milliseconds. A full test suite for a bigger project might be more like 20 seconds, a far cry from 20 minutes.

I know it isn’t perfect; a lot happens in between components and you really can’t always write tests verifying exactly the right outputs. But that’s why integration testing still has its place, to make sure everything integrates properly. Just don’t do it in place of actual unit tests, or as 90% of the test suite.

As always, take this with a grain of salt; your projects are surely different from mine. If you find this way of writing minimal unit tests promising, try it out and see if it sticks.

Bringing sanity to growing Node.js applications

It seems like most of the content written and blogged about Node.js, even now, 6 years in, takes a really basic approach to showing you how to build applications.

A lot of Node.js articles explain Express.js, the leading web framework. The problem is, this framework is comparable to Sinatra in Ruby, Flask in Python or Silex in PHP: good for small, few-page websites, it basically gives you routing and an interface to HTTP but not much more.

Now Ruby, Python and others have bigger frameworks that are well suited for larger projects where you benefit from more architecture, opinionated defaults and supporting modules (ORMs, utilities, rendering, mailing, background workers, asset pipelines). The story is a bit different in Node.js as it promotes small npm modules (i.e. gems, packages) that you put together yourself, which is a good thing (most experienced developers prefer libraries over frameworks), but there is little literature or example of how this can be done within the Node.js ecosystem.

So, to solve this, let’s try and define a few libraries or simple files that can help us out with our growing codebase.


The goal here is to have something easier to maintain than an app.js file plus routes/, models/ and views/ folders and that’s it. To achieve this we are going to go on a hunt and steal a few time-tested tricks from other ecosystems.

Dependency injection

Some people seem to dread this one, others love it. Having experienced it a lot in Laravel (a great framework for PHP) and in quite a few places in Java, I find dependency injection can help keep all our application parts and files decoupled, leading to way easier unit testing and modification of dependencies.

The concept is: all of your request handlers/controllers have dependencies, and your models/repositories/entities do too. You could go and hard code them by requiring the right file and using it, but if you let a dependency injection container do it for you, you can more easily swap a required component for a different implementation and, when testing, you can directly pass in stubs/mocks without any trickery or magic.

So, how would we go about implementing this?

The first step is to have a file that represents the global instance of the container. That is where all instances will be stored, and it’s the tool that resolves needed dependencies when you want to instantiate a controller.

It would look like this:

import Container from './lib/container';
export default new Container();

Then, in your app.js you can register libraries you want to make available to the following classes you’ll register/use.

import express from 'express';
import container from './container';

let app = express();
// ... middlewares, config ...

// Manually setting an instance
import EventEmitter from 'events';
container.set('events', new EventEmitter());

// Automatically resolving dependencies and setting an instance
container.load(require('./services/auth'));

// Using the container to resolve dependencies but
// giving back the instance instead of setting it.
let requireUser = container.get('auth').requireUserMiddleware;
let userController = container.create(require('./controllers/user'));
// Register the static route before the parameterized one.
app.get('/users/create', requireUser, userController.showCreateUser);
app.get('/users/:id', requireUser, userController.showUser);
app.post('/users', requireUser, userController.createUser);

// ... error handling ...


(When you grow to have many more routes, extracting those to their own routes.js is a good idea)

The final piece is the DI container itself. I tried making it as compact as possible.

import R from 'ramda';

export default class Container {
  constructor() {
    this.contents = {};
  }

  get(name) {
    if (!(name in this.contents)) {
      throw new Error('Container has nothing registered for key ' + name);
    }
    return this.contents[name];
  }

  set(name, instance) {
    this.contents[name] = instance;
  }

  create(klass) {
    if (!klass.dependencies || !('length' in klass.dependencies)) {
      throw new Error('Invariant: container can\'t resolve a class without dependencies');
    }

    var dependencies = R.map(function(dependencyName) {
      return this.get(dependencyName);
    }.bind(this), klass.dependencies);

    return applyToConstructor(klass, dependencies);
  }

  load(klass) {
    if (typeof klass.dependencyName !== 'string') {
      throw new Error('Invariant: container can\'t resolve a class without a name');
    }

    this.set(klass.dependencyName, this.create(klass));
  }

  unset(name) {
    delete this.contents[name];
  }

  reset() {
    this.contents = {};
  }
}

function applyToConstructor(constructor, args) {
  var newObj = Object.create(constructor.prototype);
  var constructorReturn = constructor.apply(newObj, args);

  // Some constructors return a value; let's make sure we use it!
  return constructorReturn !== undefined ? constructorReturn : newObj;
}
Repositories, Entities and Services instead of large Models

It’s been said in many blog posts and talks for a good while now that fat models are evil. The ActiveRecord pattern that’s so prevalent in Rails and many ORMs is easily replaced by separating concerns:

  • Data representation goes in models. Those are as dumb as possible, optimally immutable.
  • Fetching/saving/database interactions happen in repositories. Those take plain models, know how to persist them, and query datastores.
  • Business logic goes into services. Services are where most of the complexity resides; they are what controllers call with input, what validates business rules, and what calls repositories and external APIs.

To give concrete examples:

An entity is a simple POJO/PORO/POCO…

import R from 'ramda';

export default class InvoiceLine {
  constructor(params) {
    R.mapObjIndexed((v, k) => this[k] = v, R.merge(InvoiceLine.defaults, params));
  }

  taxAmount() {
    return this.price * this.taxes;
  }

  total() {
    return this.price + this.taxAmount();
  }
}
InvoiceLine.defaults = {price: 0, taxes: 0.15, created: Date.now()};

A repository will most likely take a database object in its constructor to be able to interact with the datastore. Repositories are singletons, loaded once when starting the app using the container’s load method.

import User from '../entities/user';
const TABLE_NAME = 'users';

export default class UserRepository {
  constructor(db) {
    this.db = db;
  }

  findByEmail(email) {
    return this.db.select('id, name, email, ...')
      .where('email = ?', email);
  }
}
UserRepository.dependencyName = 'repositories:user';
UserRepository.dependencies = ['db'];

A service is the simplest of the three in form but the one in which most complexity will hide. It simply has instance methods, plus dependencies listed so it can be registered in the container for controllers to depend on.

export default class BillingService {
  constructor(userRepository, stripeService, mailer) {
    this.userRepository = userRepository;
    this.stripeService = stripeService;
    this.mailer = mailer;
  }

  createNewAccount(name, email, password, stripeToken) {
    // validate
    // create user
    // create stripe customer
    // update db user
    // send welcome email
    // ...
  }

  // ...
}
BillingService.dependencyName = 'services:billing';
BillingService.dependencies = [
  'repositories:user', 'services:stripe', 'mailer'
];

Slimmer Controllers in favor of Services

Now that we have a dedicated place to put business logic, you should aim to slim down those controllers to their essential job: mapping requests and the HTTP protocol’s oddities to method calls/actions to be taken.

This simple action has the added benefit of decoupling you from the transport protocol, enabling reuse of all that business logic by other consumers: background workers, a websocket endpoint, a protobuf endpoint, even a separate codebase if you decide to extract the core of your app into a library when you grow bigger.
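As a sketch of what “slim” means here (hypothetical `SignupController`, assuming a BillingService-shaped dependency like the one above), the controller only translates HTTP in and the result back out:

```javascript
// A slim controller: translate HTTP in, delegate to the service, map the result out.
// SignupController is a hypothetical name; it assumes a billingService dependency
// exposing createNewAccount(), in the shape of the BillingService sketched earlier.
class SignupController {
  constructor(billingService) {
    this.billingService = billingService;
  }

  create(req, res, next) {
    // Pull HTTP specifics out of the request...
    const {name, email, password, stripeToken} = req.body;
    // ...and hand plain values to the service; the controller knows no business rules.
    return this.billingService
      .createNewAccount(name, email, password, stripeToken)
      .then((user) => res.status(201).json({data: user}), next);
  }
}
SignupController.dependencies = ['services:billing'];
```

Because the controller only forwards plain values, the same `createNewAccount` call can be made from a background worker or any other consumer without touching HTTP at all.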


As your project grows and your entities become more complex, you may come to a point where you find yourself spending a lot of lines initializing entities in your services; it’s a good idea to extract those to factories. Those objects give you a clean way to encapsulate complex entity construction with many branches.
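A hedged sketch of the idea (hypothetical `InvoiceFactory`, using plain objects rather than the entity classes above): the branchy construction logic lives in one place instead of cluttering a service.

```javascript
// Hypothetical factory: encapsulates branchy entity construction so services
// can ask for "an invoice from this cart" without knowing the rules.
class InvoiceFactory {
  createFromCart(cart) {
    const lines = cart.items.map((item) => ({
      description: item.name,
      price: item.price,
      // the branching that would otherwise leak into a service lives here
      taxes: item.taxExempt ? 0 : 0.15,
    }));
    const total = lines.reduce((sum, l) => sum + l.price * (1 + l.taxes), 0);
    return {lines, total};
  }
}
```

A service then just calls `factory.createFromCart(cart)`, and the construction rules stay testable in isolation.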

The lib folder still exists

Not everything fits into the concepts we just went over; a few middlewares, really simple libs or wrappers definitely still have their place in your lib folder. Just try to keep it lean and mean; most of your code is supposed to live elsewhere.


I hope this post gave you ideas on how to reduce the size and complexity of your route files/folders. Code organization (/architecture) starts simple in a new project but needs to grow along with your project’s maturity, or your productivity will suffer quite a bit.

I would love to know how you deal with growing codebases too! DM on Twitter or send me an email.

Welcome to Hugo!

It’s been a little while that I’ve wanted to switch to a new blogging platform. My old solution was a tool I built myself (often unwise when a lot already exists) and was pretty basic. It got the job done, but didn’t really have space for growth.

I am now joining the ranks of people using the static website generator called Hugo. Its advantages range from blazing fast compilation, to handling more than simply blogging, to being simple in design but infinite in possibilities with concepts like content types (not just plain blog posts) and taxonomies. Making themes for it was not an afterthought either, and theming is really well designed.

This was an occasion to try something new design wise, I wanted something more simple, a different layout, and, mainly, simple CSS and a serif font.

How hard is it?

Well, Hugo is a bit different in the way it approaches content but still easy to understand if you come from other static website generators like Jekyll or Metalsmith.

For a new site, simply invoke (from the command line):

hugo new site <FOLDER>

When I wish to create a new post I type:

hugo new posts/2015-09-18-welcome-to-hugo.md

It will then base itself on the template at archetypes/default.md and fill in the title and current date for me.

Next, firing up a development server is as simple as:

hugo server --buildDrafts -w

The --buildDrafts flag ensures you preview the posts that have draft = true in their metadata, allowing you to work on posts, put them on hold, and still keep publishing other posts.

When ready to publish something you simply invoke:

hugo

This will generate all the necessary HTML & static assets in the public directory that you can then upload or commit to GitHub Pages.

Hope this makes it less scary to get into! Hugo has good documentation and really is a seriously good option in the market of static site generators used for blogs.

4 reasons for slowing down on the grind and enjoying life

Every now and then you’ll have moments in your life where it feels way more difficult to get focused and work. You try to force it but still end up procrastinating. It’s almost impossible to get in the flow or get more than an hour or two of straight work.

I think possible reasons for that are, in my case at least: a new environment to adapt to, lack of routine, a bad diet, being in between two projects, no really motivating/interesting project; you could surely uncover more…

Here’s the thing: you shouldn’t fight that situation; make the best of it instead. Not fighting it will even help you get your productivity back faster, in my experience.

  1. There is no time like now. You won’t get to relive that week, month, year or decade, so keep that in mind and enjoy yourself. Worrying about the future or regretting the past won’t lead to a great future nor a nice past to look back on. Plus, traveling and many other activities are really not experienced the same at 25 as at 45 or 65, so don’t always push back; just do it.
  2. Spend time working on other spheres of your life. Work is not everything; there is more to it, and saying you are working “for your family” doesn’t mean you can omit spending real time with them NOW. Those personal relationships with your loved ones, kids and best friends live on the most important resource you have: time. Without it, they die, and I am telling you, no work, however fulfilling, is worth losing a dear one. And when I say “other spheres” that includes taking care of your health, making time for a good diet, and having hobbies.
  3. Balance will make you perform better in everything. Countless studies have shown that exercising, working normal days (not more than 10ish hours), having diversified activities and eating well makes your time working way more productive than otherwise. I see no disadvantage here: you live a more healthy and fulfilling life while doing better work.
  4. Slowing down is normal, and you will spring back soon enough. The important thing here is not to fight those less productive moments. Sometimes it’s because something else in your life needs fixing or simply more attention. Sometimes it’s because work is not that interesting. Let it flow normally and I guarantee this won’t be permanent; you will spring back into hyper-focused and productive mode sooner than you think.