GraphQL i18n

One of the things I love about GraphQL is that it is “self-documenting”, but of course, here in Canada the obvious question that follows is “in both languages?”. Since GraphQL is one of my core technologies, I really wanted to figure out a decent answer for when people ask about i18n, but it’s never been clear to me how to handle it.

Since I’m familiar with GraphQL-js, I’ll be using that here but Apollo Server is on my list to explore and may have some different answers. I’ll also be using my favourite i18n library Lingui here, but any other i18n library could be easily substituted.

Just to be clear, the issue at hand is not i18n for the data (that you’d handle with something like ArangoDB’s TRANSLATE); it’s i18n for the description attributes that can be attached to all your schema objects.
Once description strings are added, anyone (and “anyone” could and probably will include developer tools, IDEs or developers themselves) can introspect on the schema by querying the __schema or __type fields to find the descriptions with a query like this:

{
  __schema {
    queryType {
      fields {
        name
        description
        args {
          name
          description
        }
      }
    }
  }
}

To help think this through, let’s create a basic schema that just returns the current time via a DateTime type, and use it to explore i18n.

const DateTime = new GraphQLObjectType({
  name: 'DateTime',
  description: 'An example date/time object.',
  fields: () => ({
    date: {
      description: 'The current date in DD/MM/YYYY format.',
      type: GraphQLString,
    },
    time: {
      description: 'The current time in HH:MM:SS AM/PM format.',
      type: GraphQLString,
    },
  }),
})

const query = new GraphQLObjectType({
  name: 'Query',
  fields: {
    now: {
      description: 'Returns current time and date values.',
      type: DateTime, // what the resolve function will produce
      resolve: (root, args, context) => {
        let now = new Date()
        let date = now.toLocaleDateString(context.language, {
          timeZone: 'America/Toronto',
        })
        let time = now.toLocaleTimeString(context.language, {
          timeZone: 'America/Toronto',
        })
        return { date, time }
      },
    },
  },
})

With our query type and its return type created, we just need to wrap that in a schema and pass it to the express-graphql middleware. It will mount the schema on the URL we specify and pass the request object to all our resolvers as the third argument (aka “the context”).

Adding the requestLanguage middleware from the express-request-language library before express-graphql means that incoming requests will be checked for the Accept-Language header, and the best match among the languages you specify will be attached to the request as request.language. Remember that express-graphql passes the request to our resolvers as the context, so we can access request.language as context.language.

const schema = new GraphQLSchema({ query })

let server = express()

server
  .use(
    requestLanguage({
      languages: ['en', 'fr'], // First locale becomes the default
    }),
  )
  .use('/graphql', graphqlHTTP({ schema }))

server.listen(3000)

With this basic setup, we can already see the outline of the issue: our schema and types with their accompanying descriptions are defined once when the script is run, but the language we want is whatever is appropriate for each request.

It probably won’t surprise you that the express-graphql middleware can be passed a function that it will execute per request to produce the configuration it needs. With that in place we have a pretty clear path forward.

server
  .use(
    requestLanguage({
      languages: ['en', 'fr'], // First locale becomes the default
    }),
  )
  .use(
    '/graphql',
    graphqlHTTP((request, response, graphQLParams) => {
      return {
        schema: new GraphQLSchema({
          query: // define a schema and types using the request language and pass it in
        }),
      }
    }),
  )

My first step was to define a function that receives an i18n object and uses it to build the query type for our schema.

const createSchema = i18n => {
  // Define a type that describes the data
  const DateTime = new GraphQLObjectType({
    name: 'DateTime',
    description: i18n.t`An example date/time object.`,
    fields: () => ({
      date: {
        description: i18n.t`The current date in DD/MM/YYYY format.`,
        type: GraphQLString,
      },
      time: {
        description: i18n.t`The current time in HH:MM:SS AM/PM format.`,
        type: GraphQLString,
      },
    }),
  })

  const query = new GraphQLObjectType({
    name: 'Query',
    fields: {
      now: {
        description: i18n.t`Returns current time and date values.`,
        type: DateTime, // what the resolve function will produce
        resolve: (root, args, context) => {
          let now = new Date()
          let date = now.toLocaleDateString(context.language, {
            timeZone: 'America/Toronto',
          })
          let time = now.toLocaleTimeString(context.language, {
            timeZone: 'America/Toronto',
          })
          return { date, time }
        },
      },
    },
  })

  return query
}

With that defined I can use it to produce a schema, but because Lingui works by scanning for and extracting things like i18n.t`...` from my code, I have to remember not to rename that, otherwise lingui extract won’t find my translations. Additionally, I don’t want variable shadowing, so I import i18n under a different name and rename it to what Lingui expects only when I go to create the schema:

const express = require('express')
const graphqlHTTP = require('express-graphql')
const { GraphQLSchema, GraphQLObjectType, GraphQLString } = require('graphql')
const { i18n: internationalization, unpackCatalog } = require('lingui-i18n') // import i18n as something else
const requestLanguage = require('express-request-language')

internationalization.load({  // Load our language files
  fr: unpackCatalog(require('./locale/fr/messages.js')),
  en: unpackCatalog(require('./locale/en/messages.js')),
})

// Our function that creates the schema
const createSchema = i18n => {
...
}

let server = express()

server
  .use(
    requestLanguage({
      languages: internationalization.availableLanguages.sort(), // First locale becomes the default
    }),
  )
  .use(
    '/graphql',
    graphqlHTTP(async (request, response, graphQLParams) => {
      internationalization.activate(request.language)
      return {
        schema: new GraphQLSchema({
          query: createSchema(internationalization),
        }),
      }
    }),
  )

server.listen(3000)

Getting Lingui properly integrated took a little fiddling. It brings along its own copy of Babel which doesn’t seem to see your other Babel plugins, but does read your .babelrc.
Configuring my project’s Babel using only command line options solved the clashes with Lingui’s Babel, and making sure Lingui would only look for translations in the src folder was all I needed to get Lingui working alongside the usual “transpile to dist” workflow (the finished code is available here; a rough sketch of that configuration follows the example output below).
After running lingui extract and doing my translations, I’m now able to hit my endpoint and see the translated descriptions:

# Ask for English!
mike@sleepycat:~$ curl -H "Accept-Language: en" -H "Content-Type: application/graphql" -d "{ __schema {queryType { fields { name description } } } }" "localhost:3000/graphql"
{"data":{"__schema":{"queryType":{"fields":[{"name":"now","description":"Returns current time and date values."}]}}}}
# Ask for French!
mike@sleepycat:~$ curl -H "Accept-Language: fr" -H "Content-Type: application/graphql" -d "{ __schema {queryType { fields { name description } } } }" "localhost:3000/graphql"
{"data":{"__schema":{"queryType":{"fields":[{"name":"now","description":"Renvoie les valeurs actuelles de date et d'heure."}]}}}}

Performance

Obviously defining the schema and types before processing each request is going to cost something, but it would be good to know how much. That’s a question some load testing with wrk2 can give us insight into (with the caveat that both the server and the load testing program were running on the same laptop, so take this with a large grain of salt).

First, a version without the schema per request:

mike@sleepycat:~$ wrk2 -t4 -c100 -d30s -R2000 --latency "http://localhost:3000/graphql?query=%7B%0A%20%20now%20%7B%0A%20%20%20%20date%0A%20%20%20%20time%0A%20%20%7D%0A%7D"
Running 30s test @ http://localhost:3000/graphql?query=%7B%0A%20%20now%20%7B%0A%20%20%20%20date%0A%20%20%20%20time%0A%20%20%7D%0A%7D
  4 threads and 100 connections
  Thread calibration: mean lat.: 3309.908ms, rate sampling interval: 11272ms
  Thread calibration: mean lat.: 3310.066ms, rate sampling interval: 11280ms
  Thread calibration: mean lat.: 3308.845ms, rate sampling interval: 11280ms
  Thread calibration: mean lat.: 3310.191ms, rate sampling interval: 11280ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    11.83s     3.26s   17.47s    57.94%
    Req/Sec   214.00      0.00   214.00    100.00%

And now the one with our i18n schema per request:

mike@sleepycat:~$ wrk2 -t4 -c100 -d30s -R2000 --latency "http://localhost:3000/graphql?query=%7B%0A%20%20now%20%7B%0A%20%20%20%20date%0A%20%20%20%20time%0A%20%20%7D%0A%7D"
Running 30s test @ http://localhost:3000/graphql?query=%7B%0A%20%20now%20%7B%0A%20%20%20%20date%0A%20%20%20%20time%0A%20%20%7D%0A%7D
  4 threads and 100 connections
  Thread calibration: mean lat.: 3196.913ms, rate sampling interval: 11296ms
  Thread calibration: mean lat.: 3196.537ms, rate sampling interval: 11288ms
  Thread calibration: mean lat.: 3115.362ms, rate sampling interval: 10493ms
  Thread calibration: mean lat.: 3196.062ms, rate sampling interval: 11288ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    11.91s     3.35s   17.81s    57.65%
    Req/Sec   207.00      0.00   207.00    100.00%

So generating our schema/types per request drops us from 214 requests per second down to 207. Clearly it’s not free, but in this little example it’s pretty reasonable and in this world of microservices, there are a fair number of services that aren’t much bigger than this example. That said, a ~3% drop for something so simple is probably something you would want to watch carefully. A larger schema with more imports might well be far more costly. Clearly this little performance test is far from rigorous, but it’s nice to have some vague sense of the impact.
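
One refinement that seems worth sketching (an assumption, not something I benchmarked here): since the list of languages is known up front, we could build one schema per language at startup and just pick the right one per request. That only works if the descriptions get baked in when the schema is constructed, which is worth verifying, but the shape of it would be something like this:

// Build a schema per language once, at startup...
const schemas = {}
for (const language of internationalization.availableLanguages) {
  internationalization.activate(language)
  schemas[language] = new GraphQLSchema({
    query: createSchema(internationalization),
  })
}

// ...then just look the right one up per request.
server.use(
  '/graphql',
  graphqlHTTP((request, response, graphQLParams) => {
    return { schema: schemas[request.language] }
  }),
)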

i18n and GraphQL

In the end this blog post is as much a question as it is an answer. I’m pretty certain there are further refinements to be made here, and that there are ways to avoid some or maybe all of the performance penalty highlighted above.
I’m also pretty curious about what it would look like to get a similar thing happening with Apollo Server.
Hopefully this will help other people who are trying to do i18n with GraphQL and maybe surface better options.


Javascript i18n with Lingui

Living in a country with two official languages means that you don’t get far into a project before the question of internationalization (aka i18n to anyone who has to type it more than a few times) comes up.

There are a few options for dealing with this in Javascript, but it’s taken a while to find one I like. First, I expect to use the same library on the server and on the client, and I expect to be able to use it with libraries like React.

React-Intl works OK on the client side, but using the underlying Intl on the server looks under-documented and deeply clunky. I18next is reasonable on the server and has integrations with most client side frameworks. While it’s a decent choice, there is something about the way it works which rubs me the wrong way.

i18next.init({
  lng: 'en',
  fallbackLng: 'en',
  resources: {
    en: {
      translation: {
        person: {
          firstName: "First name",
          lastName: "Last name",
        }
      },
    },
    fr: {
      translation: {
        person: {
          firstName: "Prénom",
          lastName: "Nom de famille",
        }
      },
    },
  }
})

The above code shows how to set it up. It’s a pretty standard arrangement (very familiar if you’ve ever done i18n in Rails): a singleton object (you can make others if you need to) with an internal collection of messages stored as a JSON object.

One of the things that I dislike about this approach is that the translations stored in that JSON object tend to accumulate and hang around long after the code that needs that message is gone.

The other thing I find doesn’t sit well with me is the way you access those messages: i18n.t('mutation.fields.purchase.args.expiryYear').

What you are looking at is a function call that assumes the existence of an object like {translation:{mutation:{fields:{purchase:{args:{expiryYear: "Expiry year"}}}}}}. This is an example of structural coupling: my code depends on the structure of that object. This sort of thing is normally considered an anti-pattern, a violation of the “law of Demeter”, but it’s pretty common among i18n libraries. I have to decide on the structure to start with, and after that, changing it (say, if I decide I didn’t make the right call about how to structure it originally) is going to break a lot of things.

Poking around I stumbled on a library that takes a different approach: Lingui.

Lingui is interesting because it builds a nice translation workflow by leveraging the now ubiquitous infrastructure of Babel.

Aside from the core code in lingui-i18n (and other packages dealing with React) the heart of lingui’s approach are two babel plugins: babel-plugin-lingui-extract-messages and babel-plugin-lingui-transform-js.

We can install what we need for the server side like this.

yarn add lingui-cli lingui-i18n

The babel-plugin-lingui-extract-messages plugin does what is advertised on the tin. First we need a little test code to extract.

const { i18n } = require('lingui-i18n')

i18n.t`I do like a bit of gorgonzola.`
i18n.t`Not even Wensleydale?`

Then we need to create some locales using the helper supplied by lingui-cli:

[mike@longshot lingui]$ lingui add-locale en fr
Added locale en.
Added locale fr.

(use "lingui extract" to extract messages)
[mike@longshot lingui]$ tree
.
├── index.js
├── locale
│   ├── en
│   │   └── messages.json
│   └── fr
│       └── messages.json
├── package.json
└── yarn.lock

Next we use babel-plugin-lingui-extract-messages via the Lingui CLI command lingui extract to scan our code for those internationalized strings and extract them into translation files.

[mike@longshot lingui]$ lingui extract
Extracting messages from source files…
Collecting all messages…
Writing message catalogues…
Messages extracted!

Catalog statistics:
┌──────────┬─────────────┬─────────┐
│ Language │ Total count │ Missing │
├──────────┼─────────────┼─────────┤
│ en       │      2      │    2    │
│ fr       │      2      │    2    │
└──────────┴─────────────┴─────────┘

(use "lingui add-locale <language>" to add more locales)
(use "lingui extract" to update catalogs with new messages)
(use "lingui compile" to compile catalogs for production)

Lingui prints out a nice summary of the state of my translations.
A look at the translation files shows how Lingui can solve that coupling problem: it generates an object with the content of the strings used as the keys. This way my translations are looked up by content, rather than by their location in some structure. Since Lingui defaults to showing the message id (which is actually the English content string from the source), we’ll just edit the French messages file.

[mike@longshot lingui]$ cat locale/fr/messages.json 
{
  "I do like a bit of gorgonzola.": {
    "translation": "Je aime un peu de gorgonzola.",
    "origin": [
      [
        "index.js",
        3
      ]
    ]
  },
  "Not even Wensleydale?": {
    "translation": "Pas même Wensleydale?",
    "origin": [
      [
        "index.js",
        4
      ]
    ]
  }
}

With that done, we compile the JSON into JS files for use at runtime with lingui compile. The missing piece now is how those i18n.t tagged template literals are going to produce our translated strings at runtime, and the answer is babel-plugin-lingui-transform-js.

Since a picture is worth a thousand words, I think the best way to explain it is this:

[mike@longshot lingui]$ cat index.js | babel --plugins lingui-transform-js
const { i18n } = require('lingui-i18n');

i18n._('I do like a bit of gorgonzola.');
i18n._('Not even Wensleydale?');

As you can see, all the calls to i18n.t`` are replaced with calls to i18n._(). This underscore function is the low-level API that Lingui uses to actually give you the translated strings.

Now that we know that, let’s take a look at what using the library looks like.

[mike@longshot lingui]$ node
> var { i18n, unpackCatalog } = require('lingui-i18n')
undefined
> i18n.load({fr: unpackCatalog(require('./locale/fr/messages.js')), en: unpackCatalog(require('./locale/en/messages.js'))})
undefined
> i18n.availableLanguages
[ 'fr', 'en' ]
> i18n.activate('en')
undefined
> i18n._('I do like a bit of gorgonzola.')
'I do like a bit of gorgonzola.'
> i18n.activate('fr')
undefined
> i18n._('I do like a bit of gorgonzola.')
'Je aime un peu de gorgonzola.'

Lingui has some more tricks up its sleeve, like pluralization, but one of the things I’m happiest about is that this approach also solves that “unused messages” problem that I mentioned.

If we delete our “Not even Wensleydale?” message and run lingui extract again we can see the benefits of this static analysis style approach: Lingui knows when there is nothing referencing a message, and marks it as obsolete.

[mike@longshot lingui]$ cat locale/fr/messages.json 
{
  "I do like a bit of gorgonzola.": {
    "translation": "Je aime un peu de gorgonzola.",
    "origin": [
      [
        "index.js",
        3
      ]
    ]
  },
  "Not even Wensleydale?": {
    "translation": "Pas même Wensleydale?",
    "origin": [
      [
        "index.js",
        4
      ]
    ],
    "obsolete": true
  }
}

Better still, Lingui will clean out the obsolete messages for you with lingui extract --clean.

[mike@longshot lingui]$ lingui extract --clean
Extracting messages from source files…
Collecting all messages…
Writing message catalogues…
Messages extracted!

Catalog statistics:
┌──────────┬─────────────┬─────────┐
│ Language │ Total count │ Missing │
├──────────┼─────────────┼─────────┤
│ en       │      1      │    1    │
│ fr       │      1      │    0    │
└──────────┴─────────────┴─────────┘

(use "lingui add-locale <language>" to add more locales)
(use "lingui extract" to update catalogs with new messages)
(use "lingui compile" to compile catalogs for production)
[mike@longshot lingui]$ cat locale/fr/messages.json 
{
  "I do like a bit of gorgonzola.": {
    "translation": "Je aime un peu de gorgonzola.",
    "origin": [
      [
        "index.js",
        3
      ]
    ]
  }
}

For me this is pretty much the holy grail for i18n. Here I’ve focused on using Lingui without any other libraries, but it’s just as awesome with React. With locale files that can plausibly be handed over to a translator, and tooling that both finds and removes translations, Lingui has become my go-to i18n library.
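
To give a flavour of the React side (which I’m not covering here), the same content-as-key idea carries over. This is only a sketch, assuming lingui-react’s Trans component and leaving out the I18nProvider wiring:

import React from 'react'
import { Trans } from 'lingui-react'

// The string content is the message id, just like with i18n.t`...` above.
const Cheese = () => (
  <p>
    <Trans>I do like a bit of gorgonzola.</Trans>
  </p>
)

export default Cheese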

Setting up Oracle-XE on Arch Linux

Currently I’m working on a project that needs to pull data from an Oracle database. My normal development setup is to install the database locally and develop the application TDD-style with a test database, so it seemed reasonable to do the same with Oracle as well. That said, the fact that this became fodder for a blog post suggests it wasn’t as easy as I expected.

First up was the basic decision about what to install. The Oracle database itself is a multi-gigabyte monster, seemingly designed to sell support contracts, so I was glad to discover an Express Edition exists. Last released in 2014 and aimed at whatever “easy development” means in the world of Oracle, apparently “Applications developed with XE may be immediately used with other editions of the Oracle Database”. This sounds like the right thing to me.

So “easy development” obviously begins with logging into your Oracle account and downloading the Oracle XE zip file.

Next we want to package this up so that it can be cleanly installed and removed from our system. The Oracle-XE package on the AUR uses the zip file we just downloaded to build a package we can install, so let’s get that happening.

[mike@longshot ~]$ git clone https://aur.archlinux.org/oracle-xe.git
Cloning into 'oracle-xe'...
remote: Counting objects: 22, done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 22 (delta 8), reused 22 (delta 8)
Unpacking objects: 100% (22/22), done.
[mike@longshot ~]$ cd oracle-xe                           
[mike@longshot oracle-xe]$ cp ~/Downloads/oracle-xe-11.2.0-1.0.x86_64.rpm.zip .
[mike@longshot oracle-xe]$ ls
listener.ora  oracle_env.csh  oracle_env.sh  oracle.install  oracle-xe  oracle-xe-11.2.0-1.0.x86_64.rpm.zip  oracle-xe.conf  oracle-xe.service  PKGBUILD
[mike@longshot oracle-xe]$ makepkg
==> Making package: oracle-xe 11.2.0_1.0-4 (Tue Oct 17 14:51:00 EDT 2017)
==> Checking runtime dependencies...
==> Checking buildtime dependencies...
==> Retrieving sources...
  -> Found oracle-xe-11.2.0-1.0.x86_64.rpm.zip
  -> Found oracle_env.csh
  -> Found oracle_env.sh
  -> Found oracle-xe
  -> Found oracle-xe.conf
  -> Found listener.ora
  -> Found oracle-xe.service
==> Validating source files with md5sums...
    oracle-xe-11.2.0-1.0.x86_64.rpm.zip ... Passed
    oracle_env.csh ... Passed
    oracle_env.sh ... Passed
    oracle-xe ... Passed
    oracle-xe.conf ... Passed
    listener.ora ... Passed
    oracle-xe.service ... Passed
==> Extracting sources...
  -> Extracting oracle-xe-11.2.0-1.0.x86_64.rpm.zip with bsdtar
==> Starting build()...
==> Entering fakeroot environment...
==> Starting package()...
==> Tidying install...
  -> Removing libtool files...
  -> Purging unwanted files...
  -> Removing static library files...
  -> Compressing man and info pages...
==> Checking for packaging issue...
==> Creating package "oracle-xe"...
  -> Generating .PKGINFO file...
  -> Generating .BUILDINFO file...
  -> Adding install file...
  -> Generating .MTREE file...
  -> Compressing package...
==> Leaving fakeroot environment.
==> Finished making: oracle-xe 11.2.0_1.0-4 (Tue Oct 17 14:54:35 EDT 2017)
[mike@longshot oracle-xe]$ ls
listener.ora  oracle_env.csh  oracle_env.sh  oracle.install  oracle-xe  oracle-xe-11.2.0_1.0-4-x86_64.pkg.tar.xz  oracle-xe-11.2.0-1.0.x86_64.rpm.zip  oracle-xe.conf  oracle-xe.service  pkg  PKGBUILD  src
[mike@longshot oracle-xe]$ sudo pacman -U oracle-xe-11.2.0_1.0-4-x86_64.pkg.tar.xz 
[sudo] password for mike: 
loading packages...
resolving dependencies...
looking for conflicting packages...

Packages (1) oracle-xe-11.2.0_1.0-4

Total Installed Size:  564.61 MiB

:: Proceed with installation? [Y/n] y
(1/1) checking keys in keyring                                                                                                 [#############################################################################] 100%
(1/1) checking package integrity                                                                                               [#############################################################################] 100%
(1/1) loading package files                                                                                                    [#############################################################################] 100%
(1/1) checking for file conflicts                                                                                              [#############################################################################] 100%
(1/1) checking available disk space                                                                                            [#############################################################################] 100%
:: Processing package changes...
(1/1) installing oracle-xe                                                                                                     [#############################################################################] 100%

creating group "dba" ...done

creating user "oracle" ...done

change directory rights ...done

set sticky bit to oracle executable ...done

creating /etc/sysconfig ...done

creating /var/log/oracle ...done


add your user to the "dba" group in order to use the oracle tools

:: Running post-transaction hooks...
(1/2) Arming ConditionNeedsUpdate...
(2/2) Updating the desktop file MIME type cache...
[mike@longshot oracle-xe]$ sudo usermod -aG dba $USER

Above we built and installed the Oracle-XE package, and added the dba group to the current user’s existing groups.

To get a sense of what we just installed it’s good to look at what that package put into the /etc directory.

[mike@longshot node_oracle]$ pacman -Ql oracle-xe | grep "etc"
oracle-xe /etc/
oracle-xe /etc/ld.so.conf.d/
oracle-xe /etc/ld.so.conf.d/oracle-xe.conf
oracle-xe /etc/profile.d/
oracle-xe /etc/profile.d/oracle_env.csh
oracle-xe /etc/profile.d/oracle_env.sh
oracle-xe /etc/rc.d/
oracle-xe /etc/rc.d/oracle-xe
oracle-xe /etc/systemd/
oracle-xe /etc/systemd/system/
oracle-xe /etc/systemd/system/oracle-xe.service
[mike@longshot node_oracle]$ cat /etc/ld.so.conf.d/oracle-xe.conf
/usr/lib/oracle/product/11.2.0/xe/lib
[mike@longshot node_oracle]$ cat /etc/profile.d/oracle_env.sh 
export ORACLE_HOME=/usr/lib/oracle/product/11.2.0/xe
export ORACLE_SID=XE
export NLS_LANG=`$ORACLE_HOME/bin/nls_lang.sh`
export PATH=$PATH:$ORACLE_HOME/bin

Here we can see that this package installed an entry in our library search path (/etc/ld.so.conf.d/oracle-xe.conf), added some env vars for us (/etc/profile.d/oracle_env.sh), added a run script (/etc/rc.d/oracle-xe) and a systemd service (/etc/systemd/system/oracle-xe.service).

In theory we should be able to install our node driver and have it work.

Installing node-oracledb

Oracle has thoughtfully released a Node.js driver, which can be installed with npm install oracledb. This driver installs and compiles a bunch of stuff with node-gyp and expects some libraries and headers to be available for that process. Let’s see!

[mike@longshot node_oracle]$ npm i oracledb

> oracledb@1.13.1 install /home/mike/projects/play/node_oracle/node_modules/oracledb
> node-gyp rebuild

node-oracledb ERR! Error: Cannot find $OCI_LIB_DIR/libclntsh.so
node-oracledb ERR! Error: See https://github.com/oracle/node-oracledb/blob/master/INSTALL.md

gyp: Call to 'INSTURL="https://github.com/oracle/node-oracledb/blob/master/INSTALL.md"; ERR="node-oracledb ERR! Error:"; if [ -z $OCI_LIB_DIR ]; then OCI_LIB_DIR=`ls -d /usr/lib/oracle/*/client*/lib/libclntsh.* 2> /dev/null | tail -1 | sed -e 's#/libclntsh[^/]*##'`; if [ -z $OCI_LIB_DIR ]; then if [ -z "$ORACLE_HOME" ]; then if [ -f /opt/oracle/instantclient/libclntsh.so ]; then echo "/opt/oracle/instantclient/"; else echo "$ERR Cannot find Oracle library libclntsh.so" >&2; echo "$ERR See $INSTURL" >&2; echo "" >&2; fi; else if [ -f "$ORACLE_HOME/lib/libclntsh.so" ]; then echo $ORACLE_HOME/lib; else echo "$ERR Cannot find \$ORACLE_HOME/lib/libclntsh.so" >&2; echo "$ERR See $INSTURL" >&2; echo "" >&2; fi; fi; else if [ -f "$OCI_LIB_DIR/libclntsh.so" ]; then echo $OCI_LIB_DIR; else echo "$ERR Cannot find \$OCI_LIB_DIR/libclntsh.so" >&2; echo "$ERR See $INSTURL" >&2; echo "" >&2; fi; fi; else if [ -f "$OCI_LIB_DIR/libclntsh.so" ]; then echo $OCI_LIB_DIR; else echo "$ERR Cannot find \$OCI_LIB_DIR/libclntsh.so" >&2; echo "$ERR See $INSTURL" >&2; echo "" >&2; fi; fi;' returned exit status 0 while in binding.gyp. while trying to load binding.gyp
gyp ERR! configure error 
gyp ERR! stack Error: `gyp` failed with exit code: 1
gyp ERR! stack     at ChildProcess.onCpExit (/home/mike/.nodenv/versions/8.7.0/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:336:16)
gyp ERR! stack     at emitTwo (events.js:125:13)
gyp ERR! stack     at ChildProcess.emit (events.js:213:7)
gyp ERR! stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:200:12)
gyp ERR! System Linux 4.13.8-1-hardened
gyp ERR! command "/home/mike/.nodenv/versions/8.7.0/bin/node" "/home/mike/.nodenv/versions/8.7.0/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /home/mike/projects/play/node_oracle/node_modules/oracledb
gyp ERR! node -v v8.7.0
gyp ERR! node-gyp -v v3.6.2
gyp ERR! not ok 
npm WARN node_oracle@1.0.0 No description
npm WARN node_oracle@1.0.0 No repository field.

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! oracledb@1.13.1 install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the oracledb@1.13.1 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/mike/.npm/_logs/2017-10-19T21_01_24_226Z-debug.log

Not being a C/C++ programmer, I find these moments pretty perplexing. It looks a lot like ORACLE_HOME=/usr/lib/oracle/product/11.2.0/xe OCI_LIB_DIR=$ORACLE_HOME/lib OCI_INC_DIR=$ORACLE_HOME/xdk/include npm i oracledb should work, but it doesn’t.

ldconfig -N -v | grep libclntsh.so prints out libclntsh.so.11.1 -> libclntsh.so.11.1 so the library seems to be findable, just not by the driver.

Plan B

It turns out that the headers and libraries we need are also available in Oracle’s instantclient. This would mean more downloading/packaging silliness, except someone has gone to the effort of packaging these instantclient libraries and providing them as a pacman repo. Since the world is a beautiful place and everyone is friends on the internet, I am going to pull my packages from them by adding these lines to my pacman.conf:

[mike@longshot node_oracle]$ tail -n 3 /etc/pacman.conf 
[oracle]
SigLevel = Optional TrustAll
Server = http://linux.shikadi.net/arch/$repo/$arch/

Then we update and install.

[mike@longshot node_oracle] sudo pacman -Sy
[mike@longshot node_oracle]$ sudo pacman -S oracle-instantclient-sdk oracle-instantclient-basic

Looking at the contents with pacman -Ql oracle-instantclient-sdk shows a bunch of files being put into /usr/include, while pacman -Ql oracle-instantclient-basic shows our much sought-after libclntsh.so going into /usr/lib. It looks like we finally have some plausible values for OCI_LIB_DIR and OCI_INC_DIR.

[mike@longshot node_oracle]$ OCI_LIB_DIR=/usr/lib OCI_INC_DIR=/usr/include npm i oracledb

> oracledb@1.13.1 install /home/mike/projects/play/node_oracle/node_modules/oracledb
> node-gyp rebuild

make: Entering directory '/home/mike/projects/play/node_oracle/node_modules/oracledb/build'
  CXX(target) Release/obj.target/oracledb/src/njs/src/njsOracle.o
  CXX(target) Release/obj.target/oracledb/src/njs/src/njsPool.o
  CXX(target) Release/obj.target/oracledb/src/njs/src/njsConnection.o
  CXX(target) Release/obj.target/oracledb/src/njs/src/njsResultSet.o
  CXX(target) Release/obj.target/oracledb/src/njs/src/njsMessages.o
  CXX(target) Release/obj.target/oracledb/src/njs/src/njsIntLob.o
  CXX(target) Release/obj.target/oracledb/src/dpi/src/dpiEnv.o
  CXX(target) Release/obj.target/oracledb/src/dpi/src/dpiEnvImpl.o
  CXX(target) Release/obj.target/oracledb/src/dpi/src/dpiException.o
  CXX(target) Release/obj.target/oracledb/src/dpi/src/dpiExceptionImpl.o
  CXX(target) Release/obj.target/oracledb/src/dpi/src/dpiConnImpl.o
  CXX(target) Release/obj.target/oracledb/src/dpi/src/dpiDateTimeArrayImpl.o
  CXX(target) Release/obj.target/oracledb/src/dpi/src/dpiPoolImpl.o
  CXX(target) Release/obj.target/oracledb/src/dpi/src/dpiStmtImpl.o
  CXX(target) Release/obj.target/oracledb/src/dpi/src/dpiUtils.o
  CXX(target) Release/obj.target/oracledb/src/dpi/src/dpiLob.o
  CXX(target) Release/obj.target/oracledb/src/dpi/src/dpiCommon.o
  SOLINK_MODULE(target) Release/obj.target/oracledb.node
  COPY Release/oracledb.node
make: Leaving directory '/home/mike/projects/play/node_oracle/node_modules/oracledb/build'
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN node_oracle@1.0.0 No description
npm WARN node_oracle@1.0.0 No repository field.

+ oracledb@1.13.1
added 2 packages in 11.894s

Talking to Oracle-XE from Node

After installing XE, instantclient-basic and the SDK, the full set of environment variables that made this thing work is:

export ORACLE_HOME=/usr/lib/oracle/product/11.2.0/xe
export ORACLE_SID=XE
export NLS_LANG=`$ORACLE_HOME/bin/nls_lang.sh`
export PATH=$PATH:$ORACLE_HOME/bin
export OCI_INC_DIR=/usr/include
export OCI_LIB_DIR=/usr/lib

Next up I want to configure XE (which seems to need those vars set). Below I’ll use sudo -E to ensure that all of those variables still exist when I run sudo:

[mike@longshot node_oracle]$ sudo -E /etc/rc.d/oracle-xe configure
[sudo] password for mike: 

Oracle Database 11g Express Edition Configuration
-------------------------------------------------
This will configure on-boot properties of Oracle Database 11g Express 
Edition.  The following questions will determine whether the database should 
be starting upon system boot, the ports it will use, and the passwords that 
will be used for database accounts.  Press <Enter> to accept the defaults. 
Ctrl-C will abort.

Specify the HTTP port that will be used for Oracle Application Express [8080]:

Specify a port that will be used for the database listener [1521]:

Specify a password to be used for database accounts.  Note that the same
password will be used for SYS and SYSTEM.  Oracle recommends the use of 
different passwords for each database account.  This can be done after 
initial configuration:
Confirm the password:

Do you want Oracle Database 11g Express Edition to be started on boot (y/n) [y]:y

Starting Oracle Net Listener...Done
Configuring database...Done
Starting Oracle Database 11g Express Edition instance...Done
Installation completed successfully.

In theory XE is configured and running (in the future you’ll probably want to start it with systemctl start oracle-xe), and the node-oracledb README suggests that we run one of the examples to test it. What it doesn’t mention is that the example relies on sample data in an “hr” account that needs to be unlocked first.

[mike@longshot oracle-xe]$ sqlplus /nolog

SQL*Plus: Release 11.2.0.2.0 Production on Fri Oct 20 14:20:30 2017

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

SQL> connect system/yourpassword as sysdba
Connected.
SQL> ALTER USER hr ACCOUNT UNLOCK;          
User altered.
SQL> ALTER USER hr IDENTIFIED BY password;         
User altered.

The example script reads its config from a file, so I created that using the terrible password I assigned to the hr account above:

[mike@longshot node_oracle]$ cat dbconfig.js 
module.exports = {
    user: "hr",
    password: "password",
    connectString: "localhost/XE",
};

So now we should be able to run the example:

[mike@longshot node_oracle]$ node select1.js 
[ { name: 'DEPARTMENT_ID' }, { name: 'DEPARTMENT_NAME' } ]
[ [ 180, 'Construction' ] ]
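
To get a feel for what a query of your own might look like (a sketch using the dbconfig.js above, rather than the driver’s bundled select1.js):

// query-example.js -- a minimal node-oracledb sketch
const oracledb = require('oracledb')
const dbConfig = require('./dbconfig.js')

oracledb.getConnection(dbConfig, (err, connection) => {
  if (err) throw err
  connection.execute(
    // bind variables (:id) keep values out of the SQL string itself
    'SELECT department_id, department_name FROM departments WHERE department_id = :id',
    [180],
    (err, result) => {
      if (err) throw err
      console.log(result.rows) // e.g. [ [ 180, 'Construction' ] ]
      connection.release(err => {
        if (err) console.error(err.message)
      })
    }
  )
})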

What’s next

The next logical step here is to start exploring the capabilities of the Node driver. There is also the Simple-oracledb package which is suddenly sounding very interesting to me.
Hopefully this will save someone else some time.

invalid value for parameter “TimeZone”

While working on standing up a Rails app I ran into a pretty weird error that really had me scratching my head.

[mike@longshot identity-idp]$ rake db:create
PG::InvalidParameterValue: ERROR:  invalid value for parameter "TimeZone": "UTC"
: SET time zone 'UTC'
Couldn't create database for {"pool"=>5, "timeout"=>5000, "host"=>"localhost", "adapter"=>"postgresql", "encoding"=>"utf8", "database"=>"upaya_development", "port"=>5432}
rake aborted!
ActiveRecord::StatementInvalid: PG::InvalidParameterValue: ERROR:  invalid value for parameter "TimeZone": "UTC"
: SET time zone 'UTC'

PG::InvalidParameterValue: ERROR:  invalid value for parameter "TimeZone": "UTC"

Tasks: TOP => db:create
(See full trace by running task with --trace)

The output of timedatectl status looked OK, but just to be sure, I updated the time settings to EDT. No difference. When I tried rake db:migrate I got a far more instructive error:

[mike@longshot identity-idp]$ rake db:migrate
rake aborted!
ArgumentError: Invalid Timezone: UTC
/home/mike/projects/identity-idp/config/environment.rb:5:in `<top (required)>'
TZInfo::InvalidTimezoneIdentifier: Expected 44 bytes reading '/usr/share/zoneinfo/UTC', but got 0 bytes
/home/mike/projects/identity-idp/config/environment.rb:5:in `<top (required)>'
TZInfo::InvalidZoneinfoFile: Expected 44 bytes reading '/usr/share/zoneinfo/UTC', but got 0 bytes
/home/mike/projects/identity-idp/config/environment.rb:5:in `<top (required)>'
Tasks: TOP => log => environment
(See full trace by running task with --trace)

/usr/share/zoneinfo/UTC is 0 bytes? A quick look shows it to be true, and the package that supplied this file is tzdata.

[mike@longshot identity-idp]$ cat /usr/share/zoneinfo/UTC 
[mike@longshot identity-idp]$ ls -l /usr/share/zoneinfo/UTC 
-rw-r--r-- 6 root root 0 Jul 26 18:01 /usr/share/zoneinfo/UTC
[mike@longshot identity-idp]$ pacman -Qo /usr/share/zoneinfo/UTC
/usr/share/zoneinfo/UTC is owned by tzdata 2017b-1

That doesn’t seem right, let’s reinstall…

[mike@longshot identity-idp]$ sudo pacman -S tzdata
warning: tzdata-2017b-1 is up to date -- reinstalling
resolving dependencies...
looking for conflicting packages...

Packages (1) tzdata-2017b-1

Total Installed Size:  1.81 MiB
Net Upgrade Size:      0.00 MiB

:: Proceed with installation? [Y/n] y
(1/1) checking keys in keyring                                               [###########################################] 100%
(1/1) checking package integrity                                             [###########################################] 100%
(1/1) loading package files                                                  [###########################################] 100%
(1/1) checking for file conflicts                                            [###########################################] 100%
(1/1) checking available disk space                                          [###########################################] 100%
:: Processing package changes...
(1/1) reinstalling tzdata                                                    [###########################################] 100%
:: Running post-transaction hooks...
(1/1) Arming ConditionNeedsUpdate...
[mike@longshot identity-idp]$ ls -l /usr/share/zoneinfo/UTC 
-rw-r--r-- 6 root root 127 Mar 24 12:38 /usr/share/zoneinfo/UTC
[mike@longshot identity-idp]$ cat /usr/share/zoneinfo/UTC 
TZif2UTCTZif2�UTC
UTC0

After that, creating and migrating worked again without problems. I’m not sure what happened there, but hopefully this will prevent people (or future me) wasting a bunch more time on it.

Installing Ruby 2.3 on Archlinux

I’ve been running Archlinux for a few years now. I ran Ubuntu for 8 years before that and frequently ran into issues with old packages, which eventually spurred me to jump to Arch, where I get to deal with issues in new packages instead. “Pick your poison” as the saying goes.

Today I needed to get an app running that required Ruby 2.3.3 and, true to form, the poison of the day was all about the libraries installed on my system being too new to compile Ruby 2.3.

I’m a long-time user of Rbenv. It’s nice and clean, and its ruby-build plugin makes installing new versions of Ruby as easy as rbenv install 2.3.3… which is exactly what kicked off the fun.

[mike@longshot identity-idp]$ rbenv install 2.3.3
Downloading ruby-2.3.3.tar.bz2...
-> https://cache.ruby-lang.org/pub/ruby/2.3/ruby-2.3.3.tar.bz2
Installing ruby-2.3.3...
*** Error in `./miniruby': malloc(): memory corruption: 0x00007637497798d8 ***
======= Backtrace: =========
/usr/lib/libc.so.6(+0x72bdd)[0x66e27048fbdd]
...
./miniruby(+0x2470b)[0x80e03b1670b]
/usr/lib/libc.so.6(__libc_start_main+0xea)[0x66e27043d4ca]
./miniruby(_start+0x2a)[0x80e03b1673a]
======= Memory map: ========
80e03af2000-80e03de0000 r-xp 00000000 00:27 154419
...
66e2715e7000-66e2715e8000 rw-p 00000000 00:00 0
763748f81000-763749780000 rw-p 00000000 00:00 0                          [stack]

BUILD FAILED (Arch Linux using ruby-build 20170726-9-g86909bf)

Inspect or clean up the working tree at /tmp/ruby-build.20170828122031.16671
Results logged to /tmp/ruby-build.20170828122031.16671.log

Last 10 log lines:
generating enc.mk
creating verconf.h
./template/encdb.h.tmpl:86:in `<main>': undefined local variable or method `encidx' for main:Object (NameError)
	from /tmp/ruby-build.20170828122031.16671/ruby-2.3.3/lib/erb.rb:864:in `eval'
	from /tmp/ruby-build.20170828122031.16671/ruby-2.3.3/lib/erb.rb:864:in `result'
	from ./tool/generic_erb.rb:38:in `<main>'
make: *** [uncommon.mk:818: encdb.h] Error 1
make: *** Waiting for unfinished jobs....
verconf.h updated
make: *** [uncommon.mk:655: enc.mk] Aborted (core dumped)

The issues here are twofold: Ruby 2.3 won’t build with GCC 7, and it won’t build with OpenSSL 1.1. Arch as it stands today has both by default.

[mike@longshot ~]$ openssl version
OpenSSL 1.1.0f  25 May 2017
[mike@longshot ~]$ gcc -v
gcc version 7.1.1 20170630 (GCC)

To solve the OpenSSL problem we need 1.0 installed (sudo pacman -S openssl-1.0, but it’s probably installed already), and we need to tell ruby-build where to find both the header files and the openssl directory itself.

Helping compilers find header files is the job of pkg-config. On Arch the config files that do that are typically in /usr/lib/pkgconfig/, but in this case we want to point to the pkg-config file in /usr/lib/openssl-1.0/pkgconfig before searching there. To do that we assign a colon-delimited set of paths to PKG_CONFIG_PATH.

Then we need to tell Ruby where the openssl directory is which is done via RUBY_CONFIGURE_OPTS.

[mike@longshot ~]$ PKG_CONFIG_PATH=/usr/lib/openssl-1.0/pkgconfig/:/usr/lib/pkgconfig/ RUBY_CONFIGURE_OPTS=--with-openssl-dir=/usr/lib/openssl-1.0/ rbenv install 2.3.3
Downloading ruby-2.3.3.tar.bz2...
-> https://cache.ruby-lang.org/pub/ruby/2.3/ruby-2.3.3.tar.bz2
Installing ruby-2.3.3...

BUILD FAILED (Arch Linux using ruby-build 20170726-9-g86909bf)

Inspect or clean up the working tree at /tmp/ruby-build.20170829103308.24191
Results logged to /tmp/ruby-build.20170829103308.24191.log

Last 10 log lines:
  R8: 0x0000016363058550  R9: 0x0000016362cc3dd8 R10: 0x0000016362fafe80
 R11: 0x000000000000001b R12: 0x0000000000000031 R13: 0x0000016363059a40
 R14: 0x0000000000000000 R15: 0x00000163630599a0 EFL: 0x0000000000010202

-- C level backtrace information -------------------------------------------
linking static-library libruby-static.a
ar: `u' modifier ignored since `D' is the default (see `U')
verifying static-library libruby-static.a
make: *** [uncommon.mk:655: enc.mk] Segmentation fault (core dumped)
make: *** Waiting for unfinished jobs....

With our OpenSSL errors fixed, we now get the segfault that comes from GCC 7. So we need to install an earlier GCC (sudo pacman -S gcc5) and add two more variables (CC and CXX) to specify the C and C++ compilers we want used.

[mike@longshot ~]$ CC=gcc-5 CXX=g++-5 PKG_CONFIG_PATH=/usr/lib/openssl-1.0/pkgconfig/:/usr/lib/pkgconfig/ RUBY_CONFIGURE_OPTS=--with-openssl-dir=/usr/lib/openssl-1.0/ rbenv install 2.3.3
Downloading ruby-2.3.3.tar.bz2...
-> https://cache.ruby-lang.org/pub/ruby/2.3/ruby-2.3.3.tar.bz2
Installing ruby-2.3.3...
Installed ruby-2.3.3 to /home/mike/.rbenv/versions/2.3.3

With that done, you should now have a working Ruby 2.3:

[mike@longshot ~]$ rbenv global 2.3.3
[mike@longshot ~]$ ruby -e "puts 'hello world'"
hello world

ArangoDB and GraphQL

For a while now I’ve been wondering about what might be the minimal set of technologies that allows me to tackle the widest range of projects. The answer I’ve arrived at, for backend development at least, is GraphQL and ArangoDB.

Both of these tools expand my reach as a developer. Projects involving integrations, multiple clients and complicated data that would have been extremely difficult are now within easy reach.

But the minimal set idea is that I can enjoy this expanded range while juggling far fewer technologies than before. Tools that apply in more situations mean fewer things to learn, fewer moving parts and more depth in the learning I do.

While GraphQL and ArangoDB are both interesting technologies individually, it’s in using them together that I’ve been able to realize those benefits; one of those moments where the whole is different from the sum of its parts.

Backend Minimalism

My embrace of Javascript has definitely been part of creating that minimal set. A single language for both front and back end development has been a big part of simplifying my tech stack. Both GraphQL and ArangoDB can be used in many languages, but Javascript support is what you might describe as “first among equals” for both projects.

GraphQL can replace, and for me has replaced, server side frameworks like Rails or Django, leaving me with a handful of Javascript functions and more modular, testable code.

GraphQL also replaces ReST, freeing me from thinking about HATEOAS, bike-shedding over the vagaries of ReST, or needing pages and pages of JSON API documentation to save me from bike-shedding over the vagaries of ReST.

ArangoDB has also reduced the number of things I need to know. For a start, it has removed the “need” for an ORM (no relational database, no need for Object Relational Mapping), which never really delivered on its promise to free you from knowing the underlying SQL.

More importantly, it has replaced not just NoSQL databases with a razor-thin set of capabilities, like MongoDB (which stores nested documents but can’t do joins) or Neo4j (which does joins but can’t store nested documents), but also general-purpose databases like MySQL or Postgres. I have one query language to learn, and one database whose quirks and characteristics I need to know.

It’s also replaced the deeply unpleasant process of relational data modeling with a seamless blend of documents and graphs that make modeling even really ugly connected datasets anticlimactic. As a bonus, in moving the schema outside the database GraphQL lets us enjoy all the benefits of a schema (making sure there is at least some structure I can rely on) and all the benefits of schemalessness (flexibility, ease of change).

Tools that actually reduce the number of things you need to know don’t come along very often. My goal here is to give a sense of what it looks like to use these two technologies together, and hopefully admiring the trees can let us appreciate the forest.

Show me the code

First we need some data to work with. ArangoDB’s administrative interface has some example graphs it can create, so let’s use one to explore.

[Image: the example graphs offered by ArangoDB’s admin interface]

If we select the “knows” graph, we get a simple graph with 5 vertices.

[Image: the “knows” example graph]

This graph is going to be the foundation for our little exploration.

Next, the only really meaningful information these vertices have is a name attribute. If we want to create a GraphQL type that represents one of these objects, it would look like this:

  let Person = new GraphQLObjectType({
    name: 'Person',
    fields: () => ({
      name: {
        type: GraphQLString
      }
    })
  })

Now that we have a type that describes what a Person object looks like we can use it in a schema. This schema has a field called person which has two attributes: type, and resolve.

let schema = new GraphQLSchema({
    query: new GraphQLObjectType({
      name: 'Query',
      fields: () => ({
        person: {
          type: Person,
          resolve: () => {
            return {name: 'Mike'}
          },
        }
      })
    })
  })

The resolve is a function that will be run whenever graphql is asked to produce a person object. type describes the object that the resolve function returns, which in this case is our Person type.

To see if this all works we can write a test using Jest.

import {
  graphql,
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString,
  GraphQLList,
  GraphQLNonNull
} from 'graphql'

describe('returning a hardcoded object that matches a type', () => {

  let Person = new GraphQLObjectType({
    name: 'Person',
    fields: () => ({
      name: {
        type: GraphQLString
      }
    })
  })

  let schema = new GraphQLSchema({
    query: new GraphQLObjectType({
      name: 'Query',
      fields: () => ({
        person: {
          type: Person,
          resolve: () => {
            return {name: 'Mike'}
          },
        }
      })
    })
  })

  it('lets you ask for a person', async () => {

    let query = `
      query {
        person {
          name
        }
      }
    `;

    let { data } = await graphql(schema, query)
    expect(data.person).toEqual({name: 'Mike'})
  })

})

This test passes which tells us that we got everything wired together properly, and the foundation laid to talk to ArangoDB.

First we’ll use arangojs and create a db instance and then a function that allows us to get a person using their name.

//src/database.js
import arangojs, { aql } from 'arangojs'

export const db = arangojs({
  url: `http://${process.env.ARANGODB_USER}:${process.env.ARANGODB_PASSWORD}@127.0.0.1:8529`,
  databaseName: 'knows'
})

export async function getPersonByName (name) {
  let query = aql`
      FOR person IN persons
        FILTER person.name == ${ name }
          LIMIT 1
          RETURN person
    `
  let results = await db.query(query)
  return results.next()
}

Now let’s use that function with our schema to retrieve real data from ArangoDB.

import {
  graphql,
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString,
  GraphQLList,
  GraphQLNonNull
} from 'graphql'
import {
  db,
  getPersonByName
} from '../src/database'

describe('queries', () => {

  it('lets you ask for a person from the database', async () => {

    let Person = new GraphQLObjectType({
      name: 'Person',
      fields: () => ({
        name: {
          type: GraphQLString
        }
      })
    })

    let schema = new GraphQLSchema({
      query: new GraphQLObjectType({
        name: 'Query',
        fields: () => ({
          person: {
            args: { //person now accepts args
              name: { // the arg is called "name"
                type: new GraphQLNonNull(GraphQLString) // name is a string & manadatory
              }
            },
            type: Person,
            resolve: (root, args) => {
              return getPersonByName(args.name)
            },
          }
        })
      })
    })

    let query = `
        query {
          person(name "Eve") {
            name
          }
        }
      `

    let { data } = await graphql(schema, query)
    expect(data.person).toEqual({name: 'Eve'})
  })
})

Here we have modified our schema to accept a name argument when asking for a person. We access the name via the args object and pass it to our database function to go get the matching person from Arango.

Let’s add a new database function to get the friends of a user given their id.
What’s worth pointing out here is that we are using ArangoDB’s AQL traversal syntax. It allows us to do a graph traversal across outbound edges and get the vertex on the other end of each edge.

export async function getFriends (id) {
  let query = aql`
      FOR vertex IN OUTBOUND ${id} knows
        RETURN vertex
    `
  let results = await db.query(query)
  return results.all()
}

Now that we have that function, instead of adding it to the schema, we add a field to the Person type. In the resolve for our new friends field we are going to use the root argument to get the id of the current person object, and then use our getFriends function to do the traversal to retrieve the person’s friends.

    let Person = new GraphQLObjectType({
      name: 'Person',
      fields: () => ({
        name: {
          type: GraphQLString
        },
        friends: {
          type: new GraphQLList(Person),
          resolve(root) {
            return getFriends(root._id)
          }
        }
      })
    })

What’s interesting is that because of GraphQL’s recursive nature, this change lets us query for friends:

        query {
          person(name: "Eve") {
            name
            friends {
              name
            }
          }
        }

and also ask for friends of friends (and so on) like this:

        query {
          person(name: "Eve") {
            name
            friends {
              name
              friends {
                name
              }
            }
          }
        }

We can show that with a test.

import {
  graphql,
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString,
  GraphQLList,
  GraphQLNonNull
} from 'graphql'
import {
  db,
  getPersonByName,
  getFriends
} from '../src/database'

describe('queries', () => {

  it('returns friends of friends', async () => {

    let Person = new GraphQLObjectType({
      name: 'Person',
      fields: () => ({
        name: {
          type: GraphQLString
        },
        friends: {
          type: new GraphQLList(Person),
          resolve(root) {
            return getFriends(root._id)
          }
        }
      })
    })

    let schema = new GraphQLSchema({
      query: new GraphQLObjectType({
        name: 'Query',
        fields: () => ({
          person: {
            args: {
              name: {
                type: new GraphQLNonNull(GraphQLString)
              }
            },
            type: Person,
            resolve: (root, args) => {
              return getPersonByName(args.name)
            },
          }
        })
      })
    })

    let query = `
        query {
          person(name: "Eve") {
            name
            friends {
              name
              friends {
                name
              }
            }
          }
        }
      `

    let result = await graphql(schema, query)
    let { friends } = result.data.person
    let foaf = [].concat(...friends.map(friend => friend.friends))
    expect([{name: 'Charlie'},{name: 'Dave'},{name: 'Bob'}]).toEqual(expect.arrayContaining(foaf))
  })

})

This test runs a query three levels deep, walking the entire graph. Because we can ask for any combination of the things our types define, we have a whole lot of flexibility with very little code. The code that’s there is just a few simple functions, modular and easy to test.

But what did we trade away to get all that? If we look at the queries that get sent to Arango with tcpdump we can see how that sausage was made.

// getPersonByName('Eve') from the person resolver in our schema 
{"query":"FOR person IN persons
  FILTER person.name == @value0
  LIMIT 1 RETURN person","bindVars":{"value0":"Eve"}}
// getFriends('persons/eve') in Person type -> returns Bob & Alice.
{"query":"FOR vertex IN OUTBOUND @value0 knows
  RETURN vertex","bindVars":{"value0":"persons/eve"}}
// now a new request for each friend:
// getFriends('persons/bob')
{"query":"FOR vertex IN OUTBOUND @value0 knows
  RETURN vertex","bindVars":{"value0":"persons/bob"}}
// getFriends('persons/alice')
{"query":"FOR vertex IN OUTBOUND @value0 knows
  RETURN vertex","bindVars":{"value0":"persons/alice"}}

What we have here is our own version of the famous N+1 problem. If we were to add more people to this graph things would get out of hand quickly.

Facebook, which has been using GraphQL in production for years, is probably even less excited about the prospect of N+1 queries battering their database than we are. So what are they doing to solve this?

Using Dataloader

Dataloader is a small library released by Facebook that solves the N+1 problem by cleverly leveraging the way promises work. To use it, we need to give it a batch loading function and then replace our calls to the database with calls to Dataloader’s load method in all our resolvers.

What, you might ask, is a batch loading function? The dataloader documentation offers that “A batch loading function accepts an Array of keys, and returns a Promise which resolves to an Array of values.”

We can write one of those.

async function getFriendsByIDs (ids) {
  let query = aql`
    FOR id IN ${ ids }
      let friends = (
        FOR vertex IN OUTBOUND id knows
          RETURN vertex
      )
      RETURN friends
  `
  let response = await db.query(query)
  return response.all()
}

We can then use that in a new test.

import {
  graphql,
  GraphQLObjectType,
  GraphQLString,
  GraphQLList
} from 'graphql'
import DataLoader from 'dataloader'
import {
  db,
  getFriendsByIDs
} from '../src/database'
import schema from '../src/schema'

describe('Using dataloader', () => {

  it('returns friends of friends', async () => {

    let Person = new GraphQLObjectType({
      name: 'Person',
      fields: () => ({
        name: {
          type: GraphQLString
        },
        friends: {
          type: new GraphQLList(Person),
          resolve(root, args, context) {
            return context.FriendsLoader.load(root._id)
          }
        }
      })
    })

    let query = `
        query {
          person(name: "Eve") {
            name
            friends {
              name
              friends {
                name
              }
            }
          }
        }
      `
    const FriendsLoader = new DataLoader(getFriendsByIDs)
    let result = await graphql(schema, query, {}, { FriendsLoader })
    let { person } = result.data
    expect(person.name).toEqual('Eve')
    expect(person.friends.length).toEqual(2)
    let names = person.friends.map(friend => friend.name)
    expect(names).toContain('Alice')
    expect(names).toContain('Bob')
  })

})

The key section of the above test is this:

    const FriendsLoader = new DataLoader(getFriendsByIDs)
    //                         schema, query, root, context
    let result = await graphql(schema, query, {}, { FriendsLoader })

The context object is passed as the fourth parameter to the graphql function and then shows up as the third argument in every resolve function. With our FriendsLoader attached to the context object, you can see us accessing it in the resolve function on the Person type.

Let’s see what effect that batch loading has on our queries.

// getPersonByName('Eve') from the person resolver in our schema 
{"query":"FOR person IN persons
  FILTER person.name == @value0
  LIMIT 1 RETURN person","bindVars":{"value0":"Eve"}}
// getFriendsByIDs(["persons/eve"]) -> returns Bob & Alice.
{"query":"FOR id IN @value0
   let friends = (
    FOR vertex IN  OUTBOUND id knows
      RETURN vertex
    )
  RETURN friends","bindVars":{"value0":["persons/eve"]}}
// getFriendsByIDs(["persons/alice","persons/bob"])
{"query":"FOR id IN @value0
   let friends = (
    FOR vertex IN  OUTBOUND id knows
      RETURN vertex
    )
  RETURN friends","bindVars":{"value0":["persons/alice","persons/bob"]}}

Now, for a three-level query (Eve, her friends, their friends), we are down to just one query per level, and the N+1 problem is no longer a problem.

When it’s time to serve your data to the world, express-graphql supplies a middleware that we can pass our schema and loaders to like this:

import express from 'express'
import graphqlHTTP from 'express-graphql'
import schema from './schema'
import DataLoader from 'dataloader'
import { getFriendsByIDs } from '../src/database'

const FriendsLoader = new DataLoader(getFriendsByIDs)
const app = express()
app.use('/graphql', graphqlHTTP({ schema, context: { FriendsLoader }}))
app.listen(3000)
// http://localhost:3000/graphql is up and running!

What we just did

With just those few code examples we’ve built a backend system, backed by a graph database, that gives clients a queryable API. Growing it would mean adding a few more functions and a few more types. The code stays modular and testable, and Dataloader ensured that we didn’t even pay a performance penalty for any of it.

A perfect combination

While geeking out on the technology is fun, it loses sight of what I think is the larger point: the design of both GraphQL and ArangoDB allows you to combine and recombine some really simple primitives to tackle anything you can think of.

With ArangoDB, it’s all just documents; whether you use them as plain documents, as key/value pairs, or as a graph is up to you. While this approach is marketed as a “multi-model” database, the term is unfortunate since it makes the database sound like it’s trying to do lots of things, instead of leveraging a fundamental similarity between these types of data. That similarity becomes the “primitive” that makes all this flexibility possible.
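
To make that concrete, here’s a minimal sketch (assuming a default local ArangoDB connection and the persons and knows collections from the examples above) that touches the same collection in all three ways:

import { Database, aql } from 'arangojs'

const db = new Database() // assumes the default local ArangoDB connection

async function threeViewsOfOneCollection () {
  const persons = db.collection('persons')

  // as a document store: save a document
  await persons.save({ _key: 'eve', name: 'Eve' })

  // as a key/value store: fetch it back by key, no query needed
  let eve = await persons.document('eve')

  // as a graph: traverse the knows edge collection starting from that document
  let cursor = await db.query(aql`
    FOR vertex IN OUTBOUND ${eve._id} knows
      RETURN vertex
  `)
  return cursor.all()
}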

For GraphQL, my application is just a bunch of functions in an Abstract Syntax Tree which get combined and recombined by client queries. The parser and execution engine take care of what gets called when.

In each case, what I need to understand is simple; the behaviour I can produce is complex.

I’m still honing my minimal set for front end development, but for backend development this is now how I build. These days I’m refocusing my learning to go narrow and deep and it feels good. Infinite width never felt sustainable. It’s not apparent at first, but once that burden is lifted off your shoulders you will realize how heavy it was.

Working with the Google Vision API

I remember hearing a story about a developer whose contract with the military specified the number of kilos of documentation that were required to accompany the system they were building. I think of that story from time to time when I use Google products.

Google’s Vision API gives access to legit state-of-the-art Artificial Intelligence and is amazing for extracting text from images, but a concise modern example doesn’t seem to exist in spite of the huge volume of documentation.

The example they give is in the classic callback style:

var vision = require('@google-cloud/vision');

var visionClient = vision({
  projectId: 'grape-spaceship-123',
  keyFilename: '/path/to/keyfile.json'
});

visionClient.detectText('./image.jpg', function(err, text) {
  // text = [
  //   'This was text found in the image',
  //   'This was more text found in the image'
  // ]
});

With all that has been written about the inversion-of-control problems of callbacks, and with ES2015 support nearly complete and in wide use thanks to Babel, examples like this feel distinctly retro.

Also painful for anyone working with Docker is that authentication appears to require me to include a keyfile.json somewhere in my container, when what I actually want is to store that stuff in the environment.

After a bit of experimentation, it turns out that the google-cloud-node library doesn’t let us down. It’s filled with all the promisey goodness we scripters-of-java have come to expect. If you are using jest, this test should get you going:

import Vision from '@google-cloud/vision'

describe('Google Vision client', () => {

  it('successfully connects', async () => {
    let client = Vision({
      projectId: process.env.GOOGLE_VISION_PROJECT_ID,
      credentials: {
        private_key: process.env.GOOGLE_VISION_PRIVATE_KEY.replace(/\\n/g, '\n'),
        client_email: process.env.GOOGLE_VISION_CLIENT_EMAIL
      }
    })

    let [[text, ...words], annotations] = await client.detectText(__dirname + '/data/foo.jpg')
    expect(text).toEqual("foo bar\n")
    expect(words).toContain("foo", "bar")
  })

})

The project id is easy enough to find, but the environment variables used to avoid the keyfile.json are actually found within the keyfile.

{
  "type": "service_account",
  "project_id": "...",
  "private_key_id": "...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "...@developer.gserviceaccount.com",
  "client_id": "...",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/..."
}

The keyfile above was created by going to the credentials console and following the instructions here.

Note the replace(/\\n/g, '\n') happening on the GOOGLE_VISION_PRIVATE_KEY. This comes from issue 1173, and without it you end up with the error Error: error:0906D06C:PEM routines:PEM_read_bio:no start line. Replacing newlines with newlines seems silly, but you gotta do what you gotta do.
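
Getting those values out of the keyfile and into the environment in the first place can be done by hand, or with a small throwaway script like this one (just a sketch; the file and variable names simply match the keyfile and test above):

// read an existing keyfile.json and print the values the test above
// expects to find in the environment
const keyfile = require('./keyfile.json')

console.log(`GOOGLE_VISION_PROJECT_ID=${keyfile.project_id}`)
console.log(`GOOGLE_VISION_CLIENT_EMAIL=${keyfile.client_email}`)
// escape the real newlines back into literal "\n" sequences so the key
// survives as a single-line environment variable; the test undoes this
// with replace(/\\n/g, '\n')
console.log(`GOOGLE_VISION_PRIVATE_KEY=${keyfile.private_key.replace(/\n/g, '\\n')}`)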

The last missing piece is an image with some text. I created a quick test image in Gimp with the words “foo bar”:

[image: the “foo bar” test image]

While it wasn’t clear at first glance, google-cloud-node is a pretty sophisticated and capable library, despite being theoretically “alpha”. Google is remaking itself as “the AI company”, and the boundary-pushing stuff it’s doing means I’m probably going to be using this client a lot. I was really hoping to find a small amount of the “right” documentation instead of the huge volume of partial answers spread across their sprawling empire. Hopefully this is a useful contribution towards that reality.