Working with the Google Vision API

I remember hearing a story about a developer whose contract with the military specified the number of kilos of documentation that were required to accompany the system they were building. I think of that story from time to time when I use Google products.

Google’s Vision API gives access to legit state-of-the-art Artificial Intelligence and is amazing for extracting text from images, but a concise modern example doesn’t seem to exist in spite of the huge volume of documentation.

The example they give is in the classic callback style:

var vision = require('@google-cloud/vision');

var visionClient = vision({
  projectId: 'grape-spaceship-123',
  keyFilename: '/path/to/keyfile.json'
});

visionClient.detectText('./image.jpg', function(err, text) {
  // text = [
  //   'This was text found in the image',
  //   'This was more text found in the image'
  // ]
});

With all that has been written about the inversion-of-control problems of callbacks, and with ES2015 support nearly complete and in wide use thanks to Babel, examples like this feel distinctly retro.

Also painful for anyone working with Docker is that the authentication appears to require me to include a keyfile.json somewhere in my container, when what I actually want is to store that stuff in the environment.

After a bit of experimentation, it turns out that the google-cloud-node library doesn’t let us down. It’s filled with all the promisey goodness we scripters-of-java have come to expect. If you are using Jest, this test should get you going:

import Vision from '@google-cloud/vision'

describe('Google Vision client', () => {

  it('successfully connects', async () => {
    let client = Vision({
      projectId: process.env.GOOGLE_VISION_PROJECT_ID,
      credentials: {
        private_key: process.env.GOOGLE_VISION_PRIVATE_KEY.replace(/\\n/g, '\n'),
        client_email: process.env.GOOGLE_VISION_CLIENT_EMAIL
      }
    })

    let [[text, ...words], annotations] = await client.detectText(__dirname + '/data/foo.jpg')
    expect(text).toEqual("foo bar\n")
    expect(words).toContain("foo")
    expect(words).toContain("bar")
  })
})

The project id is easy enough to find, but the environment variables used to avoid the keyfile.json are actually found within the keyfile.

{
  "type": "service_account",
  "project_id": "...",
  "private_key_id": "...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "",
  "client_id": "...",
  "auth_uri": "",
  "token_uri": "",
  "auth_provider_x509_cert_url": "",
  "client_x509_cert_url": ""
}

The keyfile above was created by going to the credentials console and following the instructions here.

Note the replace(/\\n/g, '\n') happening on the GOOGLE_VISION_PRIVATE_KEY. This is from issue 1173, and without it you end up with the error
Error: error:0906D06C:PEM routines:PEM_read_bio:no start line. Replacing newlines with newlines seems silly, but you gotta do what you gotta do.
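To see what that replace is actually doing, here is the transformation in isolation (the key material below is obviously fake): an env var loaded from a .env file tends to hold the two characters "\" and "n" rather than real newlines, and the PEM parser needs real ones.

```javascript
// What arrives from the environment: literal backslash-n sequences
const fromEnv = '-----BEGIN PRIVATE KEY-----\\nMIIE...\\n-----END PRIVATE KEY-----\\n'

// Swap each literal "\n" pair for an actual newline character
const pem = fromEnv.replace(/\\n/g, '\n')

console.log(pem.split('\n')[0]) // '-----BEGIN PRIVATE KEY-----'
```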

The last missing piece is an image with some text. I created a quick test image in Gimp with the words “foo bar”:


While it wasn’t clear at first glance, google-cloud-node is a pretty sophisticated and capable library, despite being theoretically “alpha”. Google is remaking itself as “the AI company” and the boundary pushing stuff it’s doing means I’m probably going to be using this client a lot. I really was hoping to find a small amount of the “right” documentation instead of the huge volume of partial answers spread across their sprawling empire. Hopefully this is a useful contribution towards that reality.

Making requests in vanilla js with Apollo

There are lots of good reasons to be running GraphQL on the server. It’s clean, needs no ORMs or frameworks, and has some interesting security properties too. But just because you are rockin’ the new hotness on the server side doesn’t mean you want it on the client side too. Sometimes the right thing is the simplest thing that can possibly work.

The Apollo Client is a GraphQL client made by the people behind Meteor. It aims to be an advanced and capable client that plays nice with the rest of the ecosystem. It has a lot going on, and sadly doesn’t seem to spend much time advertising that it’s actually a pretty great fit for those “simplest thing that can possibly work” moments as well.

Installing it is roughly what you might expect, but you also need the graphql-tag library so you can create queries with Javascript’s new tagged template literals.

npm install --save apollo-client graphql-tag

So here, in all its glory, is the “simplest thing that can possibly work”:

import ApolloClient from 'apollo-client'
import gql from 'graphql-tag'

const client = new ApolloClient();

let query = gql`
  query {
    foo {
      bar # whatever fields your schema offers
    }
  }
`

client.query({query}).then((results) => {
  //do something useful
})
I think this is actually even simpler than Lokka, which bills itself as the “Simple JavaScript Client for GraphQL”.

If you need to specify your endpoint as something other than the host the js came from, then you get to add just a little extra:

import ApolloClient, { createNetworkInterface } from 'apollo-client'

const opts = {uri: ''}
const networkInterface = createNetworkInterface(opts)
const client = new ApolloClient({
  networkInterface
})

But simple doesn’t mean we are restricted to queries only. Mutations can be simple too:

let mutation = gql`
  mutation ($foo: [FooInput] $bar: String!) {
    createFoo( # hypothetical mutation field
      foo: $foo
      bar: $bar
    ) {
      id
    }
  }
`

client.mutate({mutation, variables: {foo: [1,2,3], bar: "baz"}}).then((results) => {
  //do something with result
})

Obviously you will need the server side schema to support that, but that is all that is needed on the client.

Apollo has a tonne of features and integrates with Redux nicely (it does caching with its own internal Redux store unless you want it to use yours). While simplicity doesn’t appear to be its focus, the Apollo client is certainly capable of it. You’d just never guess from the documentation. Hopefully this will make it a little easier to appreciate the simple side of Apollo.

A look at the React Lifecycle

Every React component is required to provide a render function. It can return false or it can return elements, but it needs to be there. If you provide a single function, it’s assumed to be a render function:

const Foo = ({thing}) => <p>Hello {thing}</p>
<Foo thing="world" />

There has been a fair bit written about the chain of lifecycle methods that React calls leading up to its invocation of the render function and afterwards. The basic order is this:

componentWillMount → render → componentDidMount
But things are rarely that simple, and often this.setState is called in componentDidMount, which gives a call chain that looks like this:

componentWillMount → render → componentDidMount → shouldComponentUpdate → componentWillUpdate → render → componentDidUpdate
Nesting components inside each other adds another wrinkle to this, as does my use of ES6/7, which adds a few subtle changes to the existing lifecycle methods. To get this sorted out in my own head, I created two classes: an Owner and an Ownee.

class Owner extends React.Component {

  // ES7 Property Initializers replace getInitialState
  //state = {}

  // ES6 class constructor replaces componentWillMount
  constructor(props) {
    super(props)
    console.log("Owner constructor")
    this.state = {
      foo: "baz"
    }
  }

  componentWillReceiveProps(nextProps) {
    console.log("Owner componentWillReceiveProps")
  }

  shouldComponentUpdate(nextProps, nextState) {
    console.log("Owner shouldComponentUpdate")
    return true
  }

  componentWillUpdate(nextProps, nextState) {
    console.log("Owner componentWillUpdate")
  }

  render() {
    console.log("Owner render")
    return (
      <div className="owner">
        <Ownee foo={} />
      </div>
    )
  }

  componentDidUpdate(nextProps, nextState) {
    console.log("Owner componentDidUpdate")
  }

  componentDidMount() {
    console.log("Owner componentDidMount")
  }

  componentWillUnmount() {
    console.log("Owner componentWillUnmount")
  }
}
A component is said to be the owner of another component when it sets its props. A component whose props are being set is an ownee, so here is our Ownee component:

class Ownee extends React.Component {

  // ES6 class constructor replaces componentWillMount
  constructor(props) {
    super(props)
    console.log("  Ownee constructor")
  }

  componentWillReceiveProps(nextProps) {
    console.log("  Ownee componentWillReceiveProps")
  }

  shouldComponentUpdate(nextProps, nextState) {
    console.log("  Ownee shouldComponentUpdate")
    return true
  }

  componentWillUpdate(nextProps, nextState) {
    console.log("  Ownee componentWillUpdate")
  }

  render() {
    console.log("  Ownee render")
    return (
      <p>Ownee says {}</p>
    )
  }

  componentDidUpdate(nextProps, nextState) {
    console.log("  Ownee componentDidUpdate")
  }

  componentDidMount() {
    console.log("  Ownee componentDidMount")
  }

  componentWillUnmount() {
    console.log("  Ownee componentWillUnmount")
  }
}
This gives us the following chain:

Owner constructor
Owner render
  Ownee constructor
  Ownee render
  Ownee componentDidMount
Owner componentDidMount

Adding this.setState({foo: "bar"}) into the Owner’s componentDidMount gives us a more complete view of the call chain:

Owner constructor
Owner render
  Ownee constructor
  Ownee render
  Ownee componentDidMount
Owner componentDidMount
Owner shouldComponentUpdate
Owner componentWillUpdate
Owner render
  Ownee componentWillReceiveProps
  Ownee shouldComponentUpdate
  Ownee componentWillUpdate
  Ownee render
  Ownee componentDidUpdate
Owner componentDidUpdate

Things definitely get more complicated when components start talking to each other and passing functions that setState, but the basic model is reassuringly straightforward. The changes that ES6/7 bring to the React lifecycle are relatively minor but nonetheless nice to have clear in my head as well.
If you want to explore this further I’ve created a JSbin.

D3 and React 3 ways

D3 and React are two of the most popular libraries out there and a fair bit has been written about using them together.
The reason this has been worth writing about is the potential for conflict between them. With D3 adding and removing DOM elements to represent data, and React tracking and diffing DOM elements, either library could end up with elements being deleted out from under it or operations returning unexpected elements (React’s apparent approach when finding such an element is “kill it with fire“).

One way of avoiding this situation is simply telling a React component not to update its children via shouldComponentUpdate(){ return false }. While effective, having React manage all the DOM except for some designated area doesn’t feel like the cleanest solution. A little digging shows that there are some better options out there.

To explore these, I’ve taken D3 creator Mike Bostock’s letter frequency bar chart example and used it as the example for all three cases. I’ve updated it to ES6, D3 version 4 and implemented it as a React component.

Mike Bostock’s letter frequency chart

Option 1: Use Canvas

One nice option is to use HTML5’s canvas element. Draw what you need and let React render the one element into the DOM. Mike Bostock has an example of the letter frequency chart done with canvas. His code can be transplanted into React without much fuss.

class CanvasChart extends React.Component {

  componentDidMount() {
    //All Mike's code
  }

  render() {
    return <canvas width={this.props.width} height={this.props.height} ref={(el) => { this.canvas = el }} />
  }
}
I’ve created a working demo of the code on Plunkr.
The canvas approach is something to consider if you are drawing or animating a large amount of data. Speed is also in its favour, but React probably narrows the speed gap a bit.

Since the chart is drawn with Javascript, only a single canvas element is produced; no other elements need to be created or destroyed, which avoids the conflict with React entirely.

Option 2: Use react-faux-dom

Oliver Caldwell’s react-faux-dom project creates a Javascript object that passes for a DOM element. D3 can do its DOM operations on that, and when it’s done you just call toReact() to return React elements. Updating Mike Bostock’s original bar chart demo gives us this:

import React from 'react'
import ReactFauxDOM from 'react-faux-dom'
import * as d3 from 'd3'

class SVGChart extends React.Component {

  render() {
    let data = this.props.data // letter frequency data, e.g. [{letter: "A", frequency: 0.08167}, ...]

    let margin = {top: 20, right: 20, bottom: 30, left: 40},
      width = this.props.width - margin.left - margin.right,
      height = this.props.height - - margin.bottom;

    let x = d3.scaleBand()
      .rangeRound([0, width])

    let y = d3.scaleLinear()
      .range([height, 0])

    let xAxis = d3.axisBottom(x)

    let yAxis = d3.axisLeft(y)
      .ticks(10, "%");

    //Create the element
    const div = new ReactFauxDOM.Element('div')
    //Pass it to and proceed as normal
    let svg ='svg')
      .attr("width", width + margin.left + margin.right)
      .attr("height", height + + margin.bottom)
      .append("g")
      .attr("transform", `translate(${margin.left},${})`);

    x.domain( => d.letter));
    y.domain([0, d3.max(data, (d) => d.frequency)]);

    svg.append("g")
      .attr("class", "x axis")
      .attr("transform", `translate(0,${height})`)
      .call(xAxis);

    svg.append("g")
      .attr("class", "y axis")
      .call(yAxis)
      .append("text")
      .attr("transform", "rotate(-90)")
      .attr("y", 6)
      .attr("dy", ".71em")
      .style("text-anchor", "end")
      .text("Frequency");

    svg.selectAll(".bar")
      .enter().append("rect")
      .attr("class", "bar")
      .attr("x", (d) => x(d.letter))
      .attr("width", 20)
      .attr("y", (d) => y(d.frequency))
      .attr("height", (d) => {return height - y(d.frequency)});

    //DOM manipulations done, convert to React
    return div.toReact()
  }
}

This approach has a number of advantages, and as Oliver points out, one of the big ones is being able to use this with Server Side Rendering. Another bonus is that existing D3 visualizations hardly need to be modified at all to get them working with React. If you look back at the original bar chart example, you can see that it’s basically the same code.

Option 3: D3 for math, React for DOM

The final option is a full embrace of React, both the idea of components and its dominion over the DOM. In this scenario D3 is used strictly for its math and formatting functions. Colin Megill put this nicely, stating “D3’s core contribution is not its DOM model but the math it brings to the client”.

I’ve re-implemented the letter frequency chart following this approach. D3 is only used to do a few calculations and format numbers. No DOM operations at all. Creating the SVG elements is all done with React by iterating over the data and the arrays generated by D3.

My pure React re-implementation of Mike Bostock’s letter frequency bar chart. D3 for math, React for DOM. No cheating.

What I learned from doing this is that D3 does a lot of work for you, especially when generating axes. You can see in the code there are a fair number of “magic values”, a little +5 here or a -4 there, to get everything aligned right. Probably all that stuff can be cleaned up into props like “margin” or “padding”, but it’ll take a few more iterations (and possibly actual reuse of these components) to get that stuff all cleaned up. D3 has already got that stuff figured out.

This approach is a lot of work in the short term, but has some real benefits. First, I like this approach for its consistency with the React way of doing things. Second, long term, after good boundaries between components are established you can really see lots of possibilities for reuse. The modular nature of D3 version 4 probably also means this approach will lead to some reduced file sizes since you can be very selective about what functions you include.
If you can see yourself doing a lot of D3 and React in the future, the price paid for this purity would be worth it.

Where to go from here

It’s probably worth pointing out that D3 isn’t a charting library, it’s a generic data visualisation framework. So while the examples above might be useful for showing how to integrate D3 and React, they aren’t trying to suggest that this is a great use of D3 (though it’s not an unreasonable use either). If all you need is a bar chart there are libraries like Chart.js and react-chartjs aimed directly at that.

In my particular case I had an existing D3 visualization, and react-faux-dom was the option I used. It’s a perfect balance between purity and pragmatism and probably the right choice for most cases.

Hopefully this will save people some digging.

Graph migrations

One of the things that is not obvious at first glance is how “broad” ArangoDB is. By combining the flexibility of the document model with the joins of the graph model, ArangoDB has become a viable replacement for Postgres or MySQL, which is exactly how I have been using it; for the last two years it’s been my only database.

One of the things that falls out of that type of usage is a need to actually change the structure of your graph. In a graph, structure comes from the edges that connect your vertices. Since both the vertices and edges are just documents, that structure is actually just more data. Changing your graph structure essentially means refactoring your data.

There are definitely patterns that appear in that refactoring, and over the last little while I have been playing with putting the obvious ones into a library called graph_migrations. This is a work in progress, but there are some useful functions working already that could use some proper documentation.


One of the first of these is what I have called eagerDelete. If you were to delete Bob from the graph below, Charlie and Dave would be orphaned.


Deleting Bob with eagerDelete means that Bob is deleted as well as any neighbors whose only neighbor is Bob.

gm = new GraphMigration("test") //use the database named "test"
gm.eagerDelete({name: "Bob"}, "knows_graph")
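This is not the library’s implementation, but the idea behind eagerDelete can be sketched on a plain adjacency map (who knows whom, treated as undirected):

```javascript
// Delete a vertex, plus any neighbour whose only neighbour it was
const eagerDelete = (graph, name) => {
  const neighbours = graph[name] || []
  delete graph[name]
  for (const n of neighbours) {
    // drop edges pointing back at the deleted vertex
    graph[n] = graph[n].filter((other) => other !== name)
    // only neighbour was `name`: the vertex is orphaned, so it goes too
    if (graph[n].length === 0) delete graph[n]
  }
  return graph
}

let knows = {
  Alice: ['Bob', 'Eve'],
  Bob: ['Alice', 'Charlie', 'Dave', 'Eve'],
  Charlie: ['Bob'],
  Dave: ['Bob'],
  Eve: ['Alice', 'Bob']
}
console.log(eagerDelete(knows, 'Bob')) // Charlie and Dave go with Bob; Alice and Eve stay
```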



Occasionally you will end up with duplicate vertices, which should be merged together. Below you can see we have an extra Charlie vertex.


gm = new GraphMigration("test")
gm.mergeVertices({name: "CHARLIE"},{name: "Charlie"}, "knows_graph")



One of the other common transformations is needing to make a vertex out of an attribute. This process of “promoting” something to be a vertex is sometimes called reifying. Let’s say Eve and Charlie are programmers.


Let’s add an attribute called job to both Eve and Charlie identifying them as programmers:


But let’s say that we decide it makes more sense for job: "programmer" to be a vertex of its own (we want to reify it). We can use the attributeToVertex function for that, but because Arango allows us to split our edge collections, and it’s good practice to do that, let’s add a new edge collection to our “knows_graph” to store the edges that will be created when we reify this attribute.


With that we can run attributeToVertex, telling it the attribute(s) to look for, the graph (knows_graph) to search and the collection to save the edges in (works_as).

gm = new GraphMigration("test")
gm.attributeToVertex({job: "programmer"}, "knows_graph", "works_as", {direction: "inbound"})

The result is this:



Another common transformation is exactly the reverse of what we just did; folding the job: "programmer" vertex into the vertices connected to it.

gm = new GraphMigration("test")
gm.vertexToAttribute({job: "programmer"}, "knows_graph", {direction: "inbound"})

That code puts us right back to where we started, with Eve and Charlie both having a job: "programmer" attribute.



There are times when things are just not connected the way you want. Let’s say in our knows_graph we want all the inbound edges pointing at Bob to point instead at Charlie.

We can use redirectEdges to do exactly that.

gm = new GraphMigration("test")
gm.redirectEdges({_id: "persons/bob"}, {_id: "persons/charlie"}, "knows_graph", {direction: "inbound"})

And now Eve and Alice know Charlie.


Where to go from here

As the name “graph migrations” suggests the thinking behind this was to create something similar to the Active Record Migrations library from Ruby on Rails but for graphs.

As more and more of this goes from idea to code and I get a chance to play with it, I’m less sure that a direct copy of Migrations makes sense. Graphs are actually pretty fine-grained data in the end, and maybe something more interactive makes sense. It could be that this makes more sense as a Foxx app, or perhaps part of Arangojs or ArangoDB’s admin interface. It feels a little too early to tell.

Beyond providing a little documentation, the hope here is to make this a little more visible to people who are thinking along the same lines and might be interested in contributing.

Back up your data, give it a try and tell me what you think.

Flash messages for Mapbox GL JS

I’ve been working on an application where I’m using ArangoDB’s WITHIN_RECTANGLE function to pull up documents within the current map bounds. The obvious problem there is that the current map bounds can be very very big.

Dumping the entire contents of your database every time the map moves sounded decidedly sub-optimal to me, so I decided to calculate the area within the requested bounds using Turf.js and send back an error if it’s too big.

So far so good, but I wanted a nice way to display that error message as a notification right on the map. There are lots of ways to tackle that sort of thing, but given that this seemed very specific to the map, I thought I might take a stab at making it a mapbox-gl.js plugin.

The result is mapbox-gl-flash. Currently you would install it from github:

npm install --save mapbox-gl-flash

I’m using babel so I’ll use the ES2015 syntax and get a map going.

import mapboxgl from 'mapbox-gl'
import Flash from 'mapbox-gl-flash'

//This is the api token mapbox uses for its examples
mapboxgl.accessToken = 'pk.eyJ1IjoibWlrZXdpbGxpYW1zb24iLCJhIjoibzRCYUlGSSJ9.QGvlt6Opm5futGhE5i-1kw';
var map = new mapboxgl.Map({
    container: 'map', // container id
    style: 'mapbox://styles/mapbox/streets-v8', //stylesheet location
    center: [-74.50, 40], // starting position
    zoom: 9 // starting zoom
});
// And now set up flash:
map.addControl(new Flash());

This sets up an element on the map that listens for a “mapbox.setflash” event.

Next, the element that is listening has a class of .flash-message, so let’s set up a little basic styling for it:

.flash-message {
  font-family: 'Ubuntu', sans-serif;
  position: relative;
  text-align: center;
  color: #fff;
  margin: 0;
  padding: 0.5em;
  background-color: grey;
} {
  background-color: DarkSeaGreen;
}

.flash-message.warn {
  background-color: Khaki;
}

.flash-message.error {
  background-color: LightCoral;
}

With that done, let’s fire a CustomEvent and see what it does.

document.dispatchEvent(new CustomEvent('mapbox.setflash', {detail: {message: "foo"}}))


Ruby on Rails has three different kinds of flash messages: info, warn and error. That seems pretty reasonable, so I’ve implemented that here as well. We’ve already set up some basic styles for those classes above, and we can apply one of those classes by adding another option to our custom event detail object:

document.dispatchEvent(new CustomEvent('mapbox.setflash', {detail: {message: "foo", info: true}}))

document.dispatchEvent(new CustomEvent('mapbox.setflash', {detail: {message: "foo", warn: true}}))

document.dispatchEvent(new CustomEvent('mapbox.setflash', {detail: {message: "foo", error: true}}))

These events add the specified class to the flash message.


One final thing that I expect is for the flash message to fade out after a specified number of seconds. This is accomplished by adding a fadeout attribute:

document.dispatchEvent(new CustomEvent('mapbox.setflash', {detail: {message: "foo", fadeout: 3}}))

Lastly you can make the message go away by firing the event again with an empty string.

With a little CSS twiddling I was able to get the nice user-friendly notification I had in mind to let people know why there is no more data showing up.


I’m pretty happy with how this turned out. Now I have a nice map specific notification that not only works in this project, but is going to be easy to add to future ones too.

Using mapbox-gl and webpack together

For those who might have missed it, Mapbox has been doing some very cool work to bring the age-old slippy map into the brand new world of WebGL. The library they have released to do this is mapbox-gl.

Webpack is a module bundler that reads the imports of your Javascript files and creates a bundled version by walking the dependency graph. Part of its appeal is the fact that it can do “code splitting”; creating bundles for specific pages as well as bundles for code shared across pages (Of course there’s more to it). Pete Hunt gives a great overview of it here.
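Code splitting, for what it’s worth, falls out of giving webpack more than one entry point. A sketch of what that looks like in a webpack 1 config, with hypothetical file names (the CommonsChunkPlugin pulls shared modules into a common bundle):

```javascript
// webpack.config.js sketch: one bundle per page, plus a shared commons chunk
var webpack = require('webpack')

module.exports = {
  entry: {
    home: './home.js', // hypothetical per-page entry files
    map: './map.js'
  },
  output: { filename: '[name].bundle.js' }, // emits home.bundle.js and map.bundle.js
  plugins: [
    // modules required by both entries land in commons.js instead of both bundles
    new webpack.optimize.CommonsChunkPlugin({name: 'commons', filename: 'commons.js'})
  ]
}
```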

So the big question is: what happens when you try to use these two awesome projects together?

ERROR in ./~/mapbox-gl/js/render/shaders.js
Module not found: Error: Cannot resolve module 'fs' in /home/mike/projects/usesthis/node_modules/mapbox-gl/js/render
 @ ./~/mapbox-gl/js/render/shaders.js 3:9-22

ERROR in ./~/mapbox-gl-style-spec/reference/v8.json
Module parse failed: /home/mike/projects/usesthis/node_modules/mapbox-gl-style-spec/reference/v8.json Line 2: Unexpected token :
You may need an appropriate loader to handle this file type.
| {
|   "$version": 8,
|   "$root": {
|     "version": {
 @ ./~/mapbox-gl-style-spec/reference/latest.js 1:17-37

ERROR in ./~/mapbox-gl-style-spec/reference/v8.min.json
Module parse failed: /home/mike/projects/usesthis/node_modules/mapbox-gl-style-spec/reference/v8.min.json Line 1: Unexpected token :
You may need an appropriate loader to handle this file type.

With a bunch of flailing around and a little google-fu, you also run into other fun errors like “Uncaught TypeError: fs.readFileSync is not a function” or the dreaded “can’t read property ‘call’ of undefined” when you try to run this in your browser.

After playing around with loaders and config options before finally finding some useful github issues, I thought I would compile a simple working example for the benefit of my future self, so I don’t have to figure this out again.

The goal here is to get Mapbox’s most basic example up and running with webpack.

The basic Mapbox-gl example.

Let’s create a directory to work in:

mkdir webpack-mapboxgl && cd webpack-mapboxgl

To do this we will divide the code from the example into two basic files: app.js for the Javascript and index.html for the HTML.

First, here’s index.html. Note that we have removed all the Javascript and in its place we are including the bundle.js that will be generated by webpack:

<!DOCTYPE html>
<html>
  <head>
    <meta charset='utf-8' />
    <meta name='viewport' content='initial-scale=1,maximum-scale=1,user-scalable=no' />
    <link href='' rel='stylesheet' />
    <style>
        body { margin:0; padding:0; }
        #map { position:absolute; top:0; bottom:0; width:100%; }
    </style>
  </head>
  <body>
    <div id='map'></div>
    <script src="bundle.js"></script>
  </body>
</html>

Next, app.js:

import mapboxgl from 'mapbox-gl'

mapboxgl.accessToken = 'pk.eyJ1IjoibWlrZXdpbGxpYW1zb24iLCJhIjoibzRCYUlGSSJ9.QGvlt6Opm5futGhE5i-1kw';
var map = new mapboxgl.Map({
    container: 'map', // container id
    style: 'mapbox://styles/mapbox/streets-v8', //stylesheet location
    center: [-74.50, 40], // starting position
    zoom: 9 // starting zoom
});

No real changes, just using the new ES2015 import syntax to pull in mapboxgl.
It’s probably a good time to install webpack globally:

npm install -g webpack

This is where it gets a little hairy. Obviously mapboxgl and webpack need to be installed, as well as babel and a mess of loaders and transpiler presets. That’s life in the big city, right?

I set up npm in the directory with npm init, and then the fun begins:

npm install --save-dev webworkify-webpack transform-loader json-loader babel-loader babel-preset-es2015 babel-preset-stage-0 babel-core mapbox-gl

Next is the secret sauce that knits it all together, the webpack.config.js file:

var webpack = require('webpack')
var path = require('path')

module.exports = {
  entry: './app.js',
  output: { path: __dirname, filename: 'bundle.js' },
  resolve: {
    extensions: ['', '.js'],
    alias: {
      webworkify: 'webworkify-webpack',
      'mapbox-gl': path.resolve('./node_modules/mapbox-gl/dist/mapbox-gl.js')
    }
  },
  module: {
    loaders: [
      {
        test: /\.jsx?$/,
        loader: 'babel',
        exclude: /node_modules/,
        query: {
          presets: ['es2015', 'stage-0']
        }
      },
      {
        test: /\.json$/,
        loader: 'json-loader'
      },
      {
        test: /\.js$/,
        include: path.resolve(__dirname, 'node_modules/webworkify/index.js'),
        loader: 'worker'
      },
      {
        test: /mapbox-gl.+\.js$/,
        loader: 'transform/cacheable?brfs'
      }
    ]
  }
};

With that you should be able to run the webpack command and it will produce the bundle we referenced earlier in our HTML. Open index.html in your browser and you should have a working WebGL map.

If you want to just clone this example, I’ve put it up on Github.