trial and stderr

Brandon Istenes

(Failed) Conditional Bundling with Lasso

 •  Filed under markojs

My project's shared npm package includes both my database models and some code that I'd like a client, bundled with Lasso, to be able to access. So I figured I could make the models only conditionally required, right?

const browser =
  typeof window !== 'undefined' ||  // an actual browser
  process.browser ||                // set by some bundlers
  process.env.IN_BROWSER === '1'    // manual override

module.exports = {
  config: require('./config'),
  // no models in the browser (or so I hoped)
  models: browser ? null : require('./models'),
}

Running marko-starter build, I see that browser == true, but the Lasso bundling still fails on code in models, indicating that the module gets required regardless. In hindsight that makes sense: the bundler statically follows every require() call it finds, and a runtime conditional doesn't change what it finds. Bummer.

The right way to solve this, of course, would be to use the browser field spec, which Lasso ostensibly respects, but that doesn't seem to be working, and my requests for help have fallen on deaf ears. The Marko community is mostly very helpful people, but unfortunately it's a community of like four people who work for eBay.
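
For the record, the browser field attempt looks something like this in the shared package's package.json (names and paths here are illustrative, not my actual files):

{
  "name": "my-shared-package",
  "main": "index.js",
  "browser": {
    "./models/index.js": false
  }
}

Mapping a path to false is supposed to make the bundler substitute an empty module for it in browser builds.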

So I suppose I'll now maintain two common packages, one browser-compatible, one not.

Import without SSR in Marko.js

 •  Filed under markojs

I was looking for a nice way to do a "dynamic import" in Marko.js, like the following Next.js code:

import dynamic from 'next/dynamic'
const L = dynamic(() => import('leaflet'), { ssr: false })

This seems pretty good:

var L
class {
  onMount() {
    // onMount only runs in the browser, so SSR never touches this require
    L = require('leaflet')
    // ... set up the map with L here
  }
}

Another Node.js Configuration Pattern

 •  Filed under nodejs

I'm becoming a fan of Yos Riady's blog. He's got a nice Node.js configuration pattern, which introduced me to the glory of nconf. Unfortunately, that pattern doesn't quite have the flexibility I seek, and it leaves more up to nconf's convoluted resolution system than I'd like. So here's mine, with the help of Ramda, my favorite utility belt.

const nconf = require('nconf')
const R = require('ramda')

const defaults = {
  'node_env': 'development',
  'mongo': {
    'host': 'mongo',
    'collection': 'mything'
  }
}

const testDefaults = {
  'mongo': { 'db': 'mything-test' }
}

const localhostDefaults = {
  'mongo': { 'host': 'localhost' }
}

function Config() {

  // load command-line args and env vars; lowerCase gets NODE_ENV in as node_env
  nconf.argv().env({ lowerCase: true })

  // deep-merge a list of objects, rightmost values winning
  const mergeAllDeepRight = R.reduce(R.mergeDeepRight, {})
  const computedDefaults = mergeAllDeepRight([
    defaults,
    this.env.isTest ? testDefaults : {},
    nconf.get('localhost_services') === '1' ? localhostDefaults : {}
  ])

  nconf.defaults(computedDefaults)
}

Config.prototype.get = function(key) {
  return nconf.get(key)
}

// for convenience, because `config.env.isTest` is real nice
const nodeEnv = process.env.NODE_ENV
Config.prototype.env = {
  'isProd':  nodeEnv === 'production',
  'isTest': nodeEnv === 'testing',
  'isDev': nodeEnv === 'development'
}

module.exports = new Config()

This makes the rules about how defaults are loaded very explicit and very flexible.
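
A quick sketch of what using it looks like, assuming the module above is saved as config.js (key paths follow nconf's colon-delimited convention):

const config = require('./config')

config.get('mongo:host')  // 'mongo', or 'localhost' when LOCALHOST_SERVICES=1
config.get('mongo')       // the whole merged mongo object
config.env.isDev          // true when NODE_ENV=development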

Don't define volumes in your public Dockerfiles

 •  Filed under docker

I tried to use the sebp/lighttpd container for something before realizing that it declares a volume in its Dockerfile, which makes it impossible to use without volumes. Volumes introduce portability issues, so they should only be used when actually needed.

Don't define volumes in your Dockerfiles. Don't do it. That's a client decision.
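
Concretely, the anti-pattern is a line like this in the published image's Dockerfile (the path here is illustrative):

VOLUME /etc/lighttpd

A client who actually wants a volume there can say so at run time:

docker run -v lighttpd-conf:/etc/lighttpd bistenes/lighttpd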

See bistenes/lighttpd for my volume-free fork.

The real pain came when I found out that docker-compose persists volumes between builds. Apparently that's not a bug, it's a feature! It nevertheless caused a great deal of wondering why on earth my containers had volumes mounted despite their not being defined anywhere at all.
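
If you get bitten by this, the persisted volumes can at least be cleared by hand:

docker-compose down -v   # remove the project's containers along with their volumes
docker volume prune      # remove any remaining unused volumes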

Firewalling Docker with iptables

 •  Filed under docker

I thought I'd written down the solution before, but it turned out I'd just asked about it on StackOverflow and then "solved" it by starting a new server and failing to tell Docker not to wipe my iptables rules. The problem was that I could either

  1. run Docker with "iptables": false, keeping my own iptables rules but leaving those containers unable to access the internet, or
  2. let Docker wreck my iptables rules, allowing the whole big nasty internet access to all of my containers with exposed ports.

Apparently there's an easy way to address this now, using the DOCKER-USER chain.

So I set iptables=true and append the following to my iptables configuration:

iptables -A DOCKER-USER -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A DOCKER-USER -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A DOCKER-USER -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT
iptables -A DOCKER-USER -i eth0 -j DROP

which says "accept established, accept input HTTP(S), drop everything else."

Containers can connect out, but can't be connected to, except on ports 80 and 443. 👍
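
One caveat the above doesn't cover: rules appended with iptables -A live only in memory and vanish on reboot. On Debian-family systems the iptables-persistent package will save and restore them:

sudo apt-get install iptables-persistent
sudo netfilter-persistent save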

Hacked through a Docker hole in iptables

 •  Filed under docker

Blood and thunder, I've been hacked!

> use Warning
switched to db Warning
> show collections
Readme
> db.Readme.find()
{ "_id" : ObjectId("59d52f735e716205267adea9"), "BitCoin" : "1Jqw2tHBkUAGY32YzettJiDAwe8A9mUzok", "eMail" : "cru3lty@safe-mail.net", "Exchange" : "https://localbitcoins.com", "Solution" : "Your DataBase is downloaded and backed up on our secured servers. To recover your lost data: Send 0.2 BTC to our BitCoin Address and Contact us by eMail with your MongoDB server IP Address and a Proof of Payment. Any eMail without your MongoDB server IP Address and a Proof of Payment together will be ignored. You are welcome!" }
>

Dastards. Idiosyncratic capitalization, to boot. Fortunately I keep backups (cron gsutil /data gs://my-backups, roughly; expanded below) and there was actually nothing in this database. But what the heck happened to my iptables rules?
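
First, that backup shorthand: it's a crontab entry along these lines, with gsutil rsync doing the actual copying (the schedule and paths are illustrative):

0 3 * * * gsutil -m rsync -r /data gs://my-backups

As for the iptables rules: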

Looks like Docker has been starting without the iptables=false flag.

I thought the solution was to echo '{ "iptables": false }' | sudo tee /etc/docker/daemon.json and restart the daemon (sudo service docker restart) to tell Docker not to mess with iptables rules, but then Docker containers can't access the internet.

The true path is above, using the DOCKER-USER chain.

ADDENDUM: I'm going to try to prevent this from happening in the future using Uptime Robot, a free service that will let me know if my site goes down or ports unexpectedly become open.