Finally! Free SSL Certs!!! Let’s Encrypt!


OMFG! From the why-the-hell-did-this-take-so-long department, I bring you an exciting announcement: you may never, ever have to pay for a trusted SSL certificate for your website again! E-ver! I’d like to introduce you to Let’s Encrypt. From their website:

Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG).

Before I walk you through the process of installing a free trusted SSL certificate signed by a top-level CA on your DigitalOcean droplet, let me explain why this is such a HUGE deal.

The Secure Web is Coming

Slowly but surely, the entire internet is moving to use SSL (HTTPS) exclusively. It is clear that if you haven’t started to do this already, you should definitely begin moving all of your online content onto a server that only uses HTTPS.

Obstacles to the Secure Web

In the past, there have been two main obstacles to using HTTPS:

  1. It was complex to install
  2. It was expensive

Do a search for “how do you install an SSL certificate” and you get a dizzying list of confusing, hard-to-follow tutorials. There are a great number of variables involved in the process, such as what kind of server you’re on, who your web host is, what company originated the certificate, what kind of certificate it is, etc. While it’s certainly gotten easier in the last few years as many companies have found ways to automate the process, figuring out how and where to buy a certificate, knowing how much to pay, and what you get for that money is still extremely challenging.

At the high end, you can pay Symantec (formerly VeriSign) up to $1,500 per year for their “most secure” EV SSL certificate!!! At the low end, you can get certs for as cheap as $9/year at places like NameCheap. They confuse the process by adding warranties of varying amounts, but it all hides the fact that your site visitors really aren’t that much safer with a $1,500 cert than they are with a $9 cert. That’s not to mention the cost many web hosts charge you for obtaining a static IP address. If you don’t know what you’re doing, it is easy to get taken for hundreds of dollars a year for something that is increasingly a necessity. All of this because a small number of companies have a monopoly on the business of generating SSL certificates…until now.

Enter Let’s Encrypt!

The Internet Security Research Group (ISRG), an IRS tax-exempt 501(c)(3) organization incorporated as a public benefit corporation, was founded in 2013 and announced Let’s Encrypt in 2014. Their goals are simple:

  1. Make it easy to install an SSL certificate
  2. Make it free for anybody who wants one

Let’s Encrypt started a limited beta test back in September. You had to fill out a form and get approved to join. If accepted, they would send you an email with instructions on how to download their client software and install your certificates. It’s been a little bit buggy over the past several months, but they are on target to launch their public beta just 3 days from now, on December 3rd! Starting then, anyone should be able to fill out the form on their website, get whitelisted, and download their new, free certs instantaneously. Okay, so enough background, let’s do it!

Installing Your SSL Certificate

To start out, here are the specs of my server (here’s my tutorial on how to set up a server like this):

  • Ubuntu 14.04 LTS running on a droplet at DigitalOcean
  • PHP 7 FPM
  • Nginx
  • ufw and fail2ban firewalls

Step 1: Log In and Download the letsencrypt Command Line Client

After logging in, I just performed the following from my home directory (~):

$ git clone https://github.com/letsencrypt/letsencrypt
$ cd letsencrypt
$ ./letsencrypt-auto --server https://acme-v01.api.letsencrypt.org/directory --help

If you do this, you’ll run their letsencrypt-auto script and get some basic help information on how to use their tool. As of the time of this writing, this is still new software, so it doesn’t work exactly like they advertise on their website, but if you’re an early adopter like me, and understand how cool this is, you won’t mind that it’s a little bumpy out of the gate.

Step 2: Open Up Port 443/tcp on your Firewall and Reboot

If you followed my tutorial to set up your server, you may not have opened up port 443 (the HTTPS port) at that time if you didn’t already have a certificate ready to install. To open up the port, I ran the following commands:

$ sudo ufw allow 443/tcp

$ sudo ufw enable

$ sudo reboot

I’m not sure you actually have to reboot here, but I was having trouble getting port 443 to show up as open. To debug this, from a terminal window on my local machine I used a program called nmap (on a Mac you can install this with brew install nmap) and ran the command:

$ nmap your_server_ip

After I rebooted, it seemed to start working fine.
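If you want a more targeted check than a full scan, you can probe just the HTTPS port (your_server_ip is a placeholder for your droplet’s IP); a healthy result should include a line like this:

$ nmap -p 443 your_server_ip

PORT    STATE SERVICE
443/tcp open  https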

Step 3: Stop Firewalls and nginx and Reconfigure Your Default Site

Although letsencrypt will eventually come with a plugin for nginx that will automatically install your certs for you with the server running, at present that plugin is “very buggy,” so it isn’t even installed by default. Instead, I used the standalone method, which I’ll explain in the next step. First, however, stop your firewalls and nginx, and then open up the config file for the site where you want to install your SSL cert:

$ sudo ufw disable

$ sudo service fail2ban stop

$ sudo service nginx stop

$ sudo nano /etc/nginx/sites-available/default

Once the config file is opened, modify it so it looks like this:

server {
        listen 80   default_server;
        return      301 https://$server_name$request_uri;
}

server {
        listen                    443 ssl;
        # example.com is a placeholder -- use your own domain
        server_name               example.com www.example.com;

        ssl_certificate           /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key       /etc/letsencrypt/live/example.com/privkey.pem;
        ssl_protocols             TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers               AES256+EECDH:AES256+EDH:!aNULL;

        root /var/www/example.com;
        index index.php index.html index.htm;

        location / {
                try_files $uri $uri/ /index.php?$query_string;
        }

        location ~ \.php$ {
                try_files $uri /index.php =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                # assumes PHP-FPM is listening on 127.0.0.1:9000 (see my server setup post)
                fastcgi_pass 127.0.0.1:9000;
        }
}
Note a few things:

  1. I’ve broken what was one server block up into two server blocks.
  2. In the first block I added return 301 https://$server_name$request_uri; which will redirect ALL website traffic to HTTPS, regardless of whether people actually typed https://... into the address bar when they came there.
  3. The paths to the certificate files may be different on your server, but I’ll show you how to figure out what they are.
  4. You should use your actual domain name wherever I have written example.com.
  5. I’m setting this up to provide SSL for both example.com and www.example.com. It’s important to understand that SSL certs are tied to very specific domain names and you have to get this right or it won’t work.

Save your changes and exit.

Step 4: Install Your SSL Certificate

Okay, now type the following at the terminal (again, substitute your own domain for example.com; the --server URL below is the one that was sent in the beta invitation email):

$ cd ~/letsencrypt

$ ./letsencrypt-auto certonly -a standalone --webroot-path /var/www/example.com \
-d example.com -d www.example.com --server \
https://acme-v01.api.letsencrypt.org/directory
Cross your fingers. This didn’t work for me the first few times, which led me down a bunch of rabbit holes trying to debug the problem. Essentially, I concluded that either their service is still buggy, or I just happened to be trying to install my certificate when a lot of people were hitting their servers, or maybe both. I had to put in that last command maybe 10 times or more before it worked. I also tried some of the other variations on that command that were sent in the beta email (NOT the apache one, though) and it finally happened:

- Congratulations! Your certificate and chain have been saved at
  /etc/letsencrypt/live/example.com/fullchain.pem. Your cert
  will expire on 2016-02-29. To obtain a new version of the
  certificate in the future, simply run Let's Encrypt again.

I can’t tell you how happy I was to see that message!!! Take note here that the message tells you where on your hard drive the certificates got stored. Make sure that path is the same one you set in your config file above. If it isn’t the same, then you need to modify it.
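If you want to double-check, the live directory should contain symlinks to the four certificate files (again, substitute your domain for example.com):

$ sudo ls /etc/letsencrypt/live/example.com/
cert.pem  chain.pem  fullchain.pem  privkey.pem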

Step 5: Restart Your Firewalls and Nginx

Okay, at this point the final thing to do is just get your web server up and running again:

$ sudo service nginx start

$ sudo ufw enable

$ sudo service fail2ban start

And now head over to your website in your browser and, with any luck, you’ll see the lock in the address bar!
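If you’d rather verify from the command line, openssl can show you who issued the cert and when it expires (example.com is a placeholder for your domain):

$ echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates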

Closing Thoughts

Okay, now none of us has any excuse not to be securing ALL of our websites. We’re entering a new era. Don’t believe the myths about how SSL will slow your site down. In some cases it actually speeds it up! The call has been made: Let’s Encrypt!


Installing Laravel 5.1 at Digital Ocean with PHP 7

Get $10 credit when you create a new account at DigitalOcean!

PHP 7 is here!!! …well, almost. As of the writing of this post, we’ve reached the 7th Release Candidate (!) phase, so it shouldn’t be too much longer until a stable release is made official. With reports of between 25% and 70% speed improvements, all of us PHP devs should be very excited about this.

I hadn’t had a chance to play with PHP 7 yet, and I’m also about to launch a new project (no announcements yet!), so I figured I would take advantage of the opportunity to learn some new stuff, and maybe give back to the community from which I’ve taken so much. Happy Thanksgiving to the PHP/Laravel community!

The Specs

My plan for the new server was essentially to follow the Homestead template, leaving out anything that was unnecessary for production:

  • Ubuntu 14.04 LTS (with the unattended-upgrades package)
  • PHP 7 using fpm (with mcrypt) + Composer
  • MariaDB (I really wanted PostgreSQL, but I’ll explain why I switched later)
  • Nginx
  • Redis (for caching, queues, and other optimizations)
  • Node (with Bower, Grunt, and Gulp)
  • Git

Once all this was set up, my plan was to configure it to automatically pull updates from my Github repo whenever changes have been pushed. (I’ll describe the push-to-deploy setup in a future post.) If it all works out, I’d have a super fast server that would require very little maintenance. To provide more detail, in the process of the above, I also set up:

  • Secure access to the server
  • A basic firewall
  • The server timezone
  • A swapfile to handle potential memory problems
  • Email notifications for hack attempts
  • My domain name, including configuring Mailgun for my MX server

I followed many wonderful tutorials in the process. I’ll point to many of them here, and summarize any places that I deviated from what those other incredibly smart people did.

Setting up Ubuntu 14.04 LTS

DigitalOcean also supports CoreOS, and for a while I considered learning how to use it with Docker to set up my server. After a little research, though, I decided that this was a bit over my head at the moment, and since I was already pushing the envelope by using PHP 7, I figured I would stick with something I already understood well enough, Ubuntu. So, in order, the tutorials I followed here were:

  1. How to Create Your First DigitalOcean Droplet Virtual Server
  2. How to Connect to Your Droplet with SSH
  3. Initial Server Setup with Ubuntu 14.04
  4. Additional Recommended Steps for New Ubuntu 14.04 Servers
  5. How To Protect SSH with Fail2Ban on Ubuntu 14.04

I followed these tutorials pretty much exactly and it all “just worked.” I have a sneaking suspicion that setting up ufw and fail2ban was a little redundant, but so far I haven’t found that they interfere with each other, and extra security can’t hurt, right? In case you’re curious, the “jails” section of my final fail2ban jail.local file ended up looking like this:

[ssh]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 6

[ssh-ddos]
enabled = true
port = ssh
filter = sshd-ddos
logpath = /var/log/auth.log
maxretry = 6

[php-url-fopen]
enabled = true
port = http,https
filter = php-url-fopen
logpath = /var/log/nginx/access.log

[nginx-http-auth]
enabled = true
filter = nginx-http-auth
port = http,https
logpath = /var/log/nginx/error.log

[mysqld-auth]
enabled = true
filter = mysqld-auth
port = 3306
logpath = /var/log/mysql.log

This configuration should add some protection for SSH, Nginx, PHP, and MariaDB (MySQL) from brute force and DDoS attacks. Note that you have to manually stop and restart fail2ban after you update your jail.local file, and that it will fail to start if you don’t have the correct path to each of the log files listed above.
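In other words, after editing the jail file, something like:

$ sudo service fail2ban stop

$ sudo service fail2ban start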

Also, I found that the unattended-upgrades package for automatic updates was already installed on the DigitalOcean Ubuntu 14.04 droplet. However, it was only configured to do the bare minimum when it comes to upgrades. After my changes, my /etc/apt/apt.conf.d/50unattended-upgrades file looked like:

Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
        "${distro_id}:${distro_codename}-updates";
};
// you@example.com is a placeholder -- use your own address
Unattended-Upgrade::Mail "you@example.com";
Unattended-Upgrade::Remove-Unused-Dependencies "true";
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "02:00";

Which will update all packages, not just security updates, email me a report, get rid of old or unused packages, and automatically reboot the system, if necessary, at 2AM. In addition, my /etc/apt/apt.conf.d/10periodic file looked like:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";

Which will set the schedule to update everything daily, and remove unnecessary files once a week.
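If you don’t want to wait overnight to find out whether your configuration parses, you can do a dry run (assuming the stock unattended-upgrade binary that ships with Ubuntu 14.04):

$ sudo unattended-upgrade --dry-run --debug

Sweet! Let’s move on to PHP 7.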

Setting up PHP 7 on Ubuntu 14.04

Since there is no stable build of PHP 7 as of this moment, you have two options for installing it on Ubuntu:

  1. Build from source
    This is not a bad option. It gives you fine-grained control of the configuration and you’ll have a much better understanding of how it all fits together. BUT, it also means that you have to install all of the many dependencies yourself, and figure out the configuration script, and it is not super easy to upgrade later.
  2. Use a pre-built package
    For ease of installation and maintenance, this is the easier option, and the one I went with. Fortunately, as we’ll see, there was an easy way to do this.

I used the PHP 7 install instructions from Zend along with Bjørn Johansen’s excellent tutorial to install PHP 7 with FPM. Since we’re going to be using Nginx, you’ll want to ignore the instructions for getting it to work with Apache.

First of all, I found that Zend’s instructions for adding their repo to your apt sources didn’t work for me from the command line. Instead, I had to open up /etc/apt/sources.list using nano as follows:

$ sudo nano /etc/apt/sources.list

And then type/paste the following line in at the end of the file:

deb http://repos.zend.com/zend-server/early-access/php7/ ubuntu/

Save and exit, and then you can install the nightly build with the following:

$ sudo apt-get update && sudo apt-get install php7-nightly

From there on, I needed to follow Bjørn’s instructions for getting FPM up and running. There was one relatively small difference. The Zend install created some default configuration files, and so I had to separate the config options that Bjørn describes into two separate files. First, create /usr/local/php7/etc/php-fpm.conf and put the following in it:


[global]
; Load pool definition config files
include=/usr/local/php7/etc/php-fpm.d/*.conf

And then create /usr/local/php7/etc/php-fpm.d/www.conf and put the following in it:

[www]
user = www-data
group = www-data

listen = 127.0.0.1:9000

pm = dynamic
pm.max_children = 10
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6

access.log = /var/log/$pool.access.log
slowlog = /var/log/$pool.log.slow
request_slowlog_timeout = 30s

php_flag[display_errors] = off

Things to note:

  • Make sure you use the username of the account that will run Nginx for the user/group settings, in my case that was www-data
  • I followed Bjørn’s lead on the numbers of children/servers. I’m not actually sure what the “right” values should be here, and the ones I used are slightly higher than the defaults. In any case you can find a decent explanation in the comments in the /usr/local/php7/etc/php-fpm.d/www.conf.default file
  • I set up the access and slow log files for debugging and tuning later on
  • Since this is a production server, I set the php_flag[display_errors] to “off”. You can set any other directives here that you would normally find in php.ini.

Make sure you follow the rest of the steps in Bjørn’s tutorial VERY CAREFULLY. Whenever he gives you a link for downloading code, I recommend that you use it rather than risk mistyping it.

Finally, I symlinked all of the php executables into /usr/local/bin so that they would be available system-wide. You can do that with this command:

$ sudo ln -s /usr/local/php7/bin/* /usr/local/bin/

Then try it out! If it’s working, you should get output like mine.

$ php -v
PHP 7.0.1-dev (cli) (built: Nov 10 2015 20:10:21) ( NTS )
Copyright (c) 1997-2015 The PHP Group
Zend Engine v3.0.0, Copyright (c) 1998-2015 Zend Technologies

One more thing: the default install does NOT come with a php.ini file already installed. You should create one at /usr/local/php7/etc/php.ini. My php.ini contains only a single directive:

cgi.fix_pathinfo = 0

Which I guess I could have set in the /usr/local/php7/etc/php-fpm.d/www.conf file above. This directive will be important for getting PHP7/Nginx/Laravel to play nice with each other.
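If you want to confirm the directive was picked up, php -i can show you (a quick sanity check, assuming the CLI and FPM share the php.ini above):

$ php -i | grep fix_pathinfo
cgi.fix_pathinfo => 0 => 0

Next we’ll get PHP 7 working with Nginx.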

Installing and Configuring Nginx to work with PHP7 FPM

The first thing I did was to create the directory structure to house my website, make sure that the Nginx user would have access to it, and then install Nginx:

$ sudo mkdir -p /var/www/mysite/public

$ sudo chown -R myusername:www-data /var/www

$ sudo apt-get install nginx

The next step is to configure the site by modifying /etc/nginx/sites-available/default to look as follows:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/mysite/public;
    index index.php index.html index.htm;

    server_name your_domain_or_ip;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        # assumes PHP-FPM is listening on 127.0.0.1:9000 as configured above
        fastcgi_pass 127.0.0.1:9000;
    }
}
Note that the root directive should point to the directory we just set up. Also the server_name can either point to your domain name (if you’ve associated it with your IP address) or should be set to the IP address of your DigitalOcean droplet, which you can find from your account settings in the DigitalOcean website.
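Before restarting anything, it’s worth asking Nginx to validate the new configuration; if everything is okay you should see something like:

$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful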

After you’ve finished the configuration, it may be necessary to restart (or start) the php7-fpm and nginx processes as follows:

$ sudo service php7-fpm restart

$ sudo service nginx restart

If you have any trouble, check the log files for clues as to what’s gone wrong. Once that’s done, you can test it out by creating a new file at /var/www/mysite/public/index.php and adding the following content:

<?php phpinfo();

Then point your web browser to http://your_ip_address and with any luck you’ll see the default PHP server configuration page. Woohoo!!

Installing and Configuring MariaDB

Installing MariaDB is relatively straightforward. First run:

$ sudo apt-get install mariadb-server

Follow the prompts for setting up a root password, then you can log in using:

$ mysql -u root -p

Once logged in, you’ll want to create a new database and a user to access that. In this example, I’m going to call both the username and database “laravel”:

MariaDB [(none)]> CREATE DATABASE laravel;

MariaDB [(none)]> GRANT all ON laravel.* TO laravel@localhost IDENTIFIED BY 'AmAzing_Pa55werd';

MariaDB [(none)]> quit

That’s about it. You may want to follow DigitalOcean’s tutorial How To Secure MySQL and MariaDB Databases in a Linux VPS for more details. You should now be ready to support your Laravel 5.1 app.
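To double-check that the grants took, log back in as the new user (enter the password you chose above when prompted):

$ mysql -u laravel -p laravel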

Also, I said earlier that I originally wanted to use PostgreSQL instead of MariaDB for this project. The main reason I didn’t is that the Postgres drivers for PHP did not come installed by default with the PHP 7 package I got from Zend. It would probably not be too hard to recompile PHP and include the appropriate drivers, but in this case it was more trouble than it was worth.

Other Stuff: Node, Redis, Composer

Installing NodeJS was relatively straightforward. Just make sure you follow the instructions on the NodeJS website, and don’t use the default distribution available from Ubuntu, i.e.:

$ curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -

$ sudo apt-get install -y nodejs

$ sudo npm install -g bower grunt gulp

Also, I found that the build-essential package was already installed by the time I got to this point. Don’t know if it was on there from the start, but I didn’t need to install it.

DigitalOcean’s tutorial for installing Redis was spot on.

Installing composer required the standard two commands:

$ curl -sS https://getcomposer.org/installer | php

$ sudo mv composer.phar /usr/local/bin/composer

And if you’ve followed everything carefully up to this point, you should be ready for the final step. At this point, you may want to take a snapshot of your server image so that you have a base image for installing further Laravel droplets at DigitalOcean.

Associating Your Domain Name with Your IP Address

I got stuck a little trying to get my domain name associated with my IP address. This was primarily because I chose to use Mailgun to handle my email. Mailgun recommends that you associate a subdomain with their service rather than your main website domain.

I’ll get to that in a moment. First, to associate your domain with your DigitalOcean droplet, you should follow their tutorial How To Set Up a Host Name with DigitalOcean. When I did that, I was able to get to my main site in a browser, just fine.

The trick came when setting up DNS for getting verified on Mailgun. As I found out, I am not the only person who has had trouble with this. I tried various combinations of the settings in the forum I linked to, to no avail. Finally, it dawned on me that I might need to set up a second DNS record at DigitalOcean for this to work. This proved to be the solution.

Note that the IP address you associate with your mail subdomain is the one provided by Mailgun and NOT the IP address associated with your DigitalOcean droplet. This step confused me greatly and took me a while to figure out.
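For reference, the records Mailgun asks for on the mail subdomain look roughly like this in zone-file form (a hedged sketch only: mg.example.com is a placeholder, and the exact values, especially the DKIM key, come from your Mailgun control panel):

; sketch -- copy the real values from your Mailgun dashboard
mg.example.com.    IN  TXT  "v=spf1 include:mailgun.org ~all"
; plus a DKIM TXT record at <selector>._domainkey.mg.example.com
mg.example.com.    IN  MX   10  mxa.mailgun.org.
mg.example.com.    IN  MX   10  mxb.mailgun.org.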

Installing your Laravel 5.1 Site

So I’m assuming that you have your Laravel site in a git repo somewhere such as Github, GitLab, or Bitbucket. I think the only thing that may be slightly tricky here is making sure that the www-data user has access to all of the files and directories it needs to display your site.

So the first step is to remove any directories inside of /var/www. When we clone our project it will contain the /public directory which will be the web root that we set up in nginx above:

$ cd /var/www

$ sudo rm -fr mysite

Then clone your repo and cd into the root directory:

$ git clone <your-repo-url> mysite

$ cd mysite

The next thing you have to do is create your .env file to hold all of the production values for your environment variables. In my case it looked something like this (all of the values here are placeholders; use your own):

APP_ENV=production
APP_DEBUG=false
APP_KEY=SomeRandomString

DB_HOST=localhost
DB_DATABASE=laravel
DB_USERNAME=laravel
DB_PASSWORD=AmAzing_Pa55werd

CACHE_DRIVER=redis
SESSION_DRIVER=redis
QUEUE_DRIVER=redis
Once you’ve updated the environment variables, then you can run composer, npm, and your migrations:

$ composer install

$ npm install

$ php artisan migrate

Once everything is downloaded and installed, you can run chown to make sure the nginx user has appropriate access:

$ sudo chown -R :www-data .
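One more Laravel-specific gotcha: the storage and bootstrap/cache directories have to be writable by the www-data user or the site will throw errors. A minimal sketch, assuming a stock Laravel 5.1 directory layout:

$ sudo chgrp -R www-data storage bootstrap/cache

$ sudo chmod -R ug+rwx storage bootstrap/cache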

And voila! Your site should be up and live!!! Go check it out!

Closing Thoughts and Next Steps

Overall, I’m amazed at how smoothly this whole process went. If I knew what I was doing, I probably could have completed this entire setup in an hour or two. As it was, it only took me about half a day. Not only that, but it was cheap! DigitalOcean only charges $5/month for their lowest price servers, which is more than enough for my current needs (click here to get a $10 credit when you sign up for a new account at DigitalOcean). Mailgun is free if you send/receive fewer than 10,000 emails per month and it saves you the trouble of setting up your own SMTP server, hosting email, and all that nonsense.

In my next post, I’m going to demonstrate how to set up a push-to-deploy system like what you would find at Laravel Forge or Envoyer. With this setup, your live site will be automatically updated whenever you push changes to your git server. Between that and Ubuntu’s automatic upgrades, it will make for a live production server with very little overhead in terms of maintenance. Stay tuned!


Should You Use EAV?


The Entity-Attribute-Value (EAV) pattern can be used to flexibly add or remove properties to an object and its corresponding data model. There is some debate about if and when it is appropriate to use EAV. I’ll provide my opinions here, and also a tutorial on how to implement EAV into a Laravel 5.1 application using Cartalyst’s Attributes package.

The Basics

EAV allows you to attach properties to an object with name/value pairs. For example, let’s say that I have a User object, and I’d like to add some metadata that is not in the base model definition. Perhaps the base model doesn’t have a property for a mobile phone number. I’d like to be able to add that by providing a property name and value like mobile=555-867-5309. EAV allows you to do this by simply adding one or two tables to your database schema. In MySQL format, the table definition might look something like:

CREATE TABLE `attributes` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `user_id` int(10) unsigned NOT NULL,
  `name` varchar(255) NOT NULL,
  `value` mediumtext NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `attribute_user_id_name_index` (`user_id`,`name`)
);
As you can see, the table schema is just four columns. It associates a name/value pair with a user via the user_id. We can add as many name/value pairs to this table as we’d like. In this case I’m forcing each row to have a unique user_id/name pair so that you can’t have more than one value for the same piece of metadata. It is possible to make this more generic so that you can add metadata to any type of object using just one table, but I wanted to keep this example as simple as possible.


Some people consider EAV an anti-pattern and ask the reasonable question: why not just add the fields you need to the main table, or use a more traditional relational structure? Indeed, there are certain situations in which you might not want to use EAV. Most of the objections boil down to code complexity (i.e. query complexity) and performance. The example above is relatively straightforward, and EAV usually works okay when you restrict yourself to scalar values, but you can imagine the added complexity of writing a SQL query to retrieve and parse many different values (see the sketch below). Also, things get tricky if you try to store more complex things in the value field, like serialized arrays or objects.
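To make that concrete, here’s a sketch of the kind of SQL involved (the user id and attribute names are hypothetical):

-- fetching one attribute is easy:
SELECT `value` FROM `attributes` WHERE `user_id` = 42 AND `name` = 'mobile';

-- but pivoting several attributes into a single row gets ugly fast:
SELECT u.id,
       MAX(CASE WHEN a.name = 'mobile' THEN a.value END) AS mobile,
       MAX(CASE WHEN a.name = 'gender' THEN a.value END) AS gender
FROM users u
LEFT JOIN attributes a ON a.user_id = u.id
GROUP BY u.id;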

If you are working with thousands of objects, EAV can really slow down performance. I tried to use it once for a WordPress plugin that stored metadata for about 25,000 people. When I pulled up the screen that listed all of the people, even though the results were paged, the screen could take over a minute to load! I eventually realized that WP was not the right tool for the job and built a standalone app to accomplish my task.

When to use EAV

Haters are gonna hate, and some people will scream that you should never use EAV, but I think EAV is actually the right tool for the job in a couple of situations.

Creating Cowpaths During Development

Right now I’m building an app called Bountify, and although I know I’m going to want a user profile that provides more than just a simple name and email address, I don’t actually know exactly what fields I’m going to need/want. In this case, I plan to use EAV during the development phases so that I can add fields to the user profile flexibly. Once I’ve settled on a relatively stable set of fields, I’ll pave the cowpaths, i.e. I’ll modify the data model of the app to use a more traditional schema that performs and scales well.

Simple Stuff or You Have No Choice

Sometimes, you really have little choice. For example, WordPress uses the EAV pattern to support adding metadata to themes and plugins. If you’re developing a plugin for WP, unless your data model is very complicated, the cost of adding new tables to their database is usually more trouble than it’s worth. There are millions of WP users, and thousands of themes and plugins have been developed. Many, if not most, of them use WP’s EAV-based options table to store their data. Given that WP is perhaps the most widely installed CMS on the market, that should be enough evidence to convince you that EAV is not always bad.

Incorporating EAV into Laravel 5.1

I only found one freely available EAV package for Laravel, but since it had been downloaded only six times when I looked at it, I decided it probably wasn’t mature enough to incorporate into my project. I happen to have a subscription to the many excellent packages provided by Cartalyst, and it turns out one of those is a package for adding EAV to Composer-based projects.


At the time I write this post, Attributes has still not reached a stable release. However, its basic functionality seems to work well so far. Their installation instructions are a little outdated and don’t work with Laravel 5.1 because the --package option has been removed from the artisan migrate command. After you have downloaded the package using composer, the simplest thing to do is to just copy the contents of vendor/cartalyst/attributes/src/migrations/ into your database/migrations/ folder and then run php artisan migrate, as shown below.
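In shell terms, that simple route looks something like this, run from your project root (paths per the package layout described above):

$ cp vendor/cartalyst/attributes/src/migrations/*.php database/migrations/

$ php artisan migrate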

Of course, I’m never into simple. If you’d like, you can follow these modified instructions to use my patch that updates the package for use with Laravel 5.1. First, in the require array of your composer.json file, add:

"cartalyst/attributes": "dev-laravel"

And then add the following after the require array:

"repositories": [
        "type": "vcs",
        "url": ""

Then you can run composer update and then php artisan vendor:publish and finally php artisan migrate. This will automatically copy the migration to the correct location.


Although the Attributes documentation is relatively straightforward, they don’t address the question of where or when to create your new attributes from within a Laravel project. Since I was adding metadata to the user model, I chose to update the constructor in the User class:

use Cartalyst\Attributes\Attribute;
use Cartalyst\Attributes\EntityTrait;
use Cartalyst\Attributes\EntityInterface;
use Cartalyst\Sentinel\Users\EloquentUser; // assuming Sentinel's user model

class User extends EloquentUser implements EntityInterface
{
    use EntityTrait;

    public function __construct(array $attributes = [])
    {
        /**
         * EAV Attributes
         */
        Attribute::firstOrCreate(['slug' => 'gender']);

        // call parent constructor
        parent::__construct($attributes);
    }
}
A couple of things to note:

  1. The Attribute class is derived from the Eloquent\Model class, which means you can use all of the typical functions associated with your other models. In this case, I used the firstOrCreate() method to create the attribute: since this code runs every time the constructor is called, a plain create() would throw an error when it tried to create a duplicate.
  2. I called the parent::__construct() method after creating the attribute so that the attribute would be available even the first time I wanted to use it.

The cool thing about this package is that once it’s installed you can access and save attributes exactly as you would any other property of your model.
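For example (a sketch of the package’s behavior as I understand it, using the gender attribute created above):

// attributes read and write like ordinary Eloquent properties
$user = User::find(1);

$user->gender = 'female'; // stored via the EAV tables behind the scenes
$user->save();

echo $user->gender; // "female"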


Do They Still Want To Do Math?


A debate, heated in the way only interwebs debates can be, has been raging for the past few days about a math problem. In a nutshell, here’s what happened. The math problem was this:

5 x 3 = _____

The instructions said “use the repeated addition strategy.” The child answered “15” and showed the work as “5 + 5 + 5,” which received a -1 in red pen, accompanied by the feedback that the correct way to apply the strategy was “3 + 3 + 3 + 3 + 3.” Participants in the debate have gone back and forth about a number of important issues.

I’m not being facetious or tongue-in-cheek when I refer to these as important issues. I think that they are. These are exactly the kinds of discussions we should be having with each other, with teachers, and with students. The very act of having these discussions helps us think through the issues and clarify what is important to us.

However, in my opinion, the only question of real importance is this:

After all is said and done, will this kid still want to do math?

People learn at different rates. The only good predictor of how much someone will learn is the amount of time they spend engaged in learning, frequently referred to as “time on task.” People won’t spend time learning to do something unless:

  • They love doing it
  • Doing it is instrumental in helping them do something else they really want
  • Someone forces them to

Our educational system spends almost no time trying to help kids fall in love with the content of their classes, nor does it help them see how the content can really help them improve their lives. We basically resort to the carrot and stick approach and force them to study stuff at gradepoint, despite the fact that we know this kills their motivation.

Ultimately, none of us knows what went on, or what continues to go on, between the student and teacher in this story. There’s probably a lot more to their interaction than any of us can judge from just seeing the image accompanying this post.

For any of you involved with a learner out there, I leave you with what I think the recipe for good learning is:

  1. Make it interesting, if not downright fun
  2. Make it relevant to the life they’re living right now
  3. Create safe spaces to explore, take risks, and learn it in their own time

Using OAuth for Managing User Accounts


Recently I was building a web app in which I wanted to allow people to use their accounts from a variety of social media sites to register for mine. I found a plugin that made this really easy (Sentinel Social for Laravel, if you’re curious), but I quickly found that I ran into some issues, namely:

  1. Not every social media site that allows OAuth integration sends back the same information
  2. Users are likely to have used different credentials (email addresses, names, usernames) to create their various accounts

This created an interesting problem for me, the developer, namely:

How can I reliably match up a “new” registration with an existing user account when appropriate?

Assessing the Landscape

The first thing I did was to create a table with a summary of what fields are returned by the various API calls:

OAuth Service Provider Fields (✓ = yes, ✗ = no, ± = multiple values, ° = if saved in profile)

[The table itself didn’t survive; it summarized which of the fields uid, nickname, name, firstName, lastName, email, location, description, imageUrl, urls, and gender each of the seven providers (Facebook, Github, Google, Instagram, LinkedIn, Microsoft, and Twitter) returns.]

[Note: If there’s an ✗ in a cell, I’m pretty sure that the info is never provided. However, it’s very likely that some of the ✗’s should actually be °’s, since it’s probable that those values are not required in the profiles for those accounts. Please send corrections if you have them!]

What can we do with these data? Since uid is unique to each service, it doesn’t offer us much help. The location, description, imageUrl, urls, and gender fields are also of mostly negligible use in terms of matching up identities. I ruled out using any of these fields.

The nickname field is typically what is used as a username, and is potentially useful in matching up an existing user account with the 3rd party account (Github, Instagram, or Twitter). In my own case, however, I found that the nicknames I had chosen for my various accounts were not the same. Darn. We won’t entirely give up on it, though.

The name field is provided, or potentially provided, by all of the systems. The full name, however, is notoriously hard to parse. It would be easy if everyone had just two names and they were separated by a space, e.g. “Morgan Benton.” Unfortunately, though, you’ll get plenty of Morgan C. Benton’s,  Mary Jo Jackson’s and Bill Van Dyke’s–i.e. names with two spaces and no easy way to tell if the middle word is a middle initial, part of the first name, or part of the last name. Furthermore, there’s a trend recently, especially on Facebook, for people not to use their last names in order to avoid searches that might reveal embarrassing stuff about them to employers and other authority figures. This makes the name field less useful. We’ll put that one in our second tier list, as well.

The firstName and lastName fields always show up as a pair. Facebook claims these values are the “real” names of the user, although I’m not sure how they verify this. When available, these fields seem to be a reasonable way to try to search for existing users on your system, or to match up accounts from different 3rd party accounts.

The email field is the closest to being our ideal match field. Frustratingly, Instagram and Twitter stubbornly refuse to provide it. Also, in my own case I had used different email addresses to create accounts in different places. While email clearly offers the best hope of an easy match, it won’t be a universal solution.

Experimenting on Myself

So, I wiped my user database clean before running the following experiment. I clicked on the “Register with XXX” links for all seven of the above services on my site. After that, I had no less than five separate user accounts. The data returned by Google, Github and Microsoft matched each other because I happen to have used my Gmail address to sign up for all of those accounts. None of the rest of the data resulted in accounts that matched up with each other. Yikes!

I’m perhaps unusual in that, as a web developer, I’m constantly creating new identities for myself online, so I have at least four different email addresses that I use regularly to interact with different groups of people. It’s clear though, that services like Twitter and Instagram, which provide neither email address, nor discrete values for firstName and lastName are going to require some form of manual matching, or at least some very clever regular expressions.

First Pass at a Solution

The first thing I did was to group the various OAuth providers according to the strategies I would use to match them. [Note: this code is in PHP, but could easily be ported to other languages.]

$matched = false;
switch ($provider) {
    case 'facebook':
    case 'google':
    case 'linkedin':
    case 'microsoft':
        // match on email, first_name, last_name, name
        break;
    case 'github':
        // match on email, nickname, name
        break;
    case 'twitter':
    case 'instagram':
        // match on nickname, name
        break;
}
if (!$matched) {
    // either assume no match, or get the user involved
}
As it turns out, though, when I actually started working on the code, the solution was much simpler than this. Here’s the rough algorithm:

  1. Get the user information returned from the OAuth provider
  2. Extract the non-null fields: email, nickname, firstName, lastName, and name
  3. In order, look for user accounts that match the following conditions:
    1. emails match OAuth info
    2. nicknames match OAuth info
    3. firstName and lastName match OAuth info
  4. If any of the above matched, return that user account, otherwise
  5. Get a list of all user accounts, for each one:
    1. Construct a “display name” property by concatenating the first and last names with a space in between
    2. Do a textual similarity analysis between the name field from the OAuth provider and the “display name”
    3. If the similarity is above 85%, consider it a match and return that user account

Here’s what it looks like in PHP using Laravel’s Eloquent ORM syntax:

public static function findMatch($atts)
{
    // convert the $atts into individual variables
    extract($atts);

    // try to match on the various properties
    if (isset($nickname)) {
        if ($user = self::where('username', $nickname)->first()) {
            if (!empty($user)) {
                return $user;
            }
        }
    }
    if (isset($firstName, $lastName)) {
        if ($user = self::where('first_name', $firstName)->where('last_name', $lastName)->first()) {
            if (!empty($user)) {
                return $user;
            }
        }
    }
    if (isset($name)) {
        $users = self::all();
        foreach ($users as $u) {
            // similar_text() reports a percentage from 0 to 100 via $pct
            similar_text($u->display_name, $name, $pct);
            if ($pct >= 85) {
                return $u;
            }
        }
    }

    return false;
}

As it turns out Sentinel Social already checked for email matching for me so I could leave it out of my function. Also, the 85% similarity threshold for name matching is pretty arbitrary at this point. I have only tried it with one name (my own) and when you check “Morgan Benton” against “Morgan C. Benton” the similarity is about 89.6%.
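You can verify that number yourself; similar_text() reports the similarity percentage through its third argument:

similar_text('Morgan Benton', 'Morgan C. Benton', $pct);
echo $pct; // ≈ 89.66

(Note that the percentage comes back on a 0-100 scale, which is why the threshold in the code above is 85 rather than 0.85.)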

The good news: I was able to match credentials against all seven of the OAuth providers listed above!

Obvious Problems & Dangers

I was pretty excited to have gotten this working, when one of my students pointed out an obvious problem: this approach will be susceptible to false positives, i.e. it is highly likely that it will match people with similar names who are NOT, in fact, the same person. Worse than that, it could be used maliciously to get access to someone’s account, i.e. by intentionally using someone else’s name to set up a social media account, and then using that account to sign up on my site, gaining access to the account of the person whose name was stolen.

Another obvious problem is that this matching won’t work if people haven’t correctly updated their names in their various online accounts. For example, it is not required to provide your name when setting up a Microsoft Live account, so if that isn’t done, it will be impossible to match using names.

Second Pass at a Solution

In my next try, I’m going to figure out how to overcome these last problems. I’m pretty sure that at the end of it all, I will have to allow for some sort of interaction and/or verification by the user. All in all, though, I think it will be a small price to pay for the convenience of using OAuth.


Should College Be Free?

Photo credit: The People’s Record on Tumblr

A friend of mine posted a link on Facebook to an article entitled This is what would happen if college tuition became free in America. The main thrust of the article is that because “free” tuition actually increased debt and decreased participation for lower-income students in some other countries that have tried it, the same thing is likely to happen in the U.S. As such, Bernie Sanders’ goal to make college tuition and debt free for low-income Americans would be misguided and unlikely to work. I agree with the article that the danger is real, but both the article and Sanders’ plan leave out some important contextual information.

Why go to college?

The short, easy answer is: to get a job. Ask anyone–current students, their parents, employers, politicians, high school guidance counselors–and this answer will certainly be at the top of the list. However, if you ask college graduates the question, “What percentage of the stuff you learned in college do you actually use on your job?” then the answer is going to be, “Almost none.” This raises the question: if going to college is so damn important, why don’t employers make more use of the skills people learn there? In fact, most of what people use on the job is taught to them once they get there, either formally or informally.

Some people try to make the argument that it is for the more abstract, “soft” skills that people acquire in college–things like teamwork, communication skills, and critical thinking. However, if you believe the people who produce the Collegiate Learning Assessment and the people that write about it, college is actually not very effective at imparting these skills.

Furthermore, for many professions, if you really want the knowledge, you can probably learn all of the stuff on your own for free (or close to it) just by using the web. MIT, for example, puts all of their course syllabi online, so you can just look up the course materials and follow them on your own. Then there are things like MOOCs and places like Udemy and Coursera. Ironically, even having acquired these skills on your own, which arguably demonstrates you have the initiative, resourcefulness, tenacity, and critical thinking ability to be a great employee, without that piece of paper, i.e. the diploma, most companies won’t even look at your application.

I think the real reason that college receives so much emphasis has nothing to do with skills. The real reason is:

A college diploma is a convenient way for employers to save time and money, i.e. to allow HR departments to filter out a huge number of potential job applicants without actually looking at their applications.

This is the only explanation that makes coherent sense. We have been duped into providing a huge benefit to companies and shouldering the expense on our own, and in a way that makes us indebted to the economic system at the beginning of our lives. This will not stop until companies stop requiring college degrees as a condition for employment. Trend-setters like Google have already begun to do this. I believe it is only a matter of time before other companies follow suit.

In this context, Bernie Sanders’ plan to make employers pay for college makes perfect sense. Employers are the primary beneficiary of the current system. It is immoral to strap our young people with so much debt, so early in their lives, with the vague promise of the potential to get a job. Taxing the wealthy to pay for college is both appropriate and fair. As with so many such plans, however, the devil is in the details of implementation.

$$$: The Root of the Problem

However well-intentioned Sanders’ plan may be, until we get money out of politics, I believe its implementation is fraught with the peril of serving the same monied interests the current system now serves. This video explains my view.

I agree with Lawrence Lessig, the MAYDAY PAC, and the folks at Represent.Us, that until we can pass legislation to get money out of politics, this issue, along with pretty much every other issue we care about, is doomed to a feeble, lackluster response, if not outright failure. That’s why I urge everyone who cares about anything to support these causes. We won’t fix any problem until we fix this one.

For what it’s worth, I think Bernie Sanders is the closest candidate in the current field to understanding this and actually doing something about it. Oddly enough, the other candidate who is least likely to be beholden to special interests is Donald Trump. Yes, he’s a megalomaniacal lunatic, but at least he doesn’t have to pander to anyone. To be clear, I am NOT advocating anyone vote for Trump.

So, should people go to college?

Maybe. I work as a college professor. Some might find my cynical view of the current status of a college education hypocritical. However, if anyone has read any of the posts on my other blog, you’ll see that I believe the current system is an impediment to learning, self-awareness, and our chances of solving big problems in the world. The current pragmatics of college are a very real and potent obstacle to learning. I’ll save the details of who I believe should go to college, and why, for another day.

In the meantime, I agree that making college available to everyone in a way that won’t leave them in debt is a laudable and worthwhile goal. Just because there have been problems with implementation in other places doesn’t mean we shouldn’t try it here. Don’t be dissuaded by the nay-sayers–they probably work for special interests anyway, even if they don’t know it.


A new Morphatic

Hi folks,

This blog has been dormant for about six years. I’ve begun to feel the need to have a place to voice opinions that don’t fit into other places that I normally write, e.g. The Burning Mind Project. I can’t promise that this time will be any different than the times before when I’ve felt the need to refresh my blog, but who knows, maybe it will be…