6 reasons to stop using REST and start using GraphQL

Following up on the post about a Rails API-only app, let's talk about why you should not use REST in your API app.

1: too much unneeded information

Have you ever written a client application for an API? And when you did, was there a query that returned a lot more information than you actually needed?

It happens to me a lot. Last week I was writing a report using the PostmarkApp API, and I needed to list all the events from many different messages I had sent. To do that, I had to download a lot of information I did not need about the messages, including the body of each message in plain text and HTML.

And this does not happen only with PostmarkApp; almost every API out there has the same problem, depending on what the user wants to do.

2: you are not a clairvoyant

It is almost impossible to know beforehand all the great things the clients of your API will create in the future, and with REST you would need to create lots of bloated endpoints, or a lot of very specific endpoints that might never be used.

3: if you have mobile clients for your API, they probably care about their bandwidth usage

I know that most of the time, for a desktop computer, we never think about bandwidth anymore: we send users links to download huge files, create APIs that return a lot of information the user does not need, and we do not even care about the bandwidth usage of our servers, because nowadays it is really cheap.

But when you have a mobile client, the reality is not exactly the same, and if that client is not in a first-world country, it might not have a very good connection at all (yes, that is my reality 😀 )

So a mobile client usually needs an API that returns only the information needed for that screen or that logic, to avoid delays and other problems, like eating through your user's data plan…
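With GraphQL the client states exactly which fields it wants, and nothing else comes over the wire. For example, a list screen could ask only for an id and a title (the field names here match the blog schema we build in reason 6 below):

```graphql
{
  allPosts {
    id
    title
  }
}
```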

4: you will evolve and v1, v2, vx in the URL is a shitty solution

When doing REST, any change in the API is usually considered a new version, even just adding new fields…

In GraphQL you can just evolve the schema, and new API clients can use the new provided fields.

So there are fewer reasons to create shitty URLs.

5: security matters

I'm not saying that security is not possible with REST, but since in GraphQL you specify which fields you want in the result, it is also possible to allow some users to see one field but not another from the same model, in the same API call.

Facebook does that a lot in their GraphQL API: with basic permissions you can access a user's email and name; to get more information you need to ask for permission, or have your application registered and in production…

What I’m saying is that GraphQL allows for a more fine grained security implementation.
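Stripped of the GraphQL machinery, the idea looks like this in plain Ruby (a sketch only; the field names and the admin flag are assumptions, not the graphql gem's API):

```ruby
# Field-level authorization sketch: resolve a field only when the
# viewer has the needed permission, mirroring what a GraphQL resolver
# can do by consulting the request context per field.
Viewer = Struct.new(:admin)

def resolve_user_fields(user, viewer)
  fields = { "name" => user[:name] }
  # the email field exists in the schema, but only resolves for admins
  fields["email"] = user[:email] if viewer.admin
  fields
end
```

The same request thus yields different payloads for different callers, without separate endpoints.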

6: it is easy to implement

Let's stop with the easy talk and do a simple exercise.

Create a new Rails app with the command:

rails new graphqasample --api --skip-test

Now we’ll add the following line to our Gemfile

gem "graphql"

And run the commands:

bundle install
rails g graphql:install

Now we are ready to start playing with GraphQL in our API app.

To start, we'll need some Rails models. Unlike most Rails apps, the database schema alone is not enough: we'll need to tell GraphQL which fields are available/permitted for each object.

So, let's start by creating a simple “schema” for our database, with these commands:

rails g model user username:string first_name:string last_name:string birth_date:date
rails g model post user:belongs_to title:string body:text
rails g model comment post:belongs_to comment:belongs_to body:text owner:string notify_reply:boolean

And then we’ll edit the app/models/user.rb to add the posts collection:

class User < ApplicationRecord
  has_many :posts
end

and the app/models/post.rb to add the comments collection:

class Post < ApplicationRecord
  belongs_to :user
  has_many :comments
end

And now, let's expose this “blog” using GraphQL, starting with the user model. To do that, create the file app/graphql/user_type.rb with this content:

# defines a new GraphQL type
Types::UserType = GraphQL::ObjectType.define do
  # this type is named `User`
  name 'User'
  # it has the following fields
  field :id, !types.ID
  field :username, !types.String
  field :first_name, !types.String
  field :last_name, !types.String
  field :birth_date, !types.Date
  field :posts, -> { !types[Types::PostType] }
end

In a GraphQL type we define the valid fields and the references. This user type references the PostType, so we need to define it in the file app/graphql/post_type.rb:

# defines a new GraphQL type
Types::PostType = GraphQL::ObjectType.define do
  # this type is named `Post`
  name 'Post'
  # it has the following fields
  field :id, !types.ID
  field :title, !types.String
  field :body, !types.String
  field :comments, -> { !types[Types::CommentType] }
end

And this post type references the comment type, so we need to define that one too, in app/graphql/comment_type.rb:

# defines a new GraphQL type
Types::CommentType = GraphQL::ObjectType.define do
  # this type is named `Comment`
  name 'Comment'
  # it has the following fields
  field :id, !types.ID
  field :body, !types.String
  field :owner, !types.String
end

For more documentation on defining GraphQL Types, you can check the GraphQL Ruby documentation.

Now that we have all the types defined, we can enable queries for any of them, or for only one of them, but we need at least one (User or Post are good options). To do that, we need to edit the file app/graphql/query_type.rb:

class Types::QueryType < Types::BaseObject
  # Add root-level fields here.
  # They will be entry points for queries on your schema.
  field :allPosts, [Types::PostType], null: false,
    description: "All User Posts In the App"

  def all_posts
    Post.all
  end
end

With this, we can list all posts, with or without comments. You can try it with curl, using the command lines below:

curl -X POST -H "Content-Type: application/json" -d '{"query": "{ allPosts{id title comments {body owner} } }"}' http://localhost:3000/graphql
curl -X POST -H "Content-Type: application/json" -d '{"query": "{ allPosts{id title } }"}' http://localhost:3000/graphql

Of course, to test this you probably need to open the Rails console first and insert some data 😀
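For example, in the Rails console you could seed it with something like this (the values are made up; the column names come from the generators above, and this only runs inside the app, not standalone):

```ruby
# Create one user with a post, matching the models generated earlier
user = User.create!(username: "jane", first_name: "Jane",
                    last_name: "Doe", birth_date: Date.new(1990, 1, 1))
user.posts.create!(title: "Hello GraphQL", body: "My first post")
```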

Of course, like this it is pretty useless, since we cannot pass parameters to the query, but we can fix that easily with some changes in app/graphql/query_type.rb, as follows:

class Types::QueryType < Types::BaseObject
  # Add root-level fields here.
  # They will be entry points for queries on your schema.
  field :allPosts, [Types::PostType], null: false,
    description: "All User Posts In the App" do
      argument :limit, Integer, required: false, default_value: 30
      argument :offset, Integer, required: false, default_value: 0
      argument :filter, String, required: false, default_value: nil
  end

  def all_posts(limit:, offset:, filter:)
    result = Post.limit(limit).offset(offset)
    if filter
      term = "%#{filter}%"
      result = result.where("body like ? or title like ?", term, term)
    end
    result
  end
end

This way we can add and document all the parameters that are acceptable for the query, and as before, you can test it with curl:

curl -X POST -H "Content-Type: application/json" -d '{"query": "{ allPosts(limit: 20, filter: \"2\"){id title comments {body owner} } }"}' http://localhost:3000/graphql

This will list the first 20 posts that have the number 2 in the body or title columns.

Of course we can use nested parameters, and GraphQL also has support for mutating objects; one simple post is too little to explore all the possibilities, but I think it was enough to show the idea.

I'll probably write another post about nested queries and updates using GraphQL; if you think that would be useful, just leave a comment.

API only app? use rails and be happy!

Sometimes we need to create only an API: an application without a user interface, for example when someone else will later build a mobile client for that API, or even a full SPA web client. There are many reasons to build an API-only app.

And since there are that many reasons, Rails helps with this too: it has an option you can pass when creating a new application, “--api”, that will create a streamlined app to favor this kind of development.

Put like that, it seems like a really big thing, but there aren't that many differences.

Basically, the differences are:

  • There won't be an app/assets directory, and no asset pipeline preconfigured.
  • The Rails frameworks will be expanded in config/application.rb, allowing you to comment out anything you won't use.
  • There are fewer gems in the Gemfile (mostly the JavaScript gems are removed).
  • ApplicationController descends from ActionController::API instead of ActionController::Base.

The main difference in the controller is that it does not include some features by default, like:

  • Layout
  • Template rendering
  • Cookies
  • Sessions

Of course this makes the controller stack a lot slimmer and faster, suitable for an API-only app.

So, let's start with the command:

rails new myapissample --api

Then you can edit “config/application.rb” and comment out any features you will not use, for example “ActionCable”:

require_relative 'boot'

require "rails"
# Pick the frameworks you want:
require "active_model/railtie"
require "active_job/railtie"
require "active_record/railtie"
require "action_controller/railtie"
require "action_mailer/railtie"
# require "action_view/railtie"
# require "action_cable/engine"
# require "sprockets/railtie"
require "rails/test_unit/railtie"

# Require the gems listed in Gemfile, including any gems
# you've limited to :test, :development, or :production.
Bundler.require(*Rails.groups)

module Myapissample
  class Application < Rails::Application
    # Initialize configuration defaults for originally generated Rails version.
    config.load_defaults 5.1

    # Settings in config/environments/* take precedence over those specified here.
    # Application configuration should go into files in config/initializers
    # -- all .rb files in that directory are automatically loaded.

    # Only loads a smaller set of middleware suitable for API only apps.
    # Middleware like session, flash, cookies can be added back manually.
    # Skip views, helpers and assets when generating a new resource.
    config.api_only = true
  end
end

After that, you can just write your controllers as usual, access your models, …

How to use Docker to have a uniform development environment for your Rails project

Let's say you work at a company, there is more than one developer, and every now and then a new developer is hired and needs to configure the development environment.

Or maybe you work on an open-source project and you want to make life easier for anyone contributing to it.

Or you might want to deploy your application to production without worrying if the environment in the production server is different from the development environment where the application was tested, this way preventing the infamous “it works on my machine”.

These are all valid reasons to learn a little Docker. As we'll see here, Docker helps you configure your environment once and deploy your application to any environment (we'll have posts in the next few days showing how to deploy it to all the major clouds…).

So let's start by installing Docker: you can get the right Docker CE for your platform from the official website. Do not forget to also install docker-compose.

After this, you'll just create a new Rails application with a command like this (or work on an existing app you have around…):

rails new rails_docker_sample -d mysql --skip-coffee

(why I’m using MySQL? just because I’m used to 😛 )

(why I’m skipping coffee script? because I do not like it 😛 )

Now we need to create a “Dockerfile”, and I use almost the same one for all my Rails projects, with very small differences:

FROM ruby:2.5.0
RUN apt-get update -qq && apt-get install -y build-essential apt-transport-https
# Node.js
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - \
    && apt-get install -y nodejs
# yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
    && echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list \
    && apt-get update \
    && apt-get install -y yarn
# install app
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
COPY . /myapp
RUN yarn install
ENTRYPOINT ["/myapp/bin/rails", "s", "-b", "0.0.0.0"]

The main differences between projects will be the database driver library, the Ruby version, and any other specifics of your project.

What is important in this Dockerfile:

  • FROM specifies the base image we are using; I'm starting with the image that contains Ruby 2.5.0
  • RUN runs a command inside the intermediate container that is building the image
  • WORKDIR sets the working directory inside the image
  • COPY copies a file from your machine into the image
  • ENTRYPOINT specifies the command that will start your app when this image is executed as a container. The important thing here is that, to maintain compatibility with most cloud servers where we'll be running these containers later, we need to use this array variant; the array becomes the “ARGV” of the command later.

Now, let's make some changes to our app to enable it to use environment variables to configure what is where.

First, I changed the config/database.yml file so that it always gets the database address from environment variables:

# MySQL. Versions 5.1.10 and up are supported.
# Install the MySQL driver
#   gem install mysql2
# Ensure the MySQL gem is defined in your Gemfile
#   gem 'mysql2'
# And be sure to use new-style password hashing:
#   https://dev.mysql.com/doc/refman/5.7/en/password-hashing.html
default: &default
  adapter: mysql2
  encoding: utf8
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  username: <%= ENV['DATABASE_USERNAME'] %>
  password: <%= ENV['DATABASE_PASSWORD'] %>
  host: <%= ENV['DATABASE_HOST'] %>

development:
  <<: *default
  database: rails_docker_sample

# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
  <<: *default
  database: rails_docker_sample_test

# As with config/secrets.yml, you never want to store sensitive information,
# like your database password, in your source code. If your source code is
# ever seen by anyone, they now have access to your database.
# Instead, provide the password as a unix environment variable when you boot
# the app. Read http://guides.rubyonrails.org/configuring.html#configuring-a-database
# for a full rundown on how to provide these environment variables in a
# production deployment.
# On Heroku and other platform providers, you may have a full connection URL
# available as an environment variable. For example:
#   DATABASE_URL="mysql2://myuser:mypass@localhost/somedatabase"
# You can use this database configuration with:
#   production:
#     url: <%= ENV['DATABASE_URL'] %>
production:
  <<: *default
  database: rails_docker_sample

The only database with a different name is the test DB, because we do not want test trash in any other environment.

Then I changed config/cable.yml to also use environment variables to connect to Redis, making it possible to use it in production later:

production:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" } %>
  channel_prefix: rails_docker_sample_production

test:
  adapter: async

development:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" } %>
  channel_prefix: rails_docker_sample_development

Again, except for the test environment.

Now you can build your Docker image; to make it easier to reference later, you can add a tag. The command will be similar to this one:

sudo docker build -t rails_docker_sample  .

We are invoking the build command, tagging the image as “rails_docker_sample”, and using the current directory as the build context.

Ok, that is pretty, but also pretty useless by itself. To set up our development environment, we'll use docker-compose; for that, we'll create a docker-compose.yml file similar to this one, describing all the containers we need:

version: '3'
services:
  mysqlhost:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=password
    volumes:
      - ../mysqldata:/var/lib/mysql
    ports:
      - "3306:3306"
    restart: always
  redishost:
    container_name: redis
    image: redis
    restart: always
  web:
    build: .
    container_name: "myapp"
    environment:
      - DATABASE_HOST=mysqlhost
      - DATABASE_USERNAME=root
      - DATABASE_PASSWORD=password
      - REDIS_URL=redis://redishost:6379/1
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    links:
      - mysqlhost
      - redishost

and we can run it with this command (do not forget to create the ../mysqldata directory first):

sudo docker-compose up

But what exactly will that do?

It will download any needed images (like the MySQL and Redis ones).

It will build your Docker image, based on your Dockerfile.

It will start a Docker container for your app, passing the configured variables.

And there is some magic there too: the “volumes” section of each service allows mapping a local directory to a container directory. For example, the ../mysqldata directory that was created before now contains the MySQL databases; you can erase the container and still have access to your data. We can use a similar technique when deploying the app to the cloud later.

We are also mapping the project base directory to the app directory in the container, and since RAILS_ENV there is “development”, any changes we make to the files will be reflected in the running container.

The “ports” section is also interesting: it maps a container TCP/IP port to your local machine, allowing you to open http://localhost:3000 and reach your Rails app. If you do it right now, you'll notice you receive an error saying the database does not exist.

We can fix that easily; just go to another terminal window in the same project directory and type this command:

sudo docker-compose run --entrypoint "bash -c" web "bundle exec rake db:create"

We had to override the entrypoint specified in the Dockerfile because everything we pass as parameters is handed to that entrypoint. Another option is to not specify the ENTRYPOINT in the Dockerfile and instead specify a command in the docker-compose.yml.

That would also allow us to simplify this and access a “bash” shell in the container with this command:

sudo docker-compose run web bash
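If you go the command route instead, the web service in docker-compose.yml would carry the start command itself, something like this sketch (and the Dockerfile would lose its ENTRYPOINT line):

```yaml
web:
  build: .
  command: bundle exec rails s -b 0.0.0.0
```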

After this, you just need to share your project with any coworker and they can simply “sudo docker-compose up” and start working with exactly the same environment you have.

Of course this is just a quick and dirty introduction to using Docker with a Rails app, but we'll expand on it in the next few days with posts about deploying to the major cloud providers.

If you want to download the code I used to create this sample, you can get it on my GitHub page: https://github.com/urubatan/rails_docker_sample

If you have any questions about this post or suggestions about the next ones, please leave a comment and I’ll answer it ASAP.


Quick and Dirty introduction to ActionCable – the best WebSockets for Rails!

This post is a followup to, and a translation of, my presentation from “The Developers Conference Florianópolis 2018”.

What are WebSockets good for?

  • Update the screen of many clients simultaneously when the database is updated
  • Allow many users to edit the same resource at the same time
  • Notify users that something happened

Among many other things.

I'll not try to convince you that WebSockets are the best solution for these, and of course you have many options, for example:

  • Node.js
  • Websocket-rails
  • ActionCable

I'll focus here on how to easily use ActionCable, which is the default Rails implementation; it has made my life a lot easier in the last few months (I used websocket-rails before, but it has not been actively developed for a long time now…)

ActionCable basics

Besides having an awesome and simple API, ActionCable has excellent performance (according to my tests) and really good connection handling.

ActionCable is a pub/sub implementation, which makes things a lot simpler, and to organize the pub/sub it uses channels.

Each client connection subscribes to a channel on the server; each channel implementation streams from a named channel defined when the client connects, and parameters can be used to define the channel name.

Then the server can send back messages to any of the defined named channels.

Ok, writing it like that, it seems kinda complicated, but it is really simple.

For example, if you want to send a notification from Ruby to the connected clients, you'll send data to one of these named channels, with code similar to this:

ActionCable.server.broadcast 'broadcast_sample', data

where “broadcast_sample” is the name of a channel, and data is any object; for me it is usually a hash with the information I want to send back to the clients.
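For example, the payload is typically a plain hash, which ActionCable serializes to JSON before handing it to the JavaScript received callback (the keys here are made up):

```ruby
require "json"

# The hash you would pass as the second argument to
# ActionCable.server.broadcast 'broadcast_sample', data
data = { user_id: 47, message: "Teste" }

# On the wire it travels as JSON, so this is roughly what
# the JS client's received(data) callback gets:
wire = JSON.generate(data)
```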

Of course you need to define the name of the channel when the users connect, and this is done in the “ActionCable::Channel” instances, in the “subscribed” method, like in the sample below:

class MyChannel < ApplicationCable::Channel
  def subscribed
    stream_from "broadcast_sample"
    stream_from "nome#{params[:name]}"
    stream_for current_user
  end

  def unsubscribed
    # Any cleanup needed when channel is unsubscribed
  end
end

As you can see above, from that method it is possible to define a constant name for a topic/channel, use parameters sent by the user to define the name, or use the “model” variant, which is just a shortcut for creating a string name for that model.

The key is to use the “stream_from” or “stream_for” methods and use the same name later in the broadcast name.

Just to make it clearer how to send a broadcast for each of the 3 samples above, here is a sample call for each:

ActionCable.server.broadcast 'broadcast_sample', data

ActionCable.server.broadcast 'nomeRodrigo', comment: 'Teste', from_id: 47

MyChannel.broadcast_to @post, @comment

Receiving messages in JavaScript

Ok, but how do you receive these messages in JavaScript? It is almost as easy; you just need to implement the “received” method like in the sample below:

App.bcsample = App.cable.subscriptions.create("BcsampleChannel", {
    connected: function () {
        // Called when the subscription is ready for use on the server
    },

    disconnected: function () {
        // Called when the subscription has been terminated by the server
    },

    received: function (data) {
        // Called when there's incoming data on the websocket for this channel
        // e.g. display the message (the #messages element is an assumption)
        var message = $("<div/>");
        message.text(data["message"]);
        $("#messages").append(message);
    },

    speak_to_all: function (message) {
        return this.perform('speak_to_all', {user_id: window.name, message: message});
    }
});

Important points in this sample:

  • BcsampleChannel is the class name of the channel in Ruby
  • the data parameter in the received function is the data passed to the broadcast function; it should always be an object, a string does not work, I've tried it.

And how do you call Ruby code from JavaScript?

Just take a look at the last part of the sample above: the “perform” call in the “speak_to_all” function will call a method with the same name in the “BcsampleChannel” class, passing the hash as the data parameter.

Of course we need to update that class to receive this call, like in the sample below:

class BcsampleChannel < ApplicationCable::Channel
  def subscribed
    stream_from "broadcast_sample"
  end

  def unsubscribed
    # Any cleanup needed when channel is unsubscribed
  end

  def speak_to_all(data)
    ActionCable.server.broadcast 'broadcast_sample', data
  end
end

This sample will receive any data and broadcast it to all connected clients.

There is one last question: how do we pass parameters to the subscribed method? Simple, just take a quick look at the sample below:

App.privatesample = App.cable.subscriptions.create({channel: "PrivatesampleChannel", windowid: window.name}, {
  connected: function() {
    // Called when the subscription is ready for use on the server
  },

  disconnected: function() {
    // Called when the subscription has been terminated by the server
  },

  received: function(data) {
    // Called when there's incoming data on the websocket for this channel
  }
});
In the create method, instead of passing the name as a string, we pass an object; the “channel” property is required, and anything else becomes a parameter available to the channel in Ruby (via params) to use as needed.

But how about deploying?

  • You can use Redis or a database as a backend
  • If you are using Passenger and nginx, you are almost done!
  • Remember to setup the server path in the routes.rb
  • test and be happy

The first step is to edit the “config/cable.yml” file like the sample below:

production:
  adapter: redis
  url: redis://redis.example.com:6379

local: &local
  adapter: redis
  url: redis://localhost:6379

development: *local
test: *local

Then you need to add the mapping to the “config/routes.rb” file:

# Serve websocket cable requests in-process
mount ActionCable.server => '/cable'

and add a location block to your nginx configuration like in the sample below:

server {
    listen 80;
    server_name www.foo.com;
    root /path-to-your-app/public;
    passenger_enabled on;

    ### INSERT THIS!!! ###
    location /cable {
        passenger_app_group_name YOUR_APP_NAME_HERE_action_cable;
        passenger_force_max_concurrent_requests_per_process 0;
    }
}
Of course you have the option to start ActionCable as a standalone server and configure the reverse proxy yourself, but that is a subject for another post.

You can send broadcasts to it from a Sidekiq job or from the Rails console, as long as you do not forget to configure the backend as shown above.
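For instance, a broadcast from a background job is just the same call (a sketch assuming Sidekiq; the worker name is made up, and 'broadcast_sample' matches the channel above):

```ruby
# Broadcasts a message to every client subscribed to broadcast_sample,
# from a background job instead of a controller or channel.
class NotifyAllJob
  include Sidekiq::Worker

  def perform(message)
    ActionCable.server.broadcast 'broadcast_sample', message: message
  end
end

# NotifyAllJob.perform_async("deploy finished")
```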

And if you have problems or questions about using or deploying ActionCable, please leave a comment below; I'll answer as fast as possible.


Git + Lazyness = happy Rails developer

I had some problems that you probably have too, if you are a Ruby developer working in a team…

Check if you have at least one of these problems:

  • Someone commits something that does not pass the test suite
  • Someone deployed a version of the Rails application without precompiling assets
  • Similar to the above, but they forgot to bundle install or npm install

The list can go on and on…

This short post will give some tips on cool uses for the .git/hooks scripts to help solve some of these problems…

I have a small web application running, and the deploy of that application is just a git pull on the server (ok, blame me, I'm not using Docker for all my apps…)

And to prevent some of the above problems in this application, I created a .git/hooks/post-merge file with the code below:

#!/bin/sh
bundle install
bundle exec rake db:migrate
bundle exec rake assets:precompile
touch tmp/restart.txt

Just do not forget to “chmod u+x .git/hooks/post-merge”.

With this small script, every time you run “git pull” the hook will fire and do all the dirty work for you.

The problem is that sometimes you are just updating a controller and do not need to run all that, and that is fine.

Of course you can make a more complex script to run only the commands you need, but this is good enough for simple scenarios; at least in my case, I do not do that many deploys a day (most days there aren't any).
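A sketch of what that smarter script could look like: a helper that inspects the list of files the merge changed and only prints the steps that are actually needed (the paths are the Rails defaults; in the real hook you would eval each printed line):

```shell
#!/bin/sh
# Decide which post-merge steps are needed from the list of changed files.
steps_for() {
  changed="$1"
  echo "$changed" | grep -q "Gemfile"    && echo "bundle install"
  echo "$changed" | grep -q "db/migrate" && echo "bundle exec rake db:migrate"
  echo "$changed" | grep -q "app/assets" && echo "bundle exec rake assets:precompile"
  return 0
}

# In .git/hooks/post-merge you would feed it the merge diff:
# steps_for "$(git diff-tree -r --name-only --no-commit-id ORIG_HEAD HEAD)"
```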

Another problem I had was some developers on the team not running the tests before pushing changes to the central repository. For this, a pre-push hook would be perfect, but I wanted to harden things a little and used a pre-commit hook instead, so to commit any small change the developer has to run the app tests.

To do that, I created a .git/hooks/pre-commit (again, do not forget to make it executable) with this code:

STAGED_FILES=$(git diff --cached --name-only)
if [[ "$STAGED_FILES" = "" ]]; then
    exit 0
fi
echo "$STAGED_FILES" | grep -q migrations
if [[ $? == 0 ]]; then
  bundle exec rake db:migrate
fi
echo "$STAGED_FILES" | grep -q models
if [[ $? == 0 ]]; then
  TESTS="$TESTS tests/models"
fi
echo "$STAGED_FILES" | grep -q controllers
if [[ $? == 0 ]]; then
  TESTS="$TESTS tests/controllers"
fi
echo "$STAGED_FILES" | grep -q features
if [[ $? == 0 ]]; then
  CUCUMBER=1
fi
if [[ "$TESTS" != "" ]]; then
  bundle exec rake test $TESTS
  if [[ $? == 1 ]]; then
    exit 1
  fi
fi
if [[ $CUCUMBER == 1 ]]; then
  bundle exec cucumber
  if [[ $? == 1 ]]; then
    exit 1
  fi
fi
exit 0

We have some more verifications in the real file, but this is the idea: if you changed a file, we'll run the related tests before allowing you to commit.

We have some more ideas about how to make git help us, one of them being a “Heroku like” experience, though we do not really need it. The “trick” that makes it possible, and makes GitHub webhooks possible too, is the “post-receive” hook.

Since we use GitHub, we have not implemented a post-receive hook; instead we have a webhook calling a “CGI script” written in Ruby (just for fun) that fires a deploy. The script is stupidly simple, only the following:

#!/bin/env ruby
# CGI responses need a header block before the body
puts "Content-Type: text/html\n\n"
`git pull`
puts "<html></html>"

With this script (protected by authentication, of course) and the hooks mentioned before, I have GitHub firing a deploy in my development/test environment every time a pull request is merged to the master branch.

Of course we do not do anything that simple and insecure for production, but this helps our test environment a lot 😀


I hope these git/Rails tips help you improve your project; probably not with exactly the same scripts, but the ideas can be adjusted to your environment.

If you need more ideas or have questions about anything that I wrote here, please leave a comment.