Do you know that idiot who used all the server space with the production.log file? Don’t be that guy!

It is surprisingly easy to configure logrotate to rotate your Rails logs; don’t wait until your server is out of space.

I think everyone knows a guy who simply put a Rails app online, maybe even used Capistrano to automate the deployment, and one year later the app simply stopped working due to lack of disk space.

Then you go there and find out that production.log is the biggest file on the server.

That happens because Rails keeps appending data to the file, but you probably do not need that much information: the logs from last year are not that helpful, and could at least be stored in a different file.

The simplest solution I’ve found for that is to use the same approach every other Linux application uses to take care of its logs: logrotate.

The first step is to edit the file /etc/logrotate.conf (you can use Vim, nano or any other editor of your preference) and add a block like this:

/path/to/app/log/*.log {
  daily
  missingok
  rotate 7
  compress
  delaycompress
  notifempty
  copytruncate
}

What exactly is happening? Let’s take a look at each directive.

  • daily – Rotate the log files each day. You can also use weekly or monthly here instead.
  • missingok – If the log file doesn’t exist, ignore it
  • rotate 7 – Only keep 7 days of logs around
  • compress – GZip the log file on rotation
  • delaycompress – Rotate the file one day, then compress it the next day so we can be sure that it won’t interfere with the Rails server
  • notifempty – Don’t rotate the file if the logs are empty
  • copytruncate – Copies the log file and then empties it in place. This makes sure the log file Rails is writing to always exists, so you won’t get problems because the file handle never actually changes. If you don’t use this, you would need to restart your Rails application after each rotation.

To run logrotate manually, to test your configuration, you can use the command: sudo /usr/sbin/logrotate -f /etc/logrotate.conf

To test the delaycompress, you’ll need to run it a second time.

As you can see, it is really easy to avoid the embarrassment of a full disk in production: just use logrotate and be happy 😀

Background jobs on Rails – why do you need to learn all the background job APIs?

Rails has a new library called ActiveJob. The idea of this library is to make it easier to create background jobs without worrying about which queueing library to use or the specific API of each one, and even to let you swap in a better one in the future without changing your application.

ActiveJob keeps one simple API, and it already provides an in-memory implementation that you can use in test and development environments, but do not forget to select a real backend for your production environment.

ActiveJob is so cool that, out of the box, it allows you to do things like this:

MyMailer.prepare_email(param1, param2, param3).deliver_later

And the email will be delivered through the job queue; this alone would already be a huge performance improvement for a lot of shitty web applications around.

Of course that is not all: you can create your own jobs with this simple command:

rails g job MySuperComplexLogic

That will create the file app/jobs/my_super_complex_logic_job.rb, named according to the name you passed to the generator.

The file looks like any other Ruby class, because it is a simple Ruby class, as you can see below:

class MySuperComplexLogicJob < ApplicationJob
  queue_as :default
 
  def perform(*args)
    # Do something later
  end
end

You simply implement the perform method, and when you want to invoke it, you call:

MySuperComplexLogicJob.perform_later

And if you think the API looks really similar to Sidekiq, it is not by accident 😛

But the API goes further: you can also set options before calling the “perform_later” method, for example:

MySuperComplexLogicJob.set(queue: 'other').perform_later
MySuperComplexLogicJob.set(wait: 5.minutes).perform_later(some, arguments)
MySuperComplexLogicJob.set(wait_until: Time.now.tomorrow).perform_later

With this you can delay the execution, set the exact time when it will run, or even select the queue where the job will be executed, assuming the backend supports multiple queues.

After this, you just need to select which background job implementation you’ll really use in production. Out of the box, ActiveJob ships adapters for backends such as:

  • Backburner
  • Delayed Job
  • Que
  • queue_classic
  • Resque
  • Sidekiq
  • Sneakers
  • Sucker Punch

I’m probably going with Sidekiq, mostly because I’m already used to it, but if you do not like any of these, you can always write an adapter for your favorite backend.
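
Selecting the backend is then a one-liner. For example, assuming the sidekiq gem is already in your Gemfile, this minimal sketch in config/application.rb switches ActiveJob over to it:

# config/application.rb
config.active_job.queue_adapter = :sidekiq

After that, perform_later enqueues through Sidekiq instead of the in-memory queue, without any change to your job classes.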

If you have any questions about how to use ActiveJob or why to use it, please leave a comment below, or contact me by email (sobrecodigo@urubatan.com.br).

6 reasons to stop using REST and start using GraphQL

Following up on the post about a Rails API-only app, let’s talk about why you should not use REST in your API app.

1: too much unneeded information

Have you ever written a client application for an API? And when you did, was there a query that returned a lot more information than you needed?

It happens to me a lot. Last week I was writing a report using the PostmarkApp API, and I needed to list all the events from a lot of different messages I had sent; to do that, I had to download a lot of information I didn’t need about the messages, including the body of each message in plain text and HTML.

And this does not happen only with PostmarkApp; almost every API out there has the same problem, depending on what the user wants to do.

2: you are not a clairvoyant

It is almost impossible to know beforehand all the great things the clients of your API will create in the future, and with REST you would need to create lots of bloated methods, or a lot of very specific methods that might never be used.

3: if you have mobile clients for your API, they probably care about their bandwidth usage

I know that most of the time, for a desktop computer, we never think about bandwidth anymore: we send users links to download huge files, create APIs that return a lot of information the user does not need, and we do not even care about the bandwidth usage of our servers, because nowadays it is really cheap.

But when you have a mobile client the reality is not exactly the same, and if that client is not in a first-world country, they might not have a very good connection at all (yes, that is my reality 😀 )

So a mobile client usually needs an API that returns only the information needed for that screen or that logic, to avoid delays and other problems, like eating your user’s entire data plan…

4: you will evolve and v1, v2, vx in the URL is a shitty solution

When doing REST, any change in the API, even just adding new fields, is usually considered a new version.

In GraphQL you can just evolve the schema, and new API clients can use the newly provided fields.

So there are fewer reasons to create shitty URLs.
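
For example, here is a minimal sketch (using graphql-ruby’s class-based syntax, with hypothetical field names) of evolving a type in place instead of minting a v2 URL:

# old clients keep using `name`; new clients see the deprecation
# and switch to `fullName`, all under the same endpoint
field :name, String, null: true, deprecation_reason: "Use fullName instead"
field :full_name, String, null: true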

5: security matters

I’m not saying here that security is not possible with REST, but since in GraphQL you can specify which fields you want in the result, it is also possible to allow some users to see one field and not another from the same model, in the same API call.

Facebook does that a lot in their GraphQL API: with basic permissions you can access users’ email and name; to get more information you need to ask for permission, or have your application registered and in production…

What I’m saying is that GraphQL allows for a more fine grained security implementation.

6: it is easy to implement

Let’s stop with the easy talk and do a simple exercise.

Create a new Rails app with the command:

rails new graphqasample --api --skip-test

Now we’ll add the following line to our Gemfile:

gem "graphql"

And run the commands:

bundle install
rails g graphql:install

Now we are ready to start playing with GraphQL in our API app.

To start, we’ll need some Rails models. Unlike most Rails apps, the database schema alone is not enough: we’ll need to tell GraphQL what fields are available/permitted for each object.

So, let’s start by creating a simple “schema” for our database, with these commands:

rails g model user username:string first_name:string last_name:string birth_date:date
rails g model post user:belongs_to title:string body:text
rails g model comment post:belongs_to comment:belongs_to body:text owner:string notify_reply:boolean

And then we’ll edit app/models/user.rb to add the posts collection:

class User < ApplicationRecord
  has_many :posts
end

and app/models/post.rb to add the comments collection:

class Post < ApplicationRecord
  belongs_to :user
  has_many :comments
end
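
One detail worth fixing while we are here: the generator above also created a self-referencing comment association (for replies), and since belongs_to is required by default in Rails 5, every comment would need a parent comment. So we also edit app/models/comment.rb to make that reference optional:

class Comment < ApplicationRecord
  belongs_to :post
  # the self-reference is only filled for replies, so it must be optional
  belongs_to :comment, optional: true
end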

And now, let’s expose this “blog” using GraphQL, starting with the user model. To do that, create the file app/graphql/types/user_type.rb, using graphql-ruby’s class-based syntax (the same one the generator uses for the query type), with this content:

# defines a new GraphQL type, named `User` in the schema
module Types
  class UserType < Types::BaseObject
    # it has the following fields
    field :id, ID, null: false
    field :username, String, null: false
    field :first_name, String, null: false
    field :last_name, String, null: false
    # ISO8601Date ships with recent graphql-ruby versions; use String on older ones
    field :birth_date, GraphQL::Types::ISO8601Date, null: false
    field :posts, [Types::PostType], null: false
  end
end

In a GraphQL type we define the valid fields and references. This user type references the PostType, so we need to define that one in the file app/graphql/types/post_type.rb:

# defines a new GraphQL type, named `Post` in the schema
module Types
  class PostType < Types::BaseObject
    # it has the following fields
    field :id, ID, null: false
    field :title, String, null: false
    field :body, String, null: false
    field :comments, [Types::CommentType], null: false
  end
end

And this post type references the comment type, so we need to define it in app/graphql/types/comment_type.rb:

# defines a new GraphQL type, named `Comment` in the schema
module Types
  class CommentType < Types::BaseObject
    # it has the following fields
    field :id, ID, null: false
    field :body, String, null: false
    field :owner, String, null: false
  end
end

For more documentation on defining GraphQL Types, you can check the GraphQL Ruby documentation.

Now that we have all the types defined, we can enable queries for all of them or just one of them, but we need at least one entry point (the User or Post types are good options). To do that, we need to edit the file app/graphql/types/query_type.rb:

class Types::QueryType < Types::BaseObject
  # Add root-level fields here.
  # They will be entry points for queries on your schema.

  # `all_posts` is exposed as `allPosts` in the schema, since
  # graphql-ruby camelizes field names by default
  field :all_posts, [Types::PostType], null: false,
    description: "All User Posts In the App"

  def all_posts
    Post.all
  end
end

With this we can list all posts, with or without comments. You can try it with curl, using the command lines below:

curl -X POST -H "Content-Type: application/json" -d '{"query": "{ allPosts{id title comments {body owner} } }"}' http://localhost:3000/graphql
 
curl -X POST -H "Content-Type: application/json" -d '{"query": "{ allPosts{id title } }"}' http://localhost:3000/graphql

Of course, to test this you probably need to open the Rails console first and insert some data 😀
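
Something like this should do; a hypothetical minimal data set you can paste into the console:

# paste into `rails console` -- names and values are just examples
user = User.create!(username: "jdoe", first_name: "John", last_name: "Doe", birth_date: Date.new(1990, 1, 2))
post = user.posts.create!(title: "Post number 2", body: "Testing GraphQL")
post.comments.create!(body: "First!", owner: "a reader", notify_reply: false)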

Of course, like this it is pretty useless, since we cannot pass parameters to the query. But we can fix that easily with a few changes in app/graphql/types/query_type.rb, as follows:

class Types::QueryType < Types::BaseObject
  # Add root-level fields here.
  # They will be entry points for queries on your schema.

  field :all_posts, [Types::PostType], null: false,
    description: "All User Posts In the App" do
      argument :limit, Integer, required: false, default_value: 30
      argument :offset, Integer, required: false, default_value: 0
      argument :filter, String, required: false, default_value: nil
    end

  def all_posts(limit:, offset:, filter:)
    result = Post.limit(limit).offset(offset)
    if filter
      term = "%#{filter}%"
      result = result.where("body like ? or title like ?", term, term)
    end
    result
  end
end

This way we can add and document all the parameters that are acceptable for the query, and as before, you can test it with curl:

curl -X POST -H "Content-Type: application/json" -d '{"query": "{ allPosts(limit: 20, filter: \"2\"){id title comments {body owner} } }"}' http://localhost:3000/graphql

This will list the first 20 posts that have the number 2 in the body or title columns.

Of course we can use nested parameters, and GraphQL also has support for editing objects. One simple post is too little to explore all the possibilities, but I think it was enough to show the idea.

I’ll probably write another post about nested queries and updates using GraphQL; if you think that would be useful, just leave a comment.

API only app? Use Rails and be happy!

Sometimes we need to create only an API, an application without a user interface: for example, if someone else will later build a mobile client for that API, or even a full SPA web client. There are many reasons to build an API-only app.

And since there are that many reasons, Rails helps with that too: it has an option you can pass when creating a new application, --api, that will create a streamlined app to favor this kind of development.

Talking like that, it seems like a really big thing, but there aren’t that many differences.

Basically, the differences are:

  • There won’t be an app/assets directory, and no asset pipeline preconfigured.
  • The Rails frameworks will be listed individually in config/application.rb, allowing you to comment out anything you’ll not use
  • There are fewer gems in the Gemfile (mostly the JavaScript gems are removed)
  • The ApplicationController descends from ActionController::API instead of ActionController::Base

The main difference in the controller is that it does not include some features by default, like:

  • Layout
  • Template rendering
  • cookies
  • sessions

Of course this will make the controller stack a lot slimmer, and faster, suitable for an API only App.
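
And if you later need one of those features back, you can re-add just that piece. A minimal sketch (the middleware lines go in config/application.rb, the include in the controller that needs it):

# config/application.rb -- re-enable cookie and session middleware
config.middleware.use ActionDispatch::Cookies
config.middleware.use ActionDispatch::Session::CookieStore

# app/controllers/application_controller.rb
class ApplicationController < ActionController::API
  include ActionController::Cookies # brings the `cookies` helper back
end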

So, let’s start with the command:

rails new myapissample --api

Then you can edit config/application.rb and maybe comment out some features that you’ll not use, for example ActionCable:

require_relative 'boot'
require "rails"
# Pick the frameworks you want:
require "active_model/railtie"
require "active_job/railtie"
require "active_record/railtie"
require "action_controller/railtie"
require "action_mailer/railtie"
#require "action_view/railtie"
#require "action_cable/engine"
# require "sprockets/railtie"
require "rails/test_unit/railtie"
 
# Require the gems listed in Gemfile, including any gems
# you've limited to :test, :development, or :production.
Bundler.require(*Rails.groups)
 
module Myapissample
  class Application < Rails::Application
    # Initialize configuration defaults for originally generated Rails version.
    config.load_defaults 5.1

    # Settings in config/environments/* take precedence over those specified here.
    # Application configuration should go into files in config/initializers
    # -- all .rb files in that directory are automatically loaded.

    # Only loads a smaller set of middleware suitable for API only apps.
    # Middleware like session, flash, cookies can be added back manually.
    # Skip views, helpers and assets when generating a new resource.
    config.api_only = true
  end
end

After that you can just write your controllers as usual, access your models, …
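
For example, a controller in an API-only app is just a regular controller rendering JSON instead of views; a minimal hypothetical sketch:

# app/controllers/posts_controller.rb -- hypothetical example
class PostsController < ApplicationController
  def index
    # no view template involved, the collection is serialized to JSON
    render json: Post.all
  end

  def show
    render json: Post.find(params[:id])
  end
end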

How to use docker to have a uniform development environment for your rails project

Let’s say you work at a company with more than one developer, and every now and then another developer is hired and needs to configure the development environment.

Or maybe you work on an open source project and you want to make life easier for anyone contributing to it.

Or you might want to deploy your application to production without worrying if the environment in the production server is different from the development environment where the application was tested, this way preventing the infamous “it works on my machine”.

These are all valid reasons to learn a little Docker. As we’ll see here, Docker will help you configure your environment once and deploy your application to any environment (we’ll have posts in the next few days showing how to deploy it to all major clouds…).

So let’s start by installing Docker. You can get the right Docker CE for your platform on the official website. Do not forget to also install docker-compose.

After this, you’ll just create a new Rails application with a command like this (or work on an existing app you have around…):

 rails new rails_docker_sample -d mysql --skip-coffee

(Why am I using MySQL? Just because I’m used to it 😛 )

(Why am I skipping CoffeeScript? Because I do not like it 😛 )

Now we need to create a “Dockerfile”, and I use almost the same one for all my Rails projects, with very small differences.

FROM ruby:2.5.0
 
RUN apt-get update -qq && apt-get install -y build-essential  apt-transport-https
 
# Node.js
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - \
    && apt-get install -y nodejs
 
# yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -\
    && echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list \
    && apt-get update \
    && apt-get install -y yarn
 
 
#install app
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
COPY . /myapp
RUN yarn install
ENTRYPOINT ["/myapp/bin/rails", "s", "-b", "0.0.0.0"]

The main differences between projects will be the database driver library, the Ruby version, and any other specifics of your project.

What is important in this Dockerfile:

  • FROM specifies the base image we are using; I’m starting with the image that contains Ruby 2.5.0
  • RUN runs a command inside the container that is building the image
  • WORKDIR sets the working directory inside the image
  • COPY copies a file from your machine to the image
  • ENTRYPOINT specifies the command that will start your app when this image is executed as a container. The important thing here is that, to maintain compatibility with most cloud servers where we’ll be running these containers later, we need to use the array variant; the array becomes the “ARGV” of the command later.

Now, let’s make some changes to our app so it can use environment variables to discover what is where.

First, I changed the config/database.yml file so that it always gets the database address and credentials from environment variables.

# MySQL. Versions 5.1.10 and up are supported.
#
# Install the MySQL driver
#   gem install mysql2
#
# Ensure the MySQL gem is defined in your Gemfile
#   gem 'mysql2'
#
# And be sure to use new-style password hashing:
#   https://dev.mysql.com/doc/refman/5.7/en/password-hashing.html
#
default: &default
  adapter: mysql2
  encoding: utf8
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  username: <%= ENV['DATABASE_USERNAME'] %>
  password: <%= ENV['DATABASE_PASSWORD'] %>
  host: <%= ENV['DATABASE_HOST'] %>

development:
  <<: *default
  database: rails_docker_sample
 
# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
  <<: *default
  database: rails_docker_sample_test
 
# As with config/secrets.yml, you never want to store sensitive information,
# like your database password, in your source code. If your source code is
# ever seen by anyone, they now have access to your database.
#
# Instead, provide the password as a unix environment variable when you boot
# the app. Read http://guides.rubyonrails.org/configuring.html#configuring-a-database
# for a full rundown on how to provide these environment variables in a
# production deployment.
#
# On Heroku and other platform providers, you may have a full connection URL
# available as an environment variable. For example:
#
#   DATABASE_URL="mysql2://myuser:mypass@localhost/somedatabase"
#
# You can use this database configuration with:
#
#   production:
#     url: <%= ENV['DATABASE_URL'] %>
#
production:
  <<: *default
  database: rails_docker_sample

The only database with a different name is the test DB, because we do not want trash in any other environment.

Then I changed config/cable.yml to also use environment variables to connect to Redis, making it possible to use it in production later.

development:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" } %>
  channel_prefix: rails_docker_sample_development

test:
  adapter: async

production:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" } %>
  channel_prefix: rails_docker_sample_production

Again, the exception is the test environment.

Now you can build your Docker image. To make it easier to reference later you can add a tag; the command will be similar to this one:

sudo docker build -t rails_docker_sample  .

We are invoking the build command, tagging the image with “rails_docker_sample”, and using the current directory as the source for the build.

Ok, that is pretty, but also pretty useless by itself. To set up our development environment we’ll use docker-compose; to do that, we’ll create a docker-compose.yml file similar to this one, describing all the images we need:

version: '3'
services:
  mysqlhost:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=password
    volumes:
      - ../mysqldata:/var/lib/mysql
    ports:
      - "3306:3306"
    restart: always
  redishost:
    container_name: redis
    image: redis
    restart: always
  web:
    build: .
    container_name: "myapp"
    image: rails_docker_sample
    environment:
      - DATABASE_HOST=mysqlhost
      - DATABASE_USERNAME=root
      - DATABASE_PASSWORD=password
      - REDIS_URL=redis://redishost:6379/1
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - mysqlhost
      - redishost

And we can run it with this command (do not forget to create the ../mysqldata directory first):

sudo docker-compose up

But what exactly will that do?

It will download any needed images (like the MySQL and Redis ones).

It will build your Docker image, based on your Dockerfile.

It will start a Docker container for your app, passing the configured variables.

And there is some magic there too: the “volumes” section for each service allows mapping a local directory to a container directory. For example, ../mysqldata, which was created before, now contains the MySQL databases; you can erase the container and still have access to your data. We can use a similar technique when deploying the app to the cloud later.

We are also mapping the project base directory to the app directory in the container, and since RAILS_ENV there is “development”, any changes we make to the files are reflected in the running container.

The “ports” section is also interesting: it maps a container TCP/IP port to your local machine, allowing you to open http://localhost:3000 and reach your Rails app. If you do it right now, you’ll notice you get an error saying the database does not exist.

We can fix that easily: just go to another terminal window in the same project directory and type this command:

sudo docker-compose run --entrypoint "bash -c" web "bundle exec rake db:create"

We had to override the entrypoint specified in the Dockerfile because everything we pass as parameters is passed to that entrypoint. Another option is to not specify the ENTRYPOINT in the Dockerfile, and specify a command in the docker-compose.yml instead.

That would allow us to simplify this and get a “bash” shell in the container with this command:

sudo docker-compose run web bash

So after this, you just need to share your project with any coworker, and they can just run “sudo docker-compose up” and start working with exactly the same environment you have.

Of course this is just a quick and dirty introduction to using Docker with a Rails app, but we’ll expand it with some posts in the next few days about how to use what we learned here to deploy to any of the major cloud providers.

If you want to download the code I used to create this sample, you can get it on my GitHub page: https://github.com/urubatan/rails_docker_sample

If you have any questions about this post or suggestions for the next ones, please leave a comment and I’ll answer ASAP.

 

Quick and Dirty introduction to ActionCable – the best WebSockets for Rails!

This post is a followup and a translation of my presentation from “The Developers Conference Florianopolis 2018”.

What are WebSockets good for?

  • Update the screen of many clients simultaneously when the database is updated
  • Allow many users to edit the same resource at the same time
  • Notify users that something happened

Among many other things.

I’ll not try to convince you that websockets are the best solution for these, and of course you have many options, for example:

  • Node.js
  • Websocket-rails
  • ActionCable

I’ll focus here on how to easily use ActionCable, which is the default Rails implementation; it made my life a lot easier in the last few months (I used websocket-rails before, but it hasn’t been actively developed for a long time now…).

ActionCable basics

Besides having an awesome and simple API, ActionCable has excellent performance (according to my tests) and really good connection handling.

ActionCable is a pub/sub implementation, and that makes things a lot simpler; to organize the pub/sub it uses channels.

Each client connection subscribes to a channel on the server; each channel implementation streams from one or more named channels, defined when the client connects, and you can use parameters to define the channel names.

Then the server can send messages back to any of the defined named channels.

Ok, writing it like that, it seems kinda complicated, but it is really simple.

For example, if you want to send a notification from Ruby to the connected clients, you’ll send data to one of these named channels, with code similar to this:

ActionCable.server.broadcast 'broadcast_sample', data

where “broadcast_sample” is the name of a channel, and data is any object; for me, usually a hash with the information I want to send back to the clients.

Of course you need to define the name of the channel when the user connects. This is done in the “ActionCable::Channel” subclasses, in the “subscribed” method, like in the sample below:

class MyChannel < ApplicationCable::Channel
  def subscribed
    stream_from "broadcast_sample"
    stream_from "nome#{params[:name]}"
    stream_for current_user
  end
  def unsubscribed
    # Any cleanup needed when channel is unsubscribed
  end
end

As you can see above, from that method it is possible to use a constant name for a channel, use parameters sent by the client to build the name, or use the “model” variant (stream_for), which is just a shortcut that creates a string name from that model.

The key is to use the “stream_from” or “stream_for” methods, and use the same name later when broadcasting.

Just to make it clearer how to send a broadcast matching each of the 3 samples above, I’ll show below a sample for each:

ActionCable.server.broadcast 'broadcast_sample', data

ActionCable.server.broadcast 'nomeRodrigo', comment: 'Teste', from_id: 47

MyChannel.broadcast_to @user, @comment

Receiving messages in JavaScript

Ok, but how do you receive these messages in JavaScript? It is almost as easy: you just need to implement the “received” method, like in the sample below:

App.bcsample = App.cable.subscriptions.create("BcsampleChannel", {
    connected: function () {
        // Called when the subscription is ready for use on the server
    },

    disconnected: function () {
        // Called when the subscription has been terminated by the server
    },

    received: function (data) {
        // Called when there's incoming data on the websocket for this channel
        var message = $("<div/>");
        message.text(data.message);
        $('.message-list').append(message);
    },

    speak_to_all: function (message) {
        return this.perform('speak_to_all', {user_id: window.name, message: message});
    }
});

Important points in this sample:

  • BcsampleChannel is the class name of the channel in Ruby
  • the data parameter in the received function is the data passed to the broadcast call; it should always be an object (a plain string does not work, I’ve tried it)

And how do you call Ruby code from JavaScript?

Just take a look at the last part of the sample above: the “speak_to_all” function uses the “perform” method, which calls the method with the same name in the “BcsampleChannel” class, passing the hash as the data parameter.

Of course we need to update that class to receive this call, like in the sample below:

class BcsampleChannel < ApplicationCable::Channel
  def subscribed
    stream_from "broadcast_sample"
  end

  def unsubscribed
    # Any cleanup needed when channel is unsubscribed
  end

  def speak_to_all(data)
    ActionCable.server.broadcast 'broadcast_sample', data
  end
end

This sample will receive any data and broadcast it to all connected clients.

There is one last question: how do we pass parameters to the subscribed method? Simple, just take a quick look at the sample below:

App.privatesample = App.cable.subscriptions.create({channel:"PrivatesampleChannel", windowid: window.name}, {
  connected: function() {
    // Called when the subscription is ready for use on the server
  },

  disconnected: function() {
    // Called when the subscription has been terminated by the server
  },

  received: function(data) {
    // Called when there's incoming data on the websocket for this channel
  },
});

In the create method, instead of passing the channel name as a string, we pass an object; the “channel” property is required, and anything else will be available as a parameter to the channel in Ruby, to use as needed.
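
On the Ruby side, those extra properties show up in params. A minimal sketch of what the matching channel could look like (the windowid name comes from the example above, it is not a requirement):

class PrivatesampleChannel < ApplicationCable::Channel
  def subscribed
    # builds a private stream name from the parameter sent by the client
    stream_from "private_#{params[:windowid]}"
  end
end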

But how about deploying?

  • You can use Redis or a database as a backend
  • If you are using passenger and nginx you are almost done!
  • Remember to set up the server path in routes.rb
  • test and be happy

The first step is to edit the “config/cable.yml” file, like the sample below:

production:
  adapter: redis
  url: redis://redis.example.com:6379

local: &local
  adapter: redis
  url: redis://localhost:6379

development: *local
test: *local

Then you need to add the mapping to the “config/routes.rb” file:

# Serve websocket cable requests in-process
mount ActionCable.server => '/cable'

and just add a location config to your nginx configuration, like in the sample below:

server {
    listen 80;
    server_name www.foo.com;
    root /path-to-your-app/public;
    passenger_enabled on;

    ### INSERT THIS!!! ###
    location /cable {
        passenger_app_group_name YOUR_APP_NAME_HERE_action_cable;
        passenger_force_max_concurrent_requests_per_process 0;
    }
}

Of course you have the option to start the cable server as a standalone server and configure a reverse proxy, but that is a subject for another post.

You can send broadcasts to it from a Sidekiq job or from the Rails console, as long as you do not forget to configure the backend as shown above.
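
As a minimal sketch (assuming Sidekiq is already set up in the app), a job that pushes a message to the same channel used in the samples above could look like this:

class NotifyClientsJob
  include Sidekiq::Worker

  def perform(message)
    # goes through the Redis backend configured in config/cable.yml
    ActionCable.server.broadcast 'broadcast_sample', message: message
  end
end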

And if you have problems or questions about using or deploying ActionCable, please leave a comment below; I’ll answer as fast as possible.

 

3 common problems of rails application deployments (or any platform really, these problems happen to everyone)

Rails is a really cool framework to work with, but it is not fail-proof, and it will not prevent you from doing stupid things. That said, even with the best tools available, putting new software in production, or doing a significant upgrade to software that is already in production, is always a high-adrenaline operation.
I can bet you’ve already hit at least one of these problems:
  1. QA and Production have a different OS version, and software you tested thoroughly will not install in production
  2. Production database has a lot more data than your test database, and that is causing performance problems
  3. QA and Production, for financial or any other reason, use a different number of machines for different services
We’ll talk about each of these problems and about some ways of identifying the side effects, fixing them or adding a workaround for them.

QA and Production have a different OS version, and software you tested thoroughly will not install in production

Once upon a time there was a system in QA: a major upgrade to a system that was already in production. As such, many libraries were upgraded, Rails went from 4.x to 5.x, and many other upgrades were made. Everything was working fine; engineers tested the system, select users tested the system, the company CEO tested the system, there was no chance of having problems during the deploy to production.
Except that everyone forgot to check whether the QA server was using the same Linux version as the production servers. This caused lots of different problems, starting with Sidekiq not being able to use the Redis version available on the Linux installed on the production servers.
To prevent this problem, simply verify the operating system version in all environments. It is better to use the same version, at least on the QA and production servers; the only exception to that rule is if you are planning to upgrade the production server, in which case it is better to use the QA server to test the upgrade first.
As a workaround, the incompatible software can be compiled from source; installing a compatible version from source is usually enough. Never copy a binary from one server to another, because that can cause lots of unexpected problems due to library differences.

Production database has a lot more data than your test database, and that is causing performance problems

This problem is really hard to identify in QA, and it usually happens in systems that have some kind of reporting interface, or sometimes in the rendering of an edit interface.
I’ve seen this problem, for example, in a system’s user editor: in the user list screen, which had no server-side pagination, and in a user profile editor.
The user list had problems because QA had a much smaller number of users (around 100 users in QA versus 60k users in production); this difference made the user listing freeze the screen, since no browser could handle the workload of adding 60k users to the DOM at once.
The user profile editor had a similar problem, because the properties being edited were loaded from the database, and some users in production had significantly more properties than anything tested in QA.
The only solution for this problem is to test with data as close to production as possible.
As a workaround, you’ll need to identify what is causing the slowness of the application: screen rendering or database time.
For screen rendering, the easiest solution is to use screen pagination and similar techniques.
For database slowness, changing and optimizing queries is usually the only solution. Rails helps a little by printing the query plan for slow queries, but it is even better to use a service like AppOptics, with an application plugin, to help identify the slower paths in the application code.

QA and Production, for financial or any other reason, use a different number of machines for different services

You’ll never need the same scalability in the test environment as in production, but sometimes (at least it happened to me) in QA you have all the services of the application on the same machine, while in production those services run on multiple machines for scalability and performance.
This can cause deploy problems when you add a feature and for some reason reference one of these services as being on the same machine: the QA environment will not show any problem and everything will work as expected, but when you deploy your application to production, strange things can happen.
If you are very lucky, the problem will be simple and you’ll have an “Invalid URL”, “Connection Refused” or something like that.
If you are unlucky like me, you can have an operation that usually takes less than a second suddenly taking 5 minutes, due to a routing problem caused by a request being made to an IPv6 address with no application listening on it, plus some “Execution Expired” messages in the log file from a completely different service.
Of course this could have been prevented with good practices, always using host names and the correct configuration in the respective environment file. But the ideal way to prevent it is this: if you’ll run a service split across multiple machines in production, try to use at least one machine per service in QA. If you use 10 machines for the same service in production to scale it, it would probably not be economically viable to use the same number in QA, but try to use at least one per service: one for the web server, one for the WebSockets server, one for the database, one for the Sidekiq queues, and so on.

WebPack on Rails! – the easiest way to use the new JavaScript syntax in your rails apps with the newest frameworks

I had some Rails projects that needed a better UI or a different feature in the UI, and there was the perfect JavaScript library for it; the problem was that it needed “require.js”, and it is not really easy to integrate require.js into the asset pipeline.

The good news is that there is a gem, webpacker, that will do all the work for us…

And since Rails 5.1 you can just:

rails new myapp --webpack

But let’s assume you have an existing app. The changes are a little bigger, but we can use both the old asset pipeline and the new webpacker.

Let’s start by adding the webpacker gem to the Gemfile:

gem 'webpacker'

Then just run:

bundle install
rails webpacker:install

After this, you have a new file, app/javascript/packs/application.js, where you can use:

var mylib = require('myjslibrary');

You’ll be able to require there any JavaScript you create in the app/javascript directory (instead of app/assets/javascripts), and any library you add to the application using the yarn executable.

To add a library requirement, use:

yarn add myjslibraryname

And do not forget to run on your deploy server:

yarn install

After committing the yarn.lock file, of course; that file will make sure you have the same library versions on all the machines your project runs on.

And last, but not least, do not forget to add the script tag that loads that file to your layout, using the code:

<%= javascript_pack_tag 'application' %>

And of course you do not need to remove the old javascript_include_tag, allowing you to keep using both the asset pipeline version and the new webpacker version.

In this new file you can use the new module syntax. And of course that is not all: you can add CSS to the app/javascript directory and insert it in the layout with <%= stylesheet_pack_tag 'application' %>, and the gem has shortcuts to set up all the new and fancy JavaScript frameworks, for example:

rails webpacker:install:angular          # Install everything needed for Angular
rails webpacker:install:coffee           # Install everything needed for Coffee
rails webpacker:install:elm              # Install everything needed for Elm
rails webpacker:install:erb              # Install everything needed for Erb
rails webpacker:install:react            # Install everything needed for React
rails webpacker:install:stimulus         # Install everything needed for Stimulus
rails webpacker:install:typescript       # Install everything needed for Typescript
rails webpacker:install:vue              # Install everything needed for Vue

Any of these shortcuts will install the files required to use the specified library in your existing Rails app.

This is it for now, it is a good start I think.

Please comment with any questions you have and I’ll answer as fast as I can!

Git deploy – how to implement git deploy in a project

Lately I’m becoming a lazy developer, and this reflects in my work.

I tend to choose the easiest solution that will work for the project, and sometimes a simple project is still in its early stages and it does not pay to configure a Capistrano deploy or anything fancy, so I’m just using git to do the deployment, and it almost feels like Heroku to me.

And the setup is pretty simple; it might help your projects too.

To setup that, we’ll use git hooks, and a bare git repository.

I’ll use a simplified version of my scripts in this post, to create a simple step by step.

On the server, create a directory for the bare git repo and initialize the repository:

mkdir myproj.git
cd myproj.git
git init --bare
cd ..
git clone myproj.git

After that, we’ll set up the post-receive hook in the bare repository. To do that, create a file called post-receive in the myproj.git/hooks directory with this content:

#!/bin/bash
/bin/bash --login <<_EOF_
export GIT_DIR=/home/urubatan/myproj/.git
rvm use 2.4.0
cd /home/urubatan/myproj
git pull
npm install
bundle install
RAILS_ENV=production bundle exec rake db:migrate
bundle exec rake assets:precompile
touch tmp/restart.txt
_EOF_

Since we want this hook to execute every time we push something to that repository, do not forget to make the script executable:

chmod 755 myproj.git/hooks/post-receive

Now, back on your machine, just create your rails project as usual:

rails new myproj_client

add the bare repository as a “deploy” remote:

git remote add deploy user@server:~/myproj.git

and when you are done, push your changes to the server:

git add .
git commit -m "sample commit for the blog"
git push deploy master

Of course, you still need to configure the server, using for example nginx + passenger, or puma, or anything else, but that is a subject for another post.

Please add any questions in the comments of this post; I’ll answer everything as soon as possible.

Git + Laziness = happy Rails developer

I had some problems that you probably have too, if you are a Ruby developer working in a team…

Check if you have at least one of these problems:

  • Someone commits something that does not pass the test suite
  • Someone has deployed a version of the rails application without precompiling assets
  • Similar to the above, but they forgot to bundle install or npm install

The list can go on and on…

This short post will just give some tips on cool uses for the .git/hooks scripts to help solve some of these problems…

I have a small web application running, and the deploy of that application is just a git pull on the server (ok, blame me, I’m not using docker for all my apps…)

And to prevent some of the above problems in this application, I created a .git/hooks/post-merge file with the code below:

#!/bin/bash
bundle install
bundle exec rake db:migrate
bundle exec rake assets:precompile
touch tmp/restart.txt

Just do not forget to “chmod u+x .git/hooks/post-merge”.

With this small script, every time you run “git pull” the hook will fire and do all the dirty work for you.

The problem is that sometimes you are just updating a controller and do not need to run all that, and that is fine.

Of course you can write a more complex script that runs only the commands you need, but this is good enough for simple scenarios; at least in my case, I do not do that many deploys a day (most days there aren’t any).

Another problem I had was some developers on the team not running the tests before pushing changes to the central repository. For this, a pre-push hook would be just perfect, but I wanted to harden things a little and used a pre-commit hook, so to commit any small change the developer has to run the app tests.

To do that I created a .git/hooks/pre-commit (again, do not forget to make it executable) with this code:

#!/bin/bash
STAGED_FILES=$(git diff --cached --name-only)
if [[ "$STAGED_FILES" = "" ]]; then
    exit 0
fi
# grep the list of staged file names (not the file contents)
if echo "$STAGED_FILES" | grep -q db/migrate; then
  bundle exec rake db:migrate
fi
TESTS=""
CUCUMBER=0
if echo "$STAGED_FILES" | grep -q models; then
  TESTS="test/models"
  CUCUMBER=1
fi
if echo "$STAGED_FILES" | grep -q controllers; then
  TESTS="$TESTS test/controllers"
  CUCUMBER=1
fi
if echo "$STAGED_FILES" | grep -q features; then
  CUCUMBER=1
fi
if [[ "$TESTS" != "" ]]; then
  bundle exec rails test $TESTS
  if [[ $? -ne 0 ]]; then
    exit 1
  fi
fi
if [[ $CUCUMBER == 1 ]]; then
  bundle exec cucumber
  if [[ $? -ne 0 ]]; then
    exit 1
  fi
fi
exit 0

We have some more verifications in the real file, but this is the idea: if you changed a file, we’ll run the related tests before allowing you to commit.

We have more ideas about how to make git help us. One of them is building a “Heroku-like” experience, which we do not really need yet; but the “trick” that makes it possible, and that powers GitHub webhooks too, is the “post-receive” hook.

Since we use GitHub, we have not implemented a post-receive hook ourselves; instead we have a webhook calling a “CGI script” written in Ruby (just for fun) that fires a deploy. The script is stupidly simple, only the following:

#!/usr/bin/env ruby
# a CGI response needs the header block before the body
puts "Content-Type: text/html"
puts
Dir.chdir('applicationdir')
`git pull`
puts "<html></html>"

With this script (protected by authentication, of course) and the hooks we mentioned before, I have GitHub firing a deploy in my development/test environment every time a pull request is merged to the master branch.

Of course we do not do anything that simple and insecure for production, but this helps our test environment a lot 😀

 

I hope these git/rails tips help you improve your projects; probably not with exactly the same scripts, but the ideas can be adjusted to your environment.

If you need more ideas or have questions about anything that I wrote here, please leave a comment.