How to use Docker to have a uniform development environment for your Rails project

Let's say you work at a company with more than one developer, and every so often a new developer is hired and needs to configure the development environment.

Or maybe you work on an open source project and you want to make life easier for anyone contributing to it.

Or you might want to deploy your application to production without worrying whether the production server's environment differs from the development environment where the application was tested, preventing the infamous "it works on my machine".

These are all valid reasons to learn a little Docker. As we'll see here, Docker will help you configure your environment once and deploy your application to any environment (we'll have posts in the next few days showing how to deploy it to all major clouds…).

So let's start by installing Docker: you can get the right Docker CE for your platform on the official website. Do not forget to also install docker-compose.

After this, just create a new Rails application with a command like this (or work on an existing app you have around…):

rails new rails_docker_sample -d mysql --skip-coffee

(Why am I using MySQL? Just because I'm used to it 😛)

(Why am I skipping CoffeeScript? Because I do not like it 😛)

Now we need to create a "Dockerfile", and I use almost the same one for all my Rails projects, with very small differences.

FROM ruby:2.5.0

RUN apt-get update -qq && apt-get install -y build-essential apt-transport-https

# Node.js
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - \
    && apt-get install -y nodejs

# yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
    && echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list \
    && apt-get update \
    && apt-get install -y yarn


# Install the app
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
COPY . /myapp
RUN yarn install
ENTRYPOINT ["/myapp/bin/rails", "s", "-b", "0.0.0.0"]

The main differences between projects will be the database driver library, the Ruby version, and any other specifics of your project.

What is important in this Dockerfile:

  • FROM specifies the base image we are using; I'm starting with the image that contains Ruby 2.5.0
  • RUN runs a command inside the intermediate container that is building the image
  • WORKDIR sets the work directory inside the image
  • COPY copies one file from your machine into the image
  • ENTRYPOINT specifies the command that will start your app when this image is executed as a container. The important thing here is that, to maintain compatibility with most cloud servers where we'll be running these containers later, we need to use the array variant; the array will become the "ARGV" of that command later.

Now, let's make some changes to our app to enable it to use environment variables to configure where everything is.

First, I changed the config/database.yml file so that it always gets the database connection info from environment variables.

# MySQL. Versions 5.1.10 and up are supported.
#
# Install the MySQL driver
#   gem install mysql2
#
# Ensure the MySQL gem is defined in your Gemfile
#   gem 'mysql2'
#
# And be sure to use new-style password hashing:
#   https://dev.mysql.com/doc/refman/5.7/en/password-hashing.html
#
default: &default
  adapter: mysql2
  encoding: utf8
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  username: <%= ENV['DATABASE_USERNAME'] %>
  password: <%= ENV['DATABASE_PASSWORD'] %>
  host: <%= ENV['DATABASE_HOST'] %>

development:
  <<: *default
  database: rails_docker_sample

# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
  <<: *default
  database: rails_docker_sample_test

# As with config/secrets.yml, you never want to store sensitive information,
# like your database password, in your source code. If your source code is
# ever seen by anyone, they now have access to your database.
#
# Instead, provide the password as a unix environment variable when you boot
# the app. Read http://guides.rubyonrails.org/configuring.html#configuring-a-database
# for a full rundown on how to provide these environment variables in a
# production deployment.
#
# On Heroku and other platform providers, you may have a full connection URL
# available as an environment variable. For example:
#
#   DATABASE_URL="mysql2://myuser:mypass@localhost/somedatabase"
#
# You can use this database configuration with:
#
#   production:
#     url: <%= ENV['DATABASE_URL'] %>
#
production:
  <<: *default
  database: rails_docker_sample

The only database with a different name is the test DB, because we do not want test garbage in any other environment.

Then I changed config/cable.yml to also use environment variables to connect to Redis, making it possible to use it in production later.

development:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" } %>
  channel_prefix: rails_docker_sample_development

test:
  adapter: async

production:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" } %>
  channel_prefix: rails_docker_sample_production

Again, the test environment is the exception.
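
Both files rely on the same trick: ENV.fetch with a block returns the environment variable when it is set and falls back to the block's value otherwise. A quick plain-Ruby check of the behavior (runnable in irb):

ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" }
# => whatever docker-compose set REDIS_URL to inside the container,
# => "redis://localhost:6379/1" when the variable is not set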

Now you can build your Docker image, and to make it easier to reference later you can add a tag; the command will be similar to this one:

sudo docker build -t rails_docker_sample  .

We are invoking the build command, tagging the image with "rails_docker_sample", and using the current directory as the source for the build.

Ok, that is pretty, but also pretty useless on its own. To set up our development environment we'll use docker-compose; to do that, we'll create a docker-compose.yml file similar to this one, describing all the services we need.

version: '3'
services:
  mysqlhost:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=password
    volumes:
      - ../mysqldata:/var/lib/mysql
    ports:
      - "3306:3306"
    restart: always
  redishost:
    container_name: redis
    image: redis
    restart: always
  web:
    build: .
    container_name: "myapp"
    image: rails_docker_sample
    environment:
      - DATABASE_HOST=mysqlhost
      - DATABASE_USERNAME=root
      - DATABASE_PASSWORD=password
      - REDIS_URL=redis://redishost:6379/1
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - mysqlhost
      - redishost

And we can run it with this command (do not forget to create the ../mysqldata directory first):

sudo docker-compose up

But what exactly will that do?

It will download any needed images (like the mysql and redis ones).

It will build your docker image, based on your Dockerfile.

It will start a docker container for your app, passing the configured environment variables.

And there is some magic there as well: the "volumes" section for each service allows mapping a local directory to a container directory. For example, the ../mysqldata directory that was created before now contains the MySQL databases, so you can erase the container and still have access to your data; we can use a similar technique while deploying the app to the cloud later.

We are also mapping the project's base directory to the app directory in the container, and since RAILS_ENV there is "development", any changes we make to the files will be reflected in the running container.

The "ports" section is also interesting: it maps a container TCP/IP port to your local machine, letting you open http://localhost:3000 to reach your Rails app. If you do it right now, you'll notice that you receive an error saying the database does not exist.

We can fix that easily: just go to another terminal window in the same project directory and type this command:

sudo docker-compose run --entrypoint "bash -c" web "bundle exec rake db:create"

We had to override the entrypoint specified in the Dockerfile because everything we pass as parameters is passed to that entrypoint. Another option is to not specify the ENTRYPOINT in the Dockerfile and specify a command in the docker-compose.yml instead.

That would allow us to simplify this and access a “bash” in the container with this command:

sudo docker-compose run web bash

So after this, you just need to share your project with any coworker and they can simply "sudo docker-compose up" and start working with the exact same environment you have.

Of course this is just a quick and dirty introduction to using Docker with a Rails app, but we'll expand on it with posts in the coming days about how to use what we learned here to deploy to any of the major cloud providers.

If you want to download the code I used to create this sample, you can get it on my GitHub page: https://github.com/urubatan/rails_docker_sample

If you have any questions about this post or suggestions about the next ones, please leave a comment and I’ll answer it ASAP.


Quick and Dirty introduction to ActionCable – the best WebSockets for Rails!

This post is a followup and a translation of my presentation from "The Developers Conference Florianopolis 2018".

What are WebSockets good for?

  • Update the screen of many clients simultaneously when the database is updated
  • Allow many users to edit the same resource at the same time
  • Notify users that something happened

Among many other things.

I'll not try to convince you that WebSockets are the best solution for these, and of course you have many options to use, for example:

  • Node.js
  • Websocket-rails
  • ActionCable

I'll focus here on how to easily use ActionCable, which is the default Rails implementation; it made my life a lot easier in the last few months (I used websocket-rails before, but it hasn't been actively developed for a long time now…).

ActionCable basics

Besides having an awesome and simple API, ActionCable has excellent performance (according to my tests) and really good connection handling.

ActionCable is a pub/sub implementation, which makes things a lot simpler, and it organizes the pub/sub model around channels.

Each client connection subscribes to a channel on the server; each channel implementation streams to a named channel defined when the client connects, and you can use parameters to define the channel name.

Then the server can send back messages to any of the defined named channels.

Ok, writing it like that, it seems kinda complicated, but it is really simple.

For example, if you want to send a notification from Ruby to the clients, you'll send data to one of these named channels, with code similar to this:

ActionCable.server.broadcast 'broadcast_sample', data

where "broadcast_sample" is the name of a channel and data is any object; for me it is usually a hash with the information I want to send back to the clients.

Of course you need to define the name of the channel when the users connect, and this is done in the "ActionCable::Channel" instances, in the "subscribed" method, like in the sample below:

class MyChannel < ApplicationCable::Channel
  def subscribed
    stream_from "broadcast_sample"
    stream_from "nome#{params[:name]}"
    stream_for current_user
  end
  def unsubscribed
    # Any cleanup needed when channel is unsubscribed
  end
end

As you can see above, from that method it is possible to define a constant name for a topic/channel, use parameters sent by the user to define the name, or use the "model" variant, which is just a shortcut for creating a string name for that model.

The key is to use the “stream_from” or “stream_for” methods and use the same name later in the broadcast name.

Just to make it clearer how to send a broadcast to each of the 3 samples above, I'll show below a sample for each:

ActionCable.server.broadcast 'broadcast_sample', data

ActionCable.server.broadcast 'nomeRodrigo', comment: 'Teste', from_id: 47

ActionCable.server.broadcast_to @post, @comment

Receiving messages in Javascript

Ok, but how do you receive these messages in JavaScript? It is almost as easy: you just need to implement the "received" method, like in the sample below:

App.bcsample = App.cable.subscriptions.create("BcsampleChannel", {
    connected: function () {
        // Called when the subscription is ready for use on the server
    },

    disconnected: function () {
        // Called when the subscription has been terminated by the server
    },

    received: function (data) {
        // Called when there's incoming data on the websocket for this channel
        var message = $("<div/>");
        message.text(data.message);
        $('.message-list').append(message);
    },

    speak_to_all: function (message) {
        return this.perform('speak_to_all', {user_id: window.name, message: message});
    }
});

Important points in this sample:

  • BcsampleChannel is the class name of the channel in Ruby
  • the data parameter in the received function is the data passed to the broadcast function; it should always be an object, a string does not work, I've tried it.

And how do you call Ruby code from JavaScript?

Just take a look at the last part of the sample above: the "perform" call in the "speak_to_all" function will call the method with the same name in the "BcsampleChannel" class, passing the hash as its data parameter.

Of course we need to update that class to receive this call, like in the sample below:

class BcsampleChannel < ApplicationCable::Channel
  def subscribed
    stream_from "broadcast_sample"
  end

  def unsubscribed
    # Any cleanup needed when channel is unsubscribed
  end

  def speak_to_all(data)
    ActionCable.server.broadcast 'broadcast_sample', data
  end
end

This sample will receive any data and broadcast it to all connected clients.

There is one last question: how do we pass parameters to the subscribed method? Simple, just take a quick look at the sample below:

App.privatesample = App.cable.subscriptions.create({channel:"PrivatesampleChannel", windowid: window.name}, {
  connected: function() {
    // Called when the subscription is ready for use on the server
  },

  disconnected: function() {
    // Called when the subscription has been terminated by the server
  },

  received: function(data) {
    // Called when there's incoming data on the websocket for this channel
  },
});

In the create method, instead of passing the name as a string, we need to pass an object; the "channel" property is required, and anything else becomes a parameter available to the channel in Ruby to use as needed.
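
On the Ruby side, those extra properties show up in the params hash, just like the name parameter we used earlier. A minimal sketch of the matching channel (the stream name here is my own choice, not from the original project):

class PrivatesampleChannel < ApplicationCable::Channel
  def subscribed
    # "windowid" arrives from the subscriptions.create call above
    stream_from "private_#{params[:windowid]}"
  end
end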

But how about deploying?

  • You can use Redis or a database as a backend
  • If you are using Passenger and nginx you are almost done!
  • Remember to set up the server path in routes.rb
  • test and be happy

The first step is to edit the "config/cable.yml" file like the sample below:

production:
  adapter: redis
  url: redis://redis.example.com:6379

local: &local
  adapter: redis
  url: redis://localhost:6379

development: *local
test: *local

Then you need to add the mapping to the “config/routes.rb” file:

# Serve websocket cable requests in-process
mount ActionCable.server => '/cable'

And just add a location config to your nginx configuration, like in the virtual host below:

server {
    listen 80;
    server_name www.foo.com;
    root /path-to-your-app/public;
    passenger_enabled on;

    ### INSERT THIS!!! ###
    location /cable {
        passenger_app_group_name YOUR_APP_NAME_HERE_action_cable;
        passenger_force_max_concurrent_requests_per_process 0;
    }
}

Of course you have the option to start the cable server as a standalone server and configure a reverse proxy, but that is a subject for another post.

You can send broadcasts to it from a Sidekiq job or from the rails console, as long as you do not forget to configure the backend as shown above.
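
For example, here is a minimal sketch of a Sidekiq job doing exactly that (the job name and message payload are my own assumptions):

class NotifyEveryoneJob
  include Sidekiq::Worker

  def perform(message)
    # Works from any process, as long as cable.yml points it at the same Redis
    ActionCable.server.broadcast 'broadcast_sample', message: message
  end
end

# From the rails console:
# NotifyEveryoneJob.perform_async('deploy finished!')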

And if you have problems or questions about using or deploying ActionCable, please leave a comment below, I'll answer as fast as possible.


3 common problems of Rails application deployments (or any platform really, these problems happen to everyone)

Rails is a really cool framework to work with, but it is not foolproof, and it will not prevent you from doing stupid things. That said, even with the best tools available, putting new software in production, or doing a significant upgrade to software that is already in production, is always a high-adrenaline operation.
I can bet you’ve already found one of these problems:
  1. QA and Production have different OS versions, and software you have tested very well will not install in production
  2. Production database has a lot more data than your test database, and that is causing performance problems
  3. QA and Production, for financial or any other reason, use a different number of machines for different services
We’ll talk about each of these problems and about some ways of identifying the side effects, fixing them or adding a workaround for them.

QA and Production have different OS versions, and software you have tested very well will not install in production

Once upon a time, there was a system in QA: a major upgrade to a system that was already in production. As such, many libraries were upgraded, Rails was upgraded from 4.x to 5.x, and many other upgrades were made. Everything was working fine; engineers tested the system, select users tested the system, the company CEO tested the system, there was no chance of having problems during the deploy to production.
Except that all the engineers forgot to check whether the QA server was using the same Linux version as the production servers. This caused lots of different problems, starting with Sidekiq not being able to use the Redis version available on the Linux installed on the production server.
To prevent this problem, simply verify the version of the operating system in all environments. It is better to use the same version, at least on the QA and production servers; the only exception to that rule is if you are planning to upgrade the version on the production server, in which case it is better to use the QA server to test the upgrade.
As a workaround, the incompatible software can be compiled from source; it is usually enough to install a compatible version from source. Never copy a binary from one server to another, because that can cause lots of unexpected problems due to library differences.

Production database has a lot more data than your test database, and that is causing performance problems

This problem is really hard to identify in QA, and happens usually in systems that have some kind of report interface or sometimes in the rendering of an edit interface.
I’ve seen this problem for example in a system user’s editor, in the user’s list screen, that had no server side pagination, and in a user profile editor.
The user list had problems because QA had a much smaller number of users (around 100 users in QA and 60k users in production); this difference made the user listing freeze the screen, since no browser could handle the workload of adding 60k users to the DOM at the same time.
The user profile editor had a similar problem, because the properties being edited were loaded from the database, and some users in production had a significantly greater number of properties than the number tested in QA.
The only solution for this problem is to test with data as close to production as possible.
As a workaround, you'll need to identify what is causing the slowness of the application: whether it is screen rendering or database time.
For screen rendering, the easiest solution is to use server-side pagination and similar techniques.
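
As an illustration, a minimal sketch of server-side pagination using the kaminari gem (one option among several; the controller name and page size are my own assumptions):

class UsersController < ApplicationController
  def index
    # Render at most 50 rows per request instead of the whole table
    @users = User.page(params[:page]).per(50)
  end
end
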
For database slowness, usually changing and optimizing queries is the only solution. Rails helps a little by printing the query plan for slower queries, but it is even better to use a service like AppOptics with an application plugin to help identify the slower paths in the application code.
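
You can also inspect the plan of a suspicious query yourself from the rails console with ActiveRecord's #explain (the model and condition below are hypothetical):

User.where(active: true).order(:created_at).explain
# Prints the EXPLAIN output the database produces for the generated SQL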

QA and Production, for financial or any other reason, use a different number of machines for different services

You'll never need the same scalability in the test environment as in the production environment, but sometimes, at least it has happened to me, in QA you have all the services for the application on the same machine, while in production these services run on multiple machines for scalability and performance.
This can cause deploy problems when you add a feature that for some reason references one of these services as being on the same machine. The QA environment will not show any problem and everything will work as expected, but when you deploy your application to production, strange things can happen.
If you are very lucky, the problem will be simple and you’ll have an “Invalid URL”, “Connection Refused” or something like that.
If you are unlucky like me, you can have an operation that usually takes less than a second running for 5 minutes due to a routing problem, caused by a request being made to an IPv6 address with no application listening on it, plus some 'Execution Expired' messages in the log file from a completely different service.
Of course this could have been prevented with good practices, always using host names and correct configuration in the respective environment file, but the ideal way to prevent it is this: if you run a service split across multiple machines in production, try to use at least one machine per service in QA. If you use 10 machines for the same service in production to scale it, it would probably not be economically viable to use the same number in QA, but try to use at least one per service: for example, one for the web server, one for the WebSockets server, one for the database, one for the Sidekiq queues, and so on.

Webpack on Rails! – the easiest way to use the new JavaScript syntax in your Rails apps with the newest frameworks

I had some Rails projects that needed a better UI or a different feature in the UI, and there was the perfect JavaScript library for it; the problem was that it needed "require.js", and it is not really easy to integrate require.js into the asset pipeline.

The good news is that there is a webpacker gem that will do all the work for us…

And since Rails 5.1 you can just:

rails new myapp --webpack

But let's assume you have an existing app. The changes are a little bigger, but we can use both the old asset pipeline and the new webpacker.

Let's start by adding the webpacker gem to the Gemfile:

gem 'webpacker'

Then just run:

bundle install
rails webpacker:install

After this, you have a new file called app/javascript/packs/application.js where you can use:

var mylib = require('myjslibrary');

You'll be able to require there any JavaScript you create in the app/javascript directory (instead of app/assets/javascript), and any library you add to the application using the yarn executable.

To add a library requirement use:

yarn add myjslibraryname

And do not forget to run on your deploy server:

yarn install

After committing the yarn.lock file, of course; that file will make sure you have the same library versions on all the machines your project runs on.

And last, but not least, do not forget to add the script tag that loads that file to your layout using this code:

<%= javascript_pack_tag 'application' %>

And of course you do not need to remove the old javascript_include_tag call, allowing you to keep using both the asset pipeline version and the new webpacker version.

In this new file you can use the all-new require syntax, and of course that is not all: you can add CSS to the app/javascript directory and insert it in the layout with <%= stylesheet_pack_tag 'application' %>, and the gem has shortcuts to set up all the new and fancy JavaScript frameworks, for example:

rails webpacker:install:angular          # Install everything needed for Angular
rails webpacker:install:coffee           # Install everything needed for Coffee
rails webpacker:install:elm              # Install everything needed for Elm
rails webpacker:install:erb              # Install everything needed for Erb
rails webpacker:install:react            # Install everything needed for React
rails webpacker:install:stimulus         # Install everything needed for Stimulus
rails webpacker:install:typescript       # Install everything needed for Typescript
rails webpacker:install:vue              # Install everything needed for Vue

Any of these shortcuts will install the required files to use the specified library in your existing rails app.

This is it for now, it is a good start I think.

Please comment with any questions you have and I'll answer as fast as I can!

Git deploy – how to implement git deploy in a project

Lately I'm becoming a lazy developer, and this is reflected in my work.

I tend to choose the easiest solution that will work for the project at hand, and sometimes a simple project is still in its early stages and it does not pay to configure a Capistrano deploy or anything fancy, so I'm just using git to do the deployment, and it almost feels like Heroku to me.

And the setup is pretty simple; it might help your projects too.

To setup that, we’ll use git hooks, and a bare git repository.

I’ll use a simplified version of my scripts in this post, to create a simple step by step.

On the server, create a directory for the bare git repo and initialize the repository:

mkdir myproj.git
cd myproj.git
git init --bare
cd ..
git clone myproj.git

After that, we'll set up the post-receive hook in the bare repository. To do that, create a file called post-receive in the myproj.git/hooks directory with this content:

#!/bin/bash
/bin/bash --login <<_EOF_
export GIT_DIR=/home/urubatan/myproj/.git
rvm use 2.4.0
cd /home/urubatan/myproj
git pull
npm install
bundle install
RAILS_ENV=production bundle exec rake db:migrate
RAILS_ENV=production bundle exec rake assets:precompile
touch tmp/restart.txt
_EOF_

Since we want this hook to execute every time we push something to that repository, do not forget to make the script executable:

chmod 755 myproj.git/hooks/post-receive

Now back on your machine, just create your Rails project as usual:

rails new myproj_client

Add the bare repository as the "deploy" remote:

git remote add deploy user@server:~/myproj.git

And when you are done, push your changes to the server:

git add .
git commit -m "sample commit for the blog"
git push deploy master

Of course, you still need to configure the server, using for example nginx + Passenger, or puma, or anything else, but that is a subject for another post.

Please add any question to the comments of this post, I’ll answer everything as soon as possible.

Git + Laziness = happy Rails developer

I had some problems that you probably have too, if you are a ruby developer that works in a team…

Check if you have at least one of these problems:

  • Someone commits something that does not pass the test suite
  • Someone has deployed a version of the Rails application without precompiling assets
  • Similar to the above but forgot to bundle install or npm install

The list can go on and on…

This short post will just give some tips on cool uses for the .git/hooks scripts to help solve some of these problems…

I have a small web application running, and the deploy of that application is just a git pull on the server (OK, blame me, I'm not using Docker for all my apps…)

And to prevent some of the above problems in this application, I created a .git/hooks/post-merge file with the code below:

#!/bin/bash
bundle install
bundle exec rake db:migrate
bundle exec rake assets:precompile
touch tmp/restart.txt

just do not forget to “chmod u+x .git/hooks/post-merge”

With this small script, every time you run “git pull” the hook will fire and do all the dirty work for you.

The problem is that sometimes you are just updating a controller and do not need to run all that, and that is fine.

Of course you can make a more complex script to run only the commands you need, but this is good enough for simple scenarios, and at least in my case I do not do that many deploys a day (most days there aren't any deploys).

Another problem I had was some developers on the team not running the tests before pushing changes to the central repository. For this, a pre-push hook would be just perfect, but I wanted to harden things a little and used a pre-commit hook instead, so to commit any small change the developer has to run the app tests.

To do that I created a .git/hooks/pre-commit (again, do not forget to make it executable) with this code:

#!/bin/bash
STAGED_FILES=$(git diff --cached --name-only)
if [[ "$STAGED_FILES" = "" ]]; then
    exit 0
fi
# Match the staged file names, not the file contents
echo "$STAGED_FILES" | grep -q migrations
if [[ $? == 0 ]]; then
  bundle exec rake db:migrate
fi
TESTS=""
CUCUMBER=0
echo "$STAGED_FILES" | grep -q models
if [[ $? == 0 ]]; then
  TESTS="tests/models"
  CUCUMBER=1
fi
echo "$STAGED_FILES" | grep -q controllers
if [[ $? == 0 ]]; then
  TESTS="$TESTS tests/controllers"
  CUCUMBER=1
fi
echo "$STAGED_FILES" | grep -q features
if [[ $? == 0 ]]; then
  CUCUMBER=1
fi
if [[ "$TESTS" != "" ]]; then
  bundle exec rake test $TESTS
  if [[ $? == 1 ]]; then
    exit 1
  fi
fi
if [[ $CUCUMBER == 1 ]]; then
  bundle exec cucumber
  if [[ $? == 1 ]]; then
    exit 1
  fi
fi
exit 0

We have some more verifications in the real file, but this is the idea: if you changed a file, we'll run the related tests before allowing you to commit.

We have some more ideas about how to make git help us, one of them being a "Heroku-like" experience, though we do not really need it. The "trick" that makes it possible, and makes the GitHub webhooks possible too, is the "post-receive" hook.

Since we use GitHub, we have not implemented a post-receive hook there, but we have a webhook calling a "CGI script" written in Ruby (just for fun) that fires a deploy. The script is stupidly simple, only the following:

#!/usr/bin/env ruby
Dir.chdir('applicationdir')
`git pull`
puts "Content-Type: text/html\n\n"
puts "<html></html>"

With this script (protected by authentication, of course) and the hooks we mentioned before, I have GitHub firing a deploy in my development/test environment every time a pull request is merged to the master branch.

Of course we do not do anything that simple and insecure in production, but this helps our test environment a lot 😀
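
As an example of the "protected by authentication" part, one common option is verifying the X-Hub-Signature header that GitHub sends with every webhook: an HMAC of the request body computed with the webhook's shared secret. A minimal sketch (how you load the secret is up to you):

require 'openssl'
require 'rack/utils'

# Returns true when the body was signed with our shared secret;
# GitHub sends the signature as "sha1=<hex digest>"
def valid_github_signature?(secret, request_body, signature_header)
  expected = 'sha1=' + OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha1'), secret, request_body)
  Rack::Utils.secure_compare(expected, signature_header.to_s)
end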


I hope these git/Rails tips help you improve your project, probably not with these exact same scripts, but the ideas can be adapted to your environment.

If you need more ideas or have questions about anything that I wrote here, please leave a comment.

How to integrate Ruby on Rails and Google Firebase to send offline notifications to your users!

First, Firebase is not the only solution for this, but I like their approach: it is simple, multi-platform, and really easy to integrate into a Rails application.

Of course Firebase has a lot more features, but to keep this post short, we’ll focus only on this feature today.

Remember that to use it in production your application needs to be accessed through SSL; the ServiceWorker API only works over SSL.

And before we start coding, you’ll need to go to the Firebase Console and create a new application for you there.

But let's start with the Rails application: create a new Rails app with the "rails new app_name" command.

Now create a file named manifest.json in the public directory; this file is simple and will be your Progressive Web Application manifest.

{
  "name": "My First PWA On Rails",
  "short_name": "PWAOnRails",
  "start_url": "/",
  "icons": [
    {
      "src": "/my_icon.png",
      "sizes": "256x256",
      "type": "image/png"
    }
  ],
  "theme_color": "#000000",
  "background_color": "#FFFFFF",
  "display": "standalone",
  "orientation": "portrait",
  "gcm_sender_id": "103953800507"
}

The sender ID 103953800507 is a fixed number Firebase uses; do not put your own project's Firebase ID there.

Then we also need to create, in the public directory, a file named "firebase-messaging-sw.js" to host the Firebase JavaScript initialization code; the content of this file is provided by the Firebase web framework.

importScripts('https://www.gstatic.com/firebasejs/4.8.1/firebase-app.js');
importScripts('https://www.gstatic.com/firebasejs/4.8.1/firebase-messaging.js');
importScripts('/firebase/init.js');

firebase.messaging();

The only missing piece is the firebase/init.js file that will hold your Firebase application configuration; the values for this file come from the application you created previously in the console.

// Initialize Firebase
var config = {
    apiKey: "YOUR_API_KEY",
    authDomain: "YOUR_APP.firebaseapp.com",
    databaseURL: "https://YOUR_APP.firebaseio.com",
    projectId: "YOUR_PROJECT_ID",
    storageBucket: "",
    messagingSenderId: "THIS_IS_YOUR_REAL_SENDER_ID"
};
firebase.initializeApp(config);

We are almost done with the application initialization. Now we need to tell the browsers that we want the service worker loaded; to do that, let's use the Firebase JavaScript code and add a similar piece of code to the main application layout.

    <script src="https://www.gstatic.com/firebasejs/4.8.1/firebase.js"></script>
    <script>
        // Initialize Firebase
        var config = {
            apiKey: "YOUR_API_KEY",
            authDomain: "YOUR_APP.firebaseapp.com",
            databaseURL: "https://YOUR_APP.firebaseio.com",
            projectId: "YOUR_PROJECT_ID",
            storageBucket: "",
            messagingSenderId: "THIS_IS_YOUR_REAL_SENDER_ID"
        };
        firebase.initializeApp(config);
    </script>

With that done, we need to start integrating the Firebase API with our Rails application (yes, I know, we didn't do anything in Rails yet…). To start, we'll create another JavaScript file, now in our application assets, which I'll call "first_pwa.js".

function FirstApp() {
    this.saveMessagingDeviceToken = function () {
        firebase.messaging().getToken().then(function (currentToken) {
            if (currentToken) {
                console.log('Got FCM device token:', currentToken);
                $.post("/push_registrations", {subscription: currentToken});
            } else {
                // Need to request permissions to show notifications.
                this.requestNotificationsPermissions();
            }
        }.bind(this)).catch(function (error) {
            console.error('Unable to get messaging token.', error);
        });
    }
    this.requestNotificationsPermissions = function() {
        console.log('Requesting notifications permission...');
        firebase.messaging().requestPermission().then(function() {
            // Notification permission granted.
            this.saveMessagingDeviceToken();
        }.bind(this)).catch(function(error) {
            console.error('Unable to get permission to notify.', error);
        });
    };
}
var firstApp = new FirstApp();
firstApp.saveMessagingDeviceToken();

This code will ask the user for permission to show notifications (these notifications work whether the user is on the site or not) and, more importantly, will send the Firebase messaging token to the "push_registrations" controller. Now we just need to create this controller; use the approach you prefer, I just created the file using a text editor. The content for now is really simple, just to show how to use it…

class PushRegistrationsController < ApplicationController
  def create
    puts params[:subscription]
    User.find_or_create_by push_sub: params[:subscription]
  end
end

We are saving the user's subscription ID in a User model; for this sample, I just created the model with this command:

rails g model user push_sub:string

And we can create another controller to broadcast messages to everyone that has already opened the application, but to do that we'll need a REST client. The easiest to use for this sample is the 'rest-client' gem; please add the following entry to the Gemfile and run "bundle install":

gem 'rest-client'

You’ll need to get a server application key for your Firebase Messaging app from their web site.

And the broadcast controller will look similar to this:

class BroadcastsController < ApplicationController
  def index
    headers = {"Content-Type": "application/json",
               "Authorization": "key=YOUR_SERVER_KEY"}
    url = "https://fcm.googleapis.com/fcm/send"
    User.find_each do |user|
      payload = {
        "notification": {
          "title": "We have a message for you!",
          "body": "Answer please, we are cool!",
          "icon": "/app_icon.png",
          "click_action": "https://oursecureurl.domain.com/chats"
        },
        "to": user.push_sub
      }
      # Serialize to JSON explicitly; a plain hash would be form-encoded
      RestClient.post(url, payload.to_json, headers)
    end
  end
end

If the application is not running at the moment you send the message, a notification will be displayed to the user automatically. But if the application is running, meaning the user is on your web site, you need to handle the message in your code; the code is still simple, just a little more JavaScript.

Let's open "firebase-messaging-sw.js", change the last line, and add a few more:

importScripts('https://www.gstatic.com/firebasejs/4.8.1/firebase-app.js');
importScripts('https://www.gstatic.com/firebasejs/4.8.1/firebase-messaging.js');
importScripts('/firebase/init.js');

const messaging = firebase.messaging();

messaging.onMessage(function(payload) {
  console.log("Message received. ", payload);
  // ...
});

Of course you can use the Firebase API to create topics and device groups, making it easy to send messages to all devices of one user, or to notify everyone that a specific product is on sale.
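
For instance, sending to a topic uses the same endpoint as the controller above, with "to" set to a topic path instead of a device token. A minimal sketch ("deals" is a hypothetical topic name):

headers = {"Content-Type": "application/json",
           "Authorization": "key=YOUR_SERVER_KEY"}
payload = {
  "notification": {
    "title": "On sale!",
    "body": "A product you follow just went on sale."
  },
  "to": "/topics/deals"
}
RestClient.post("https://fcm.googleapis.com/fcm/send", payload.to_json, headers)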

But these are the basics for your first PWA on Rails with offline notifications.


8 things that will save your time working with Xamarin and Rails (Or the summary of my presentation about mobile development with Xamarin at TDC Porto Alegre 2017)

In the last "The Developers Conference Porto Alegre 2017" I presented a talk about common pitfalls found while creating a mobile application using the Xamarin.Forms mobile platform.

If you want to check the slides, they are on my SlideShare, just click here.

But right to the point, the idea of this post is to summarize the 8 main points in my presentation that will save your time.

The best REST client for Xamarin mobile is FubarCoder.RestSharp

I've tested some other libraries, and there are some good ones, but the best one I've found is FubarCoder.RestSharp.

The library is almost a full port of RestSharp to Xamarin mobile. Of course, if you are writing a .NET Core application use the full-blown RestSharp, but for mobile this port is great and keeps the same API:

this.client = new RestClient("http://address:port/");
var request = new RestRequest("sessions.json", Method.POST);
request.AddJsonBody(new { session = new { username = Username, password = Password } });
var response = await client.Execute(request);

A REST client will not solve all your problems if you use session

In most Rails applications you use the session to keep track of who is logged in, and if you are creating a front end for an existing application and using its already existing API, this can be a problem, since Rails uses cookies to track the user session and RestClient does not store or send cookies. But that is easy enough to solve, just use this:

var cookieContainer = new System.Net.CookieContainer();
client.CookieContainer = cookieContainer;

Adding a cookie container to the RestClient will teach it to store and send back the cookies, and that will solve all problems with authentication.

Always use async/await

C# has a perfect feature to help handle asynchronous calls: the async and await keywords. The trick here is that await can only be used inside async methods, and to solve that, use this snippet:

Task.Factory.StartNew(async () =>
{
    var articles = await clientApi.ListArticles();
});

With the task factory you’ll be able to create async blocks to handle your API calls.

You cannot update the UI from another thread

The problem with async blocks is that they do not run in the main UI thread, and you can only update the UI components from that thread.

To solve that you need to use Device.BeginInvokeOnMainThread, we can update the previous snippet with this new method to update the UI with the article list:

Task.Factory.StartNew(async () =>
{
    var articles = await clientApi.ListArticles();

    Device.BeginInvokeOnMainThread(() =>
    {
        this.ArticlesList.ItemsSource = articles;
    });
});

Rails tries to block plain form submissions to prevent form submission attacks, and that might be a problem

Rails' protect_from_forgery will cause you some trouble, and you have three ways to fix that:

  • simply remove protect_from_forgery; of course that might open your application to attacks. If you are creating an API it is not a problem, but be careful with this solution.
  • add "skip_before_action :verify_authenticity_token" to the APIs you'll use from your mobile application (see the sketch after this list)
  • before sending any "form", do a request, get the authenticity_token from the returned page, and send it back on every request (this is the safest)
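
A minimal sketch of the second option (the controller and action names are my own assumptions):

class SessionsController < ApplicationController
  # Mobile clients cannot send the CSRF token on their first request,
  # so skip the check only for this endpoint
  skip_before_action :verify_authenticity_token, only: [:create]

  def create
    # ... authenticate and store the user in the session ...
  end
end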

Show details and navigation

Each mobile platform has its own way to display the in-app navigation and go back to previous pages; to help with that, Xamarin.Forms has the NavigationPage class:

MainPage = new NavigationPage(new tdc2017poa_xam.MainPage(this.clientApi));
Navigation.PushAsync(new ArticlePage(this.clientApi, (e.SelectedItem as Article).Id));

Saving user preferences in a platform independent way is probably a good idea

In every application you need to save user preferences and similar things, and of course that is platform dependent, but Xamarin.Forms has a Properties helper that will map the calls to each supported platform. The code is really simple, just use these classes:

  • Application.Current.Properties
  • Application.Current.SavePropertiesAsync
  • Application.Current.Properties.ContainsKey

Display notifications for users

Again, each platform has its own requirements for sending async notifications to the users. If you are using Azure to build the backend for your application, just use the Microsoft.Azure.Mobile.Client class/module/library and it will save you loads of time.

Of course you’ll still need to register your application in each platform, but you’ll be able to use only one API for all…

Last but not least

I hope this presentation summary will help you save some time. If you want to check the actual source code for the presentation samples, just go to my GitHub page.

Brain Hack – Remote work and productivity

These days I am, let's say, in love with remote work. I've been working this way since December 2013, so there has been enough time to learn a few things.

Of course there are all kinds of people, and nothing is perfect; everything has advantages and disadvantages.

The photo in this post is from where I'm writing it today. I'm not on vacation, I just decided to work for a week away from home.

In my opinion this is one of the advantages of working remotely, or home office, or whatever you want to call it; I don't call it home office because I'm not always at home.

There are disadvantages too. A big one, in my opinion, is not having coworkers who do something similar to what you do, which sometimes makes you feel alone and a little lost.

I don't work alone, I have a fellow developer at the company, but I work in Brazil and he works in Europe (he changes countries from time to time 😛).

Another downside is that with less contact with your boss, getting promoted or anything like that can become a little more complicated.

But at the company where I work today I wouldn't have much room for promotion even if I worked at the office, which happens to be in New York, and I've never been there. I'm even planning a trip to meet my boss in person, but I haven't gone yet; to this day I only know him via Skype.

But back to the subject of this post.

Working remotely I manage to be more productive than at the office: I have fewer interruptions and fewer people to chat with (yes, even being a nerd I'll talk to whoever is around 😛).

But it also happens that sometimes you're simply not in a good moment to produce, a day when you have a lot to finish and just can't get going. In those cases I use a few "brain hacks" to help me.

* One way to recover focus is to forget about work for a bit. I take advantage of working remotely, keep an eye on my meeting schedule (yes, I have meetings), and simply grab my dog and go for a walk in a square in the middle of the day. I relax a little, and when I come back to the computer with a cup of freshly brewed coffee, the work flows much better.

* "Mens sana in corpore sano": a variation of the technique above that takes a bit more time but is useful in more extreme cases is taking an exercise break. That's right, in the middle of the day: if work is too stressful, or you just can't produce, change clothes and go for a run, go to the gym, go swimming, or do anything else you enjoy. I guarantee that when you come back you'll produce much better.

* There's that day when you can't produce and don't feel like moving much either. In that case I take advantage of a square behind the building where I live and do 10 minutes of meditation. There are many types of meditation with different purposes, and covering them would stray from the focus of this post, but you can pick any of them. The idea is similar to the dog walk: just take a few minutes to rest your head, much like going for a coffee break in a crowded office and chatting about something random with whoever is there.

* Change your environment. Sometimes, when I know the day will be heavier and I have a lot to do, I like to simply grab my things and go work away from home, in a café, a coworking space, etc. Just pick a place you like and you're set.

And finally, this is something I like to do, though some people prefer it another way: since I work remotely, I prefer not to have a fixed schedule. I have a contract to work a fixed number of hours per day, but no fixed schedule, and even the number of hours is just a reference; it's more important that I do what has to be done than that I fill the hours.

So I set aside a few time slots in the week to do things I enjoy. For example, twice a week in the morning I work for about an hour, stop what I'm doing, take my son to karate, then come back and keep working. But it could be anything: taking a music lesson, having an off-peak gym schedule and finding all the equipment free, …

Use your imagination and make remote work have more advantages than disadvantages.

One idea is to work from the beach in the summer…

Actually that is a good idea. I'll look for a cheap place to do it and send photos to everyone working in the same office every day 😛

GTD – I think I finally got it :D

I had already read a lot about GTD (Getting Things Done), and also the book on ZTD (Zen to Done), and a while ago I finally decided to read David Allen's book on GTD. I think I finally understood the whole thing, at least in a way that is working for me.

One thing I noticed is that it is a big change of habits, and that I work better changing one habit at a time, which is what I did, adding practices step by step. I'm still not using all of them, but my organization has already improved a lot.

And of course, like any IT nerd, I need software to do anything, and in this case I'm using only Evernote, because it makes the first practice extremely simple. I'll describe below, step by step, the parts of GTD I'm using, and if anyone else uses GTD and notices that I got something wrong, please feel free to let me know 😀

Collection
Collection is the most important phase and the hardest one to start, in my opinion. The basic idea is to get used to putting every potential task that arrives into the same place, even before being sure it is a task.

I created a folder called "Inbox" in Evernote, where I put all work tasks, emails that need action, photos or scans of bills that arrive, articles I may be interested in later, and ideas I have for new projects.

When you start doing this, an interesting exercise is to sit somewhere at home and write down everything you can remember that you need to do: any small repair, anything you think you need to buy, anything that crosses your mind.

A very important point is that it has to be one item per Evernote note. Think of it as writing each item on a small piece of paper, or, if you feel more comfortable, actually write each item on a piece of paper and put them all in the same box 😀

Daily Processing
After filling the inbox, which will happen often, since every day we receive emails and tasks and all of it should go to the inbox, we have to process this information.

This processing follows a very simple step-by-step script that must be followed, to keep us from doing only what we want, like, and remember, and leaving the rest sitting there.

Processing starts with the first item in the inbox and ends at the last one. We must not skip around, peek at the one at the bottom because we already know what to do with it, start from the middle, …
If you are not using Evernote, the recommendation is to go from top to bottom, but the important thing is to follow an order and not skip items.

For each item, follow this script:

  1. Does this item need some kind of action from you?
     • No:
       • If you will need this item as a reference in the future, archive it
       • If you will not need it as a reference in the future, delete it
     • Yes:
       • If it is a simple task that takes less than 2 minutes, do it now
       • If it is a simple task that takes more than 2 minutes, add it to the activities list
  2. If it needs action but is bigger than a single task, this item is now a project
     1. Break the item into simple tasks and add them to the activities list
     2. Note on the project which tasks belong to that project
  3. Nothing for now, but someday, who knows
     • Ideas, things you are not sure you will buy, … all of this goes to a list called "Someday maybe"

About "add it to the activities list": in the list above this is an oversimplified concept; this step is actually more involved. For example, such an item can be scheduled for a specific moment in the future, and for that you can use a planner or your phone's calendar; I prefer to use Evernote reminders.

As for activity lists, when I started I had only one, but as the book mentions, it is much more productive when the lists have context. So today I have one Evernote notebook for all tasks and use tags to separate them by context; the contexts I have created so far are:
@Phone
@Shopping
@Personal
@Work

Besides that, I also create one tag per project with the project's name, and for archiving I have several tags by subject.

To make things easier to visualize, I also have tags for each day of the week, which I use when organizing the week's tasks.

Do

This is not explicit in the book, and I think everyone should infer it, since it is the whole point, but there are a lot of people complaining that GTD does not focus on the "D".

I think it does, and every day I grab the task list by context. For example, if I am at the grocery store or the mall, I grab the @Shopping list and buy what is on it.

If I am on work hours, I grab the @Work list and start doing the items on it.

If something else shows up during work, which always happens, it goes straight to the @Work list if it is urgent, and to the inbox to be processed tomorrow if it is not.

As soon as I finish a task, I mark it as done and move on to the next one.

Weekly Processing

Very similar to the daily processing, but here you should review all the projects to see the progress of each one. Remember that in GTD a project is simply something that needs more than one step to get done; for example, buying a bicycle is not a task but a project, because it includes choosing a style of bicycle (road, mountain, hybrid, folding, …), choosing a bicycle of that style, researching prices, and finally going to the store and closing the purchase.

At this point I like to note on each project card how its progress is going.

It is also the moment to review the "Someday maybe" list: take a look at the ideas there, at the things you thought about buying one day to see whether it is time or you no longer want them, delete some items from that list, …

And I also like to re-check the items marked as done during the week; it helps with motivation to see how many things we managed to finish in the past week.

Advantages I have seen in GTD so far

*Motivational* – Before I started writing all my tasks down, I always believed I had a good memory and did everything from my head. Of course I sometimes forgot something, but the worst thing for motivation and for a good mood is that many times during work we have one main task, and soon countless others start arriving that need to be done the same day; the feeling left at the end of the day is that we did nothing productive, because that main task was not finished.
Now I no longer have that. I can see a long list of things finished that day, and since a big task becomes a project, I will certainly see some progress on that big and important task I had, because some of its items will be marked as done.

*Organization* – I am no longer paying interest on late bills, I have stopped failing to do things I need to do, and when I take too long to do something, just seeing it every day on my task list bothers me enough to do it as soon as possible.

*Productivity* – It is incredible how much time our brain wastes reminding us of things we cannot do right now, and according to some theories we have a limited amount of attention we can spend in a single day.

Once you trust your system, when you put a task on the list your brain will rest and stop hammering on that key, because it knows the task will get done at the right moment.
This saves a lot of energy, and I have been able to focus more on the tasks I need to accomplish. The number of times I am doing one thing and my brain starts reminding me of something else I cannot do right now has dropped to practically zero, and I have been completing many more things during the day.
In other words, I notice a productivity increase in my work, and I usually end the day less tired.

—————————-

Well, I think that is it. If I forgot anything, please let me know, and if you liked this, I recommend reading a bit about GTD. I really liked the method, even though I am not using all of it yet 😀