Wednesday, August 3, 2016

Hiding resting splits on Garmin Connect (aka injecting JS and messing with an existing site via a Chrome Extension)

In October 2015, I was diagnosed with hypertriglyceridemia... basically, I wasn't eating healthily and I wasn't doing enough (ahem... any) exercise.

I started a serious diet (one provided by a doctor) and a serious training plan... I started doing spinning but then I tried running... in 3 months all my blood tests were great and I was at a healthy weight... but the best part of this experience is... I discovered a new passion... RUNNING!

Since then, I joined a running team, I'm beating my records and I'm enjoying it a lot. My coach is great and the training plan I'm following is pretty standard. As the good nerd I am, I bought a running watch (because... what good is running if I don't have METRICS???). I started with the Fitbit Surge but I soon found its limitations unbearable... that's why I bought a Garmin Forerunner 630, and I'm super happy with it.

One of the coolest features it has is letting you do interval training: you set up a plan on the computer and send it to the watch, so that while you're running it vibrates and tells you to run, rest, or do whatever you planned. BUT... and there's always a but... all the resting periods are stored as "splits" too, dragging your numbers down.

Here's an example... I had to run 12 1 km intervals (0.62 miles each), resting 1:30 between them.


You can see that all the even-numbered splits last 1:30 and have a crappy Avg Pace (because I was resting), and they basically wreck my activity numbers... the page says my Average Pace was 9:21 minutes/mile. I was super frustrated by this... so instead of just redoing the numbers in Excel, I developed an extension that hides those splits and updates the numbers for me :) You can install the extension here and view its source code here. Basically, you define the criteria for which laps you want to hide, and it converts that table into this (now satisfying) one:


Yayyy!!! 7:33 minutes/mile!!! now we're talking :)

I was able to build this in 3 days; extensions are basically just HTML/JS pages, so you can leverage a lot of existing knowledge. I also found the documentation super straightforward... so let's dig into the meaty parts of this:
  • I wanted to make it configurable per activity (I don't want it to hide ALL the splits below a given threshold; sometimes running slow is part of the training)
  • I wanted to pick up from the Garmin website whether the user uses miles or km (I use km, but I really think this extension could be useful for people all over the world)
  • I wanted all the numbers in the activity page to be updated (not just the splits and their averages)
  • I wanted to make it easy to set up
You can see the activity I used for this screenshot here. If you check the source code and the DOM, you'll see it's using backbone.js. Honestly, I didn't feel particularly inclined to learn it... I was hoping there would be some kind of global object where I could see all the data and just tweak it... but... nope... all that Backbone exposes are functions that the Garmin code calls with well-defined parameters (sometimes even deleting the global variables once they've been used to initialize their objects).

So... not really understanding how their webapp works (and not feeling particularly curious about it), I started checking out other options... I inspected the XHR requests and noticed they were querying this URL, which includes the laps info... eureka! That's it... all I needed was a way for my extension to tweak that XHR response so that Garmin's webapp would show my injected version instead of the original.

Thanks to a StackOverflow answer I've since lost track of, I found this excellent blog post that clearly explains how to achieve that (so all props to that guy... minus the points he loses for posting how to exploit a vulnerability before letting the company know about it). With that, I was able to easily modify any request and tweak it... and the webapp would treat my data as the truth... yay!!
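The interception idea from that post can be sketched roughly like this (a hedged sketch: `patchXhr`, the URL pattern, and the rewrite callback are illustrative names, not the extension's or the post's actual code):

```javascript
// Sketch of XHR interception: wrap open() and send() so that when a response
// for a watched URL arrives, a rewrite callback can replace the response text
// before the page's own handlers ever see it.
function patchXhr(XhrClass, urlPattern, rewriteFn) {
  var origOpen = XhrClass.prototype.open;
  var origSend = XhrClass.prototype.send;

  XhrClass.prototype.open = function (method, url) {
    // Remember whether this request is one we want to tamper with
    this._watched = urlPattern.test(url);
    return origOpen.apply(this, arguments);
  };

  XhrClass.prototype.send = function () {
    if (this._watched) {
      var xhr = this;
      var pageHandler = xhr.onreadystatechange;
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
          // Shadow responseText with the rewritten payload
          var rewritten = rewriteFn(xhr.responseText);
          Object.defineProperty(xhr, 'responseText', { value: rewritten });
        }
        if (pageHandler) pageHandler.apply(xhr, arguments);
      };
    }
    return origSend.apply(this, arguments);
  };
}
```

A real extension also has to cope with pages that assign `onreadystatechange` after calling `send()` (or that use `addEventListener`), but this is the gist of it.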

Well, then an interesting thing happened... the webapp requests an activity URL that contains the summary BEFORE requesting the splits data... so what I ended up doing was intercepting that request as well, fetching the splits, filtering them, and updating the summary (you can see that here).
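The actual filtering and summary math boils down to something like this (a simplified sketch; the field names are illustrative, not Garmin's real JSON schema):

```javascript
// Drop laps that look like rest periods, then recompute the summary from the
// laps that remain. durationSeconds/distanceKm are illustrative field names.
function hideRestLaps(laps, maxRestSeconds) {
  return laps.filter(function (lap) {
    return lap.durationSeconds > maxRestSeconds;
  });
}

function recomputeSummary(laps) {
  var totalSeconds = 0;
  var totalKm = 0;
  laps.forEach(function (lap) {
    totalSeconds += lap.durationSeconds;
    totalKm += lap.distanceKm;
  });
  return {
    totalSeconds: totalSeconds,
    totalKm: totalKm,
    avgPaceSecondsPerKm: totalSeconds / totalKm
  };
}
```

With the 1:30 rest laps gone, the recomputed summary is what the webapp ends up rendering.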

Once I had that ready, I wanted to pick up the units the user was using... I found out that the HTML of the page declares a global variable VIEWER_USERPREFERENCES with that info... but in the common.js file, I saw that they delete it after using it... so I ended up doing something waaay uglier (but it gets the job done): on the background script, I just wait 5 seconds (so that the DOM elements get initialized) and then grab an element that contains the distance :)
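In code, the ugly-but-working part is roughly this (hypothetical selector and helper names; the real extension reads a different element):

```javascript
// Made-up helper: infer the units from the text of a rendered distance.
// Works because "mi" appears in "7.46 mi" but not in "12.00 km".
function inferUnits(distanceText) {
  return distanceText.indexOf('mi') !== -1 ? 'miles' : 'km';
}

// In the background/content script, something like:
//   setTimeout(function () {
//     var text = document.querySelector('.activity-distance').textContent;
//     var units = inferUnits(text);  // 'miles' or 'km'
//   }, 5000);  // give Backbone ~5 seconds to render the DOM
```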

The hardest part of this project was letting go of my initial idea... I really wanted to see the Backbone data somewhere. Once I broke free from that and found an easy way to modify the XHR responses, it all flowed naturally... so if you're on the fence about writing a Chrome extension, by all means give it a try!


Thursday, June 30, 2016

Asterisk 13 on an Ubuntu Docker container

Hello world! It's been a while, but I'm trying to get back to blogging :)

I've been reading tweets about how great Docker is, but I never had the chance to try it out... until I decided it was my next adventure... and boy, am I enjoying it!

I started my Docker journey by buying The Docker Book; it's awesome... I read it in two days just because I couldn't stop.

Then, I started thinking about how I would dockerize all the services I run on my servers... I started a migration a few months ago and never got around to completing it... so using Docker to do something useful (and finish that migration) sounded like a great plan.

I started by moving GitLab along with my nginx server... that was fairly easy, and I could either use the built-in images or build my own on top of Ubuntu 14.04 (so that I don't have lots of base images)... then it was Asterisk's turn.

I found the dougbtv/asterisk image, which honestly does all the heavy lifting... he figured out how to compile Asterisk in a Docker-compatible way, so all credit to him... but I wanted to:
  1. Keep the image size to the minimum <- #1 requirement
  2. Pick the modules I want to compile (I'm a control freak and it's also related to the previous point)
  3. Mount the etc, spool and log directories so that I can do modifications and see them on the host
  4. If the etc folder is empty, copy the default files so that I have something to work with... if it has files, just leave them as they are
Soooo... I ended up building gmcuy/asterisk (you can see the Dockerfile in there).
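Requirement 4 (seed the etc folder only when it's empty) can be sketched as a small helper in init.sh (a hedged sketch; `seed_conf` is an illustrative name and the real init.sh in the repo may differ):

```shell
#!/bin/bash
# Unpack the bundled defaults into the conf directory only when it's empty;
# a non-empty (user-populated) mount is left completely alone.
seed_conf() {
  local conf_dir="$1" defaults_tgz="$2"
  mkdir -p "$conf_dir"
  if [ -z "$(ls -A "$conf_dir" 2>/dev/null)" ]; then
    # Empty (or fresh) mount: seed it with the default configuration
    tar -xzf "$defaults_tgz" -C "$conf_dir"
  fi
}

# In the container's entrypoint this would be followed by something like:
#   seed_conf /etc/asterisk /default-conf.tgz
#   exec asterisk -f
```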

After several rounds of trial and error (all using different RUNs, so that I could reuse the intermediate images and either figure out the menuselect options or debug which libraries I was missing), I ended up putting everything in a single RUN so that:
  • The source code is never included in a layer
  • wget is never included in a layer
I also put the COPY of init.sh and default-conf.tgz last, so that if I tweak them Docker reuses the compiled Asterisk layer... oh, and as dougbtv suggested, I'm using network_mode: host... I was able to map the RTP UDP port range 10000-10100 with '10000-10100/udp', but I couldn't get audio through... I tried fiddling with the NAT settings but couldn't figure it out, so I just exposed everything at the host level and that was it. Do I like it? Nope... but it still feels better (and way easier to move) than having Asterisk compiled on the server (which, I've found, is not as repeatable a process as I'd like).
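Putting those pieces together, the shape of the Dockerfile is something like this (a hedged sketch: the package list, download URL, and menuselect flags are illustrative; the real Dockerfile lives in the gmcuy/asterisk repo):

```dockerfile
FROM ubuntu:14.04

# One RUN for the whole build: wget and the source tree are removed in the
# same layer they were added in, so neither ever ships in the image.
RUN apt-get update \
 && apt-get install -y build-essential wget libxml2-dev libncurses5-dev \
      uuid-dev libjansson-dev libsqlite3-dev libssl-dev \
 && wget -O /tmp/asterisk.tar.gz \
      http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-13-current.tar.gz \
 && tar -xzf /tmp/asterisk.tar.gz -C /tmp \
 && cd /tmp/asterisk-13* \
 && ./configure \
 && make menuselect.makeopts \
 # pick only the modules you want here (illustrative example)
 && menuselect/menuselect --disable-category MENUSELECT_MOH menuselect.makeopts \
 && make && make install \
 && apt-get purge -y wget \
 && rm -rf /tmp/asterisk* /var/lib/apt/lists/*

# These COPYs go last so tweaking init.sh or the default conf
# reuses the expensive compiled layer above.
COPY default-conf.tgz /default-conf.tgz
COPY init.sh /init.sh

VOLUME ["/etc/asterisk", "/var/spool/asterisk", "/var/log/asterisk"]
CMD ["/init.sh"]
```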

Hope you can at least get some inspiration from this! And if you see something that can be improved, please let me know :)

Saturday, September 12, 2015

Compiling PhantomJS 2.0 on an odroid U3

As I was working on a project to selectively use Unblock-Us on my network devices, I wanted to host the whole thing on my odroid... but compiling it right out of the box didn't work for me, and for some reason this precompiled binary didn't work either. Doing this

 
wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.0.0-source.zip
unzip phantomjs-2.0.0-source.zip
cd phantomjs-2.0.0
nohup ./build.sh --confirm > build.sh.out 2> build.sh.err &
 

took a whole lot of time and eventually failed with a bunch of cryptic errors... but when I checked the build.sh.err file, the first one I found was
 
floatmath.cpp:44:5: warning: unused parameter ‘argv’ [-Wunused-parameter]
g++: error: unrecognized command line option ‘-msse2’
make: *** [sse2.o] Error 1
 
It makes sense, as SSE2 is an x86 extension that's not available on ARM... so I ran
 
find src -type f -print0 | xargs -0 sed -i 's/-msse2//g'
 
And I tried compiling again, but it failed... and the problem was pretty silly (and it took me a while to figure out)... the previous .o files were still around, so make wasn't rebuilding them... soooo I completely deleted the folder, unzipped the file again, removed the -msse2 flag, and this time it worked flawlessly!

Here are all the steps in an easy-to-copy format
 
wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.0.0-source.zip
unzip phantomjs-2.0.0-source.zip
cd phantomjs-2.0.0
find src -type f -print0 | xargs -0 sed -i 's/-msse2//g'
nohup ./build.sh --confirm > build.sh.out 2> build.sh.err &
 
Just be prepared for it to take a while. If you want to just use my version, you can find it here (it includes the sources and the build output).

You can retrieve it by doing
 
curl -OL https://github.com/gmc-dev/phantomjs-v2.0.0-odroidu3/raw/master/phantomjs-2.0.0.compiled.tgz.000
curl -OL https://github.com/gmc-dev/phantomjs-v2.0.0-odroidu3/raw/master/phantomjs-2.0.0.compiled.tgz.001
curl -OL https://github.com/gmc-dev/phantomjs-v2.0.0-odroidu3/raw/master/phantomjs-2.0.0.compiled.tgz.002
cat phantomjs-2.0.0.compiled.tgz.00? > phantomjs-2.0.0.compiled.tgz
tar -xzf phantomjs-2.0.0.compiled.tgz
 

The executable file is phantomjs-2.0.0/bin/phantomjs

Monday, August 31, 2015

Mac OS Notifications from Python - with PyCharm debugging :)

I wanted to create an application that notifies me of interesting things through different means (it could be an SMS, an email, or... Mac OS's neat notification system).

I found this Stack Overflow answer that explains how to do that from Python, and it even lets you add an action button (something like "View"). Here's that answer with a minor tweak to the init code (as the original way doesn't work anymore):

 
import Foundation
import objc


class MountainLionNotification(Foundation.NSObject):
    # Based on http://stackoverflow.com/questions/12202983/working-with-mountain-lions-notification-center-using-pyobjc

    def init(self):
        self = objc.super(MountainLionNotification, self).init()
        if self is None: return None

        # Get objc references to the classes we need.
        self.NSUserNotification = objc.lookUpClass('NSUserNotification')
        self.NSUserNotificationCenter = objc.lookUpClass('NSUserNotificationCenter')

        return self

    def clearNotifications(self):
        """Clear any displayed alerts we have posted. Requires Mavericks."""

        NSUserNotificationCenter = objc.lookUpClass('NSUserNotificationCenter')
        NSUserNotificationCenter.defaultUserNotificationCenter().removeAllDeliveredNotifications()

    def notify(self, title, subtitle, text, url):
        """Create a user notification and display it."""

        notification = self.NSUserNotification.alloc().init()
        notification.setTitle_(str(title))
        notification.setSubtitle_(str(subtitle))
        notification.setInformativeText_(str(text))
        notification.setSoundName_("NSUserNotificationDefaultSoundName")
        notification.setHasActionButton_(True)
        notification.setActionButtonTitle_("View")
        notification.setUserInfo_({"action":"open_url", "value":url})

        self.NSUserNotificationCenter.defaultUserNotificationCenter().setDelegate_(self)
        self.NSUserNotificationCenter.defaultUserNotificationCenter().scheduleNotification_(notification)

        # Note that the notification center saves a *copy* of our object.
        return notification

    # We'll get this if the user clicked on the notification.
    def userNotificationCenter_didActivateNotification_(self, center, notification):
        """Handle the user clicking on one of our posted notifications."""

        userInfo = notification.userInfo()
        if userInfo["action"] == "open_url":
            import subprocess
            # Open the log file with TextEdit.
            subprocess.Popen(['open', "-e", userInfo["value"]])
 

... but things weren't that simple. For an application to be able to send notifications, it needs to be part of an application bundle and have the CFBundleIdentifier key populated in its Info.plist. As I'm using virtualenv, the application being run is /path/to/my/virtualenv/bin/python, which obviously is not a bundle. That's also what PyCharm uses, and I want to set up my script to run as a launchd job. Some people suggested using py2app, but I wanted to be able to debug as needed.

Do you want to show alert notifications (the ones that don't disappear automatically and let you use an action button)? You need your Info.plist to have NSUserNotificationAlertStyle = alert (and to enable them in System Preferences, or have your code signed).

The way I found to make all of that happen was:

  • create a python.app bundle containing an Info.plist inside of the environment, with a link to the python executable
  • create a python bash script on the environment that calls the python.app application
With your virtualenv activated, paste this and it will take care of everything for you:
 
# disable bash history so that we can paste it without issues
set +o history

if [ -z $VIRTUAL_ENV ];then echo "please activate a virtualenv";set -o history;else

# choose application name
read -p "What do you want to use as application name? [python]" APPNAME;if [ -z $APPNAME ];then APPNAME="python";fi;

if [ -d ${VIRTUAL_ENV}/bin/${APPNAME}.app ]; then
 echo "The application ${APPNAME}.app already exists"
 set -o history
else

# create bundle directory and Info.plist
mkdir -p ${VIRTUAL_ENV}/bin/${APPNAME}.app/Contents/MacOS
cat >${VIRTUAL_ENV}/bin/${APPNAME}.app/Contents/Info.plist <<EOL
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
 <key>CFBundleExecutable</key>
 <string>python</string>
 <key>NSUserNotificationAlertStyle</key>
 <string>alert</string>
 <key>CFBundleIdentifier</key>
 <string>${APPNAME}.app</string>
 <key>CFBundleName</key>
 <string>${APPNAME}</string>
 <key>CFBundlePackageType</key>
 <string>APPL</string>
 <key>NSAppleScriptEnabled</key>
 <true/>
</dict>
</plist>
EOL

# doing this so that I can have multiple apps in a virtual environment, to have different icons
if [ ! -f ${VIRTUAL_ENV}/bin/realpython ]; then
    ln -s `readlink ${VIRTUAL_ENV}/bin/python` ${VIRTUAL_ENV}/bin/realpython
fi

# create symbolic link
ln -s ../../../`readlink ${VIRTUAL_ENV}/bin/realpython` ${VIRTUAL_ENV}/bin/${APPNAME}.app/Contents/MacOS/python

# only delete the original symlink. After the first execution, leave the bash script as is
if [ -L ${VIRTUAL_ENV}/bin/python ]; then
 # delete the python one (as we'll use a shell script, so that it loads the app bundle info)
 rm ${VIRTUAL_ENV}/bin/python

 # create shell script
 echo "#!/bin/bash
${VIRTUAL_ENV}/bin/${APPNAME}.app/Contents/MacOS/python \"\$@\"" >> ${VIRTUAL_ENV}/bin/python
 chmod +x ${VIRTUAL_ENV}/bin/python
fi;

# enable history back
set -o history
fi;
fi;
 
If you hate pasting so many lines (as I do), you can just do
 
bash <(curl -sL https://gmc.uy/appify_with_notifications.sh)
 

And that should be it! You now have a python binary in your environment that's a bundle, and in PyCharm you'll be able to debug the notifications code and see the notifications pop up :) You can see my project here.

Wednesday, February 25, 2015

Defeating OAuth2's purpose with PhantomJS and Selenium

I'm taking a small break from the SIP registration project for Twilio to work on a quick app that automatically updates the time spent on my JIRA issues based on my TimeDoctor logs. I also want it to send me a weekly report and a monthly report. I originally designed everything to be extremely modular (so that you could use different "notification" plugins and time tracking services) but I got discouraged by the time it was taking me. This needs to be something that makes my life easier, not an extra project :). I'm using Python here because... I want to!

TimeDoctor has an API that lets me retrieve my worklogs... and that's cool, but the only way to authenticate a user is through OAuth2, without the possibility of using grant_type = password. That is great if you're creating an app that lots of people are going to use and, most importantly, if the app you're building is a web app. I want this to be a console app, so that I can put it in a cron job and just forget about it. This approach requires me to put my TimeDoctor credentials in the application, but I'm OK with that.

My first approach was using Flask, launching a web server if the tokens (access and refresh) failed. That seemed like a decent workaround, given that refresh tokens should get me access for as long as the application remains approved by the user... but that's not enough; I want the cron job to be completely independent from me...

So, that's when I remembered I had read about PhantomJS and how cool it sounded (I've built some scrapers, and having to figure out what some JS might be doing is one of the most painful things I've worked on). PhantomJS is just a headless browser... it has no window, but it processes everything just as a regular browser would, and it lets you interact with the different elements programmatically. This seems like exactly the challenge it's built to solve... JavaScript, redirections, and redirections done through JavaScript. I would have my system use PhantomJS to complete all the steps in the authentication... Spoiler alert: it works great.

This particular API implements the Authorization Code Grant flow. In plain English, this is how it's meant to work:

  1. You register your app and provide a redirect URL. That's where your users land once they complete the authorization process. In exchange, you get a client_id and a client_secret
  2. You show a link to your user saying "Log in with TimeDoctor" that points to https://webapi.timedoctor.com/oauth/v2/auth?client_id=<YOUR_CLIENT_ID>&response_type=code&redirect_uri=<REDIRECT_URI>
  3. The user enters their credentials on the TimeDoctor site and then chooses to allow your application to access their data
  4. TimeDoctor's site redirects the user to whatever you set as redirect_uri and appends ?code=<SOME_WEIRD_CODE> (the hostname needs to match what you used when registering your app in the very first step)
  5. Once your server has the code, it should query https://webapi.timedoctor.com/oauth/v2/token?client_id=<YOUR_CLIENT_ID>&client_secret=<YOUR_CLIENT_SECRET>&grant_type=authorization_code&redirect_uri=<REDIRECT_URI>&code=<CODE_FROM_STEP_4>, which returns the access_token (which you need to access the API) and the refresh_token (which you use when the access_token expires and you want a new one)
One of my goals is not to have a server. In order to accomplish that, I'm going to have PhantomJS tell me what's the URL on step 4 and parse the code from it.
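"Parse the code from it" is the easy part; it looks something like this (a small sketch written to run on both Python 2 and 3, while the post's actual code below is Python 2 only):

```python
# Given the URL the browser lands on after the step-4 redirect,
# pull out the ?code= parameter.
try:
    from urllib.parse import urlparse, parse_qs  # Python 3
except ImportError:
    from urlparse import urlparse, parse_qs      # Python 2


def extract_code(redirect_url):
    query = urlparse(redirect_url).query
    return parse_qs(query)['code'][0]
```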

I originally set my redirect URL to http://127.0.0.1:1234, but that didn't work. For some (extremely weird) reason, if PhantomJS doesn't find a server listening on the port you specify, it doesn't change the page. So what I ended up doing was using a server that responds no matter what you pass to it... like http://example.com, or even better, their own server... I set up my TimeDoctor app to redirect to https://webapi.timedoctor.com/oauth/v2/token with the code.

Requirements for this code to work:

  • Have PhantomJS installed (run npm install -g phantomjs if you have Node.js installed)
  • Have the selenium package (run pip install selenium)
  • If you're on a mac and installed nodejs with brew, you need to do sudo ln -s /usr/local/bin/node /usr/bin/node (thanks to this answer!) 

Here is the extremely simple code
 
import json
import sys

import requests
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from urllib import quote
from urlparse import urlparse

# config is assumed to be a dict holding client_id, client_secret,
# username, password and phantomjs_path
timedoctor_oauth_url = ('https://webapi.timedoctor.com/oauth/v2/auth?'
                        'client_id=%s&response_type=code&redirect_uri=%s'
                        % (config['client_id'],
                           quote('https://webapi.timedoctor.com/oauth/v2/token')))

# Initialize driver
driver = webdriver.PhantomJS(executable_path=config['phantomjs_path'])
driver.get(timedoctor_oauth_url)

# Fill username
username_field = driver.find_element_by_id('username')
username_field.send_keys(config['username'])

# Fill password
password_field = driver.find_element_by_id('password')
password_field.send_keys(config['password'])

# Submit authentication form
password_field.send_keys(Keys.ENTER)

# In the second form, where the user is asked to give access to your 
# app or not, click on the "Accept" button. 
# The element's id is 'accepted'
accept_button = driver.find_element_by_id('accepted')
accept_button.click()

# The browser has now landed on the redirect URL; grab it and extract the code
url = urlparse(driver.current_url)
query_dict = dict([tuple(x.split('=')) for x in url.query.split('&')])
code = query_dict['code']

r = requests.post('https://webapi.timedoctor.com/oauth/v2/token', {
    'client_id': config['client_id'],
    'client_secret': config['client_secret'],
    'grant_type': 'authorization_code',
    'code': code,
    'redirect_uri': 'https://webapi.timedoctor.com/oauth/v2/token'
})

if r.status_code != 200:
    print 'Unable to retrieve code, token service status code %i' % r.status_code
    sys.exit(-1)

data = json.loads(r.content)

access_token = data['access_token']
refresh_token = data['refresh_token']
 
And if at any time you want to really see what PhantomJS is seeing, you can do driver.save_screenshot('screen.png') and it creates an image for you... soooo cool!

Hope it helps someone! Oh, and I wouldn't have been able to figure out how to use PhantomJS via selenium without this answer. If you're interested in this particular project, you can take a look and clone the repo here.

Thursday, February 5, 2015

My take on AngularJS authentication with a .NET backend

Working on the Twilio registration project I want to show at the Signal conf, I decided to go as deep into things as I want to... after all, it's my project to play with :)

I'm new to AngularJS, so I'm dealing with the same stuff everyone deals with at the beginning. The first thing in my way is authentication... What I've read suggests generating an OAuth bearer token and having it expire in 24 hours (or any period you feel comfortable with, implementing the refresh mechanism).

Coming from traditional MVC applications, I really like the idea of sliding expiration... giving away a long-lived token feels... dirty. I couldn't find anything baked in that took care of this, so I started coding my own.

While I was at it, my first decision was how to hash my passwords... fortunately, I found this awesome article about it, and it even included C# code, so that's what I'm using.

Then I got to thinking about my authentication code... my idea is to give every authenticated user a token with an expiration... and every time they use it, extend that expiration (rather than extending it only if half the time has already passed... I don't really see the value in that, but we'll see what the good folks of Stack Exchange have to say about it).

In order to handle the tokens on my side (and their expiration), I decided to use Redis. The reasoning behind that is:
  • I don't store temporary data in my database (so that I don't need to query it every time a token needs its expiration changed or needs to be validated... leaving the database for relational data)
  • Redis has a really convenient expiration logic baked in (you can basically say "store this for N seconds")
  • Redis is fast
So, every time a login succeeds, I just store the token in Redis (using the token as key and the account id as value, setting its TTL to the time I want it to be valid for); verifying a token is just checking whether it's in there, and its value gives me the account id. I also wanted to block accounts after N unsuccessful login attempts. The code I came up with is something like this (rewritten a little for readability; you can find the actual code here):
 
public static LogInResultDT LogIn(string email, string password)
{
    var res = new LogInResultDT() { Status = LogInStatus.INVALID_USER_PWD };
    using (var context = new Context())
    {
        var account = GetAccount(email);
        if (account != null)
        {
            if (!account.IsActive)
            {
                res.Status = LogInStatus.INACTIVE;
                return res;
            }
            if (account.ReactivationTime.HasValue)
            {
                if (account.ReactivationTime.Value < DateTime.UtcNow)
                {
                    account.ReactivationTime = null;
                    account.FailedLoginAttempts = 0;
                }
                else
                {
                    res.Status = LogInStatus.TEMPORARILY_DISABLED;
                    return res;
                }
            }
            if (account.PasswordMatches(password))
            {
                res.Status = LogInStatus.SUCCESS;
                res.Token = System.Guid.NewGuid().ToString();

                StoreTokenOnRedis(res.Token, account.Id, GetFromConfig("Account.TokenExpirationSeconds"));
                account.FailedLoginAttempts = 0;
            }
            else
            {
                account.FailedLoginAttempts++;
                int maxFailedLogins = GetFromConfig("Account.MaxFailedLogins");
                if (account.FailedLoginAttempts >= maxFailedLogins)
                {
                    int deactivateSeconds = GetFromConfig("Account.AccountDeactivationSeconds");
                    account.ReactivationTime = DateTime.UtcNow.AddSeconds(deactivateSeconds);
                }
            }
            context.SaveChanges();
        }
    }
    return res;
}
 

The idea behind returning "invalid user or password" is to avoid letting an attacker figure out whether an account exists (if the message said "invalid user", an attacker would know that getting "invalid password" means they got the user right). However, I realized this code would let an attacker do basically the same thing: since the code is open source, they would see this and, instead of trying one user/password, try N... and the accounts that get temporarily disabled are the ones that exist on the system.

In order to avoid this, I came up with two options:
  • Return an "invalid user or password, or your account has been temporarily disabled" message. I, as a user, would hate to see this message...
  • Temporarily disable all accounts (even nonexistent ones)
The second approach is definitely the more user-friendly one... but I was deactivating accounts in the database... sooo... that's where Redis helps again :)

The basic idea is: every time there's an unsuccessful login attempt, increase the failed-attempt counter for that email and set its TTL to the time I want the account deactivated for... then, before verifying credentials, I check how many failed attempts an email has had. As the TTL takes care of removing the entry from Redis, I don't have to do anything else.
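As an illustration of that counter scheme, here's an in-memory JavaScript stand-in (just for this post; in the real code Redis's TTL does the expiring, while here an expiry timestamp fakes it):

```javascript
// In-memory stand-in for the Redis failed-login counter described above.
// Redis would evict the key via TTL; we fake that by storing an expiry
// timestamp and checking it on read. All names here are illustrative.
function FailedLoginTracker(maxAttempts, windowSeconds, clock) {
  this.maxAttempts = maxAttempts;
  this.windowSeconds = windowSeconds;
  this.clock = clock || function () { return Date.now() / 1000; };
  this.entries = {}; // email -> { count, expiresAt }
}

FailedLoginTracker.prototype.addFailure = function (email) {
  var now = this.clock();
  var entry = this.entries[email];
  if (!entry || entry.expiresAt <= now) entry = { count: 0 };
  entry.count += 1;
  entry.expiresAt = now + this.windowSeconds; // SETEX-style: reset TTL each time
  this.entries[email] = entry;
};

FailedLoginTracker.prototype.isBlocked = function (email) {
  var entry = this.entries[email];
  return !!entry && entry.expiresAt > this.clock() &&
         entry.count >= this.maxAttempts;
};
```

Note that this blocks any email, whether an account exists for it or not, which is exactly the point.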

The code also looks nicer, and that's a plus! (here is the real version)
 
public static LogInResultDT LogIn(string email, string password)
{
    var res = new LogInResultDT() { Status = LogInStatus.INVALID_USER_PWD };
    using (var context = new Context())
    {
        int maxFailedLogins = GetFromConfig("Account.MaxFailedLogins");
        int failedLogins = GetAmountOfFailedLoginsFromRedis(email);

        if (failedLogins >= maxFailedLogins)
        {
            res.Status = LogInStatus.TEMPORARILY_DISABLED;
            return res;
        }
   
        var account = GetAccount(email);
        if (account != null && account.PasswordMatches(password))
        {
            // verify if the account is active once we know that the user knows their pwd and that their account isn't temporarily disabled
            if (!account.IsActive)
            {
                res.Status = LogInStatus.INACTIVE;
                return res;
            }

            res.Status = LogInStatus.SUCCESS;
            res.Token = System.Guid.NewGuid().ToString();

            StoreTokenOnRedis(res.Token, account.Id, GetFromConfig("Account.TokenExpirationSeconds"));
        }
        else
        {
            AddFailedLoginToRedis(email);
        }
    }
    return res;
}
 

That's ok... then, on my Angular side of things, to log in I was doing
 
(function () {
    app.factory('accountService', function ($resource, $http, $q, $log, baseUrl) {
        resource = $resource(baseUrl + 'accounts/:id', { id: "@Id" }, null, {stripTrailingSlashes: false})
        return {
            logIn: function (email, password) {
                var deferred = $q.defer()
                $http.post(baseUrl + 'accounts/log-in', { 'Email': email, 'Password': password })
                    .success(function (response) {
                        if (response.Status == 'SUCCESS') {
                            deferred.resolve(response.Token)
                        } else {
                            deferred.reject(response.Status)
                        }
                    })
                    .error(function (data, code, headers, config, status) {
                        $log.error('Code: ' + code + '\nData: ' + data + '\nStatus: ' + status)
                        deferred.reject(code)
                    })
                return deferred.promise
            },
            resource: resource
        }
    })
})();
 

I'm storing the token in session storage, so I just add it as a header on every request like this:
 
app.run(function ($rootScope, $window, $http) {
    $rootScope.$on("$routeChangeError", function (event, current, previous, eventObj) {
        if (eventObj.authenticated === false) {
            $window.location.href = '/'
        }
    });

    $http.defaults.headers.common.Authorization = 'gmc-auth ' + $window.sessionStorage.token
});
 

This was all good... but then I realized I had a major flaw... passing the token like that, an attacker could just try different tokens until they got one right. That would be quite a time-consuming task, as there are 2^122 (5,316,911,983,139,663,491,615,228,241,121,400,000) possible combinations (source, with extremely interesting comments about it)... but it still felt wrong.

That's when I brought in OAuth... if I can give a user a signed token, then I feel safe enough. The idea here is to give a token that expires in 24 hours and have it carry, as a claim, the GUID I'm using to authenticate them on Redis. Then, even if the OAuth token is valid, I'd verify whether it's expired (using the Redis data).

Adding OAuth to the solution was extremely easy following this great article, but I made a few changes to fit my particular scenario:
  • I created a TWRAuthorizationServerProvider (deriving from OAuthAuthorizationServerProvider) that uses my AccountsMgr to handle the login. It also adds the user's roles and permissions from the database as claims, so afterwards I don't need to query the database; I just use the data on the token.
  • I created a TokenValidationAttribute authentication filter (implementing IAuthenticationFilter) that takes care of validating that the GUID carried by the token is still valid and adding the accountId as a claim. Since I also want some users (usually admins, or... me) to be able to act as other users, this is also where I swap the claims when needed.
  • I created a ClaimsAuthorizeAttribute to use on the controllers to validate that users have the appropriate claims on their tokens.
  • I created a BaseApiController, which my WebApi controllers derive from, that simply exposes the AccountId so the controllers can use it freely.
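Conceptually, the check that ClaimsAuthorizeAttribute performs boils down to something like this (sketched in JavaScript to match the rest of the snippets; the real implementation is a C# authorization attribute, so names here are illustrative):

```javascript
// Illustrative only: a token's claims are { type, value } pairs
// baked in when the token was issued, so authorization never needs
// to hit the database.
function hasClaim(claims, type, value) {
    return claims.some(function (c) {
        return c.type === type && c.value === value;
    });
}

var claims = [
    { type: 'role', value: 'admin' },
    { type: 'permission', value: 'view-all-accounts' }
];

console.log(hasClaim(claims, 'permission', 'view-all-accounts')); // -> true
console.log(hasClaim(claims, 'permission', 'delete-accounts'));   // -> false
```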

So now, a .NET controller with special claims requirements looks like this:
 
[ClaimsAuthorize]
[RoutePrefix("api/accounts")]
public class AccountsController : BaseApiController
{
    [ClaimsAuthorize("permission", "view-all-accounts")]
    public async Task<IEnumerable<AccountDT>> Get()
    {
        return await AccountsMgr.GetAccountsAsync();
    }

    [HttpGet]
    [Route("current")]
    public async Task<AccountDT> CurrentAccount()
    {
        return await AccountsMgr.GetAccountAsync(_AccountId);
    }
}
 

And a .NET controller that just requires a user logged in looks like this:
 
[ClaimsAuthorize]
public class DevicesController : BaseApiController
{
    public async Task<IEnumerable<DeviceDT>> Get()
    {
        return await DevicesMgr.GetDevicesAsync(_AccountId);
    }
}
 

The angular accountService is pretty straightforward:
 
(function () {
    app.factory('accountService', function ($resource, $http, $q, $log, baseUrl) {
        var resource = $resource(baseUrl + 'accounts/:id', { id: "@id" }, {
            current: {
                method: 'GET',
                url: baseUrl + 'accounts/current',
                isArray: false
            }
        })
        return {
            logIn: function (email, password) {
                var deferred = $q.defer()
                // encode the credentials so characters like & or = don't break the form body
                var data = 'grant_type=password&username=' + encodeURIComponent(email) +
                    '&password=' + encodeURIComponent(password);
                $http.post(baseUrl + 'token', data, { headers: { 'Content-Type': 'application/x-www-form-urlencoded' } })
                    .success(function (response, status) {
                        deferred.resolve(response.access_token)
                    })
                    .error(function (data, code, headers, config, status) {
                        if (data.error) {
                            deferred.reject(data.error)
                        } else {
                            $log.error('Code: ' + code + '\nData: ' + data + '\nStatus: ' + status)
                            deferred.reject(code)
                        }
                    })
                return deferred.promise
            },
            resource: resource
        }
    })
})();
 

The angular controller that logs a user in has this method:
 
_this.logIn = function (loginForm) {
    if (loginForm.$valid) {
        accountService.logIn(_this.email, _this.password).then(
            function (token) {
                $window.sessionStorage.token = token
                $window.location.href = '/control-panel'
            }, function (reason) {
                var errors = []
                if (isFinite(reason)) {
                    errors.push('HTTP Error: ' + reason)
                } else {
                    switch (reason) {
                        case 'INVALID_USER_PWD': reason = 'Invalid email or password'; break
                        case 'INACTIVE': reason = 'Your account is inactive'; break
                        case 'TEMPORARILY_DISABLED': reason = 'Your account has been temporarily disabled due to many unsuccessful login attempts. Try again later.'; break
                        default: reason = 'Unknown code: ' + reason
                    }
                    errors.push(reason)
                }
                _this.showErrors(errors)
            }
        )
    }
    else
    {
        var errors = []
        if (loginForm.email.$error.required) {
            errors.push('The email is required')
        } else if (loginForm.email.$error.email) {
            errors.push('The email entered is invalid')
        }
        if (loginForm.password.$error.required) {
            errors.push('The password is required')
        }
        _this.showErrors(errors)
    }
}
 

The angular code that takes care of sending the token (and redirecting the user out if we get a 401 due to an invalid token) looks like this:
 
(function(){
    app.config(function ($routeProvider, $locationProvider, $httpProvider) {
        // if we receive a 401, delete the token and redirect to the homepage
        $httpProvider.interceptors.push(function ($q, $window) {
            return {
                'responseError': function (response) {
                    var status = response.status;
                    if (status == 401) {
                        $window.sessionStorage.removeItem('token');
                        $window.location.href = '/';
                    }
                    return $q.reject(response);
                }
            };
        });
    })

    app.run(function ($rootScope, $window, $http) {
        if (!$window.sessionStorage.token) {
            $window.location.href = '/'
        }

        $http.defaults.headers.common.Authorization = 'Bearer ' + $window.sessionStorage.token
        if ($window.sessionStorage.actingAs) {
            $http.defaults.headers.common['Acting-As'] = $window.sessionStorage.actingAs
        }
    });
})();
 

And an angular service that uses it just does:
 
(function () {
    app.factory('deviceService', function ($resource, baseUrl) {
        var resource = $resource(baseUrl + 'devices/:id', { id: "@id" })
        return {
            resource: resource
        }
    })
})();
 

The only database queries done on every request are the ones the managers need in order to perform their tasks... mission accomplished! (or so I think).

You can download the code of the whole project from https://gitlab.gmc.uy/gervasio/twilioregistration (I'll move it to GitHub once I get to the first alpha). My intention is to eventually package this as a separate component so that I can use it on other projects without copying and pasting... but until I find the time to do it, it's going to live in there :)

I'd love to hear your thoughts in the comments :) Is it too much effort for something that's pointless? After all, if an attacker got their hands on a token, they could do pretty much everything for 24 hours (except changing the email or password: changing the email requires email confirmation, and changing the password requires knowing the old one).

Tuesday, January 13, 2015

Getting ready for #signalconf

It's been a while since I last wrote in here... mainly because I've been working like crazy, but I expect to start blogging on a regular basis at least until the Signal conference (the new version of the Twilio conference).

The (first and) last conference I attended was 360|iDev 2013... and even though I ended up leaving the iOS world, I got something from it that changed my (professional) life: the conviction that (almost) everything is possible. I was already working with Twilio's APIs, I started playing with Asterisk, I started this blog, people started contacting me about it, I left the company I was working for, and I'm proud to say that I now work exclusively on stuff I find interesting.

I went to that conference with very little experience in that particular technology, having only worked on one iOS app before, and without enough knowledge to really stand out from the crowd.

I'm in a very different situation for the Signal conference... I've worked with Twilio on several projects, using different technologies (Python and .NET, both with websockets) and probably everything they offer. I have a good understanding of Asterisk and I'm eager to meet interesting people who can help me take my game to the next level (while I help them do the same)... but in two days, it's going to be crazy to really connect with so many people... that's why I feel I have to do something about it.

The thing that got me noticed in the virtual world is how I integrated Asterisk and Twilio before there was a single blog post about it... so I'm going to build on that. Something that's missing in Twilio is the ability to have SIP phones register with it, so that developers can deal with SIP phones instead of regular landline or mobile phones. I'm guessing that's a feature that's coming (given that other companies already offer it and that Twilio recently added SIP Trunking capabilities). Besides adding an option to work with existing SIP terminals, the cost of calling SIP phones is significantly lower than calling landlines or mobile phones around the world, so it only makes sense for them to integrate it into what they sell... but in the meantime, I'm going to do something about it!

In order to have SIP phones register, you need a PBX (Asterisk or FreeSWITCH)... and dealing with them is usually a headache for application developers, so I'm going to build an open source "Asterisk as a service" service. It will have a .NET RESTful backend that will handle the application logic and several Ubuntu servers running Asterisk that will take care of the communication with Twilio and the SIP phones. The code to handle the servers will be pure Python, and the frontend will be an AngularJS application that will connect to the backend the same way the Ubuntu servers will.

I'm going to have it ready for the Signal conference and I expect it to be an effective way to show how I work and what I know. I will also put a great deal of effort into detailing what's missing and how things could be improved (I've never worked with a SIP proxy, and for the solution to be really scalable and redundant it would need one; I don't think I'll have enough time to learn and tackle that, though).

I expect to keep on sharing my problems and solutions in here... what started as a way to give back has taken me by surprise... Here's to more surprises and to a great 2015!


See you there?