Monday, August 31, 2015

Mac OS Notifications from Python - with PyCharm debugging :)

I wanted to create an application that notifies me of interesting things through different means (it could be an SMS, an email or... Mac OS's neat notification system).

I found this stack overflow answer that explained how to do that from Python, and it even lets you add an action button (something like "View"). Here's that answer with a minor tweak on the init code (as the original way doesn't work anymore):

 
import Foundation
import objc


class MountainLionNotification(Foundation.NSObject):
    # Based on http://stackoverflow.com/questions/12202983/working-with-mountain-lions-notification-center-using-pyobjc

    def init(self):
        self = objc.super(MountainLionNotification, self).init()
        if self is None: return None

        # Get objc references to the classes we need.
        self.NSUserNotification = objc.lookUpClass('NSUserNotification')
        self.NSUserNotificationCenter = objc.lookUpClass('NSUserNotificationCenter')

        return self

    def clearNotifications(self):
        """Clear any displayed alerts we have posted. Requires Mavericks."""

        NSUserNotificationCenter = objc.lookUpClass('NSUserNotificationCenter')
        NSUserNotificationCenter.defaultUserNotificationCenter().removeAllDeliveredNotifications()

    def notify(self, title, subtitle, text, url):
        """Create a user notification and display it."""

        notification = self.NSUserNotification.alloc().init()
        notification.setTitle_(str(title))
        notification.setSubtitle_(str(subtitle))
        notification.setInformativeText_(str(text))
        notification.setSoundName_("NSUserNotificationDefaultSoundName")
        notification.setHasActionButton_(True)
        notification.setActionButtonTitle_("View")
        notification.setUserInfo_({"action":"open_url", "value":url})

        self.NSUserNotificationCenter.defaultUserNotificationCenter().setDelegate_(self)
        self.NSUserNotificationCenter.defaultUserNotificationCenter().scheduleNotification_(notification)

        # Note that the notification center saves a *copy* of our object.
        return notification

    # We'll get this if the user clicked on the notification.
    def userNotificationCenter_didActivateNotification_(self, center, notification):
        """Handle a user clicking on one of our posted notifications."""

        userInfo = notification.userInfo()
        if userInfo["action"] == "open_url":
            import subprocess
            # Open the log file with TextEdit.
            subprocess.Popen(['open', "-e", userInfo["value"]])
 

... but things weren't that simple. In order for an application to be able to send notifications, it needs to be part of an application bundle and have the CFBundleIdentifier key populated in its Info.plist. As I'm using virtualenv, the application that's being run is /path/to/my/virtualenv/bin/python, and that obviously is not a bundle. That's also what PyCharm uses, and I want to set up my script to be run as a launchd script. Some people suggested using py2app, but I wanted to be able to debug as needed.

If you want to show alert notifications (the ones that don't disappear automatically and that let you use an action button), your Info.plist needs NSUserNotificationAlertStyle = alert (and you either have to enable them in System Preferences or have your code signed).

The way I found to make all of that happen was:

  • create a python.app bundle containing an Info.plist inside of the environment, with a link to the python executable
  • create a python bash script on the environment that calls the python.app application
With your virtualenv activated, paste this and it will take care of everything for you
 
# disable bash history so that we can paste it without issues
set +o history

if [ -z $VIRTUAL_ENV ];then echo "please activate a virtualenv";set -o history;else

# choose application name
read -p "What do you want to use as application name? [python]" APPNAME;if [ -z $APPNAME ];then APPNAME="python";fi;

if [ -d ${VIRTUAL_ENV}/bin/${APPNAME}.app ]; then
 echo "The application ${APPNAME}.app already exists"
 set -o history
else

# create bundle directory and Info.plist
mkdir -p ${VIRTUAL_ENV}/bin/${APPNAME}.app/Contents/MacOS
cat >${VIRTUAL_ENV}/bin/${APPNAME}.app/Contents/Info.plist <<EOL
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
 <key>CFBundleExecutable</key>
 <string>python</string>
 <key>NSUserNotificationAlertStyle</key>
 <string>alert</string>
 <key>CFBundleIdentifier</key>
 <string>${APPNAME}.app</string>
 <key>CFBundleName</key>
 <string>${APPNAME}</string>
 <key>CFBundlePackageType</key>
 <string>APPL</string>
 <key>NSAppleScriptEnabled</key>
 <true/>
</dict>
</plist>
EOL

# doing this so that I can have multiple apps in a virtual environment, to have different icons
if [ ! -f ${VIRTUAL_ENV}/bin/realpython ]; then
    ln -s `readlink ${VIRTUAL_ENV}/bin/python` ${VIRTUAL_ENV}/bin/realpython
fi

# create symbolic link
ln -s ../../../`readlink ${VIRTUAL_ENV}/bin/realpython` ${VIRTUAL_ENV}/bin/${APPNAME}.app/Contents/MacOS/python

# only delete the original symlink. After the first execution, leave the bash script as is
if [ -L ${VIRTUAL_ENV}/bin/python ]; then
 # delete the python one (as we'll use a shell script, so that it loads the app bundle info)
 rm ${VIRTUAL_ENV}/bin/python

 # create shell script
 echo "#!/bin/bash
${VIRTUAL_ENV}/bin/${APPNAME}.app/Contents/MacOS/python \"\$@\"" >> ${VIRTUAL_ENV}/bin/python
 chmod +x ${VIRTUAL_ENV}/bin/python
fi;

# enable history back
set -o history
fi;
fi;
 
If you hate pasting so many lines (as I do), you can just do
 
bash <(curl -sL https://gmc.uy/appify_with_notifications.sh)
 

And that should be it! You now have a python executable in your environment that runs through an app bundle, and in PyCharm you'll be able to debug the notifications code and see the notifications pop up :) You can see my project here.
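
For completeness, here's what actually firing a notification from your script ends up looking like once the bundle trick is in place (a quick sketch; the titles, text and log path are placeholders):
 
notifier = MountainLionNotification.alloc().init()

# "View" opens the given path with TextEdit (see the delegate method above)
notifier.notify("Build finished", "my-project", "Everything went fine", "/tmp/build.log")
 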

Wednesday, February 25, 2015

Defeating OAuth2's purpose with PhantomJS and Selenium

I'm taking a small break from the SIP registration project for twilio to work on a quick app that automatically updates the time spent on my JIRA issues based on my TimeDoctor logs. I also want it to send me a weekly report and a monthly report. I originally designed everything to be extremely modular (so that you could use different "notification" plugins and time tracking services) but I got discouraged by the time it was taking me. This needs to be something that makes my life easier, not an extra project :). I'm using Python here because... I want to!

TimeDoctor has an API that lets me retrieve my worklogs... and that's cool, but the only way to authenticate a user is through OAuth2, without the possibility of using grant_type = password. That's great if you're creating an app that lots of people are going to use and, most importantly, if the app you're building is a web app. I want this to be a console app, so that I can put it in a cron job and just forget about it. This approach requires me to put my TimeDoctor credentials in the application, but I'm ok with that.

My first approach was using Flask, and launching a web server if the tokens (access and refresh) failed. That seemed like a decent workaround, given that refresh tokens should get me access for as long as the application remains approved by the user... but that's not enough, I want the cron to be completely independent from me...

So, that's where I remembered I had read about PhantomJS and how cool I thought it sounded (I've built some scrapers, and having to figure out what a piece of JS could be doing is one of the most painful things I've worked on). PhantomJS is just a headless browser... it has no window, but it processes everything just like a regular browser would. It then lets you interact with the different elements programmatically. This seems like exactly the kind of challenge it's built to solve... JavaScript, redirections, and redirections done through JavaScript. I would have my system use PhantomJS to complete all the steps in the authentication... Spoiler alert: it works great.

This particular API implements the Authorization Code Grant flow. In plain English, this is how it's meant to work:

  1. You register your app and provide a redirection url. That's where your users will land once they complete the authorization process. In exchange, you get a client_id and a client_secret
  2. You show a link to your user saying "Log in with TimeDoctor" that points to https://webapi.timedoctor.com/oauth/v2/auth?client_id=<YOUR_CLIENT_ID>&response_type=code&redirect_uri=<REDIRECT_URI>
  3. The user enters their credentials on the TimeDoctor site and then chooses to allow your application to access their data
  4. TimeDoctor's site redirects the user to whatever you passed as redirect_uri and appends ?code=<SOME_WEIRD_CODE> (the hostname needs to match what you used when registering your app in the very first step)
  5. Once your server has the code, it should query https://webapi.timedoctor.com/oauth/v2/token?client_id=<YOUR_CLIENT_ID>&client_secret=<YOUR_CLIENT_SECRET>&grant_type=authorization_code&redirect_uri=<REDIRECT_URI>&code=<CODE_RETURNED_IN_STEP_4>, which returns the access_token (that you need to access the API) and the refresh_token (that you need when the access_token expires and you want a new one)
One of my goals is not to have a server. In order to accomplish that, I'm going to have PhantomJS tell me what the URL in step 4 is and parse the code from it.

I originally set my redirect url to http://127.0.0.1:1234, but that didn't work. For some (extremely weird) reason, if PhantomJS doesn't find a server listening on the port you specify, it doesn't change the page. So what I ended up doing was pointing it at a server that responds no matter what you pass to it... something like http://example.com, or their own server... I set up my TimeDoctor app to redirect to https://webapi.timedoctor.com/oauth/v2/token with the code.

Requirements for this code to work:

  • Have PhantomJS installed (run npm -g install phantomjs if you have nodejs installed)
  • Have the selenium package (run pip install selenium)
  • If you're on a Mac and installed nodejs with brew, you need to do sudo ln -s /usr/local/bin/node /usr/bin/node (thanks to this answer!) 

Here is the extremely simple code
 
import json
import sys

import requests
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from urllib import quote
from urlparse import urlparse

# config is assumed to be a dict holding your client_id, client_secret,
# username, password and the path to the phantomjs binary
timedoctor_oauth_url = ('https://webapi.timedoctor.com/oauth/v2/auth?'
                        'client_id=%s&response_type=code&redirect_uri=%s'
                        % (config['client_id'],
                           quote('https://webapi.timedoctor.com/oauth/v2/token')))

# Initialize driver
driver = webdriver.PhantomJS(executable_path=config['phantomjs_path'])
driver.get(timedoctor_oauth_url)

# Fill username
username_field = driver.find_element_by_id('username')
username_field.send_keys(config['username'])

# Fill password
password_field = driver.find_element_by_id('password')
password_field.send_keys(config['password'])

# Submit authentication form
password_field.send_keys(Keys.ENTER)

# In the second form, where the user is asked to give access to your 
# app or not, click on the "Accept" button. 
# The element's id is 'accepted'
accept_button = driver.find_element_by_id('accepted')
accept_button.click()

# The browser is now at the redirect_uri, get that url and extract the code
url = urlparse(driver.current_url)
query_dict = dict([tuple(x.split('=')) for x in url.query.split('&')])
code = query_dict['code']

r = requests.post('https://webapi.timedoctor.com/oauth/v2/token', {
    'client_id': config['client_id'],
    'client_secret': config['client_secret'],
    'grant_type': 'authorization_code',
    'code': code,
    'redirect_uri': 'https://webapi.timedoctor.com/oauth/v2/token'
})

if r.status_code != 200:
    print 'Unable to retrieve code, token service status code %i' % r.status_code
    sys.exit(-1)

data = json.loads(r.content)

access_token = data['access_token']
refresh_token = data['refresh_token']
 
And if at any time you want to really see what PhantomJS is seeing, you can do driver.save_screenshot('screen.png') and it creates an image for you... soooo cool!
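
One more thing: the refresh_token is there so that, when the access_token expires, you don't have to drive PhantomJS through the whole dance again. Here's a minimal sketch, assuming TimeDoctor's token endpoint accepts the standard grant_type=refresh_token parameters from the OAuth2 spec:
 
r = requests.post('https://webapi.timedoctor.com/oauth/v2/token', {
    'client_id': config['client_id'],
    'client_secret': config['client_secret'],
    'grant_type': 'refresh_token',
    'refresh_token': refresh_token
})

if r.status_code == 200:
    data = json.loads(r.content)
    access_token = data['access_token']
    # some providers rotate the refresh token too, so keep the new one if present
    refresh_token = data.get('refresh_token', refresh_token)
 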

Hope it helps someone! Oh, and I wouldn't have been able to figure out how to use PhantomJS via selenium without this answer. If you're interested in this particular project, you can take a look and clone the repo here.

Thursday, February 5, 2015

My take on AngularJS authentication with a .NET backend

Working on the twilio registration project I want to show at the Signal conf, I decided to go as deep into things as I want to... after all, it's my project to play with :)

I'm new to AngularJS, so I'm dealing with the same stuff everyone deals with when starting out. The first thing in my way is authentication... What I've read around suggests generating an OAuth bearer token, having it expire in 24 hours (or any period you feel comfortable with) and implementing the refresh mechanism.

Coming from traditional MVC applications, I really like the idea of sliding expiration... giving away a long lived token feels... dirty. I couldn't find anything baked in that took care of this, so I started coding my own algorithm.

While I was at it, my first decision was how to hash my passwords... fortunately, I got to this awesome article about it and it even included C# code, so that's what I'm using.

Then I got to thinking about my authentication code... my idea is to give every authenticated user a token with an expiration... and every time they use it, have that token's expiration pushed back (the idea isn't to change the expiration only if half the time has already passed... I don't really see the value in that, but we'll see what the good folks of stack exchange have to say about it).

In order to handle the tokens on my side (and their expiration), I decided to use Redis. The reasoning behind that is:
  • I don't store temporary data on my database (so that I don't need to make queries on it every time a token needs to change its expiration or to be validated... leaving the database for relational data)
  • Redis has a really convenient expiration logic baked in (you can basically say "store this for N seconds")
  • Redis is fast
So, every time someone logs in successfully, I just store the token on Redis (using the token as key and the account id as value, setting its TTL to the time I want it to be valid for); verifying whether a token is valid is then just a matter of checking if it's in there, and the value gets me the account id. I also wanted to block accounts after N unsuccessful login attempts. The code I came up with is something like this (rewrote it a little to improve its readability, you can find the actual code here):
 
public static LogInResultDT LogIn(string email, string password)
{
    var res = new LogInResultDT() { Status = LogInStatus.INVALID_USER_PWD };
    using (var context = new Context())
    {
        var account = GetAccount(email);
        if (account != null)
        {
            if (!account.IsActive)
            {
                res.Status = LogInStatus.INACTIVE;
                return res;
            }
            if (account.ReactivationTime.HasValue)
            {
                if (account.ReactivationTime.Value < DateTime.UtcNow)
                {
                    account.ReactivationTime = null;
                    account.FailedLoginAttempts = 0;
                }
                else
                {
                    res.Status = LogInStatus.TEMPORARILY_DISABLED;
                    return res;
                }
            }
            if (account.PasswordMatches(password))
            {
                res.Status = LogInStatus.SUCCESS;
                res.Token = System.Guid.NewGuid().ToString();

                StoreTokenOnRedis(res.Token, account.Id, GetFromConfig("Account.TokenExpirationSeconds"));
                account.FailedLoginAttempts = 0;
            }
            else
            {
                account.FailedLoginAttempts++;
                int maxFailedLogins = GetFromConfig("Account.MaxFailedLogins");
                if (account.FailedLoginAttempts >= maxFailedLogins)
                {
                    int deactivateSeconds = GetFromConfig("Account.AccountDeactivationSeconds");
                    account.ReactivationTime = DateTime.UtcNow.AddSeconds(deactivateSeconds);
                }
            }
            context.SaveChanges();
        }
    }
    return res;
}
 

The idea behind returning the "Invalid user or password" message is to keep an attacker from figuring out whether an account exists just by trying any user and password (if the message said "invalid user", an attacker would know that getting "invalid password" means they got the user right). However, I realized that this code would let an attacker do basically the same thing. Considering that the code is open source, they would see this, and instead of trying 1 user / password they could try N... and the accounts that get temporarily disabled are the ones that exist on the system.

In order to avoid this, I came up with two options:
  • Return an "Invalid user or password or your account has been temporarily disabled" message. I, as a user, would hate to see this message...
  • Temporarily disable all accounts (even non existent ones)
The second approach is definitely the most user friendly... but I was deactivating the accounts on the database... sooo... that's where Redis helps again :)

The basic idea is: every time there's an unsuccessful log in attempt, increase the number of failed attempts for that email and set the key's TTL to the time I want the account deactivated for... and before verifying the credentials, I check how many failed attempts that email has had. As the TTL takes care of removing the counter from Redis, I don't have to do anything else.
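
In Redis terms, that's just an INCR plus an EXPIRE, and the token storage from before is a SET with a TTL. Just to make the helpers referenced in the C# code (StoreTokenOnRedis, AddFailedLoginToRedis, GetAmountOfFailedLoginsFromRedis) concrete, here's a minimal sketch with redis-py; the real project is C#, and the key prefixes here are made up:
 
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

def store_token(token, account_id, ttl_seconds):
    # Token as key, account id as value; Redis drops the key when the TTL runs out
    r.setex('token:' + token, ttl_seconds, account_id)

def get_account_id(token):
    # None means the token doesn't exist or already expired
    return r.get('token:' + token)

def touch_token(token, ttl_seconds):
    # Sliding expiration: reset the TTL every time the token is used
    r.expire('token:' + token, ttl_seconds)

def add_failed_login(email, deactivation_seconds):
    key = 'failed-logins:' + email
    r.incr(key)
    # The counter (and the temporary block) disappears by itself after the TTL
    r.expire(key, deactivation_seconds)

def failed_login_count(email):
    return int(r.get('failed-logins:' + email) or 0)
 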

The new C# version is also nicer looking, and that's a plus! (here is the real version)
 
public static LogInResultDT LogIn(string email, string password)
{
    var res = new LogInResultDT() { Status = LogInStatus.INVALID_USER_PWD };
    using (var context = new Context())
    {
        int maxFailedLogins = GetFromConfig("Account.MaxFailedLogins");
        int failedLogins = GetAmountOfFailedLoginsFromRedis(email);

        if (failedLogins >= maxFailedLogins)
        {
            res.Status = LogInStatus.TEMPORARILY_DISABLED;
            return res;
        }
   
        var account = GetAccount(email);
        if (account != null && account.PasswordMatches(password))
        {
            // verify if the account is active once we know that the user knows their pwd and that their account isn't temporarily disabled
            if (!account.IsActive)
            {
                res.Status = LogInStatus.INACTIVE;
                return res;
            }

            res.Status = LogInStatus.SUCCESS;
            res.Token = System.Guid.NewGuid().ToString();

            StoreTokenOnRedis(res.Token, account.Id, GetFromConfig("Account.TokenExpirationSeconds"));
        }
        else
        {
            AddFailedLoginToRedis(email);
        }
    }
    return res;
}
 

That's ok... then, on my Angular side of things, to log in I was doing
 
(function () {
    app.factory('accountService', function ($resource, $http, $q, $log, baseUrl) {
        resource = $resource(baseUrl + 'accounts/:id', { id: "@Id" }, null, {stripTrailingSlashes: false})
        return {
            logIn: function (email, password) {
                var deferred = $q.defer()
                $http.post(baseUrl + 'accounts/log-in', { 'Email': email, 'Password': password })
                    .success(function (response) {
                        if (response.Status == 'SUCCESS') {
                            deferred.resolve(response.Token)
                        } else {
                            deferred.reject(response.Status)
                        }
                    })
                    .error(function (data, code, headers, config, status) {
                        $log.error('Code: ' + code + '\nData: ' + data + '\nStatus: ' + status)
                        deferred.reject(code)
                    })
                return deferred.promise
            },
            resource: resource
        }
    })
})();
 

I'm storing the token on the local session storage, so I just add it as a header on every request by doing this:
 
app.run(function ($rootScope, $window, $http) {
    $rootScope.$on("$routeChangeError", function (event, current, previous, eventObj) {
        if (eventObj.authenticated === false) {
            $window.location.href = '/'
        }
    });

    $http.defaults.headers.common.Authorization = 'gmc-auth ' + $window.sessionStorage.token
});
 

This was all good... but then I realized I had a major flaw... passing the token like that, an attacker could just try different tokens until they got one right. That would be quite a time consuming task, as there are 2^122 or 5,316,911,983,139,663,491,615,228,241,121,378,304 possible combinations (source with extremely interesting comments about it) but... it still felt wrong.
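
If you want to double check that figure, Python's arbitrary precision integers make it a one-liner (a version 4 GUID has 122 random bits):
 
>>> 2 ** 122
5316911983139663491615228241121378304
 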

That's when I included OAuth... If I can give a user a signed token, then I feel safe enough. The idea here is to give a token that expires in 24 hours, and have that token have, as a claim, the GUID I'm using to authenticate them on Redis. Then, even if the OAuth token is valid, I'd verify if it's expired (using the Redis data).

Adding OAuth to the solution was extremely easy following this great article, but I did a few changes to make it fit my particular scenario:
  • I created a TWRAuthorizationServerProvider (deriving from OAuthAuthorizationServerProvider) using my AccountsMgr to handle the log in. It also adds the roles from the database and the permissions (so that I don't need to query the database, I just use the data on the token).
  • I created a TokenValidationAttribute authentication filter (implementing IAuthenticationFilter) that takes care of validating whether a GUID is valid as a token and adding the accountId as a claim. As I also want to let some users act as other users (usually admins, or... me), this is also where I swap the claims if they need to be changed.
  • I created a ClaimsAuthorizeAttribute to use on the controllers to validate that users have the appropriate claims on their tokens
  • I created a BaseApiController that my WebApi controllers derive from; it just exposes the AccountId so that the controllers can use it freely.

So now, a .NET controller with special claims requirements looks like this:
 
[ClaimsAuthorize]
[RoutePrefix("api/accounts")]
public class AccountsController : BaseApiController
{
    [ClaimsAuthorize("permission", "view-all-accounts")]
    public async Task<IEnumerable<AccountDT>> Get()
    {
        return await AccountsMgr.GetAccountsAsync();
    }

    [HttpGet]
    [Route("current")]
    public async Task<AccountDT> CurrentAccountId()
    {
        return await AccountsMgr.GetAccountAsync(_AccountId);
    }
}
 

And a .NET controller that just requires a user logged in looks like this:
 
[ClaimsAuthorize]
public class DevicesController : BaseApiController
{
    public async Task<IEnumerable<DeviceDT>> Get()
    {
        return await DevicesMgr.GetDevicesAsync(_AccountId);
    }
}
 

The angular accountService is pretty straightforward:
 
(function () {
    app.factory('accountService', function ($resource, $http, $q, $log, baseUrl) {
        resource = $resource(baseUrl + 'accounts/:id', { id: "@id" }, {
            current: {
                method: 'GET',
                url: baseUrl + 'accounts/current',
                isArray: false
            }
        })
        return {
            logIn: function (email, password) {
                var deferred = $q.defer()
                data = "grant_type=password&username=" + email + "&password=" + password;
                $http.post(baseUrl + 'token', data, { headers: { 'Content-Type': 'application/x-www-form-urlencoded' } })
                    .success(function (response, status) {
                        deferred.resolve(response.access_token)
                    })
                    .error(function (data, code, headers, config, status) {
                        if (data.error) {
                            deferred.reject(data.error)
                        } else {
                            $log.error('Code: ' + code + '\nData: ' + data + '\nStatus: ' + status)
                            deferred.reject(code)
                        }
                    })
                return deferred.promise
            },
            resource: resource
        }
    })
})();
 

The angular controller that logs a user in has this method:
 
_this.logIn = function (loginForm) {
    if (loginForm.$valid) {
        accountService.logIn(_this.email, _this.password).then(
            function (token) {
                $window.sessionStorage.token = token
                $window.location.href = '/control-panel'
            }, function (reason) {
                errors = []
                if (isFinite(reason)) {
                    errors.push('HTTP Error: ' + reason)
                } else {
                    switch (reason) {
                        case 'INVALID_USER_PWD': reason = 'Invalid email or password'; break
                        case 'INACTIVE': reason = 'Your account is inactive'; break
                        case 'TEMPORARILY_DISABLED': reason = 'Your account has been temporarily disabled due to many unsuccessful login attempts. Try again later.'; break
                        default: reason = 'Unknown code: ' + reason
                    }
                    errors.push(reason)
                }
                _this.showErrors(errors)
            }
        )
    }
    else
    {
        errors = []
        if (loginForm.email.$error.required) {
            errors.push('The email is required')
        } else if (loginForm.email.$error.email) {
            errors.push('The email entered is invalid')
        }
        if (loginForm.password.$error.required) {
            errors.push('The password is required')
        }
        _this.showErrors(errors)
    }
}
 

The angular code that takes care of sending the token (and redirecting the user out if we get a 401 due to an invalid token) looks like this
 
(function(){
    app.config(function ($routeProvider, $locationProvider, $httpProvider) {
        // if we receive a 401, delete the token and redirect to the homepage
        $httpProvider.interceptors.push(function ($q, $window) {
            return {
                'responseError': function (response) {
                    var status = response.status;
                    if (status == 401) {
                        $window.sessionStorage.removeItem('token');
                        $window.location.href = '/';
                    }
                    return $q.reject(response);
                },
            };
        });
    })

    app.run(function ($rootScope, $window, $http) {
        if (!$window.sessionStorage.token) {
            $window.location.href = '/'
        }

        $http.defaults.headers.common.Authorization = 'Bearer ' + $window.sessionStorage.token
        if ($window.sessionStorage.actingAs) {
            $http.defaults.headers.common['Acting-As'] = $window.sessionStorage.actingAs
        }
    });
})();
 

And an angular service that uses it just does
 
(function () {
    app.factory('deviceService', function ($resource, baseUrl) {
        resource = $resource(baseUrl + 'devices/:id', { id: "@id" })
        return {
            resource: resource
        }
    })
})();
 

The only database queries that are done on every request are those the managers need to do in order to perform their tasks... mission accomplished! (or so I think).

You can download the code of the whole project from https://gitlab.gmc.uy/gervasio/twilioregistration (I'll move it to github once I get to the first alpha). It's my intention to eventually pack this as a separate thing so that I can use it on other projects without copying and pasting... but until I find the time to do it, it's going to live in there :)

I'd love to hear your thoughts in the comments :) Is it too much effort for something that's pointless? After all, if an attacker got their hands on a token, they could do pretty much everything for 24 hours (except for changing the email / password, as the email will require email confirmation and the password will require knowledge of the old password).

Tuesday, January 13, 2015

Getting ready for #signalconf

It's been a while since I last wrote in here... mainly because I've been working like crazy, but I'm expecting to start blogging on a regular basis at least until the Signal conference (the new version of the twilio conference).

The (first and) last conference I attended was the 2013 360|iDev one... and even if I ended up leaving the iOS world, I got something that changed my (professional) life from it... The conviction that (almost) everything is possible. I was already working with Twilio's APIs, I started playing with Asterisk, I started this blog, people started contacting me about it, I left the company I was working for and I'm proud to say that I'm working exclusively on stuff that I find interesting.

I went to that conference with very little experience on that particular technology, having only worked on an iOS app before, and without enough knowledge to really stand out from the crowd.

I'm in a very different situation for the Signal conference... I've worked with Twilio on several projects, using different technologies (Python and .NET, both with websockets) and probably everything they offer. I have a good understanding of Asterisk and I'm eager to meet interesting people that can help me take my game to the next level (while I help them do the same)... but in two days, it's going to be crazy to really connect with so many people... that's why I feel I have to do something about it.

The thing that got me noticed in the virtual world is how I integrated Asterisk and Twilio before there was a single blog post around... so I'm going to try to build on it. Something that's missing in Twilio is the ability to have SIP phones register with it, so that developers can deal with SIP phones instead of regular landline or mobile phones. I'm guessing that's a feature that's coming (given that other companies already offer it and that Twilio has recently added SIP Trunking capabilities). Besides adding an option to work with existing SIP terminals, the cost of calling SIP phones is significantly lower than calling landlines or mobile phones around the world, so it only makes sense for them to integrate it into what they sell... but in the mean time, I'm going to do something about it!

In order to have SIP phones register, you need to have a PBX (Asterisk or Freeswitch)... and dealing with them is usually a headache for application developers, so I'm going to build an Open Source "Asterisk as a service" service. It will have a .NET RESTful backend that will handle the application logic and several Ubuntu servers with Asterisk that will take care of the communication with Twilio and the SIP phones. The code to handle the servers will be pure Python and the frontend will be an AngularJS application that will connect to the backend in the same way that the Ubuntu servers will.

I'm going to have it ready for the Signal conference and I expect it to be an effective way for me to show how I work and what I know. I will also put a great deal of effort into detailing what's missing and how things could be improved (I've never worked with a SIP proxy, and for this to be really scalable and redundant, the solution would need one. I don't think I'll have enough time to learn and tackle that though).

I expect to keep on sharing my problems and solutions in here... what started as a way to give back has taken me by surprise... Here's to more surprises and to a great 2015!


See you there?

Wednesday, April 9, 2014

Installing Asterisk 12 on Ubuntu 12.04 with pjproject and SRTP

Today I had to install an Asterisk that could deal with WebRTC. I read on the Asterisk wiki that in order for it to work, it needs to be installed with pjproject and SRTP. Until today, I always used menuselect to choose what to install, but these two buddies are kind of different... they aren't selectable unless you install them before Asterisk.

As I couldn't find a guide with the steps to follow (I took bits of information from different sources and figured some things out by myself), here's what worked for me. I took a lot from Asterisk 12 on a Raspberry Pi | MatthewJordan.net, so thanks to Matthew ;) I also enjoyed how he shows what he's doing, so I'm copying that from there too.

Install the Asterisk dependencies and more stuff we're going to need (I'm also installing libbfd-dev b/c I want to use BETTER_BACKTRACES)

 
gmc@blog:~$ sudo apt-get install build-essential libsqlite3-dev libxml2-dev libncurses5-dev libncursesw5-dev libiksemel-dev libssl-dev libeditline-dev libedit-dev curl libcurl4-gnutls-dev libjansson4 libjansson-dev libuuid1 uuid-dev libxslt1-dev liburiparser-dev liburiparser1 git autoconf libbfd-dev -y

Reading package lists... Done
...
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place

gmc@blog:~$ 
 

Install libsrtp

We first download and uncompress the files (thanks to Alexander Traud for pointing out to me that libsrtp moved to github)
 
gmc@blog:~$ cd ~
gmc@blog:~$ git clone https://github.com/cisco/libsrtp.git
Cloning into 'libsrtp'...
remote: Reusing existing pack: 2037, done.
remote: Total 2037 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (2037/2037), 3.17 MiB | 1.30 MiB/s, done.
Resolving deltas: 100% (1249/1249), done.
Checking connectivity... done.
gmc@blog:~$
 
Then configure and make it!... I figured out the flags for ./configure by trial and error (at one point, Asterisk complained about -fPIC being missing, so I just added it)
 
gmc@blog:~$ cd libsrtp/
gmc@blog:~/libsrtp$ autoconf
gmc@blog:~/libsrtp$ ./configure CFLAGS=-fPIC --prefix=/usr

checking for ranlib... ranlib
checking for gcc... gcc
checking whether the C compiler works... yes
...
config.status: creating doc/Makefile
config.status: creating crypto/include/config.h

gmc@blog:~/libsrtp$ make

gcc -DHAVE_CONFIG_H -Icrypto/include -I./include -I./crypto/include  -fPIC -c srtp/srtp.c -o srtp/srtp.o
gcc -DHAVE_CONFIG_H -Icrypto/include -I./include -I./crypto/include  -fPIC -c crypto/cipher/cipher.c -o crypto/cipher/cipher.o
gcc -DHAVE_CONFIG_H -Icrypto/include -I./include -I./crypto/include  -fPIC -c crypto/cipher/null_cipher.c -o crypto/cipher/null_cipher.o
...
gcc -DHAVE_CONFIG_H -Icrypto/include -I./include -I./crypto/include  -fPIC -L. -o test/rtpw test/rtpw.c test/rtp.c libsrtp.a  -lsrtp
Build done. Please run 'make runtest' to run self tests.

gmc@blog:~$ 
 
I saw that there was a make runtest... it's pretty cool :) here's what you should see
 
gmc@blog:~/libsrtp$ make runtest

gcc -DHAVE_CONFIG_H -Icrypto/include -I./include -I./crypto/include  -fPIC -c crypto/math/math.c -o crypto/math/math.o
crypto/math/math.c: In function 'bitvector_print_hex':
crypto/math/math.c:854:5: warning: format not a string literal and no format arguments [-Wformat-security]
...
libsrtp test applications passed.
...
libcryptomodule test applications passed.
make[1]: Leaving directory `/home/test/srtp/crypto'

gmc@blog:~$ 
 
and then... just install it
 
gmc@blog:~/srtp$ sudo make install

/usr/bin/install -c -d /usr/include/srtp
/usr/bin/install -c -d /usr/lib
cp include/*.h /usr/include/srtp  
cp crypto/include/*.h /usr/include/srtp
if [ -f libsrtp.a ]; then cp libsrtp.a /usr/lib/; fi

gmc@blog:~$ 
 

Install pjproject

Clone the project from its git repo
 
gmc@blog:~/srtp$ cd ~
gmc@blog:~$ git clone https://github.com/asterisk/pjproject pjproject

Cloning into 'pjproject'...
remote: Reusing existing pack: 3636, done.
remote: Total 3636 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (3636/3636), 7.76 MiB | 2.05 MiB/s, done.
Resolving deltas: 100% (1167/1167), done.

gmc@blog:~$ 
 
Configure and make it... again, after a few attempts, I got here
 
gmc@blog:~$ cd pjproject/
gmc@blog:~/pjproject$ ./configure --prefix=/usr --enable-shared --disable-sound --disable-resample --disable-video --disable-opencore-amr --with-external-srtp

checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
...
Further customizations can be put in:
  - 'user.mak'
  - 'pjlib/include/pj/config_site.h'
The next step now is to run 'make dep' and 'make'.

gmc@blog:~/pjproject$ make

for dir in pjlib/build pjlib-util/build pjnath/build third_party/build pjmedia/build pjsip/build pjsip-apps/build ; do \
  if make  -C $dir all; then \
      true; \
...
make[2]: Leaving directory `/home/test/pjproject/pjsip-apps/build'
make[1]: Leaving directory `/home/test/pjproject/pjsip-apps/build'

gmc@blog:~/pjproject$ sudo make install

mkdir -p /usr/lib/
...
sed -e "s!@PJ_LDLIBS@!-lpjsua -lpjsip-ua -lpjsip-simple -lpjsip -lpjmedia-codec -lpjmedia -lpjmedia-videodev -lpjmedia-audiodev -lpjmedia -lpjnath -lpjlib-util  -lgsmcodec -lspeex -lilbccodec -lg7221codec  -lsrtp -lpj -luuid -lm -lrt -lpthread  -lcrypto -lssl!" | \
sed -e "s!@PJ_INSTALL_CFLAGS@!-I/usr/include -DPJ_AUTOCONF=1 -O2 -DPJ_IS_BIG_ENDIAN=0 -DPJ_IS_LITTLE_ENDIAN=1 -fPIC!" > //usr/lib/pkgconfig/libpjproject.pc

gmc@blog:~$ 
 
That should be it! Here's an extra step to verify that it's correctly set up
 
gmc@blog:~/pjproject$ pkg-config --list-all | grep pjproject
libpjproject     libpjproject - Multimedia communication library

gmc@blog:~$ 
 
Done! pjproject is installed! just one more thing...

Install Asterisk

Download and uncompress...
 
gmc@blog:~/pjproject$ cd ~
gmc@blog:~$ wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-12-current.tar.gz

--2014-04-09 21:46:57--  http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-12-current.tar.gz
Resolving downloads.asterisk.org (downloads.asterisk.org)... 76.164.171.238, 2001:470:e0d4::ee
Connecting to downloads.asterisk.org (downloads.asterisk.org)|76.164.171.238|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 56483961 (54M) [application/x-gzip]
Saving to: `asterisk-12-current.tar.gz'
100%[========================================================================================================================>] 56,483,961  10.6M/s   in 5.9s    
2014-04-09 21:47:03 (9.10 MB/s) - `asterisk-12-current.tar.gz' saved [56483961/56483961]

gmc@blog:~$ tar -xzf asterisk-12-current.tar.gz
gmc@blog:~$ 
 
Configure and make menuselect...
 
gmc@blog:~$ cd asterisk*
gmc@blog:~/asterisk-12.1.1$ ./configure --with-pjproject --with-ssl --with-srtp

checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking for gcc... gcc
...
configure: build-cpu:vendor:os: x86_64 : unknown : linux-gnu :
configure: host-cpu:vendor:os: x86_64 : unknown : linux-gnu :

gmc@blog:~/asterisk-12.1.1$ make menuselect
CC="cc" CXX="" LD="" AR="" RANLIB="" CFLAGS="" LDFLAGS="" make -C menuselect CONFIGURE_SILENT="--silent" cmenuselect
make[1]: Entering directory `/home/test/asterisk-12.1.1/menuselect'
gcc  -g -D_GNU_SOURCE -Wall   -c -o menuselect.o menuselect.c

...
 
This is where you can see, under Resource Modules, that we have the res_pjsip_* modules enabled and res_srtp enabled too... you can make changes and tune your Asterisk however you like, then quit by hitting x (that's save and quit)... then, we only have to...
 
gmc@blog:~/asterisk-12.1.1$ make

Generating embedded module rules ...
   [CC] astcanary.c -> astcanary.o
   [LD] astcanary.o -> astcanary
...
 +--------- Asterisk Build Complete ---------+
 + Asterisk has successfully been built, and +
 + can be installed by running:              +
 +                                           +
 +                make install               +
 +-------------------------------------------+

gmc@blog:~/asterisk-12.1.1$ sudo make install

Installing modules from channels...
Installing modules from pbx...
...
 +---- Asterisk Installation Complete -------+
 +                                           +
 +    YOU MUST READ THE SECURITY DOCUMENT    +
 +                                           +
 + Asterisk has successfully been installed. +
 + If you would like to install the sample   +
 + configuration files (overwriting any      +
 + existing config files), run:              +
 +                                           +
 +                make samples               +
 +                                           +
 +-----------------  or ---------------------+
 +                                           +
 + You can go ahead and install the asterisk +
 + program documentation now or later run:   +
 +                                           +
 +               make progdocs               +
 +                                           +
 + **Note** This requires that you have      +
 + doxygen installed on your local system    +
 +-------------------------------------------+

gmc@blog:~$ 
 
If you're like me and want the init.d scripts set up (so that a simple service asterisk start works), you can do
 
gmc@blog:~/asterisk-12.1.1$ sudo make config

 Adding system startup for /etc/init.d/asterisk ...
   /etc/rc0.d/K91asterisk -> ../init.d/asterisk
   /etc/rc1.d/K91asterisk -> ../init.d/asterisk
   /etc/rc6.d/K91asterisk -> ../init.d/asterisk
   /etc/rc2.d/S50asterisk -> ../init.d/asterisk
   /etc/rc3.d/S50asterisk -> ../init.d/asterisk
   /etc/rc4.d/S50asterisk -> ../init.d/asterisk
   /etc/rc5.d/S50asterisk -> ../init.d/asterisk

gmc@blog:~$ 
 
And if you're just getting started and want to see some samples...
 
gmc@blog:~/asterisk-12.1.1$ sudo make samples

Installing adsi config files...
/usr/bin/install -c -d "/etc/asterisk"
Installing configs/asterisk.adsi
...
Installing file phoneprov/polycom.xml
Installing file phoneprov/snom-mac.xml

gmc@blog:~$ 
 
And that's it! Congrats! You have what should be a WebRTC compatible Asterisk :)

Here are all the commands without their outputs, so that you can copy and run everything at once instead of pasting one by one (and b/c I'm sure I'm going to do this again and don't want to have to strip out the responses :P)
 
# dependencies
sudo apt-get install build-essential libsqlite3-dev libxml2-dev libncurses5-dev libncursesw5-dev libiksemel-dev libssl-dev libeditline-dev libedit-dev curl libcurl4-gnutls-dev libjansson4 libjansson-dev libuuid1 uuid-dev libxslt1-dev liburiparser-dev liburiparser1 git autoconf libbfd-dev -y

# srtp
cd ~
git clone https://github.com/cisco/libsrtp.git
cd libsrtp/
autoconf
./configure CFLAGS=-fPIC --prefix=/usr
make

# check that the tests pass
make runtest

sudo make install

# pjproject
cd ~
git clone https://github.com/asterisk/pjproject pjproject
cd pjproject/
./configure --prefix=/usr --enable-shared --disable-sound --disable-resample --disable-video --disable-opencore-amr --with-external-srtp
make
sudo make install

# the following command should return
# libpjproject     libpjproject - Multimedia communication library
pkg-config --list-all | grep pjproject

# asterisk
cd ~
wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-12-current.tar.gz
tar -xzf asterisk-12-current.tar.gz
cd asterisk*
./configure --with-pjproject --with-ssl --with-srtp

# after this command, you can select what you want on your Asterisk
make menuselect

make
sudo make install

# if you want the init.d scripts created
sudo make config
 

Saturday, March 22, 2014

Installing paramiko on Windows 8 64-bits with MinGW

A few weeks ago, I recommended Python to a friend as a great language to code in... multiplatform and all that. He asked me about connecting to an SSH2 server and I told him that there were tons of libraries for everything, and that there was definitely one for that.

I wasn't wrong... but when we met again, he told me he had had a big headache installing any of them... I then decided to give the one that looked best (paramiko) a try and wow... things can get pretty nasty... but I'd like to think that's just b/c of the C part of the world.

I finally figured it out, and as it wasn't straightforward (much less pythonic) at all, I decided to write down my steps here... it may save the next folk who wants to give it a try a lot of hours.

The (general) problem

paramiko can't be easily installed on windows because it uses pycrypto... which is a C library that deals with the encryption part.

The first approach

Their website has a link to a bunch of precompiled packages... so I gave that a try first. The right way of installing such a package is by using easy_install. It did the setup, but when I tried to import the module, it failed with the message ImportError: DLL load failed: %1 is not a valid Win32 application. when its code imported winrandom.

On their site they also say that on some 64-bit systems winrandom just fails, and the only option is to compile it manually. Oh, just my luck.

Compiling it

Ok, if that's what it takes... let's try it out. Even if I have Visual Studio, I'd like to keep it out of the equation. This answer pointed me in the right direction: I had to do it with MinGW.

I had never used it before, but it's extremely straightforward. In the Installation Manager, select the packages
  • mingw32-base
  • mingw32-gcc-g++
  • msys-base
We're also going to need another package that's not there. You should go to all packages and select mingw32-gmp (the dev one, triple check that you select that one).

Once those packages are selected, go to the Installation menu and then to Apply Changes. After the setup, you should add c:\mingw\bin;c:\mingw\mingw32\bin;C:\MinGW\msys\1.0;c:\mingw\msys\1.0\bin;c:\mingw\msys\1.0\sbin to your path.

Now, to get into the beautiful console, you just need to go to Run (Windows + R) and type msys. If that doesn't open a console for you, there's probably something wrong with your path. Be sure to add it after what you already have there.

If you try pip install pycrypto you'll see it fails (and it's actually trying to use Visual Studio). You need to add a file named distutils.cfg inside C:\Python33\Lib\distutils (or whatever Python folder you're using). It should have
 
[build]
compiler=mingw32
 
That will tell python to use mingw32 to compile whatever it needs to compile... and we're one step closer!

Unfortunately, doing pip install pycrypto throws all kinds of errors again... at least, they're different :) The message is always error: unknown type name 'off64_t'... which I didn't have a clue what it meant... but fortunately I found this answer on SO. As he said, it's brutal... time to modify the sys/types.h file :P

Let me save you a few minutes, the same thing happens with the off_t type. Open the file C:\MinGW\include\sys\types.h and search for off_t. You'll find something like
 
#ifndef _OFF_T_
#define _OFF_T_
typedef long _off_t;
#ifndef __STRICT_ANSI__
typedef _off_t off_t;
#endif /* __STRICT_ANSI__ */
#endif /* Not _OFF_T_ */

#ifndef _OFF64_T_
#define _OFF64_T_
typedef __int64 _off64_t;
#ifndef __STRICT_ANSI__
typedef __int64 off64_t;
#endif /* __STRICT_ANSI__ */
#endif /* ndef _OFF64_T */
 

The problem is that the compiler is setting the strict mode... but on the types.h file, if that mode is set, it doesn't add the off_t alias for the _off_t type (I couldn't care less about the strict mode... just want it to run!). In order to fix it, replace that code with
 
#ifndef _OFF_T_
#define _OFF_T_
typedef long _off_t;
typedef _off_t off_t;
#endif /* Not _OFF_T_ */

#ifndef _OFF64_T_
#define _OFF64_T_
typedef __int64 _off64_t;
typedef __int64 off64_t;
#endif /* ndef _OFF64_T */
 
And now... we're always declaring the aliases the pycrypto code uses... almost there! I'm using virtualenv, and if you have more than 1 project, you should too... but here, I'm going to just install it on the entire system to keep it simpler.

Soooo.... we're good to do pip install paramiko and it should work... voilà? nah, not so fast :P I ran my sample program and got ImportError: No module named 'winrandom'... wonderful, winrandom again...

This time the error was fixed by just copying C:\Python33\Lib\site-packages\Crypto\Random\OSRNG\winrandom.pyd into my project folder and now, yeah, voilà!!!

I'm sure there's something missing on my python path or something... but I'd like to set it up just for Python 3.3... does anyone know what the missing part is? If you do, please let me know in the comments.

That's it! you should have paramiko working :) I'm not sure if it's the easiest one (I've seen a couple of wrappers, so my guess is that it isn't) or the fastest one, but I made it work on Windows!!!
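
If you want to make sure it really works end to end, a quick smoke test like this should do (a minimal sketch; the host and credentials are obviously placeholders):
 
import paramiko

client = paramiko.SSHClient()
# Auto-accepting unknown host keys is fine for a smoke test, not for production
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('your.server.example', username='user', password='secret')

stdin, stdout, stderr = client.exec_command('uname -a')
print(stdout.read())

client.close()
 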

You'll see that I mentioned a couple of answers on SO... please go check them out and give them upvotes... it's extremely rewarding, and those are guys that don't have lots of reputation, so it's triple cool that they're giving such great answers.

Good luck!!!!

A RESTful bridge to send and receive SMS using AT commands

I bought the Portech MV-372 to place free calls to both my and my wife's mobile phones. Despite some minor hiccups (like the web server not being available 100% of the time, or stuff like that) it's been working fine.

One of the first things I wanted to do was to have my twilio number forward text messages (SMS) to my mobile phone... but using the Portech (so that they're free). The problem with that is that the Portech uses AT Commands, which are really something to deal with.

You need to open a telnet connection to the device and then issue the commands required to either check whether there are new texts or send a new one... there's no way to tell that a text arrived at the Portech other than by polling.

Oh, and I forgot... you can only have one concurrent telnet connection open... and that's not all... while there's a connection established, you can't place new calls. That's a lovely scenario to work on, right?
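
To give you an idea of what dealing with it looks like, here's a rough sketch of the polling part using standard GSM AT commands in text mode (the host is a placeholder, and I'm leaving out the Portech-specific steps to select the GSM module):
 
import telnetlib

tn = telnetlib.Telnet('192.168.1.50', 23)

tn.write(b'AT+CMGF=1\r\n')               # switch to text mode
tn.read_until(b'OK', 5)

tn.write(b'AT+CMGL="REC UNREAD"\r\n')    # list the unread texts
response = tn.read_until(b'OK', 5)
print(response)

tn.close()
 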

Goals

  1. Have a RESTful service that lets me send and receive texts
  2. Have a RESTful service that lets me ask my carrier how much credit I have on my SIMs
I have recently started playing with python... so this is a perfect fit for my ODROID: connect once a minute to check whether there are new texts to process, and send the ones that are in the queue.

For the second goal, my carrier (antel) exposes this through the SIM Applications... which means I had to dig into the STK AT Commands as well. The bright side is that once I select the appropriate option, what I get is just a regular text saying how much credit I have.

After just four days (which is what made me fall in love with python) I had something that worked. A bridge between the AT Commands and the beautiful REST world... I set up my logic on my windows server and have the bridge running locally on my odroid.

You can find the project on github. There are lots of things to do, but it gets the job done. Basically, when it's running behind nginx, it exposes a service for you to send texts and when a new text arrives it does a RESTful request to a url specified on config.py. In order to see which requests it does and what it expects, you should dig into the code... so it's not exactly ready for production but it has been pretty stable at home.

The setup is pretty raw right now, just an sql script for the database, and the only documentation is in the code... but if there are people interested in it, I could see myself making it more user friendly ;)

If you happen to use it and have something to contribute, feel free to send pull requests.