Alexandre Bourget

geek joy

Posts in category: Web development

Job in Montreal, for Python and web lovers

May 12, 2011 at 03:40 PM

Savoir-faire Linux

Are you a fan of the web? Fond of the latest technologies? Do you love Python, and maybe you've tried PHP in the past? Are you a FLOSS lover who knows Linux personally? Savoir-faire Linux is the place for you! As Canada's Free Software reference, we're a small, friendly SMB that's growing fast.

We have some great work for you, a great environment, a nice office. We're located in Montreal, Canada -- a very nice place if you haven't visited yet.

We are a FLOSS service company that offers expertise on a variety of open source / free software products. We also do some in-house software development and integration of OSS bricks.

We're looking to hire on two fronts: PHP and Python development, both heavily oriented towards the web.

First, we're looking for some good PHP developers to add to our task force. We currently have projects running with Drupal and Symfony.

Second, we're seeking talented Python developers, to do some web oriented development, as well as system-level programming and integration.

Candidates must know the HTML/CSS/JS stack thoroughly. Working in small, cross-functional teams, candidates will be called on to work on all aspects of web apps (design, usability, integration, back-end programming, database and server-side component integration, etc.), depending on the interests and abilities of the team members. Candidates must be eager to learn lots of new things (our teams like a constant dose of innovation). We use the Scrum methodology in-house and invest a lot in our employees to help them get up to speed with development best practices.

If you're a student, then obviously you don't need to know all these things already, but you should be ready to work hard on real projects and learn quite a few new things.

Candidates will have opportunities - if desired - to teach what they learn in some of our training classes, to act as consultants at our customers' sites, and to learn from our support center and infrastructure development team. We are not a huge company, so you won't be a number here. The work atmosphere is great, the office is cool (even in the summer :), we sometimes get to travel for the job, we've got flexible schedules and a social club, we often have "techno-lunches" (midi-technos) to share our knowledge, and there's plenty of good food around (we're located near the Marché Jean-Talon).

For more details, see: http://www.savoirfairelinux.com

We're waiting for your resume at rh@savoirfairelinux.com; please mention this blog in the body of your message.

UPDATE June 7th: corrected typos


New and hot, part 1: Meta presentation

April 01, 2011 at 09:50 AM

This is part 1 of my March 9th 2011 Confoo presentation. Refer to the Table of Contents for the other parts.

This is a meta-presentation. It's a presentation of the tools and things I've used to build and perform the actual presentation I gave. It is not directly related to Pyramid or Python.

Presentation of my setup and the tools used.

Some people come to me and marvel at the flow of my live presentations, because everything seems to just work all the time, even without Internet access. Here is what I've done to make things look like they actually work:

Scripts to handle different tasks with keyboard shortcuts

Here is a script I've written that lists a bunch of events with associated chunks of code to be executed when I launch them. The script lets me hit Super+Space, type in a couple of letters, and then it launches the associated piece of code.

The scripts require the Ubuntu xclip and beep packages to be installed. They are available here:

apps.py, gtkdrive.glade, gtkdrive.py, mod_confoo2011.py, presmod.py

This required setting some global hotkeys like this:

$ # Support for compiz (use `ccsm` to activate the "Commands" plugin)
$ gconftool-2 --type string --set /apps/compiz/plugins/commands/command0 "echo 1 > /tmp/btntrigger"
$ gconftool-2 --type string --set /apps/compiz/plugins/commands/run_command0_key "<Super>space"
$ # Support for metacity:
$ gconftool-2 --type string --set /apps/metacity/keybinding_commands/command_1 "echo 1 > /tmp/btntrigger" 
$ gconftool-2 --type string --set /apps/metacity/global_keybindings/run_command_1 "<Super>space"

Afterwards, hitting <Super>space will pop up a little window like this:

Little box

In there, typing one of the commands that were listed when running the presentation will run the associated method in mod_confoo2011.py.
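The scripts themselves aren't reproduced inline, but the glue between the hotkey and the scripts is easy to picture: the bound command writes to /tmp/btntrigger, and a watcher notices the file appearing. Here is a toy sketch of such a trigger watcher (my guess at the mechanism, for illustration only; the real gtkdrive.py is GTK-based and certainly differs):

```python
import os
import tempfile
import time

def wait_for_trigger(path, timeout=5.0, poll=0.05):
    """Poll until `path` appears, then consume (delete) it.

    Returns True when the trigger fired, False on timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path):
            os.remove(path)  # consume the event so it only fires once
            return True
        time.sleep(poll)
    return False

# Simulate the hotkey: the bound command does `echo 1 > /tmp/btntrigger`
trigger = os.path.join(tempfile.gettempdir(), "btntrigger")
with open(trigger, "w") as f:
    f.write("1\n")

print(wait_for_trigger(trigger))  # True -- the "hotkey press" was detected
```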

Emacs tweaks

I've added this snippet of code to my ~/.emacs file to adjust the font size, making sure people can read from a distance while still fitting 80 characters wide at a 1024x768 resolution (grabbed here):

;; When doing a presentation:
(set-face-attribute 'default nil :height 150)

Also, to speed things up when dealing with boilerplate, I'm using yasnippet for Emacs:

;; .emacs
(add-to-list 'load-path "~/.emacs.d/plugins/yasnippet-0.6.1c")
(require 'yasnippet)
(yas/initialize)
(yas/load-directory "~/.emacs.d/snippets")

together with these yasnippets in my ~/.emacs.d/snippets directory. You'll find in there only the ones I actually used in the presentation, not the standard ones.

Bash shortcuts

I've added a couple of aliases to speed things up in the presentation. Here is a list of what I've used:

# To use throughout the code to refer to my custom-built FFmpeg. See the post on FFmpeg
export FFMPEG=/home/abourget/build/ffmpeg-0.6.1/bin/ffmpeg

# To make sure I load ipython using the local virtualenv
alias ipython="python `which ipython`"

# PIP download cache, don't download things twice.
export PIP_DOWNLOAD_CACHE=~/.pip/download_cache

Using PIP_DOWNLOAD_CACHE allowed me to install things more quickly: if I had downloaded a package already, pip would take it from my download cache. This doesn't prevent *all* Internet access, as pip will still check PyPI to see, for example, whether it has the latest version.

You could also set up a local mirror with the packages you need, and install everything following this method.

Squid caching proxy

At first, I wanted to use Squid caching to fake having an Internet connection, but in the end the presentation room did have Internet and I had a cell phone with tethering enabled, so I didn't need the proxy. Still, here is how I did it the year before:

$ sudo apt-get install squid3

Tweak the squid3 proxy config (/etc/squid3/squid.conf):

# Add:
refresh_pattern .               1440    90%     4320 reload-into-ims ignore-no-cache ignore-reload override-expire
# Comment out:
#refresh_pattern -i (/cgi-bin/|\?) 0    0%      0    
# Enable:
offline_mode on

Restart the server, and you should be all set. In any console where you wish to use the proxy, run:

$ export http_proxy=http://localhost:3128
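Most command-line tools and Python HTTP clients honor that variable. As a sanity check, you can confirm Python sees the proxy (shown with Python 3's urllib here; 2011-era Python 2 has the equivalent urllib.getproxies()):

```python
import os
import urllib.request

# Same value as the export above
os.environ["http_proxy"] = "http://localhost:3128"

# urllib reads proxy settings straight from the environment
proxies = urllib.request.getproxies_environment()
print(proxies["http"])  # http://localhost:3128
```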

Pre-fetched downloads

I've downloaded the HTML5 boilerplate code from http://html5boilerplate.com/.

I also had a copy of socket.io.js from here.

Video and sound recordings

To record the screen, I used this FFmpeg command:

~/build/ffmpeg-0.6.1/bin/ffmpeg -y -f x11grab -r 12 -s 1024x768 -i :0.0 -vcodec libx264 -vpre veryfast Presentation.mkv

Sound recording was made with an old SIM-less HTC Dream (USA's G1), running a free Android application named Virtual Recorder, recording in PCM WAVE using the microphone-earbuds thing that came with the phone.

I also had someone tape the presentation, using an external camcorder.

It was a SANYO Xacti camera sitting on a tripod, which spat out already-encoded H.264/AVC footage.

Final rendering

The final rendering was done with Cinelerra, using ReframeRT to adjust the frame rate of the Presentation.mkv file produced by FFmpeg's x11grab.

I used the audio from the Android device and the video from x11grab.

YouTube publishing

I've used YouTube to publish the video parts, and this Blogofile blog for the tutorials themselves. Here is the YouTube result:


New and hot, part 6: Redis, Publish and Subscribe

March 31, 2011 at 04:40 PM

This is part 6 of my March 9th 2011 Confoo presentation. Refer to the Table of Contents for the other parts.

The original announcement for my Confoo presentation said I would use RabbitMQ for the real-time communications, but after finding a way to do it with Redis, I found that simpler and more straightforward for a live demo.

Redis Integration

In this part, we will use Redis's PubSub features to communicate in real-time between the different web browsers connected with a Socket.IO (or a Websocket) to our Gevent-based application.

Redis is a FOSS database which supports the Publish/Subscribe metaphor. It is written in C and is quite robust. It is also packaged and ready to install on your Ubuntu machine.

redis Show the install instructions for redis-server.

To install Redis, simply run:

$ sudo apt-get install redis-server

and you're ready to go! To get things into production, you'll want to learn how to secure your installation.

You can try the redis server locally if you kill the system-wide instance (through the init.d/service scripts):

$ redis-server
...
[21194] 08 Mar 22:08:54 * Server started, Redis version 1.3.15
...
[21194] 08 Mar 22:08:54 * The server is now ready to accept connections on port 6379
[21194] 08 Mar 22:08:54 - 0 clients connected (0 slaves), 533436 bytes in use
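Before hooking up the Python bindings, it's worth seeing the Publish/Subscribe metaphor in isolation. This toy in-process broker (just an illustration of the semantics, not Redis) shows the key property we'll rely on: every subscriber to a channel gets its own copy of each published message:

```python
from collections import defaultdict

class ToyPubSub:
    """In-process stand-in for Redis's PUBLISH/SUBSCRIBE semantics."""

    def __init__(self):
        self.channels = defaultdict(list)  # channel name -> subscriber queues

    def subscribe(self, channel):
        queue = []
        self.channels[channel].append(queue)
        return queue

    def publish(self, channel, message):
        # Deliver a copy to every subscriber; return the receiver count,
        # like Redis's PUBLISH reply
        for queue in self.channels[channel]:
            queue.append(message)
        return len(self.channels[channel])

broker = ToyPubSub()
a = broker.subscribe("foo")
b = broker.subscribe("foo")
print(broker.publish("foo", "hello"))     # 2 -- two subscribers reached
print(a == ["hello"] and b == ["hello"])  # True -- each got its own copy
```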

Python Redis bindings and PubSub commands

Let's set up what is needed for messaging using Redis.

First of all, we'll want to connect to the Redis server and publish a new message. We'll add some code to the views.py and set up the publisher. We'll name the channel foo:

...
import redis
from json import loads, dumps
...
def encode_video(filename, request):
    ...
    if p.returncode == 0:
        r = redis.Redis()
        msg = {'type': 'video', 'fileid': str(fileid),
               'url': request.route_url('video', fileid=fileid)}
        r.publish("foo", dumps(msg))

Now, we'll want to have our Socket.IO socket listen for any incoming message from the Redis subscription:

class ConnectIOContext(SocketIOContext):
    def msg_connect(self, msg):
        ...
        def listener():
            r = redis.Redis()
            r.subscribe(['foo'])
            for m in r.listen():
                if not self.io.connected():
                    return
                print "From Redis:", m
                if m['type'] == 'message':
                    self.io.send(loads(m['data']))
        self.spawn(listener)

When we receive a message as the m object, we'll just pass it directly to the current Socket.IO socket. We've subscribed to the foo channel, so we'll receive a copy of any message sent to it.
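Note that the message crosses Redis as a plain JSON string: dumps() on the publishing side, loads() on the subscribing side. The round-trip is lossless for this kind of dict (the fileid and url values below are illustrative):

```python
from json import loads, dumps

# Same shape as the message published by encode_video()
msg = {'type': 'video', 'fileid': '4d8a1f2e',
       'url': 'http://localhost:6543/video/4d8a1f2e.webm'}

wire = dumps(msg)          # what r.publish("foo", ...) actually sends
print(loads(wire) == msg)  # True -- the subscriber rebuilds an identical dict
```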

We will also want the client to handle the new message, so we'll tweak the call to go() and the definition for the go() function, in index.html:

      <script>
        function go(url) {
          $('#video').html('<div>Video ready: ' + url + '</div><video controls preload src="'+ url + '" />');
          ...
        }
      </script>

      ...
      socket.on('message', function(obj) {
        ...
        if (obj.type == "video") {
          go(obj.url);
        }
        ...

Here, if the message is of type video, like the one sent by the encode_video() function, it triggers the playback immediately.

Let's make sure a background process is spawned when someone asks us to encode a video. The encoding process itself could be totally decoupled from this server and sent to a render farm, for example. In views.py, we tweak:

def get_iframe(request):
    if 'file' in request.params:
        ...
        gevent.spawn(encode_video, '/tmp/video.mp4', request)

That's it! I hope you liked the series! I might add the Android app demo if I have time, so please be patient.

If you would like us to implement anything you've seen here in a real-life project, or to kick start some of your projects, don't hesitate to send out an e-mail at contact@savoirfairelinux.com and mention this blog.


New and hot, part 5: MongoDB integration

March 23, 2011 at 01:32 PM

This is part 5 of my March 9th 2011 Confoo presentation. Refer to the Table of Contents for the other parts.

MongoDB integration

In this part, we will make our system scale by using a distributed file system to store our encoded files. We will be using GridFS, which is backed by MongoDB and can thus be sharded and replicated to scale.

First off, let's install MongoDB. You can get Ubuntu installation instructions here.

mongoinstall Display the install instructions on screen.
$ sudo -s
# echo "deb http://downloads.mongodb.org/distros/ubuntu 10.10 10gen" > /etc/apt/sources.list.d/mongodb.list
# apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
# apt-get update
# apt-get install mongodb-stable

We will create a local MongoDB data dir, and we'll run the server as a normal user:

$ mkdir mongodata

To launch MongoDB on the default port, as a normal user:

$ mongod --dbpath ./mongodata

Back to Pyramid: most of the database-related elements go in resources.py in a Pyramid project. This is where we'll add our code to handle the connections to MongoDB:

## resources.py
...

from gridfs import GridFS
import pymongo

mongo_conn = pymongo.Connection()

def add_mongo(event):
    req = event.request
    req.db = mongo_conn['testdb']
    req.fs = GridFS(req.db)

We import GridFS, the filesystem handler, and pymongo, the low-level MongoDB library. We keep a reference to a connection in mongo_conn. This connection object is pooled: when we get a reference to a particular database, it takes a connection from the pool and returns it upon deletion.

The add_mongo() function defined here will be hooked to execute on each new request. This way, we will have db and fs attributes on every request going through our app. To do so, we'll modify __init__.py this way:

def main(...):
    ...
    config.add_subscriber('foo.resources.add_mongo',
                          'pyramid.events.NewRequest')
    ...

And then we're set for the database.
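The subscriber mechanism itself is nothing magic: Pyramid calls add_mongo(event) for every NewRequest, and the function hangs attributes on event.request. Stripped of Pyramid and MongoDB, the pattern looks like this (FakeRequest/FakeEvent and the string handles are stand-ins for illustration):

```python
class FakeRequest(object):
    pass

class FakeEvent(object):
    def __init__(self, request):
        self.request = request

def add_mongo(event):
    # Stand-in for the real subscriber: attach per-request resources
    event.request.db = "testdb-handle"  # would be mongo_conn['testdb']
    event.request.fs = "gridfs-handle"  # would be GridFS(req.db)

subscribers = [add_mongo]  # what config.add_subscriber() registers

def new_request():
    request = FakeRequest()
    for subscriber in subscribers:  # Pyramid fires the NewRequest event here
        subscriber(FakeEvent(request))
    return request

req = new_request()
print((req.db, req.fs))  # ('testdb-handle', 'gridfs-handle')
```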

GridFS: Using MongoDB as a filestore

Here we will store our compressed video files in the distributed filesystem MongoDB provides. GridFS is backed by MongoDB's features (a document-oriented database for the metadata) and has elegant Python bindings.

If we store videos in MongoDB, we'll want to be able to retrieve them from there as well, so we will need to tweak the route to get them. In __init__.py:

def main(...):
    ...
    config.add_route('video', 'video/{fileid}.webm')
    ...

See that fileid? We'll get it in the view, filled with what matched in the requested URL.
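Under the hood, that {fileid} placeholder is just pattern matching on the URL path. A rough regex equivalent (for intuition only; this is not Pyramid's actual implementation):

```python
import re

# Rough equivalent of the 'video/{fileid}.webm' route pattern
pattern = re.compile(r'^/video/(?P<fileid>[^/]+)\.webm$')

m = pattern.match('/video/4d8a1f2e0b7c.webm')
print(m.groupdict())  # {'fileid': '4d8a1f2e0b7c'} -- what request.matchdict holds
```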

Then, we'll modify the encoding process to dump the output directly to MongoDB, using Python's pipes:

def encode_video(filename, request):
    p = subprocess.Popen('$FFMPEG -y -i %s ... -ac 2 -' % filename,
                         shell=True, stdout=subprocess.PIPE)
    stdout, stderr = p.communicate()
    fileid = request.fs.put(stdout, content_type="video/webm",
                            original_user="123")
    print "Video URL: ", request.route_url('video', fileid=fileid)

What's to note here is the addition of the request parameter in the encode_video() call. This allows us to create a URL with route_url(). We also changed FFmpeg's output: instead of pointing to /tmp/output.webm, it now writes to stdout, using a dash. And we've added stdout=subprocess.PIPE to get a handle on that output.
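The stdout-capture mechanics can be seen in isolation with a trivial command standing in for FFmpeg (the printf is just a placeholder producing fake bytes):

```python
import subprocess

# Stand-in for the FFmpeg pipeline: any command that writes bytes to stdout.
# The trailing '-' in the real command is what tells FFmpeg to write there;
# stdout=subprocess.PIPE is what lets Python grab those bytes.
p = subprocess.Popen('printf "fake webm bytes"', shell=True,
                     stdout=subprocess.PIPE)
stdout, stderr = p.communicate()  # blocks until the process exits

print(p.returncode)  # 0
print(stdout)        # b'fake webm bytes' -- ready to hand to request.fs.put()
```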

The request.fs.put() method creates a new file in the GridFS attached to request.fs, taking FFmpeg's captured stdout as the file's content. The end result is a new file in the distributed file system. put() accepts arbitrary keyword arguments and adds them as metadata to the fs.files collection's document objects. That's why original_user="123" can be added, and searched for later on.

Then, we'll make sure we fetch the data from MongoDB when a web request comes in for a video:

objectid YASnippet to add the import statements.
dataapp YASnippet to add the WSGIAPP line.
from pymongo.objectid import ObjectId
from paste.fileapp import DataApp

@view_config(route_name="video")
def get_video(request):
    oid = ObjectId(request.matchdict['fileid'])
    filein = request.fs.get(oid)
    wsgiapp = DataApp(filein, headers=[("Content-Type", "video/webm")],
                      filelike=True)
    return request.get_response(wsgiapp)

(Note that we need a patched paste.fileapp.DataApp for this to work.)

This chunk gets the fileid from the URL, makes it an ObjectId, and queries the GridFS instance with request.fs.get(oid). This returns a file-like object that we pass to DataApp, a simple file-serving mechanism that deals with byte ranges, ETags and those kinds of things.

To try it out, upload a video, and find the link in the server's stdout and load it in your browser (or mplayer). If it plays correctly, then it works!

Next will be Redis integration with its PubSub support, so that when the video is ready, it's pushed directly to all web viewers.

If you would like us to implement anything you've seen here in a real-life project, or to kick start some of your projects, don't hesitate to send out an e-mail at contact@savoirfairelinux.com mentioning this blog.


New and hot, part 4: Pyramid, Socket.IO and Gevent

March 17, 2011 at 12:10 PM

This is part 4 of my March 9th 2011 Confoo presentation. Refer to the Table of Contents for the other parts.

Gevent

In this section, we will switch the HTTP server from paster's to Gevent's. We will also implement Socket.IO in our Pyramid application and have the client communicate in full duplex with the server. We will then graph the CPU usage (of a Linux machine) directly in the web viewer.

Gevent is a micro-threading library (à la Stackless Python) whose HTTP server supports a large number of concurrent connections, yielding an event when new data is available. It is based on libevent and Python's greenlets.

gevent Show the install instructions for gevent and pyramid_socketio.

Requirements for Gevent:

(env)$ sudo apt-get install libevent-dev
(env)$ pip install gevent gevent-websocket gevent-socketio

This also installs greenlet, on which Gevent is based.

Pyramid's Socket.IO integration layer

We'll install the pyramid_socketio package, which ties in all the Socket.IO support and allows us to write beautiful stateful classes for each client.

(env)$ pip install pyramid_socketio

This package will bring in its dependencies: gevent-websocket, gevent-socketio, gevent and greenlet.

gevent-websocket gives us WebSocket support, implementing the WebSocket protocol on top of Gevent. It is used by gevent-socketio when dealing with WebSocket; otherwise, Socket.IO's fallbacks are handled directly.

Switching from Paster to Gevent's server

Thankfully, pyramid_socketio provides simple replacements for our calls to paster serve --reload file.ini and paster serve file.ini: socketio-serve-reload file.ini and socketio-serve file.ini, respectively.

(env)$ paster serve --reload development.ini
...
^C^C caught in monitor process
(Killed)
(env)$ socketio-serve-reload development.ini
...

The Socket.IO server handles the initialization of the logging library, and sets up a watcher if you ask for code reloading. It reads the host/port from your .ini file, just like paster did, and it will attempt to listen on port 843 (it must be run as root for that) to set up the Flash Socket Policy server, in case you want to use the Flash WebSocket fallback. Otherwise, it's a drop-in replacement for paster serve.

When using the socketio-serve server, Gevent is automatically initialized and monkey patches several modules (like socket and threading) to make sure their blocking operations yield control to Gevent instead of blocking on I/O. From that point on, any code that starts a new thread will, without knowing it, launch a new greenlet, using the same APIs, transparently.
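Monkey patching just means swapping attributes on an already-imported module at runtime, so existing code picks up the replacement without changing a line. A toy version of the idea (gevent's real patcher lives in gevent.monkey and is far more thorough):

```python
import time

original_sleep = time.sleep
calls = []

def cooperative_sleep(seconds):
    # A real patcher would yield to the event loop here instead of blocking
    calls.append(seconds)

time.sleep = cooperative_sleep  # the "monkey patch"

time.sleep(5)  # existing code calls time.sleep as usual...
print(calls)   # [5] -- ...but our replacement ran instead of blocking

time.sleep = original_sleep  # undo the patch
```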

Also, if you want subprocess support in your application (which we will need), get this version, by a guy who ported the stdlib subprocess to work with gevent's event loop and never block on I/O.

subprocess Copy the vendor/subprocess.py file from the web location above, in the ~/Foo/foo directory.. for use within views.py

Quick look at the code

Let's have a quick look at how Gevent works, through a stripped-down version of the socketio-serve script:

# imports...
# get host/port
# init logging
# grab --watch argv parameter, assign do_reload

def socketio_serve():
    cfgfile = "file.ini"

    def main():
        app = paste.deploy.loadapp('config:%s' % cfgfile, relative_to='.')
        server = socketio.SocketIOServer((host, port), app,
                                         resource="socket.io")

        print "Serving on %s:%d (http://127.0.0.1:%d) ..." % (host, port, port)
        server.serve_forever()

    def reloader():
        from paste import reloader
        reloader.install()
        reloader.watch_file(cfgfile)
        for lang in glob.glob('*/locale/*/LC_MESSAGES/*.mo'):
            reloader.watch_file(lang)

    jobs = [gevent.spawn(main)]
    if do_reload:
        jobs.append(gevent.spawn(reloader))
    gevent.joinall(jobs)

This shows how Gevent handles concurrent jobs: you spawn greenlets with gevent.spawn() and wait for them to terminate with gevent.joinall(). The reloader() borrows the reloader code from paste (the same one used when running paster serve --reload), which exits the program completely with error code 3. The socketio-serve-reload script wraps around this program, catches that exit code, and restarts the server when something is modified.
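That restart dance can be sketched as a loop around the server process: exit code 3 means "a watched file changed, restart me", anything else ends the loop. A simplified model of what socketio-serve-reload does (the fake server below stands in for the real subprocess):

```python
RELOAD_EXIT_CODE = 3  # what paste's reloader exits with on file changes

def run_until_stopped(run_server):
    """Restart `run_server` as long as it exits with the reload code."""
    restarts = 0
    while True:
        code = run_server()
        if code != RELOAD_EXIT_CODE:
            return restarts, code
        restarts += 1

# Fake server: "a file changed" twice, then a clean shutdown
exit_codes = iter([3, 3, 0])
restarts, final = run_until_stopped(lambda: next(exit_codes))
print((restarts, final))  # (2, 0) -- restarted twice, then exited normally
```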

Our Socket.IO-aware application, client-side

Now, let's write some basic WebSocket code, directly in our index.html:

socketio YASnippet to copy the boilerplate
socketio Event to copy socket.io.js to ~/Foo/foo/static/js/socket.io.js, fitting with the boilerplate.
  <script src="http://cdn.socket.io/stable/socket.io.js"></script>

  <script>
    var socket = null;
    $(document).ready(function() {
      socket = new io.Socket(null, {});

      socket.on('connect', function() {
        console.log("Connected");
        socket.send({type: "connect", userid: 123});
      });
      socket.on('message', function(obj) {
        console.log("Message", JSON.stringify(obj));
        if (obj.type == "some") {
          console.log("do some");
        }
      });
      socket.on('error', function(obj) {
        console.log("Error", JSON.stringify(obj));
      });
      socket.on('disconnect', function() {
        console.log("Disconnected");
      });

      console.log("Connecting...");
      socket.connect();
    });
  </script>

When the socket gets connected, we immediately send a message, with the type connect. This will be mapped on the server side (by pyramid_socketio) to the msg_connect method in the SocketIOContext provided to socketio_manage.

The null value and empty options object passed to new io.Socket() mean we'll connect to the same host and port as the current request, and the URL will be /socket.io/... with some extra path information, like the transport being used and the session ID (used by Socket.IO to maintain an open channel).

The server-side socket.io handler

Configuring socket.io in our Pyramid app goes like this:

def main(...):
    ...
    #config.add_static_view('socket.io/lib', 'foo:static')
    config.add_route('socket_io', 'socket.io/*remaining')
    ...

If you want Flash fallback support as an alternative WebSocket implementation, uncomment the add_static_view call to serve the WebSocketMain.swf file. Setting this up is slightly more complicated: it requires Flash installed on the client side, a Flash Policy Server on the server side, and an additional JavaScript file that you can get at https://github.com/gimite/web-socket-js. Check out this repository in foo/static:

(env)$ ## Optional, for Flash websockets fallback support
(env)$ cd foo/static
(env)$ git clone https://github.com/gimite/web-socket-js.git
(env)$ cd ../..

The WebSocketMain.swf file must be served from the same domain; otherwise, you'll have to use the insecure one and change the location in your HTML output to something like:

<script>WEB_SOCKET_SWF_LOCATION = '/path/to/WebSocketMainInsecure.swf';</script>

If you want the Flash support, don't forget to add the script tag in your HTML file. See the documentation of web-socket-js for more info about these things.

You can turn off the Flash fallback altogether by restricting the transports in your call to io.Socket, e.g. io.Socket(null, {transports: ['websocket', 'htmlfile', 'xhr-multipart', 'xhr-polling', 'jsonp-polling']});.

Back to Pyramid. This is the basic setup to handle messages using the pyramid_socketio helpers:

manage Copy and paste the Socket.IO server-side boilerplate.
### In views.py:

from pyramid.response import Response
from pyramid_socketio.io import SocketIOContext, socketio_manage
import gevent

class ConnectIOContext(SocketIOContext):
    # self.io is the Socket.IO socket
    # self.request is the request
    def msg_connect(self, msg):
        print "Connect message received", msg
        self.msg("connected", hello="world")

# Socket.IO implementation
@view_config(route_name="socket_io")
def socketio_service(request):
    print "Socket.IO request running"
    retval = socketio_manage(ConnectIOContext(request))
    return Response(retval)

The first section is a SocketIOContext, provided by the pyramid_socketio package. It is a simple object that maps incoming messages from the socket to class methods. It also provides convenience methods like spawn(), msg() and error() to spawn a new greenlet, send a new packet (or message), or send an error message (in a pre-defined format). The Socket.IO object itself, representing the socket, is available via self.io (read gevent-socketio's documentation for more information on that object), and the original request for the socket is held in self.request. If you send a message like {type: "connect", userid: 123} from the web application, it will run the msg_connect() method with a dict representing your Javascript object as the second parameter.

The second section is the Pyramid handler. Once gevent-socketio has done its job (dealing with the abstraction of the transports), it launches the request against the normal WSGI application, and it arrives, just like a normal GET request, at one of our views. This is where we pass control to the pyramid_socketio manager. The manager listens for incoming packets and dispatches them to the SocketIOContext we've provided.
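The message-to-method mapping boils down to a getattr() dispatch on "msg_" + type. A bare-bones version of the idea (not pyramid_socketio's actual code, just the shape of it):

```python
class ToyContext(object):
    def msg_connect(self, msg):
        return "connected user %s" % msg["userid"]

def dispatch(context, msg):
    # Look up msg_<type> on the context, as the manager does
    handler = getattr(context, "msg_" + msg["type"], None)
    if handler is None:
        return "error: unknown message type %r" % msg["type"]
    return handler(msg)

ctx = ToyContext()
print(dispatch(ctx, {"type": "connect", "userid": 123}))  # connected user 123
print(dispatch(ctx, {"type": "bogus"}))  # error: unknown message type 'bogus'
```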

Using Flot.js to load dynamic stats from the server

What we want to do here is graph some values coming from the server, pushed to the client. This paradigm will irrevocably change the way we consume and construct web applications in the near future.

flot Open the Flot website with an example.

Flot.js is a nice graphing library that works completely on the client side. See its website for more examples and details.

Start by getting jquery.flot.js into your project:

(env)$ cd ~/Foo/foo/static
(env)$ mkdir js
(env)$ cd js
(env)$ wget http://people.iola.dk/olau/flot/jquery.flot.js

Then let's add this to our HTML template, somewhere after the code that loads jQuery itself:

  <script src="${request.static_url('foo:static/js/jquery.flot.js')}"></script>

Add a placeholder for the graph somewhere in your page:

  <div id="graph"></div>

Then, this could be the handler to display some basic data (put it below the Socket.IO stuff):

data YASnippet to add the values for d1, and d2
  <script>
    var d1 = [[1, 2], [2, 4], [3, 0], [4, 5]];
    var d2 = [[1, 4], [2, 6], [3, 7], [4, 2]];
    $.plot($('#graph'), [d1, d2], {});
  </script>

If we modify [d1, d2] to [{label: "Hello", data: d1}, d2], we get a label associated with the first series, gratis.

Now we want to have some data fed from the server to the client, in real-time. Let's add a handler for messages labeled showdata on the client side. That'll be in our socket.on('message', ...) handler:

      socket.on('message', function(obj) {
        ...
        if (obj.type == "showdata") {
          d1.push([d1.length, obj.point]);
          $.plot($('#graph'), [{label: "Bob", data: d1}]);
        }
      });

To have those values sent from the server, we'll modify our server-side code slightly:

sendcpu YASnippet to add the 'sendcpu' stub.
class ConnectIOContext(SocketIOContext):
    ...
    def msg_connect(self, msg):
        ...
        def sendcpu():
            """Calculate CPU utilization"""
            prev = None
            while self.io.connected():
                vals = map(int, [x for x in open('/proc/stat').readlines()
                                 if x.startswith('cpu ')][0].split()[1:5])
                if prev:
                    percent = (100.0 * (sum(vals[:3]) - sum(prev[:3])) / 
                               (sum(vals) - sum(prev)))
                    self.msg("showdata", point=percent)
                prev = vals
                gevent.sleep(0.5)
        self.spawn(sendcpu)
    ...

Aside from all the CPU usage calculation, there are two relevant parts here: the call to self.spawn() and the call to self.msg().

The spawn() method allows us to spawn a new greenlet and attach it to the SocketIOContext, so that when we kill the Socket.IO session, we also kill all the greenlets related to it. This helps prevent memory leaks. It's a thin wrapper around gevent's spawn method that keeps a reference in the SocketIOContext.

The msg() method is also provided by the pyramid_socketio package. It takes the message type as its first argument, and everything specified as keyword arguments afterwards is used to create the JSON object that will be transmitted. You can pass lists, dicts, etc.
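The percentage math in sendcpu() deserves a worked example: vals holds the cumulative user/nice/system/idle jiffy counters from /proc/stat, and the CPU usage over the interval is the non-idle delta divided by the total delta. With made-up counter samples (illustrative numbers only):

```python
# /proc/stat counters: user, nice, system, idle (cumulative jiffies)
prev = [100, 0, 50, 850]  # sample taken at time t
vals = [130, 0, 60, 910]  # sample taken half a second later

busy_delta = sum(vals[:3]) - sum(prev[:3])  # 190 - 150 = 40 busy jiffies
total_delta = sum(vals) - sum(prev)         # 1100 - 1000 = 100 jiffies total
percent = 100.0 * busy_delta / total_delta

print(percent)  # 40.0 -- the CPU was busy 40% of that interval
```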

What if...

Remember how, in the last post, we were dealing with some FFmpeg video encoding and had it displayed in a video tag? What if we could receive a message when the video is done transcoding? Wouldn't it be cool if we could have such a simple implementation:

      socket.on('message', function(obj) {
        console.log("Message", obj);
        if (obj.type == "video") {
          go(obj.url);
        }
      });

with slight tweaks to the go() function to handle an argument, which would be the URL to ask for the video:

        function go(url) {
          $('#video').html('<div>Video ready: ' + url + '</div><video controls preload src="' + url + '" />');
          ...
        }

We'll do just that in the next episode.

If you would like us to implement anything you've seen here in a real-life project, or to kick start some of your projects, don't hesitate to send out an e-mail at contact@savoirfairelinux.com mentioning this blog. We'll be glad to help.

