Alexandre Bourget

geek joy

New and hot, part 6: Redis, Publish and Subscribe

March 31, 2011 at 04:40 PM

This is part 6 of my March 9th 2011 Confoo presentation. Refer to the Table of Contents for the other parts.

The original announcement for my Confoo presentation said I would use RabbitMQ for the real-time communications, but after finding a way to do it with Redis, I decided Redis was simpler and more straightforward for a live demo.

Redis Integration

In this part, we will use Redis's PubSub features to communicate in real-time between the different web browsers connected with a Socket.IO (or a Websocket) to our Gevent-based application.

Redis is a FOSS database which supports the Publish/Subscribe metaphor. It is written in C and is quite robust. It is also readily packaged for Ubuntu.

redis Show the install instructions for redis-server.

To install Redis, simply run:

$ sudo apt-get install redis-server

and you're ready to go! To get things into production, you'll want to learn how to secure your installation.

You can try the redis server locally if you kill the system-wide instance (through init.d/service scripts):

$ redis-server
...
[21194] 08 Mar 22:08:54 * Server started, Redis version 1.3.15
...
[21194] 08 Mar 22:08:54 * The server is now ready to accept connections on port 6379
[21194] 08 Mar 22:08:54 - 0 clients connected (0 slaves), 533436 bytes in use
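
With the server running, here's a quick sanity check from Python (a hedged aside, not part of the live demo), using the redis package we installed back in part 2:

import redis

r = redis.Redis()                # connects to localhost:6379 by default
r.set('confoo', 'demo')
print r.get('confoo')            # -> 'demo'
print r.publish('foo', 'hello')  # returns the number of subscribers that got it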

Python Redis bindings and PubSub commands

Let's set up what is needed for messaging using Redis.

First of all, we'll want to connect to the Redis server and publish a new message. We'll add some code to views.py to set up the publisher. We'll name the channel foo:

...
import redis
from json import loads, dumps
...
def encode_video(filename, request):
    ...
    if p.returncode == 0:
        r = redis.Redis()
        msg = {'type': 'video', 'fileid': str(fileid),
               'url': request.route_url('video', fileid=fileid)}
        r.publish("foo", dumps(msg))

Now, we'll want to have our Socket.IO socket listen for any incoming message from the Redis subscription:

class ConnectIOContext(SocketIOContext):
    def msg_connect(self, msg):
        ...
        def listener():
            r = redis.Redis()
            r.subscribe(['foo'])
            for m in r.listen():
                if not self.io.connected():
                    return
                print "From Redis:", m
                if m['type'] == 'message':
                    self.io.send(loads(m['data']))
        self.spawn(listener)

When we receive a message as the m object, we pass its payload directly to the current Socket.IO socket. Since we've subscribed to the foo channel, we'll receive a copy of any message published to it.
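
As a side note, here is a self-contained sketch of the same round-trip outside the web app. In more recent redis-py releases, subscribe() and listen() live on a separate PubSub object obtained from r.pubsub() instead of on the connection itself, but the messages keep the same dict shape with type, channel and data keys:

import redis
from json import dumps, loads

r = redis.Redis()
p = r.pubsub()                   # newer redis-py: pubsub gets its own object
p.subscribe('foo')

r.publish('foo', dumps({'type': 'video', 'url': '/video/xyz.webm'}))

for m in p.listen():
    # m looks like {'type': 'message', 'channel': 'foo', 'data': '...'}
    if m['type'] == 'message':
        print loads(m['data'])
        break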

We will also want the client to handle the new message, so we'll tweak the call to go() and the definition for the go() function, in index.html:

      <script>
        function go(url) {
          $('#video').html('<div>Video ready: ' + url + '</div><video controls preload src="'+ url + '" />');
          ...
        }
      </script>

      ...
      socket.on('message', function(obj) {
        ...
        if (obj.type == "video") {
          go(obj.url);
        }
        ...

Here, if the message is of type video, like the one sent by the encode_video() function, it triggers playback immediately.

Let's make sure a background process is spawned when someone asks us to encode a video. The encoding process itself could be totally decoupled from this server and sent to a render farm, for example. In views.py, we tweak:

def get_iframe(request):
    if 'file' in request.params:
        ...
        gevent.spawn(encode_video, '/tmp/video.mp4', request)

That's it! Hope you liked the series! I might add the Android app demo if I have time, so please be patient.

If you would like us to implement anything you've seen here in a real-life project, or to kick start some of your projects, don't hesitate to send out an e-mail at contact@savoirfairelinux.com and mention this blog.


New and hot, part 5: MongoDB integration

March 23, 2011 at 01:32 PM

This is part 5 of my March 9th 2011 Confoo presentation. Refer to the Table of Contents for the other parts.

MongoDB integration

In this part, we will make our system scale by using a distributed file store for our encoded files. We will be using GridFS, which is backed by MongoDB and can be sharded and replicated to scale.

First off, let's install MongoDB. You can get Ubuntu installation instructions here.

mongoinstall Display the install instructions on screen.
$ sudo -s
# echo "deb http://downloads.mongodb.org/distros/ubuntu 10.10 10gen" > /etc/apt/sources.list.d/mongodb.list
# apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
# apt-get update
# apt-get install mongodb-stable

We will create a local MongoDB data dir, and we'll run the server as a normal user:

$ mkdir mongodata

To launch MongoDB on the default port, as a normal user:

$ mongod --dbpath ./mongodata

Back to Pyramid: most of the database-related elements go in resources.py in a Pyramid project. This is where we'll add our code to handle the connections to MongoDB:

## resources.py
...

from gridfs import GridFS
import pymongo

mongo_conn = pymongo.Connection()

def add_mongo(event):
    req = event.request
    req.db = mongo_conn['testdb']
    req.fs = GridFS(req.db)

We import GridFS, the filesystem handler, and pymongo, the low-level MongoDB library. We keep a reference to a connection in mongo_conn. This connection object is pooled: when we get a reference to a particular database, it takes a connection from the pool and returns it to the pool upon deletion.

The add_mongo() function defined here will be hooked up to run for each new request. This way, we'll have db and fs attributes on each request going through our app. To hook it up, we modify __init__.py this way:

def main(...):
    ...
    config.add_subscriber('foo.resources.add_mongo',
                          'pyramid.events.NewRequest')
    ...

And with that, we're set for the database.
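
Just to illustrate (a hypothetical tweak to the home view from part 2, not something the demo needs), every view can now reach MongoDB and GridFS through the request:

### In views.py:
@view_config(route_name="home", renderer="index.html")
def home(request):
    # request.db is the pymongo database, request.fs the GridFS wrapper
    nb_files = request.db.fs.files.count()   # documents in GridFS's files collection
    return {'boo': 'ahh', 'nb_files': nb_files}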

GridFS: Using MongoDB as a filestore

Here we will store our compressed video files in the distributed filesystem MongoDB provides. GridFS is backed by MongoDB's features (a document-oriented database for metadata) and has elegant Python bindings.

If we store videos in MongoDB, we'll want to be able to retrieve them from there as well, so we will need to tweak the route to get them. In __init__.py:

def main(...):
    ...
    config.add_route('video', 'video/{fileid}.webm')
    ...

See that fileid? In the view, we'll get it filled with whatever matched in the requested URL.

Then, we'll modify the encoding process to dump the output directly to MongoDB, using Python's pipes:

def encode_video(filename, request):
    p = subprocess.Popen('$FFMPEG -y -i %s ... -ac 2 -' % filename,
                         shell=True, stdout=subprocess.PIPE)
    stdout, stderr = p.communicate()
    fileid = request.fs.put(stdout, content_type="video/webm",
                            original_user="123")
    print "Video URL: ", request.route_url('video', fileid=fileid)

What's to note here is the addition of the request parameter in the encode_video() call. This allows us to create a URL with route_url(). We also changed FFmpeg's output: it no longer points to /tmp/output.webm, but to stdout, using a dash. We've added stdout=subprocess.PIPE as well, to get hold of that output.

The request.fs.put() method creates a new file in the GridFS attached to request.fs, taking the data read from FFmpeg's stdout as its content. The end result is a new file in the distributed file system. put() also accepts arbitrary keyword arguments and adds them as metadata on the documents in the fs.files collection. That's why original_user="123" can be added, and searched for later on.
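
For example (a hedged sketch with a made-up files_by_user route), since that metadata ends up as plain fields on the fs.files documents, it can be queried back with regular pymongo calls:

@view_config(route_name="files_by_user", renderer="json")
def files_by_user(request):
    # list the GridFS files stored for a given user, via the custom metadata
    docs = request.db.fs.files.find({'original_user': '123'})
    return {'files': [str(d['_id']) for d in docs]}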

Then, we'll make sure we fetch the data from MongoDB when a web request comes in for a video:

objectid YASnippet to add the import statements.
dataapp YASnippet to add the WSGIAPP line.
from pymongo.objectid import ObjectId
from paste.fileapp import DataApp

@view_config(route_name="video")
def get_video(request):
    oid = ObjectId(request.matchdict['fileid'])
    filein = request.fs.get(oid)
    wsgiapp = DataApp(filein, headers=[("Content-Type", "video/webm")],
                      filelike=True)
    return request.get_response(wsgiapp)

(Note that we need a patched paste.fileapp.DataApp for this to work.)

This chunk gets the fileid from the URL, turns it into an ObjectId, and queries the GridFS instance with request.fs.get(oid). That returns a file-like object which we pass to DataApp, a simple file-serving mechanism that deals with Byte-Ranges, ETags and that kind of thing.
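
If you'd rather not patch paste.fileapp, a rougher alternative (my sketch, which gives up the Byte-Range/ETag handling) is to stream the GridFS file through a plain Pyramid Response. The GridOut object returned by request.fs.get() is iterable, so it can serve as the app_iter:

from pyramid.response import Response

# drop-in replacement for get_video() above, without Range support
@view_config(route_name="video")
def get_video(request):
    filein = request.fs.get(ObjectId(request.matchdict['fileid']))
    return Response(app_iter=filein, content_type="video/webm")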

To try it out, upload a video, and find the link in the server's stdout and load it in your browser (or mplayer). If it plays correctly, then it works!

Next will be Redis integration with its PubSub support, so that when the video is ready, it's pushed directly to all web viewers.

If you would like us to implement anything you've seen here in a real-life project, or to kick start some of your projects, don't hesitate to send out an e-mail at contact@savoirfairelinux.com mentioning this blog.


New and hot, part 4: Pyramid, Socket.IO and Gevent

March 17, 2011 at 12:10 PM

This is part 4 of my March 9th 2011 Confoo presentation. Refer to the Table of Contents for the other parts.

Gevent

In this section, we will switch the HTTP server from paster's to Gevent's. We will also implement Socket.IO in our Pyramid application and have the client communicate in full duplex with the server. We will then graph the CPU usage (of a Linux machine) directly in the web viewer.

Gevent is a micro-threading library (à la Stackless Python) and its HTTP server supports a large number of concurrent connections, each yielding an event when new data is available. It is based on libevent and Python greenlets.
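
To get a feel for the model, here is a tiny illustration (not from the talk) of cooperative greenlets: gevent.sleep() yields to the event loop, so both greenlets interleave on a single OS thread:

import gevent

def worker(name):
    for i in range(3):
        print name, i
        gevent.sleep(0.1)    # cooperative yield: lets the other greenlet run

gevent.joinall([gevent.spawn(worker, 'a'),
                gevent.spawn(worker, 'b')])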

gevent Show the install instructions for gevent and pyramid_socketio.

Requirements for Gevent:

(env)$ sudo apt-get install libevent-dev
(env)$ pip install gevent gevent-websocket gevent-socketio

This will also install greenlet, on which Gevent is based.

Pyramid's Socket.IO integration layer

We'll install the pyramid_socketio package, which ties in all the Socket.IO support and allows us to write beautiful stateful classes for each client.

(env)$ pip install pyramid_socketio

This package will bring in its dependencies: gevent-websocket, gevent-socketio, gevent and greenlet.

gevent-websocket gives us WebSocket support, implementing the WebSocket protocol on top of Gevent. It is used by gevent-socketio when dealing with the WebSocket transport; the other Socket.IO fallbacks are handled directly.

Switching from Paster to Gevent's server

Thankfully, pyramid_socketio provides simple scripts to replace our calls to paster serve --reload file.ini and paster serve file.ini: socketio-serve-reload file.ini and socketio-serve file.ini, respectively.

(env)$ paster serve --reload development.ini
...
^C^C caught in monitor process
(Killed)
(env)$ socketio-serve-reload development.ini
...

The Socket.IO server handles the initialization of the logging library, sets up a watcher if you ask for code reloading, and reads the host/port from your .ini file, just like paster did. It will also attempt to listen on port 843 (it must be run as root for that) to set up the Flash Socket Policy server, if you want to use the Flash WebSocket fallback. Otherwise, it's a drop-in replacement for paster serve.

When using the socketio-serve server, Gevent is automatically initialized and monkey patches several modules (like socket and threading) to make sure their functions yield control to Gevent instead of blocking on I/O. From that point on, any code that would start a new thread will, without knowing it, launch a new greenlet through the same APIs, transparently.
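
Here is a hedged, standalone sketch of what that means in practice: after gevent.monkey.patch_all(), code written against the threading and socket modules runs on greenlets without being modified:

from gevent import monkey
monkey.patch_all()           # patches socket, threading, time.sleep, ...

import threading
import urllib2

def fetch(url):
    # urlopen() now yields to the gevent loop while waiting on the socket
    print url, len(urllib2.urlopen(url).read())

threads = [threading.Thread(target=fetch, args=(u,))
           for u in ('http://example.com/', 'http://example.org/')]
for t in threads:
    t.start()                # transparently spawns a greenlet
for t in threads:
    t.join()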

Also, if you want subprocess support in your application (which we will need), get this version, by someone who ported the stdlib subprocess to work with gevent's event loop so it never blocks on I/O.

subprocess Copy the vendor/subprocess.py file from the web location above into the ~/Foo/foo directory, for use within views.py.

Quick look at the code

Let's have a quick look at how Gevent works, using a stripped-down version of the socketio-serve script:

# imports...
# get host/port
# init logging
# grab --watch argv parameter, assign do_reload

def socketio_serve():
    cfgfile = "file.ini"

    def main():
        app = paste.deploy.loadapp('config:%s' % cfgfile, relative_to='.')
        server = socketio.SocketIOServer((host, port), app,
                                         resource="socket.io")

        print "Serving on %s:%d (http://127.0.0.1:%d) ..." % (host, port, port)
        server.serve_forever()

    def reloader():
        from paste import reloader
        reloader.install()
        reloader.watch_file(cfgfile)
        for lang in glob.glob('*/locale/*/LC_MESSAGES/*.mo'):
            reloader.watch_file(lang)

    jobs = [gevent.spawn(main)]
    if do_reload:
        jobs.append(gevent.spawn(reloader))
    gevent.joinall(jobs)

This shows how Gevent handles concurrent jobs: you spawn greenlets with gevent.spawn() and wait for them to terminate with gevent.joinall(). The reloader() borrows the reloader code from paste (the same one used when running paster serve --reload) and exits the program entirely with error code 3. The socketio-serve-reload script wraps around this program, catches that exit code, and restarts the server when something is modified.
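
The restart logic itself isn't shown above, but roughly (my own sketch, not the actual socketio-serve-reload source) it boils down to re-running the serving process for as long as it exits with paste's reload code:

import subprocess

def serve_with_reload(argv):
    while True:
        # 'socketio-serve' stands in here for whatever child command is wrapped
        code = subprocess.call(['socketio-serve'] + argv)
        if code != 3:        # 3 means "a watched file changed, restart me"
            return code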

Our Socket.IO-aware application, client side

Now, let's write some basic WebSocket code, directly in our index.html:

socketio YASnippet to copy the boilerplate
socketio Event to copy socket.io.js to ~/Foo/foo/static/js/socket.io.js, fitting with the boilerplate.
  <script src="http://cdn.socket.io/stable/socket.io.js"></script>

  <script>
    var socket = null;
    $(document).ready(function() {
      socket = new io.Socket(null, {});

      socket.on('connect', function() {
        console.log("Connected");
        socket.send({type: "connect", userid: 123});
      });
      socket.on('message', function(obj) {
        console.log("Message", JSON.stringify(obj));
        if (obj.type == "some") {
          console.log("do some");
        }
      });
      socket.on('error', function(obj) {
        console.log("Error", JSON.stringify(obj));
      });
      socket.on('disconnect', function() {
        console.log("Disconnected");
      });

      console.log("Connecting...");
      socket.connect();
    });
  </script>

When the socket gets connected, we immediately send a message, with the type connect. This will be mapped on the server side (by pyramid_socketio) to the msg_connect method in the SocketIOContext provided to socketio_manage.

The null value and empty object passed to new io.Socket() mean we're going to connect to the same host and port as the current request, and the URL will be /socket.io/... with some extra path information, like the transport being used and the session ID (used by Socket.IO to keep a channel open).

The server-side socket.io handler

Configuring socket.io in our Pyramid app goes like this:

def main(...):
    ...
    #config.add_static_view('socket.io/lib', 'foo:static')
    config.add_route('socket_io', 'socket.io/*remaining')
    ...

If you want Flash fallback support as an alternative WebSocket implementation, uncomment the add_static_view call to serve the WebSocketMain.swf file. Setting this up is slightly more complicated: it requires Flash installed on the client side, a Flash Policy Server on the server side, and an additional JavaScript file that you can get at https://github.com/gimite/web-socket-js. Check out this repository in foo/static:

(env)$ ## Optional, for Flash websockets fallback support
(env)$ cd foo/static
(env)$ git clone https://github.com/gimite/web-socket-js.git
(env)$ cd ../..

The WebSocketMain.swf file must be served from the same domain; otherwise, you'll have to use the insecure one and change its location in your HTML output to something like:

<script>WEB_SOCKET_SWF_LOCATION = '/path/to/WebSocketMainInsecure.swf';</script>

If you want the Flash support, don't forget to add the script tag in your HTML file. See the documentation of web-socket-js for more info about these things.

You can turn off the Flash fallback altogether by passing a transports list that leaves out flashsocket to your call to io.Socket, e.g. io.Socket(null, {transports: ['websocket', 'htmlfile', 'xhr-multipart', 'xhr-polling', 'jsonp-polling']});.

Back to Pyramid. This is the basic setup to handle messages using the pyramid_socketio helpers:

manage Copy and paste the Socket.IO server-side boilerplate.
### In views.py:

from pyramid.response import Response
from pyramid_socketio.io import SocketIOContext, socketio_manage
import gevent

class ConnectIOContext(SocketIOContext):
    # self.io is the Socket.IO socket
    # self.request is the request
    def msg_connect(self, msg):
        print "Connect message received", msg
        self.msg("connected", hello="world")

# Socket.IO implementation
@view_config(route_name="socket_io")
def socketio_service(request):
    print "Socket.IO request running"
    retval = socketio_manage(ConnectIOContext(request))
    return Response(retval)

The first section is a SocketIOContext, provided by the pyramid_socketio package. It is a simple object that maps incoming messages from the socket to class methods. It also provides convenience methods like spawn(), msg() and error() to spawn a new greenlet, send a new packet (or message), or send an error message (in a pre-defined format). The Socket.IO object itself, representing the socket, is available via self.io (read gevent-socketio's documentation for more information on that object) and the original request for the socket is held in self.request. If you send a message like {type: "connect", userid: 123} from the web application, it will run the msg_connect() method with a dict representing your JavaScript object as the second parameter.
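
To make that mapping concrete, here is a hedged example of another handler following the same msg_<type> convention (the echo message type is made up for illustration). A socket.send({type: "echo", text: "hi"}) from the browser would end up in msg_echo() with msg == {'type': 'echo', 'text': 'hi'}:

class EchoIOContext(SocketIOContext):
    def msg_echo(self, msg):
        # answers on the same socket as {"type": "echoed", "text": "hi"}
        self.msg("echoed", text=msg.get('text', ''))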

The second section is the Pyramid handler. Once gevent-socketio has done its job (dealing with the abstraction of the transports), it launches the request against the normal WSGI application, and it arrives, just like a normal GET request, at one of our views. This is where we pass control to the pyramid_socketio manager. The manager listens for incoming packets and dispatches them to the SocketIOContext we've provided.

Using Flot.js to load dynamic stats from the server

What we want to do here is graph some values coming from the server, pushed to the client. This paradigm will irrevocably change the way we consume and build web applications in the near future.

flot Open the Flot website with an example.

Flot.js is a nice graphing library that works completely on the client side. See its website for more examples and details.

Start by getting jquery.flot.js into your project:

(env)$ cd ~/Foo/foo/static
(env)$ mkdir js
(env)$ cd js
(env)$ wget http://people.iola.dk/olau/flot/jquery.flot.js

then add this to our HTML template, somewhere after the code that loads jQuery itself:

  <script src="${request.static_url('foo:static/js/jquery.flot.js')}"></script>

Next, add a placeholder for the graph somewhere in your page:

  <div id="graph"></div>

Then, this bit of script displays some basic data (put it below the Socket.IO stuff):

data YASnippet to add the values for d1, and d2
  <script>
    var d1 = [[1, 2], [2, 4], [3, 0], [4, 5]];
    var d2 = [[1, 4], [2, 6], [3, 7], [4, 2]];
    $.plot($('#graph'), [d1, d2], {});
  </script>

If we modify [d1, d2] to [{label: "Hello", data: d1}, d2], we'll have a label associated with it gratis.

Now we want to have some data fed from the server to the client, in real-time. Let's add a handler for messages labeled showdata on the client side. That'll be in our socket.on('message', ...) handler:

      socket.on('message', function(obj) {
        ...
        if (obj.type == "showdata") {
          d1.push([d1.length, obj.point]);
          $.plot($('#graph'), [{label: "Bob", data: d1}]);
        }
      });

To have those values sent from the server side, we'll slightly modify our server-side code:

sendcpu YASnippet to add the 'sendcpu' stub.
class ConnectIOContext(SocketIOContext):
    ...
    def msg_connect(self, msg):
        ...
        def sendcpu():
            """Calculate CPU utilization"""
            prev = None
            while self.io.connected():
                vals = map(int, [x for x in open('/proc/stat').readlines()
                                 if x.startswith('cpu ')][0].split()[1:5])
                if prev:
                    percent = (100.0 * (sum(vals[:3]) - sum(prev[:3])) / 
                               (sum(vals) - sum(prev)))
                    self.msg("showdata", point=percent)
                prev = vals
                gevent.sleep(0.5)
        self.spawn(sendcpu)
    ...

Aside from the CPU usage calculation, there are two relevant parts here: the call to self.spawn() and the call to self.msg().

The spawn() method lets us spawn a new greenlet and attach it to the SocketIOContext, so that when we kill the Socket.IO session, we also kill all the greenlets related to it. This helps prevent memory leaks. It's a thin wrapper around gevent's spawn method that keeps a reference in the SocketIOContext.

self.msg() is a method provided by the pyramid_socketio package. It takes the type of the message as its first argument, and everything specified as keyword arguments afterwards is used to build the JSON object that gets transmitted. You can pass lists, dicts, etc.
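
For instance (a small hedged illustration reusing the SocketIOContext import above), the keyword arguments become the fields of the JSON object the browser receives:

class StatsIOContext(SocketIOContext):
    def msg_connect(self, msg):
        # the client's socket.on('message', ...) handler receives:
        #   {"type": "stats", "points": [1, 2, 3], "meta": {"host": "localhost"}}
        self.msg("stats", points=[1, 2, 3], meta={"host": "localhost"})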

What if...

Remember, in the last post, we were dealing with some FFmpeg video encoding and displaying the result in a video tag? What if we could receive a message when the video is done transcoding? Wouldn't it be cool to have an implementation as simple as this:

      socket.on('message', function(obj) {
        console.log("Message", obj);
        if (obj.type == "video") {
          go(obj.url);
        }
      });

with slight tweaks to the go() function to handle an argument, which would be the URL to ask for the video:

        function go(url) {
          $('#video').html('<div>Video ready: ' + url + '</div><video controls preload src="' + url + '" />');
          ...
        }

We'll do just that in the next episode.

If you would like us to implement anything you've seen here in a real-life project, or to kick start some of your projects, don't hesitate to send out an e-mail at contact@savoirfairelinux.com mentioning this blog. We'll be glad to help.


New and hot, part 3: FFmpeg video, HTML5 and drag'n'drop encoding

March 14, 2011 at 03:15 PM

This is part 3 of my March 9th 2011 Confoo presentation. Refer to the Table of Contents for the other parts.

FFmpeg encoding of an uploaded video file

In this part, we will create a drag'n'drop frame to upload a video file recorded with a phone, and have it transcoded server-side to WebM. This way we'll be able to play it back in the browser.

Before doing any WebM-related encoding, we need to make sure our FFmpeg build has support for it. If yours doesn't, here are the steps needed to build one:

$ wget http://webm.googlecode.com/files/libvpx-v0.9.5.tar.bz2
$ wget http://webm.googlecode.com/files/ffmpeg-0.6.1_libvpx-0.9.2-3.diff.gz
$ wget http://www.ffmpeg.org/releases/ffmpeg-0.6.1.tar.bz2
$ tar -jxvf ffmpeg-0.6.1.tar.bz2
$ gunzip ffmpeg-0.6.1_libvpx-0.9.2-3.diff.gz
$ cd ffmpeg-0.6.1/
$ patch -p1 < ffmpeg-0.6.1_libvpx-0.9.2-3.diff 
$ cd ..
$ tar -jxvf libvpx-v0.9.5.tar.bz2 
$ cd libvpx-v0.9.5/
$ sudo apt-get install yasm
$ ./configure
$ make
$ sudo make install
$ cd ../ffmpeg-0.6.1/
$ ## You'll need libavformat, libavcodec, libswscale, libavutil, libfaad, libfaac, libvorbis, libx264 and things like that.. 
$ ./configure --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-pthreads --enable-x11grab --enable-libvorbis --enable-libvpx --prefix=$PWD
$ ### Optionally add --enable-libx264 if you want to.., you'll need libx264 dev packages.
$ make -j 9
$ mv libvpx-*ffpreset ffpresets/
$ make install
$ export FFMPEG=$PWD/bin/ffmpeg

We'll use this encoding line to transform my Nexus One's video recording into a web-ready media file:

$ $FFMPEG -y -i "INPUT_FILE" -threads 8 -f webm -aspect 16:9 -vcodec libvpx -deinterlace -g 120 -level 216 -profile 0 -qmax 42 -qmin 10 -rc_buf_aggressivity 0.95 -vb 2M -acodec libvorbis -aq 90 -ac 2 /tmp/OUTPUT.webm
$ ## You can optionally take out the -deinterlace flag if you're dealing with progressive material.

Now, we'll want to be able to upload a file from the web interface. For that, we'll use the browser's drag'n'drop support. Since we're lazy, we'll use the method specified here to speed things up. We'll create an IFRAME with everything required to handle the drag'n'drop upload. We'll call it foo/templates/iframe.html:

iframe YAsnippet to paste that file, need to add the % if request.POST: part
<script>
<!--
  var entered = 0;
-->
</script>
<body ondragenter="entered++;document.getElementById('uploadelement').style.display='block'" ondragleave="entered--;if (!entered) document.getElementById('uploadelement').style.display='none'">
  <form method="post" enctype="multipart/form-data" id="uploadform">

    % if request.POST:
      <div>Uploaded, processing...</div>
    % endif

    Drop a video file here to process...
    <input type="file" id="uploadelement" name="file" onchange="if (this.value) { document.getElementById('uploadform').submit(); }" style="display:none;position:absolute;top:0;left:0;right:0;bottom:0;opacity:0;" />
  </form>
</body>

I've added the % if request.POST block, some Mako markup that shows "Uploaded, processing..." once something has been submitted through the form.

Note that the file-upload field is conveniently named file. We'll refer to that when we want to access the uploaded file.

In our index.html file, we'll add this snippet to create the drag'n'drop iframe, and a video tag for the video to be played:

    ...
    <div id="main" role="main">
      <div id="video"></div>
      <iframe src="${request.route_url('iframe')}"></iframe>
    </div>
    ...

Then we'll need something to get it to the browser, so we'll use this little view in views.py:

@view_config(route_name="iframe", renderer="iframe.html")
def get_iframe(request):
    return {}

and this route configuration in __init__.py:

def main(global_config, **settings):
    ...
    config.add_route('iframe', 'iframe.html')
    ...

This means we have a named route called iframe but the URL to reach it will be /iframe.html. The named route is used only to map code to URL locations.

Now, here is everything required to get the encoding to work, in views.py:

ffmpeg YAsnippet which contains the FFMPEG command line, including quotes.
### tweak get_iframe(), add encode_video():

@view_config(route_name="iframe", renderer="iframe.html")
def get_iframe(request):
    if 'file' in request.params:
        f = request.POST.get('file')
        tmpfile = '/tmp/video.mp4'
        open(tmpfile, 'wb').write(f.file.read())  # binary mode for the video bytes
        # send to ffmpeg...
        encode_video(tmpfile)
    return {}

def encode_video(filename):
    import subprocess
    cmd = '$FFMPEG -y -i %s -threads 8 -f webm -aspect 16:9 -vcodec libvpx -deinterlace -g 120 -level 216 -profile 0 -qmax 42 -qmin 10 -rc_buf_aggressivity 0.95 -vb 2M -acodec libvorbis -aq 90 -ac 2 /tmp/output.webm'
    # shell=True lets the shell expand $FFMPEG (exported when we built FFmpeg)
    p = subprocess.Popen(cmd % filename, shell=True)
    p.communicate()

Also, we'll need something to serve the video file itself (in views.py):

@view_config(route_name="video")
def get_video(request):
    from paste.fileapp import FileApp
    return request.get_response(FileApp("/tmp/output.webm"))

and a way to wire-in the routes (in __init__.py):

def main(global_config, **settings):
    ...
    config.add_route('video', 'video.webm')
    ...

Now all that's missing is a way to actually see the video, so let's add a button to trigger it manually once we know the video has been encoded. We add a go() function that starts the playback, plus a button to call it, and we're set! We'll do that in index.html:

      ...
      <div id="video"></div>

      <script>
        function go() {
          $('#video').html('<video controls preload src="/video.webm" />');
          $('#video video')[0].play();
        }
      </script>
      <button onclick="go()">Add video</button>

      <iframe src="${request.route_url('iframe')}"></iframe>
      ...

There you go. We now have encoding working, and we're able to play the result back in the web browser!

What if...

Now, what if we could have real-time messaging with the application, to know what's going on over there? Why not graph stats about the process, and have the video pop up when it's done encoding? That'll be the concern of our next episodes.

If you would like us to implement anything you've seen here in a real-life project, or to kick start some of your projects, don't hesitate to send out an e-mail at contact@savoirfairelinux.com mentioning this blog. We'll be glad to help.


New and hot, part 2: First Pyramid setup

March 14, 2011 at 02:15 PM

This is part 2 of my March 9th 2011 Confoo presentation. Refer to the Table of Contents for the other parts.

Pyramid installation

Setup of the "Foo" pyramid application

We'll install those packages:

  • pyramid: the web framework.
  • mongokit: which brings in pymongo, the low-level mongo driver.
  • kombu: for AMQP/RabbitMQ communication. To be used later on.
  • redis: to access a Redis server. Used in the last part of this presentation.
  • pyramid_socketio: pyramid and Socket.IO integration for real-time apps. This will bring in all the Gevent machinery. We'll use that later on also.

First, we setup the virtual environment:

$ cd ~
$ virtualenv --distribute env
$ . env/bin/activate
NOTE: each time you see the (env) prompt prefix, it means we're inside the virtual environment. To activate it (after a reboot or whatever), run . env/bin/activate again.

Then install what we need, create a new project from the pyramid_starter template, and install it in the environment (in development mode):

(env)$ pip install pyramid mongokit kombu pyramid_socketio redis
(env)$ paster create -t pyramid_starter Foo
...
(env)$ cd Foo
(env)$ python setup.py develop
(env)$ paster serve --reload development.ini

You'll notice that Pyramid was made to use the same Paste server and configuration we were used to in Pylons 1.0. Let's have a look at the default page at http://127.0.0.1:6543. Needless to say, it is much prettier than before! We can also take a sneak peek at the documentation and notice that it is very extensive and beautifully laid out.

site Show/load in the browser the project URL.
docs Show the pre-loaded documentation, or load it.

Exploring the directory layout

(env)$ cd Foo
(env)$ find .
.
./templates
./templates/mytemplate.pt
./static
./static/pyramid.png
./static/pyramid-small.png
./static/transparent.gif
./static/headerbg.png
./static/middlebg.png
./static/favicon.ico
./static/footerbg.png
./static/ie6.css
./static/pylons.css
./resources.py
./__init__.py
./views.py
./tests.py

A notable difference from Pylons 1.0 is that Pyramid doesn't impose any structure on the directories you lay out. You can easily create a one-file application, or lay out your files in whatever way fits your project best. This is very refreshing.

In the pyramid_starter template, the model.py was renamed to resources.py, but you could rename it as you wish. Simply tweak __init__.py accordingly.

The static/ directory contains all the static files, and replaces the public/ directory of Pylons 1.0.

templates/ is our old templates holder, with snippets of HTML in the templating language you like the most. Pyramid isn't stubborn about the templating language you choose, or decide not to choose: it runs the templating engine you want, based on the extension of your template files, so you can mix several templating languages in the same project. By default, it uses the Chameleon templating engine, a compiler for XML-based Zope Page Templates and Genshi templates.

The central part of the project is in __init__.py... the foo module itself. That's where all the config and wiring are tied together. We'll add a couple of config items there in a moment.

Notice in the __init__.py file that there's nothing about sessions (Beaker) and not much about databases. That's because they're pluggable, and we'll add them as required. Fear not, everything is thoroughly documented.

Adding Mako support

I love Mako, so we'll tack on Mako support:

def main(global_config, **settings):
    ...
    config.add_renderer('.html', 'pyramid.mako_templating.renderer_factory')
    ...

This will make all future rendering of .html files use the Mako rendering engine. I used .html just for convenience; you might want to use the .mako extension if you want to integrate well with other components.

You'll want to add these settings to development.ini. mako.directories is required for Mako to work:

### inside Foo/development.ini
[app:Foo]
...
mako.directories = foo:templates
#mako.module_directory = %(here)s/data/templates
...

The second setting, mako.module_directory, is where compiled Mako templates are written. Make sure to enable it in production, as it speeds things up a lot.

Writing our first view

This year, I'm not going to demonstrate Mako, so I'll simply load the excellent and highly recommended HTML5 Boilerplate (by Paul Irish) directly into my templates dir.

$ cd ~/Foo
$ git clone ~/build/html5-boilerplate
$ cp -r html5-boilerplate/* foo/static
$ # We'll put index.html in the templates/ dir though..
$ mv foo/static/index.html foo/templates

We will need to tweak the HTML5 boilerplate so that the URLs pointing to our scripts and resources use Pyramid's routing. Get the full patch here. This is a sneak peek:

html5 YAsnippet for a stripped-down version of the html5boilerplate.
--- a/foo/templates/index.html
+++ b/foo/templates/index.html
@@ -26,18 +26,18 @@
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
 
   <!-- Place favicon.ico & apple-touch-icon.png in the root of your domain and delete these references -->
-  <link rel="shortcut icon" href="/favicon.ico">
-  <link rel="apple-touch-icon" href="/apple-touch-icon.png">
+  <link rel="shortcut icon" href="${request.static_url('foo:static/favicon.ico')}">
+  <link rel="apple-touch-icon" href="${request.static_url('foo:static/apple-touch-icon.png')}">

You can apply it this way:

(env)$ cd ~/Foo
(env)$ patch -p1

and paste the patch in, then hit Ctrl+D.

We'll also add a little style in foo/static/css/style.css after the CSS resets. There's a specially marked section for you:

In the demonstration, I'm actually using a stripped-down version of the html5 boilerplate (it's a yasnippet for Emacs).

styles Copy and paste in foo/static/css/style.css
 /* Primary Styles
    Author: Alexandre Bourget
 */
div#container {
  margin: 0 auto;
  width: 800px;
  border: 1px solid #aaa;
  border-radius: 10px;
  padding: 10px;
  -webkit-box-shadow: 5px 5px 5px #ddd;
}
h1 {
  text-shadow: 2px 2px 2px #ddd;
  font-size: 22px;
  margin-bottom: 10px;
}
footer {
  font-size: 10px;
  text-align: center;
  margin-top: 15px;
}
iframe {
  width: 780px;
  height: 70px;
  border: 1px solid black;
}
div#graph {
  width: 750px;
  height: 300px;
}
div#video {
  text-align: center;
}
video {
  width: 640px;
  height: 480px;
}

We'll go into index.html and add a little hello world:

     ...
     <header>
       <h1>Hello Excellent World!</h1>
     </header>
     ...

Now let's add the required calls to show that on the front-page:

view YASnippet to add a new view

In views.py:

from pyramid.view import view_config

@view_config(route_name="home", renderer="index.html")
def home(request):
    return {'boo': 'ahh'}

In __init__.py:

def main(...):
    ...
    config.add_route('home', '')
    config.scan('foo.views')
    ...

and hop! http://localhost:6543 should now answer the call.

That's it for now! But what if we could actually do something interesting, like dragging a video file from the desktop and having it encoded server-side so that it could be played back via the web? We'll do just that in our next episode!

If you would like us to implement anything you've seen here in a real-life project, or to kick start some of your projects, don't hesitate to send out an e-mail at contact@savoirfairelinux.com mentioning this blog.

UPDATE May 3rd 2011: Simplified the Mako integration (from 2 lines to one).

