Alexandre Bourget

geek joy

Using memento with Pylons

January 20, 2010 at 04:57 PM

Tired of using paster serve --reload development.ini and waiting for your code to reload?

memento allows you to dynamically reload your code -- or parts of it -- without restarting your whole server.

Install it:

$ easy_install memento

To integrate with Pylons, follow these steps:

Go to your config/middleware.py, and add:

import memento

Then, a bit lower, replace:

def make_app(...):
    ...
    # The Pylons WSGI app
    app = PylonsApp()
    ...

with:

def make_app(...):
    ...
    # The Pylons WSGI app
    #app = PylonsApp()
    app = memento.Assassin('pylons.wsgiapp:PylonsApp()', ['yourpackage'])
    ...

The second parameter to Assassin() is a list of packages you want reloaded on each request. You probably want your whole package to be reloaded, or you can be more granular.
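
For instance, to be more granular -- a rough sketch, with made-up submodule names:

app = memento.Assassin('pylons.wsgiapp:PylonsApp()',
                       ['yourpackage.controllers', 'yourpackage.lib'])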

NOTE: don't forget the () after PylonsApp -- memento has to call the object to get a WSGI app, and you'll get errors otherwise.

The neat thing is that, I think, you can also disable it in real time by changing the value of app.mode ('on' means on and 'off' means off, as you've guessed), so you'll probably want to keep a reference to that app object before it gets wrapped lower in middleware.py.
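
Something like this should do it -- an untested sketch, building on the snippet above:

# in config/middleware.py: keep a handle before 'app' gets wrapped
assassin = app = memento.Assassin('pylons.wsgiapp:PylonsApp()', ['yourpackage'])

# later, from wherever you stashed that reference (a debug controller, say):
assassin.mode = 'off'   # stop reloading on each request
assassin.mode = 'on'    # resume reloading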


VirtualBox and importing OVF problems

January 02, 2010 at 07:18 PM

I've struggled with a WinXP machine that couldn't be imported from OVF + VMDK.

At first, I tried to create a new machine and attach the .vmdk file to it directly. When booting, Windows XP stalled with UNMOUNTABLE_BOOT_VOLUME and STOP 0x000000ED errors. I checked the knowledge bases and surfed the web, only to find nothing.

I tried converting the .vmdk file to a .vdi file with VBoxManage convertfromraw old.bin new.vdi --format VDI (after running qemu-img convert file.vmdk old.bin, as suggested on some sites), without success -- in fact, the .VDI file wasn't usable at all, and showed up as mere data, while the other VDI files I had showed up as innotek VirtualBox Disk Image.

When trying to run VBoxManage import myfile.ovf, I kept getting errors like:

VirtualBox Command Line Management Interface Version 3.1.2
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

0%...
ERROR: <NULL>
Details: code NS_OK (0x0), component <NULL>, interface <NULL>, callee <NULL>
Context: "ImportAppliance" at line 261 of file VBoxManageImport.cpp

and sometimes the line with 0%... went up to 90%..., only to stall on the same error.

It turned out that OVF files must be writable! I had copied my .ovf file from a DVD, so its permissions were -r--r--r--. Running chmod +rw * made the import work.

Still, if VirtualBox needs to "write" in the .ovf it's about to import, it should say so, or check for that before running the import -- or even better: remove whatever need or bug stops you from importing a machine that is read-only!
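
In the meantime, it's trivial to do that check yourself before calling the importer -- a rough sketch (the .ovf name is just the example from above):

#!/usr/bin/env python
# Pre-flight check: refuse to import an OVF that isn't writable.
import os
import sys

ovf = sys.argv[1]  # e.g. myfile.ovf
if not os.access(ovf, os.W_OK):
    sys.exit("%s is not writable -- chmod +rw it (and its disks) first" % ovf)
os.system('VBoxManage import %s' % ovf)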

Btw, I'm using VirtualBox 3.1.2 on Ubuntu Karmic.

Hope this helps.


How to remove all HTML tags from a string

August 27, 2009 at 02:51 PM

Using BeautifulSoup, you can do it in one line (plus the import):

from BeautifulSoup import BeautifulSoup
' '.join(BeautifulSoup(some_content).findAll(text=True))
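
A quick demonstration -- this assumes BeautifulSoup 3.x, and note that the text nodes keep their own whitespace:

from BeautifulSoup import BeautifulSoup

html = '<p>Hello <b>world</b>!</p>'
print ' '.join(BeautifulSoup(html).findAll(text=True))
# prints: Hello  world !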

Exporting SQL schemas from SQLAlchemy table definitions

July 07, 2009 at 09:44 AM

I was trying to extract the CREATE TABLE statements from my Pylons application, to keep a text file in sync with the models and know from revision to revision which columns were added or modified.

Asking on #sqlalchemy on Freenode, I was referred to this FAQ, which told me exactly what to do, so I wrote a little script that takes my Pylons models and spits out one file per database (in case someone uses my app with another database). That way, I can easily automate the generation of those files and not have to worry about them any more.

Here I share it with you:

#!/usr/bin/env python

from sqlalchemy import create_engine
from StringIO import StringIO
from vigilia import model

buf = StringIO()
# 'mock' engines never connect anywhere: each statement SQLAlchemy wants to
# execute is handed to the executor instead, which appends it to the buffer.
engine1 = create_engine('postgres://', strategy='mock', executor=lambda s, p=';': buf.write(s + p))
engine2 = create_engine('mysql://', strategy='mock', executor=lambda s, p=';': buf.write(s + p))
engine3 = create_engine('sqlite://', strategy='mock', executor=lambda s, p=';': buf.write(s + p))


for engine, file in [(engine1, 'SCHEMA.postgres'), (engine2, 'SCHEMA.mysql'), (engine3, 'SCHEMA.sqlite')]:
    print "Writing %s" % file
    buf.truncate(0)  # reset the buffer between databases
    # sort by table name, so the output is stable from revision to revision
    tables = [x[1] for x in sorted(model.meta.metadata.tables.items(), key=lambda x: x[0])]
    for table in tables:
        table.create(engine)
    f = open(file, 'w')
    f.write(buf.getvalue())
    f.close()

NOTE: this is using Pylons 0.9.7

UPDATED: the first version was pretty useless in a DVCS, since the order of the tables wasn't always the same. The script above corrects this, so the tables are always written in alphabetical order.

UPDATED (2nd): the script wasn't writing ';' at the end of the statements, so the files were useless to `cat` into a mysql prompt. Fixed with the p=';' default in the executor.


GStreamer, RTP and live streaming

June 14, 2009 at 11:08 AM

I wanted to stream a live video feed to the Internet, and have some Flash player on the web serve my live stream.

I searched for services, or other ways to do that, and found ustream.tv and justin.tv.

After trying to use my Ubuntu machine to stream video out there (with a simple V4L2 camera), I had much trouble (ustream.tv still doesn't work), but I worked out a way to effectively stream a two-hour ceremony directly on the web, with two computers, a wireless router, a Mini-DV camera, a simple sound card, GStreamer and the justin.tv web service.

Several website references helped me understand the whole GStreamer thing, and especially the RTP parts.

Here are the scripts I've used. First, the video_streamer.py file:

#!/usr/bin/env python
########### VIDEO_STREAMER 

import gobject
import pygst
pygst.require("0.10")
import gst

REMOTE_HOST = '192.168.33.153'
WRITE_VIDEO_CAPS = 'video.caps'

mainloop = gobject.MainLoop()
pipeline = gst.Pipeline('server')
bus = pipeline.get_bus()

dv1394src = gst.element_factory_make("dv1394src", "dv1394src")
dvdemux = gst.element_factory_make("dvdemux", "dvdemux")
q1 = gst.element_factory_make("queue", "q1")
dvdec = gst.element_factory_make("dvdec", "dvdec")
videoscale = gst.element_factory_make('videoscale')
ffmpegcs = gst.element_factory_make("ffmpegcolorspace", "ffmpegcs")
capsfilter = gst.element_factory_make('capsfilter')
capsfilter.set_property('caps', gst.caps_from_string('video/x-raw-yuv, width=320, height=240'))
x264enc = gst.element_factory_make("x264enc", "x264enc")
x264enc.set_property('qp-min', 18)
rtph264pay = gst.element_factory_make("rtph264pay", "rtph264pay")
udpsink_rtpout = gst.element_factory_make("udpsink", "udpsink0")
udpsink_rtpout.set_property('host', REMOTE_HOST)
udpsink_rtpout.set_property('port', 10000)
udpsink_rtcpout = gst.element_factory_make("udpsink", "udpsink1")
udpsink_rtcpout.set_property('host', REMOTE_HOST)
udpsink_rtcpout.set_property('port', 10001)
udpsrc_rtcpin = gst.element_factory_make("udpsrc", "udpsrc0")
udpsrc_rtcpin.set_property('port', 10002)

rtpbin = gst.element_factory_make('gstrtpbin', 'gstrtpbin')

# Add elements
pipeline.add(dv1394src, dvdemux, q1, dvdec, videoscale, ffmpegcs, capsfilter, x264enc, rtph264pay, rtpbin, udpsink_rtpout, udpsink_rtcpout, udpsrc_rtcpin)

# Link them
dv1394src.link(dvdemux)
def dvdemux_padded(dbin, pad):
    print "dvdemux got pad %s" % pad.get_name()
    if pad.get_name() == 'video':
        print "Linking dvdemux to queue1"
        dvdemux.link(q1)

# dvdemux's pads only show up at runtime, so link in the callback above
dvdemux.connect('pad-added', dvdemux_padded)

gst.element_link_many(q1, dvdec, videoscale, capsfilter, ffmpegcs, x264enc, rtph264pay)

rtph264pay.link_pads('src', rtpbin, 'send_rtp_sink_0')
rtpbin.link_pads('send_rtp_src_0', udpsink_rtpout, 'sink')
rtpbin.link_pads('send_rtcp_src_0', udpsink_rtcpout, 'sink')
udpsrc_rtcpin.link_pads('src', rtpbin, 'recv_rtcp_sink_0')

def go():
    print "Setting locked state for udpsink"
    # lock the RTCP sink's state, so pipeline state changes leave it alone
    print udpsink_rtcpout.set_locked_state(gst.STATE_PLAYING)
    print "Setting pipeline to PLAYING"
    print pipeline.set_state(gst.STATE_PLAYING)
    print "Waiting pipeline to settle"
    print pipeline.get_state()
    print "Final caps written to", WRITE_VIDEO_CAPS
    open(WRITE_VIDEO_CAPS, 'w').write(str(udpsink_rtpout.get_pad('sink').get_property('caps')))
    mainloop.run()

go()

And for the audio part, in the file audio_streamer.py:

#!/usr/bin/env python
# -=- encoding: utf-8 -=-

import gobject
import pygst
pygst.require("0.10")
import gst


# To the laptop that will catch everything
REMOTE_HOST = '192.168.33.153'
WRITE_AUDIO_CAPS = 'audio.caps'

mainloop = gobject.MainLoop()
pipeline = gst.Pipeline('server')
bus = pipeline.get_bus()

#alsasrc = gst.element_factory_make("autoaudiosrc")
alsasrc = gst.element_factory_make("alsasrc")
alsasrc.set_property('device', 'plughw:1,0')
audioconvert = gst.element_factory_make("audioconvert")
vorbisenc = gst.element_factory_make("vorbisenc")
rtpvorbispay = gst.element_factory_make("rtpvorbispay")
udpsink_rtpout = gst.element_factory_make("udpsink", "udpsink0")
udpsink_rtpout.set_property('host', REMOTE_HOST)
udpsink_rtpout.set_property('port', 11000)
udpsink_rtcpout = gst.element_factory_make("udpsink", "udpsink1")
udpsink_rtcpout.set_property('host', REMOTE_HOST)
udpsink_rtcpout.set_property('port', 11001)
udpsrc_rtcpin = gst.element_factory_make("udpsrc", "udpsrc0")
udpsrc_rtcpin.set_property('port', 11002)

rtpbin = gst.element_factory_make('gstrtpbin', 'gstrtpbin')

# Add elements
pipeline.add(alsasrc, audioconvert, vorbisenc, rtpvorbispay, rtpbin, udpsink_rtpout, udpsink_rtcpout, udpsrc_rtcpin)

# Link them
alsasrc.link(audioconvert)
audioconvert.link(vorbisenc)
vorbisenc.link(rtpvorbispay)
rtpvorbispay.link_pads('src', rtpbin, 'send_rtp_sink_0')
rtpbin.link_pads('send_rtp_src_0', udpsink_rtpout, 'sink')
rtpbin.link_pads('send_rtcp_src_0', udpsink_rtcpout, 'sink')
udpsrc_rtcpin.link_pads('src', rtpbin, 'recv_rtcp_sink_0')

def go():
    print "Setting locked state for udpsink"
    print udpsink_rtcpout.set_locked_state(gst.STATE_PLAYING)
    print "Setting pipeline to PLAYING"
    print pipeline.set_state(gst.STATE_PLAYING)
    print "Waiting pipeline to settle"
    print pipeline.get_state()
    print "Final caps writte to", WRITE_AUDIO_CAPS
    open(WRITE_AUDIO_CAPS, 'w').write(str(udpsink_rtpout.get_pad('sink').get_property('caps')))
    mainloop.run()

go()

Note that the video.caps and audio.caps files must be shared between hosts, using Samba, NFS, or sshfs, for example.
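
If you don't have a shared filesystem handy, a quick hack (an untested sketch, address made up) is to run python -m SimpleHTTPServer 8080 in the directory holding the caps files on the streamer box, and fetch them on the receiving side before starting the scripts:

# Untested sketch: fetch the caps files from the streamer box over HTTP.
import urllib

STREAMER = 'http://192.168.34.150:8080'  # the streamer box, adjust to yours
for name in ('video.caps', 'audio.caps'):
    open(name, 'w').write(urllib.urlopen('%s/%s' % (STREAMER, name)).read())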

For testing purposes, I've used video_receiver.py:

#!/usr/bin/env python
# -=- encoding: utf-8 -=-
################ VIDEO RECEIVER

import gobject, pygst
pygst.require("0.10")
import gst


# TODO: detect from the RTPSource element inside the GstRtpBin
REMOTE_HOST = '192.168.34.150'
READ_VIDEO_CAPS = 'video.caps'

pipeline = gst.Pipeline('server')

caps = open(READ_VIDEO_CAPS).read().replace('\\', '')
rtpbin = gst.element_factory_make('gstrtpbin', 'rtpbin')
rtpbin.set_property('latency', 400)
udpsrc_rtpin = gst.element_factory_make('udpsrc', 'udpsrc0')
udpsrc_rtpin.set_property('port', 10000)
udpsrc_caps = gst.caps_from_string(caps)
udpsrc_rtpin.set_property('caps', udpsrc_caps)
udpsrc_rtcpin = gst.element_factory_make('udpsrc', 'udpsrc1')
udpsrc_rtcpin.set_property('port', 10001)
udpsink_rtcpout = gst.element_factory_make('udpsink', 'udpsink0')
udpsink_rtcpout.set_property('host', REMOTE_HOST)
udpsink_rtcpout.set_property('port', 10002)

rtph264depay = gst.element_factory_make('rtph264depay', 'rtpdepay')
q1 = gst.element_factory_make("queue", "q1")
ffdec264 = gst.element_factory_make('ffdec_h264', 'ffdec264')
autovideosink = gst.element_factory_make('autovideosink')

pipeline.add(rtpbin, udpsrc_rtpin, udpsrc_rtcpin, udpsink_rtcpout,
             rtph264depay, q1, ffdec264, autovideosink)

# Receive the RTP and RTCP streams
udpsrc_rtpin.link_pads('src', rtpbin, 'recv_rtp_sink_0')
udpsrc_rtcpin.link_pads('src', rtpbin, 'recv_rtcp_sink_0')
# reply with RTCP stream
rtpbin.link_pads('send_rtcp_src_0', udpsink_rtcpout, 'sink')
# Plug the RTP into the rest of the pipe...
def rtpbin_pad_added(obj, pad):
    print "PAD ADDED"
    print "  obj", obj
    print "  pad", pad
    rtpbin.link(rtph264depay)
rtpbin.connect('pad-added', rtpbin_pad_added)
gst.element_link_many(rtph264depay, q1, ffdec264, autovideosink)

def start():
    pipeline.set_state(gst.STATE_PLAYING)
    udpsink_rtcpout.set_locked_state(gst.STATE_PLAYING)
    print "Started..."

def loop():
    print "Running..."
    gobject.MainLoop().run()

if __name__ == '__main__':
    start()
    loop()

Here is the actual script to forward the incoming video to the virtual vloopback device. In video_forwarder.py:

#!/usr/bin/env python
# -=- encoding: utf-8 -=-
############### VIDEO FORWARDER

import gobject, pygst
pygst.require("0.10")
import gst


# TODO: detect from RTPSource
REMOTE_HOST = '192.168.34.150'
READ_VIDEO_CAPS = 'video.caps'

pipeline = gst.Pipeline('server')

caps = open(READ_VIDEO_CAPS).read().replace('\\', '')
rtpbin = gst.element_factory_make('gstrtpbin', 'rtpbin')
rtpbin.set_property('latency', 400)
udpsrc_rtpin = gst.element_factory_make('udpsrc', 'udpsrc0')
udpsrc_rtpin.set_property('port', 10000)
udpsrc_caps = gst.caps_from_string(caps)
udpsrc_rtpin.set_property('caps', udpsrc_caps)
udpsrc_rtcpin = gst.element_factory_make('udpsrc', 'udpsrc1')
udpsrc_rtcpin.set_property('port', 10001)
udpsink_rtcpout = gst.element_factory_make('udpsink', 'udpsink0')
udpsink_rtcpout.set_property('host', REMOTE_HOST)
udpsink_rtcpout.set_property('port', 10002)

rtph264depay = gst.element_factory_make('rtph264depay', 'rtpdepay')
q1 = gst.element_factory_make("queue", "q1")
ffdec264 = gst.element_factory_make('ffdec_h264', 'ffdec264')
y4menc = gst.element_factory_make('y4menc')
filesink = gst.element_factory_make('filesink', 'filesink')
filesink.set_property('location', '/tmp/go.pipe')

pipeline.add(rtpbin, udpsrc_rtpin, udpsrc_rtcpin, udpsink_rtcpout,
             rtph264depay, q1, ffdec264, y4menc, filesink)

# Receive the RTP and RTCP streams
udpsrc_rtpin.link_pads('src', rtpbin, 'recv_rtp_sink_0')
udpsrc_rtcpin.link_pads('src', rtpbin, 'recv_rtcp_sink_0')
# reply with RTCP stream
rtpbin.link_pads('send_rtcp_src_0', udpsink_rtcpout, 'sink')
# Plug the RTP into the rest of the pipe...
def rtpbin_pad_added(obj, pad):
    print "PAD ADDED"
    print "  obj", obj
    print "  pad", pad
    rtpbin.link(rtph264depay)
rtpbin.connect('pad-added', rtpbin_pad_added)
gst.element_link_many(rtph264depay, q1, ffdec264, y4menc, filesink)

def start():
    pipeline.set_state(gst.STATE_PLAYING)
    udpsink_rtcpout.set_locked_state(gst.STATE_PLAYING)
    print "Started..."

def loop():
    print "Running..."
    gobject.MainLoop().run()

if __name__ == '__main__':
    import os
    os.system('rm -f /tmp/go.pipe')
    os.system('mkfifo /tmp/go.pipe')
    pipeline.get_state()
    # feed the YUV4MPEG stream from the FIFO into the vloopback device
    os.system('cat /tmp/go.pipe | mjpegtools_yuv_to_v4l /dev/video2 &')
    start()
    loop()

The last bit is to pipe audio through the PulseAudio daemon, so that Flash thinks it's coming from the microphone. In audio_receiver.py:

#!/usr/bin/env python
# -=- encoding: utf-8 -=-
########### AUDIO RECEIVER

import gobject, pygst
pygst.require("0.10")
import gst


# Stream to:
REMOTE_HOST = '192.168.34.150'
READ_AUDIO_CAPS = 'audio.caps'

caps = open(READ_AUDIO_CAPS).read().replace('\\', '')

pipeline = gst.Pipeline('audio-receiver')

rtpbin = gst.element_factory_make('gstrtpbin')
rtpbin.set_property('latency', 1000)
udpsrc_rtpin = gst.element_factory_make('udpsrc')
udpsrc_rtpin.set_property('port', 11000)
udpsrc_caps = gst.caps_from_string(caps)
udpsrc_rtpin.set_property('caps', udpsrc_caps)
udpsrc_rtcpin = gst.element_factory_make('udpsrc')
udpsrc_rtcpin.set_property('port', 11001)
udpsink_rtcpout = gst.element_factory_make('udpsink')
udpsink_rtcpout.set_property('host', REMOTE_HOST)
udpsink_rtcpout.set_property('port', 11002)

rtpvorbisdepay = gst.element_factory_make('rtpvorbisdepay')
q1 = gst.element_factory_make("queue", "q1")

audioconvert = gst.element_factory_make("audioconvert")
vorbisdec = gst.element_factory_make('vorbisdec')
pulsesink = gst.element_factory_make('pulsesink')  # so Flash can pick it up from PulseAudio

pipeline.add(rtpbin, udpsrc_rtpin, udpsrc_rtcpin, udpsink_rtcpout, audioconvert,
             rtpvorbisdepay, q1, vorbisdec, pulsesink)

# Receive the RTP and RTCP streams
udpsrc_rtpin.link_pads('src', rtpbin, 'recv_rtp_sink_0')
udpsrc_rtcpin.link_pads('src', rtpbin, 'recv_rtcp_sink_0')
# reply with RTCP stream
rtpbin.link_pads('send_rtcp_src_0', udpsink_rtcpout, 'sink')
# Plug the RTP into the rest of the pipe...

def rtpbin_pad_added(obj, pad):
    print "PAD ADDED"
    print "  obj", obj
    print "  pad", pad
    rtpbin.link(rtpvorbisdepay)
rtpbin.connect('pad-added', rtpbin_pad_added)

gst.element_link_many(rtpvorbisdepay, q1, vorbisdec, audioconvert,
                      pulsesink)

def start():
    pipeline.set_state(gst.STATE_PLAYING)
    udpsink_rtcpout.set_locked_state(gst.STATE_PLAYING)
    print "Started..."

def loop():
    print "Running..."
    gobject.MainLoop().run()

if __name__ == '__main__':
    start()
    loop()

Here are the RTP pitfalls I fell into:

  • the CAPS had to be set and exchanged between the different RTP parts -- that's what the video.caps and audio.caps files are for in these scripts (a sample is shown below).
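
For the curious, the video caps that get written out look roughly like this -- illustrative only, with fields elided; yours will differ, notably sprop-parameter-sets, which carries the H264 SPS/PPS the decoder needs:

application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sprop-parameter-sets=(string)"...", payload=(int)96, ...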
