Using rst for presentations

September 12, 2010 at 06:03 PM | categories: Programming


Presentations are hard

Not just giving them: the hard part is taking an idea and putting it down in a concise, clear form that is simple to show and distribute.

I won't claim that I am good at giving or writing up presentations, but I do think I have a pretty slick setup for making them simple to show and distribute with minimal effort.

reStructuredText

People may be familiar with it already: it's a markup language that is well supported in Python and used by the Sphinx project. It fits my brain well (in most places) and only requires a text editor.

Because it comes with tools to take a simple plain text document and transform it into other well-known formats (LaTeX, HTML, and PDF), I have ended up using it in a number of projects and for notes without much friction.

How I use reST

I mentioned my setup in a previous post, and how I leverage fabric to build and post my presentations, but I didn't get into the specifics of how the rst document is formatted and built.

I write a presentation in rst just like I would write any other document in rst. My goal was to write once and have that document convert into other formats without an issue. I do leverage a few classes and directives that aren't in the normal rst toolbox to get my presentations just so, but these aren't out of line, and after a tweak or two in my pipeline they don't break the other formats I build to.

s5

S5 is a slide show format based entirely on XHTML, CSS, and JavaScript.

rst2s5 takes a reStructuredText document and compiles it into the corresponding s5 representation, giving back a plain html page with some JavaScript magic that is simple to post and host.

There is no need for server-side scripting, fancy Apache/lighttpd/nginx setups, proxies, or their kin. So using s5 alone gets me the "simple to show" part of my goal, since I can post a presentation and have access to it anywhere there is internet and a browser. I can even keep a copy on a thumb drive in case the internet dies, and browse the slides locally.
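The build itself is just the stock docutils front end (installed as rst2s5 or rst2s5.py depending on the platform); a minimal sketch, assuming the slides live in a hypothetical talk.rst:

$ rst2s5.py talk.rst talk.html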

reST with s5

To meet the last part of my goal, I need a simple distribution medium. For a presentation that is a pdf: it's akin to a paper, even though it is much more broken down and split up into slides. Leveraging the handout class that the s5 and pdf converters for rst know about, I can have parts of the presentation stay invisible in slide form and show up only when the presentation is expanded or compiled into a pdf.

e.g.

=========
GNU tools
=========
----------------------------
*mostly for text processing*
----------------------------

.. class:: right

    `Morgan Goose <http://morgangoose.com>`_
    January 2010

.. class:: handout

    This work is licensed under the Creative Commons
    Attribution-Noncommercial-Share Alike 3.0 United States License.
    To view a copy of this license, visit
    http://creativecommons.org/licenses/by-nc-sa/3.0/us/ or send a letter
    to Creative Commons, 171 Second Street, Suite 300, San Francisco,
    California, 94105, USA.

So the above will show my name on the first slide, but not the license; in the pdf, though, it will all be on the first page. More tips on meshing s5 with rst can be found in the docutils site's section on slide shows.
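The pdf side is just as simple; a minimal sketch, again assuming the same hypothetical talk.rst and that rst2pdf is installed:

$ rst2pdf talk.rst -o talk.pdf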

Code blocks

Normal reST has code directives that set code apart, and in most instances (Sphinx/Trac) the tooling will attempt to highlight it accordingly. I ran into issues here because rst2pdf and rst2s5 had different ideas about what these directives should be named, and neither was really highlighting the code. After searching a bit I found that pygments, a code highlighter written in Python, already had some docutils hooks that they mention on their site.

Using that as a stepping stone I added code and sourcecode directives that run their contents through pygments. In my presentations, though, I made sure to only use code, because that is the directive rst2pdf is expecting when it goes to format the document.
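In the slide source that looks like any other directive; a small hypothetical snippet using the python lexer:

.. code:: python

    def greet(name):
        return "Hello, %s" % name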

After that my code goes through the wringer as shown in the fabric post, in the line that executes the rst-directive.py file and passes in the pygments css for the theme that I prefer.

The final rst-directive.py looks like this:

# -*- coding: utf-8 -*-
"""
    The Pygments reStructuredText directive
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    This fragment is a Docutils_ 0.5 directive that renders source code
    (to HTML only, currently) via Pygments.

    To use it, adjust the options below and copy the code into a module
    that you import on initialization.  The code then automatically
    registers a ``sourcecode`` directive that you can use instead of
    normal code blocks like this::

        .. sourcecode:: python

            My code goes here.

    If you want to have different code styles, e.g. one with line numbers
    and one without, add formatters with their names in the VARIANTS dict
    below.  You can invoke them instead of the DEFAULT one by using a
    directive option::

        .. sourcecode:: python
            :linenos:

            My code goes here.

    Look at the `directive documentation`_ to get all the gory details.

    .. _Docutils: http://docutils.sf.net/
    .. _directive documentation:
       http://docutils.sourceforge.net/docs/howto/rst-directives.html

    :copyright: Copyright 2006-2009 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

# Options
# ~~~~~~~

# Set to True if you want inline CSS styles instead of classes
INLINESTYLES = False
STYLE = "fruity"

from pygments.formatters import HtmlFormatter

# The default formatter
DEFAULT = HtmlFormatter(noclasses=INLINESTYLES, style=STYLE)

# Add name -> formatter pairs for every variant you want to use
VARIANTS = {
    'linenos': HtmlFormatter(noclasses=INLINESTYLES, style=STYLE, linenos=True),
}


from docutils import nodes
from docutils.parsers.rst import directives, Directive

from pygments import highlight
from pygments.lexers import get_lexer_by_name, TextLexer

class Pygments(Directive):
    """ Source code execution.
    """
    required_arguments = 1
    optional_arguments = 0
    final_argument_whitespace = True
    option_spec = dict([(key, directives.flag) for key in VARIANTS])
    has_content = True

    def run(self):
        self.assert_has_content()
        try:
            lexer = get_lexer_by_name(self.arguments[0])
        except ValueError:
            # no lexer found - use the text one instead of an exception
            lexer = TextLexer()
        # take an arbitrary option if more than one is given
        formatter = self.options and VARIANTS[self.options.keys()[0]] or DEFAULT

        # dump the stylesheet for the chosen pygments style so the build can pick it up
        print >>open('pygments.css', 'w'), formatter.get_style_defs('.highlight')
        parsed = highlight(u'\n'.join(self.content), lexer, formatter)
        return [nodes.raw('', parsed, format='html')]

directives.register_directive('sourcecode', Pygments)
directives.register_directive('code', Pygments)

from docutils.core import publish_cmdline, default_description

description = ('Generates S5 (X)HTML slideshow documents from standalone '
               'reStructuredText sources.  ' + default_description)

publish_cmdline(writer_name='s5', description=description)
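Because the script ends by calling publish_cmdline with the s5 writer, it behaves like any other rst2* front end; a minimal sketch of invoking it, once more assuming the hypothetical talk.rst:

$ python rst-directive.py talk.rst talk.html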

And in combination with my fabric setup I can make new posts, publish to html and pdf, and republish with relative ease:

$ fab new:new_stuff
$ vim new_stuff/new_stuff.rst
$ fab upload:new_stuff






Fedora KVM with simple network forwards

June 02, 2010 at 10:52 PM | categories: Servers, Linux


Recently I've been teaching Python to some high school students. It has been going well, but the development environment we had access to left a bit to be desired: we were working with an ages-old Solaris box, vi only, and no real access to newer GNU (or other) tools. A new setup was required, so I went off to investigate.

I started with chroot, since a buddy, Daniel Thau, had used it extensively for running multiple operating systems side by side. He pointed me in the direction of febootstrap, and that seemed like it would work fine. I was able to make a sandbox, get ssh running on port 2022, and then have my dlink route that to my box. Success!

But I found that a bit messy, and a bit limited. I wanted to lock down how much of my resources they could use, and I didn't want to give direct access to some of my root file systems: /proc, /dev, etc. So I looked around a bit more, and stumbled on using KVM indirectly via the new virt-manager toolset that Fedora 12 and 13 provide. Installation was as simple as:

$ yum install qemu-kvm virt-manager virt-viewer python-virtinst

It also seems, from the techotopia article I followed for some of this, that one could just do:

$ yum groupinstall 'Virtualization'

I have to say it's a pretty swank set of tools. It's free, and it works on KVM or Xen. KVM usage requires no special kernel and, as such, no reboot. The setup was simple, and handed out a vnc port to connect to from the get-go. It is also trivial to connect to a setup on machine A with virt-manager on machine B over ssh. If you want more information, Fedora has a nice writeup, and libvirt has a more distro-agnostic set of docs.
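Guest creation can also be scripted with the virt-install tool from python-virtinst; a rough sketch only, with the guest name, sizes, and ISO path all made up for illustration:

$ virt-install --name=pyclass --ram=512 --vcpus=1 \
    --disk path=/var/lib/libvirt/images/pyclass.img,size=8 \
    --cdrom=/path/to/Fedora-13.iso --vnc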

The problem, though, was that the networking was virtual and didn't pull an IP address from my router, so the guest wasn't public. There were a few writeups here and there describing how to switch to bridged networking, and I tried them. They didn't work for me: either I suck at following directions, or they just won't work how I expect them to. You can see for yourself here how I attempted network bridging.

What I did instead was much more in my realm of knowledge, simpler than all the other options, and something I can change without killing my network connectivity: iptables! I just used NAT forwarding. It was two lines, dropped into my pre-existing firewall script. So getting my local box 192.168.1.199 on port 2022 to forward to the guest's internal virtual network address of 192.168.100.2 at port 22 was as plain as this:

$ iptables -t nat -I PREROUTING -p tcp --dport 2022 -j DNAT\
    --to-destination 192.168.100.2:22
$ iptables -I FORWARD -p tcp --dport 22 -d 192.168.100.2 -j ACCEPT

One PREROUTING rule to grab the incoming port, and one FORWARD rule to pass those packets along. Now I have connectivity into my class virtual machine, and adding more ports as needed is just the same pair of rules again (a made-up example below). I am pretty happy with the setup so far. It's really nice to be able to connect remotely, over vnc or ssh now, as well as know that I've limited the ram and cpu time the class can use on my box. I am interested to hear if anyone else is doing similar things with virtualization on their desktops.
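For instance, say the class later runs a simple web server inside the guest on port 8000 (a hypothetical scenario); exposing it on the host's port 8080 follows the same pattern:

$ iptables -t nat -I PREROUTING -p tcp --dport 8080 -j DNAT\
    --to-destination 192.168.100.2:8000
$ iptables -I FORWARD -p tcp --dport 8000 -d 192.168.100.2 -j ACCEPT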






