Thursday, December 29, 2011
Resolutions for 2012
- Go to a Python related conference in North America, South America, Europe, Asia, Africa, Australia, and New Zealand.
- Attend at least one JavaScript related conference or event.
- Upload all my outstanding pictures to Flickr!
- Make Consumer Notebook profitable.
- Find more ways to make Audrey Roy happy.
- Pull off an Aú sem Mão during a Capoeira Roda.
- Attend my first Capoeira Batizado.
- See a place in the USA I've never been.
- Work out at least three times a week.
- Drop to a 32 waist.
- Visit friends and family back east. Been over a year since I've seen my sister!
- Blog once a week. That is at least 52 blog entries!
- Visit a theme park.
- Learn how to surf or snowboard.
- Implement something in node.js, backbone.js, and handlebars.js.
- Take a high level Python class from the likes of Raymond Hettinger or David Beazley.
- Teach some Python or Django.
- Have a beer with Thomas, Andy, Andy, Tony, Garrick, Bernd, and the rest of Ye Aulde Gange.
- See my old DC area friends such as Eric, Chris, Steve, Beth, Sarah, Daye, Renee, Kenneth, Leslie, Whitney, Dave, and many others.
- Visit my Son.
Tuesday, December 27, 2011
2011 Resolution Summary
Items that are crossed out are completed.
- Travel to Europe again.
- Travel to Asia or Africa. (Went to Australia instead. Which was a very, very acceptable substitute.)
- Visit a Disney park.
- See a place in the USA I've never been.
- Drop the waist size 2 inches and not break any bones.
- Go to PyCon and present or teach.
- Go to DjangoCon and present or teach.
- Present at LA Django.
- Continue my Muay Thai and Capoeira studies, get back into Eskrima, learn some more BJJ, and practice the forms I know.
- Work out at least three times a week.
- Go back east and teach martial arts for a day.
- Finish some outstanding legal proceedings.
- Launch a site that does cool stuff and somehow brings in money. (Consumer Notebook)
- Get to the point with LISP where I can do cool stuff in it without needing a textbook. (I seem to have spent this time working on JavaScript instead.)
- Blog once a week. That is at least 52 blog entries! (almost there!)
- Explain why I wrote Diversity Rocks.
Thursday, December 22, 2011
New Year’s Python Meme
I love these blog memes, so I give you my version of Tarek Ziade's New Year's Python Meme.
1. What’s the coolest Python application, framework or library you have discovered in 2011?
For Python libraries, that would have to be Kenneth Reitz' python-requests library. I've used it for an amazing amount of stuff and blogged about it. It took the grunge out of doing HTTP actions with Python. The API is clean and elegant, getting out of your way. It embodies the state of the art for API design, which closely matches the Zen of Python.
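To show why I like it so much, here is the sort of minimal usage that makes requests feel so clean (just a sketch; the URL is only an example):

import requests

response = requests.get('http://example.com')   # any URL works here
print(response.status_code)                     # e.g. 200
print(response.headers['content-type'])         # server-reported content type
print(response.text[:100])                      # first 100 characters of the body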
For applications, djangolint.com is awesome. It has helped me out so much on several projects. I would love to see something like this implemented and maintained for modern Python.
All the Python friendly PaaS efforts that have emerged are changing the landscape for those of us who want to launch projects but don't want to become full time system administrators in the process. Heroku, DjangoZoom, DotCloud, ep.io, gondor.io, and others are making it possible for developers to focus on development, not server tooling. Google App Engine paved the way, and it is wonderful to see the rest of the universe catch up with offerings that more closely follow core Python.
2. What new programming technique did you learn in 2011?
Event based programming! I've touched on it for years, but this year I really got a lot more into it thanks to Aurynn Shaw kickstarting me and Audrey Roy expanding my knowledge ever since.
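For anyone who hasn't bumped into the style yet, here is a generic, toy illustration of the callback approach that event based programming revolves around (this is my own sketch, not the specific stack Aurynn and Audrey introduced me to):

# Toy event emitter; the event name and handler below are invented for illustration.
class EventEmitter(object):

    def __init__(self):
        self._handlers = {}

    def on(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event, *args):
        for handler in self._handlers.get(event, []):
            handler(*args)


def announce(title):
    print("note saved: %s" % title)

emitter = EventEmitter()
emitter.on('note_saved', announce)
emitter.emit('note_saved', 'PyCon keynote')   # -> note saved: PyCon keynote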
3. What’s the name of the open source project you contributed to the most in 2011? What did you do?
I participated mostly as co-lead in the Open Comparison project, which amongst other things involved running the largest sprint at PyCon 2011. We maintained Django Packages and launched Pyramid and Plone versions of the project. We hope to launch a Python implementation in 2012.
I took a lot of notes this year at pydanny-event-notes - enough to make a book.
4. What was the Python blog or website you read the most in 2011?
Like Nick Coghlan, that would be http://planet.python.org.
5. What are the three top things you want to learn in 2012?
- How to use whatever consistently maintained project replaces PIL and works in Python 2.7.x and Python 3.x.
- Really advanced Python as taught by Raymond Hettinger.
- backbone.js
6. What are the top software, app or lib you wish someone would write in 2012?
A tool like python-requests, but for shell access. Something like Unipath, but kept up-to-date and with nicely written documentation on Read the Docs.
A PIL replacement that is maintained, works for all modern Pythons, and is close enough to the PIL API to not cause too much confusion.
Something like Django Lint but for Python 2.7.x/3.x.
An open source project that tracks test coverage across PyPI and publishes reports of the results via an API.
Want to do your own list? Here’s how:
- copy-paste the questions and answer them in your blog
- tweet it with the #2012pythonmeme hashtag
Saturday, December 17, 2011
Evaluating which package to use
In November of 2009 I wrote about which third-party Python Packages I'll use. Here is my modern take on it - much of it inspired by personal experience and the advice of peers and mentors:
Tag and release on PyPI
I really don't like pulling from tags on Github, BitBucket, or whatever. Or being told to pull from a specific commit. That works in early development, but it certainly doesn't fly in production.
I also get frustrated when people release on PyPI but then insist on hosting the release themselves. That is because invariably at some critical point in development when PyPI is up, the host provider is down.
A huge point of frustration is that I shouldn't have to leave the canonical source of Python package versions to hunt down what I should be using. I've seen too many beginning Python developers fall into the trap of using 3 year old packages because they didn't know they should be using trunk. I was guilty of doing it for a 6+ month old release in 2010, and for that I apologize and promise I won't do it again.
This also means your package needs to be pip installable. If you don't know how to do it, please read The Hitchhiker’s Guide to Packaging.
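For reference, a minimal setup.py along these lines is usually all it takes to make a package pip installable (a sketch only; the name and metadata are placeholders):

# setup.py - minimal sketch; every value here is a placeholder
from setuptools import setup, find_packages

setup(
    name='mypackage',
    version='0.1.0',
    description='A short description of the package',
    author='Your Name',
    packages=find_packages(),
)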
Documentation
2011 is closing, which means your package needs to have Sphinx documentation. And those Sphinx docs should be on Read the Docs. Read the Docs is great because it doesn't just host the rendered HTML, it also lets you publish easily from a DVCS push - and it implements nice search and handy PDFs too.
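If you haven't set up Sphinx before, sphinx-quickstart generates the skeleton for you; the heart of it is a conf.py along these lines (a trimmed-down sketch with placeholder values):

# docs/conf.py - trimmed-down sketch; the project values are placeholders
project = u'mypackage'
copyright = u'2011, Your Name'
version = '0.1'
release = '0.1.0'
master_doc = 'index'
source_suffix = '.rst'
extensions = []
html_theme = 'default'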
Yes, I know there is packages.python.org but I don't trust it. It doesn't have the easy push/deploy workflow of Read the Docs, which means often the docs are dated because it's yet another step for developers. Plus, the lack of search outside of Sphinx makes it hard to discover documentation.
The same goes for hosting docs yourself. In fact, that's usually worse because when someone goes on vacation and the docs go down... ARGH!
Please don't mention easy_install in your docs. We are nearly in 2012 and ought to be unified on our package installer, which is pip.
Tests
You should have them. Otherwise any update you put on PyPI puts the rest of us at risk. We can't be sure your updates to the project won't break our stuff. So please write some tests! If you add in coverage.py and some kind of lint checker, it can even be fun! Having a high coverage rating certainly earns you bragging rights.
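Even something this small goes a long way (a sketch; mypackage.add is a stand-in for your own code), and running it under something like coverage run -m unittest discover followed by coverage report gives you that rating to brag about:

# tests/test_add.py - minimal sketch; 'mypackage.add' is a stand-in for your own code
import unittest

from mypackage import add


class TestAdd(unittest.TestCase):

    def test_add(self):
        self.assertEqual(add(2, 3), 5)


if __name__ == '__main__':
    unittest.main()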
Code Quality
Are you using new-style classes or old-style classes? Do you follow PEP-8? Do you keep meta-classes to the absolute minimum? Is the code on an available DVCS so others can fork and contribute? These are things that weigh in my judgement, and certainly the judgement of others.
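To make the first of those questions concrete, the difference is simply whether the class inherits from object (this matters on Python 2; on Python 3 every class is new-style):

# Python 2: old-style vs new-style classes
class OldStyle:          # old-style - avoid
    pass


class NewStyle(object):  # new-style - plays nicely with descriptors, super(), etc.
    pass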
Friday, December 16, 2011
Announcing Consumer Notebook!
Need a Python programming language book? Want to see a comparison of the ones I own and use? Check out my Must-Have Python Programming Books comparison grid.
Let's drill down and take a closer look at one of the items on the page, in this case Doug Hellmann's amazing The Python Standard Library by Example. The product detail pages include the ability to add pros and cons and attach said products to comparison grids and specialized lists like 'my wishlist' and 'my possessions'.
Speaking of wishlists, check out my own:
In order to add items, like footy pajamas, I click on the 'add' button and paste the Amazon (or BestBuy) URL into the form:
At this time we just handle Amazon USA and BestBuy USA. In the future we plan on adding more affiliate providers, including non-USA providers to support our non-USA friends.
There's a lot more than that...
In addition to weekly infographics, comparison grids, lists, and products, Consumer Notebook also awards points, coins, badges, and a growing privilege set to participating users. We even implemented an energy bar which regenerates over time, designed to match the pace of human users and serve as one of the brakes on scripts and bots.
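Just to illustrate the idea (this is not the actual Consumer Notebook code; the rate and cap are invented), a regenerating energy bar can be computed lazily from the timestamp of the user's last action rather than by a background job:

import time

MAX_ENERGY = 100          # invented cap
REGEN_PER_SECOND = 0.05   # invented regeneration rate


def current_energy(stored_energy, last_action_ts, now=None):
    """Return the energy a user has right now, regenerated since their last action."""
    now = time.time() if now is None else now
    regenerated = (now - last_action_ts) * REGEN_PER_SECOND
    return min(MAX_ENERGY, stored_energy + regenerated)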
Technology
I built this with Audrey Roy using Python, Django, jQuery, PostgreSQL, Memcached, and RabbitMQ. I'll be blogging in depth about the technical side in an upcoming post.
Genesis
It was the summer of 2010 and we were brainstorming ideas for a coding contest called Django Dash. The one we settled on was a listing and comparison site for Django called Django Packages. The result has been a very useful tool for the Django community. Eventually, with the help of several dozen people, we turned the code into the Open Comparison framework and launched Pyramid and Plone implementations. Time permitting this year, we plan to do Python, Flask, Twisted, Node, JQuery, and other implementations.
Since then we've wanted to do something similar, but in the context of products. And we wanted to do it right - elegant design combined with an ad-free space. So we cooked up Consumer Notebook, launching today!
We'll be adding features and enhancements in the months to come. We've acquired a community manager, and even have a blog. We would love for you to check out the site, share it with your friends and family, and send us your commentary, suggestions, and advice.
Friday, December 9, 2011
My BaseModel
When I build projects in Django I like to have a 'core' app with all my common bits in it, including a BaseModel. In that BaseModel I'll define the most basic fields possible, in this case a simple pair of created/modified fields built using custom django-extension fields.
# core/models.py
from django.db import models
from django.utils.translation import ugettext_lazy as _

from core.fields import CreationDateTimeField, ModificationDateTimeField


class BaseModel(models.Model):
    """ Base abstract base class to give creation and modified times """
    created = CreationDateTimeField(_('created'))
    modified = ModificationDateTimeField(_('modified'))

    class Meta:
        abstract = True
You'll notice I also have core.fields defined. That is because (unless things have changed), django-extensions doesn't work with South out of the box. Hence the file below where I extend those fields to play nicely with my migration tool of choice.
# core/fields.py
from django_extensions.db.fields import CreationDateTimeField, ModificationDateTimeField


class CreationDateTimeField(CreationDateTimeField):

    def south_field_triple(self):
        "Returns a suitable description of this field for South."
        # We'll just introspect ourselves, since we inherit.
        from south.modelsinspector import introspector
        field_class = "django.db.models.fields.DateTimeField"
        args, kwargs = introspector(self)
        return (field_class, args, kwargs)


class ModificationDateTimeField(ModificationDateTimeField):

    def south_field_triple(self):
        "Returns a suitable description of this field for South."
        # We'll just introspect ourselves, since we inherit.
        from south.modelsinspector import introspector
        field_class = "django.db.models.fields.DateTimeField"
        args, kwargs = introspector(self)
        return (field_class, args, kwargs)
Unfortunately, this all shows up as red marks when I run coverage.py reports. To deal with that I added in some tests. However, I'll readily admit I'm not super pleased with the tests below, but they are better than nothing, right?
# core/tests/test_fields.py
from django.test import TestCase

from core.fields import CreationDateTimeField, ModificationDateTimeField


class TestFields(TestCase):

    def test_create_override(self):
        field = CreationDateTimeField()
        triple = field.south_field_triple()
        self.assertEquals(triple[0], 'django.db.models.fields.DateTimeField')
        self.assertEquals(triple[1], list())
        self.assertEquals(triple[2], {'default': 'datetime.datetime.now', 'blank': 'True'})

    def test_modify_override(self):
        field = ModificationDateTimeField()
        triple = field.south_field_triple()
        self.assertEquals(triple[0], 'django.db.models.fields.DateTimeField')
        self.assertEquals(triple[1], list())
        self.assertEquals(triple[2], {'default': 'datetime.datetime.now', 'blank': 'True'})
Closing Thoughts
My pattern is also that if I need more stuff in this BaseModel, I extend it with another abstract class instead of changing it. That way I can be sure at least this part works really well, and any additions are isolated in another class.
I'll reiterate that I'm not happy with the tests. I'm open to suggestions.
I pretty much got the BaseModel from Frank Wiles of RevSys back in the summer of 2010. What I added was sticking all the common bits into the core app, getting the South migration to play more nicely, and adding tests.
But much of this is moot!
Note: I added this segment several days after my original posting because of the stuff in the comments. Thanks Jannis Leidel and someone named John - this is part of why I post.
Jannis and John both pointed out that django_extensions now has a TimeStampedModel that does what my BaseModel does. They also pointed out that django_extensions comes with built-in South migrations for its CreationDateTimeField and ModificationDateTimeField fields.
Which means, thanks to them, we can safely just do this and not worry about migrations:
# core/models.py
from django.db import models
from django.utils.translation import ugettext_lazy as _

from django_extensions.db.fields import CreationDateTimeField, ModificationDateTimeField


class BaseModel(models.Model):
    """ Base abstract base class to give creation and modified times """
    created = CreationDateTimeField(_('created'))
    modified = ModificationDateTimeField(_('modified'))

    class Meta:
        abstract = True
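In other words, a model that just needs created/modified stamps can probably skip my BaseModel entirely and subclass django_extensions' TimeStampedModel directly, something like this sketch (Profile is a made-up example model):

# Sketch only - Profile is a made-up model
from django.db import models
from django_extensions.db.models import TimeStampedModel


class Profile(TimeStampedModel):
    name = models.CharField(max_length=100)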
Wednesday, December 7, 2011
Made Up Statistics
At DjangoCon my good friend Miguel Araujo and I presented on Advanced Django Form Usage. Slide 18 of that talk mentioned some made up statistics. Here they are for reference:
- 91% of Django projects use ModelForms.
- 80% of ModelForms require trivial logic.
- 20% of ModelForms require complex logic.
With that out of the way, I'm going to make a bar graph out of my fictional data:
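If you want to cook up a chart like this yourself, a matplotlib snippet along these lines plots the same made-up numbers (the labels here are just placeholders of my own):

import matplotlib.pyplot as plt

labels = ['Use ModelForms', 'Trivial logic', 'Complex logic']
values = [91, 80, 20]   # the made-up percentages from the list above

plt.bar(range(len(values)), values)
plt.xticks(range(len(values)), labels)
plt.ylabel('Percent (completely made up)')
plt.title('Pydanny Made Up ModelForm Statistics')
plt.show()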
You'll notice that my bar titles could be stronger. I actually did that on purpose in case anyone tries to use that chart in real life. In any case, if you thought that was interesting, then read on. I have many more made-up statistics. For example, here are more numbers I've cooked up:
Pydanny Made Up DevOps Statistics
DevOps is the new hotness. I know because every other Python meetup features someone speaking on it - just like every other Ruby, Perl, and PHP meetup. Anyway... numbers:
- 24.3% of Python developers doing DevOps think they could have launched a PaaS (aka Heroku clone) before it got crowded.
- 46.3% of Python developers doing DevOps spend all their time writing Chef/Puppet scripts and yet still claim to be Python developers.
- 14% of Python developers are worried about so much of the backend being done in Ruby.
- 54% of Python developers are just happy that there are many options now and don't care about the internal machinery that much.
This time, because I'm worried about the data being taken seriously, I've titled the bar chart in such a way that no one will reference it in anything important:
Pydanny Made Up Python Environment Statistics
Following the obvious logic flow (to me anyway) of DevOps to something else, let's go into Python environments, also known as the VirtualEnv vs Buildout debate, which adds up to an even 100% (making it good pie chart material):
- 77% of Python Developers prefer VirtualEnv.
- 13% of Python Developers prefer Buildout.
- 7% of Python developers rolled their own solution and wish they could switch over.
- 3% of Python developers rolled their own solution and are fiendishly delighted with how they have guaranteed their own job security forever. I know who some of you are and I can say with some confidence that when the Zombie apocalypse happens, no one is going to invite you into their fortified compounds. We hate you that much.
Pydanny Made Up Template Debate Statistics
The made up statistics in this post frequently touch on contentious topics. So let me add another controversial topic, this time the never ending template debate in Python:
- 70% of Python developers prefer non-XML templates.
- 25% of Python developers prefer XML templates.
- 5% of Python developers wonder why we don't just use the str.format() method and be done with it.
- 50% of Python developers strongly disagree with my Stupid Template Languages blog post from last year.
Pydanny Made Up Python Web Optimization Statistics
I sometimes get asked how to best optimize a Django site. My answer is 'cache and then cache some more' but there are those who disagree with me and start switching out Django internals before doing anything silly like looking at I/O. My bet is this same thing happens with other frameworks such as Pyramid.
- 20% of developers argue for switching template languages.
- 80% of developers argue for using caching and load balancing.
- 100% of Django/Pyramid/Flask/etc core developers argue for using caching and load balancing.
Of all the made up statistics in this blog post, I suspect this is the one closest to the truth of things.
Update: Alex Gaynor and Audrey Roy pointed out that the original line graph for this data was not appropriate. My weak defense was that I'm trying not to make things too serious but they stated that the line graph was so inappropriate it distracted from the rest of the post. Thanks for the advice!
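For what it's worth, the 'cache and then cache some more' answer usually starts with something as simple as Django's per-view caching, along these lines (a sketch; the view and the timeout are made up):

from django.http import HttpResponse
from django.views.decorators.cache import cache_page


@cache_page(60 * 15)            # cache the rendered response for 15 minutes
def product_list(request):      # made-up view
    return HttpResponse("expensive-to-build content")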
Pydanny Made Up Framework Debate Statistics
Alright, let's conclude this article with some statistics I cooked up about frameworks in Python. I'm going to do more than just mention web frameworks, dabbling into other awesome things that the Python community has given us.
- 23.6% of us get web.py and web2py confused with each other.
- 42% of Python developers think Pyramid/Flask have awesome names that don't get mispronounced the same way Django does.
- 28% of Python developers wish they could find a way to get some SciPy into their projects.
- 22% of Python developers wish there was a PEP-8 wrapper for Twisted.
- 49% of Twisted developers wish that Python had accepted their standard instead of PEP-8.
- 90% of Python developers wonder what they were drinking when they renamed it to BlueBream and wonder if it is sold over the counter in their municipality.
No chart? Getting this one to look meaningful was turning into a herculean effort. I invite others to render this data into something that looks attractive and doesn't lose meaning. Come up with something impressive and I'll put it into a follow-up blog post.
Sunday, December 4, 2011
The Story of Live-Noting
Like a lot of people, I've got this thing I do when I attend conferences, meetups, classes, and tutorials: I take notes. My open source based ones are mostly written in RestructuredText, and I've kept them in a particular folder since at least 2006.
Putting notes in a DVCS
On September 13, 2009, I uploaded these notes to Github.com. I did that because I wasn't pleased with the workflow I established of moving items to Dropbox for backup. I use DVCS all the time and I figured why not just put my notes where I put my code? So I added my notes as a Github repo.
DVCS Notes Based Management System?
For a while I tried to use the Github folder README.rst trick to make a navigation system for my notes. But Github isn't designed for making a README into a dynamic custom content navigator, and it would make a silly feature request. I would rather the Github team work on Mercurial integration or other practical things before they honored a request to turn their system into my own custom Notes Management System. Eventually I just gave up on it and moved on.
Sphinx + Read The Docs!
In early July of 2011 I had a wicked fun thought. What if I turned my notes into a Sphinx project and posted it on readthedocs.org? Most of my content is in RestructuredText and I've gotten really fast at rolling out Sphinx documentation. The 'hard' part would be converting the few README.rst files into index.rst files, but on the flip side I could use fancy Sphinx directives.
I'm not exactly sure when I started down this path, but this commit log entry leads me to think I had it working on or around July 8th. What that would mean is that every time I pushed up a change in my notes, within minutes readthedocs.org would publish the content to the world in lovely HTML markup.
The result?
Pydanny Event Notes
Here's a screen shot of the front page
PyCon Australia 2011 Test Drive
For the 2011 PyCon Australia I gave my new process a serious whirl. I found that if I created the page before the talk, entered some basic data like author and title, and tied it to the index, then I could constantly check the quality of my output while taking my notes. It made my notes seem a bit more exciting and alive. I even tweeted about it 'cause I thought it was fun, and people around the world seemed to enjoy the effort I was putting into my notes.
Because I was committing constantly in order to get updates on readthedocs.org as soon as possible, I also adopted the habit of super-short commit messages. That's because the content I'm writing overrides the need for verbose comments. So when you see me writing "moar" it's because every minute or so I'm doing something like:
$ git commit -am "moar"
$ git push
Kiwi PyCon 2011
I did my rapid note taking again at Kiwi PyCon and it was fun. The downside was that sometimes I get rather critical in my notes and I had a couple of speakers come up to me later to clarify their positions. This makes it a bit challenging because I want to put down my thoughts, but if my thoughts impact another person, what should I do? Especially since if my negative notes on someone turn up in a search it can negatively impact the speaker way beyond a single talk. This is now always on my mind when I take notes, and I'm trying to figure out a good way to handle this going forward.
In essence, I don't want to constrain what I write but I also don't want to write something that will haunt someone else later. Even with a caveat and all that stuff, it can still be problematic. There is a difference between me ranting about something and me taking notes, and the written word is such that things are all too often taken out of context.
Food for thought indeed.
DjangoCon 2011 and the invention of the term 'live-noting'
At the start of DjangoCon 2011 someone tweeted that they were planning to 'live-blog' the event. Suddenly I realized that what I was doing had a name, and that name was 'live-noting'. So I tweeted that was what I was doing and it seemed to catch on.
Not only that, but I got asked if I would accept pull requests. After a good two seconds of deep thought, I responded that I would only consider corrections and clarifications, not new material. I received not just one, but two pull requests from good friends and left the conference pretty happy.
On top of that, I managed to get featured on the front page of http://readthedocs.org! (Thanks Eric)
Kenneth Love also took notes in a similar fashion: readthedocs.org/docs/djangocon-2011-notes
PyCodeConf 2011
I had the excellent fortune of being an invited speaker at Github's PyCodeConf. While I gave my talk, my lovely fiancée Audrey took notes and submitted a pull request. Her contribution was the first time I accepted content I did not write, and I'll say right now she's the only one for whom I will accept such content. On the other hand, if you take notes when I present, let me know and I'll link to them from my own notes.
Josh Bohde also took notes at the event in a similar fashion at readthedocs.org/projects/joshbohde-event-notes, and even as I write this post he shares the featuring of our notes on the frontispiece of readthedocs.org:
Closing Thoughts
I often use my notes as reference, and if you follow the commit logs you may even see me comment on or clean up things I wrote down years ago.
The graphs and stats of this effort are really interesting. Fortran? And a total of five contributors!
All of this makes taking notes a lot more fun. I enjoy finding ways to enhance and improve my process, and find it exciting that others are following a similar pattern of effort. My hope is to make 2012 the Year of PyCon, where I find a way to go to a Python related conference on six continents (Antarctica is too cold for my tastes) and take notes everywhere.
Going forward, should I document how I built this out? Would my steps and patterns be useful for others?