Thursday, December 29, 2011
Resolutions for 2012
- Go to a Python related conference in North America, South America, Europe, Asia, Africa, Australia, and New Zealand.
- Attend at least one JavaScript related conference or event.
- Upload all my outstanding pictures to Flickr!
- Make Consumer Notebook profitable.
- Find more ways to make Audrey Roy happy.
- Pull off an Aú sem Mão during a Capoeira Roda.
- Attend my first Capoeira Batizado.
- See a place in the USA I've never been.
- Work out at least three times a week.
- Drop to a 32 waist.
- Visit friends and family back east. Been over a year since I've seen my sister!
- Blog once a week. That is at least 52 blog entries!
- Visit a theme park.
- Learn how to surf or snowboard.
- Implement something in node.js, backbone.js, and handlebars.js.
- Take a high-level Python class from the likes of Raymond Hettinger or David Beazley.
- Teach some Python or Django.
- Have a beer with Thomas, Andy, Andy, Tony, Garrick, Bernd, and the rest of Ye Aulde Gange.
- See my old DC area friends such as Eric, Chris, Steve, Beth, Sarah, Daye, Renee, Kenneth, Leslie, Whitney, Dave, and many others.
- Visit my son.
Tuesday, December 27, 2011
2011 Resolution Summary
Items that are crossed out are completed.
- Travel to Europe again.
- Travel to Asia or Africa. (Went to Australia instead, which was a very, very acceptable substitute.)
- Visit a Disney park.
- See a place in the USA I've never been.
- Drop the waist size 2 inches and not break any bones.
- Go to PyCon and present or teach.
- Go to DjangoCon and present or teach.
- Present at LA Django.
- Continue my Muay Thai and Capoeira studies, get back into Eskrima, learn some more BJJ, and practice the forms I know.
- Work out at least three times a week.
- Go back east and teach martial arts for a day.
- Finish some outstanding legal proceedings.
- Launch a site that does cool stuff and somehow brings in money. (Consumer Notebook)
- Get to the point with LISP where I can do cool stuff in it without needing a textbook. (I seem to have spent this time working on JavaScript instead.)
- Blog once a week. That is at least 52 blog entries! (Almost there!)
- Explain why I wrote Diversity Rocks.
Thursday, December 22, 2011
New Year’s Python Meme
I love these blog memes, so I give you my version of Tarek Ziade's New Year's Python Meme.
1. What’s the coolest Python application, framework or library you have discovered in 2011?
For Python libraries, that would have to be Kenneth Reitz's python-requests library. I've used it for an amazing amount of stuff and blogged about it. It took the grunge out of doing HTTP actions with Python. The API is clean and elegant, getting out of your way. It embodies the state of the art for API design, which closely matches the Zen of Python.
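To give a taste of what I mean, here's the sort of two-liner it enables (the URL is just an illustration, not something from my original post):

import requests

r = requests.get('https://api.github.com/users/pydanny')
print(r.status_code)   # 200 if all went well
print(r.content)       # the response body as a string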
For applications, djangolint.com is awesome. It has helped me out so much on several projects. I would love to see something like this implemented and maintained for modern Python.
All the Python friendly PaaS efforts that have emerged are changing the landscape for those of us who want to launch projects but don't want to become full time system administrators in the process. Heroku, DjangoZoom, DotCloud, ep.io, gondor.io, and others are making it possible for developers to focus on development, not server tooling. Google App Engine paved the way, and it is wonderful to see the rest of the universe catch up with offerings that more closely follow core Python.
2. What new programming technique did you learn in 2011?
Event based programming! I've touched on it for years, but this year I really got a lot more into it, thanks to Aurynn Shaw kickstarting me and Audrey Roy expanding my knowledge ever since.
3. What’s the name of the open source project you contributed the most in 2011? What did you do?
I participated mostly as co-lead in the Open Comparison project, which amongst other things involved running the largest sprint at PyCon 2011. We maintained Django Packages and launched Pyramid and Plone versions of the project. We hope to launch a Python implementation in 2012.
I took a lot of notes this year at pydanny-event-notes - enough to make a book.
4. What was the Python blog or website you read the most in 2011?
Like Nick Coghlan, that would be http://planet.python.org.
5. What are the three top things you want to learn in 2012?
- How to use whatever consistently maintained replacement for PIL works in Python 2.7.x and Python 3.x.
- Really advanced Python as taught by Raymond Hettiger.
- backbone.js
6. What are the top software, app or lib you wish someone would write in 2012?
A tool like python-requests, but for shell access. Something like Unipath, but kept up-to-date and with nicely written documentation on Read the Docs.
A PIL replacement that is maintained, works for all modern Pythons, and is close enough to the PIL API to not cause too much confusion.
Something like Django Lint but for Python 2.7.x/3.x.
An open source project that tracks test coverages across PyPI and publishes reports of the results via an API.
Want to do your own list? Here's how:
- copy-paste the questions and answer them in your blog
- tweet it with the #2012pythonmeme hashtag
Saturday, December 17, 2011
Evaluating which package to use
In November of 2009 I wrote about which third-party Python Packages I'll use. Here is my modern take on it - much of it inspired by personal experience and the advice of peers and mentors:
Tag and release on PyPI
I really don't like pulling from tags on Github, BitBucket, or whatever. Or being told to pull from a specific commit. That works in early development, but it certainly doesn't fly in production.
I also get frustrated when people release on PyPI but then insist on hosting the release themselves. That is because invariably at some critical point in development when PyPI is up, the host provider is down.
A huge point of frustration is that I shouldn't have to leave the canonical source of Python package versions to hunt down what I should be using. I've seen too many beginning Python developers fall into the trap of using 3 year old packages because they didn't know they should be using trunk. I was guilty of doing it for a 6+ month old release in 2010, and for that I apologize and promise I won't do it again.
This also means your package needs to be pip installable. If you don't know how to do it, please read The Hitchhiker's Guide to Packaging.
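At minimum that means a working setup.py. Here is a bare-bones sketch (every name below is a placeholder, not a real project):

# setup.py - just enough to make 'pip install yourpackage' possible
from setuptools import setup, find_packages

setup(
    name='yourpackage',
    version='0.1.0',
    description='One sentence about what this does',
    author='Your Name',
    packages=find_packages(),
)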
Documentation
2011 is closing, which means your package needs to have Sphinx documentation. And those Sphinx docs should be on Read the Docs. Read the Docs is great because it doesn't just host the rendered HTML, it also lets you publish automatically on a DVCS push - and implements nice search and handy PDFs too.
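If you've never set Sphinx up before, getting a skeleton in place is only a few commands (assuming your docs live in a docs/ directory inside your package):

$ pip install sphinx
$ cd yourpackage/docs
$ sphinx-quickstart    # answer the prompts
$ make html            # build locally before pushing to Read the Docs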
Yes, I know there is packages.python.org but I don't trust it. It doesn't have the easy push/deploy workflow of Read the Docs, which means often the docs are dated because it's yet another step for developers. Plus, the lack of search outside of Sphinx makes it hard to discover documentation.
The same goes for hosting docs yourself. In fact, that's usually worse because when someone goes on vacation and the docs go down... ARGH!
Please don't mention easy_install in your docs. We are nearly in 2012 and ought to be unified on our package installer, which is pip.
Tests
You should have them. Otherwise any update you put on PyPI puts the rest of us at risk. We can't be sure your updates to the project won't break our stuff. So please write some tests! If you add in coverage.py and some kind of lint checker, it can even be fun! And a high coverage rating certainly earns you bragging rights.
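Running coverage over a test suite is only a couple of commands (assuming your tests are discoverable by unittest):

$ pip install coverage
$ coverage run -m unittest discover
$ coverage report -m    # shows the lines you missed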
Code Quality
Are you using new-style classes or old-style classes? Do you follow PEP-8? Do you keep metaclasses to the absolute minimum? Is the code on an available DVCS so others can fork and contribute? These are things that weigh on my judgement, and certainly the judgement of others.
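For anyone fuzzy on the first question, the difference is a one-word change (my own quick illustration):

# Old-style class - Python 2 legacy, avoid in new code
class LegacyThing:
    pass

# New-style class - inherits from object, plays nicely with
# properties, super(), and friends
class ModernThing(object):
    pass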
Friday, December 16, 2011
Announcing Consumer Notebook!
Need a Python programming language book? Want to see a comparison of the ones I own and use? Check out my Must-Have Python Programming Books comparison grid.
Let's drill down and take a closer look at one of the items on the page, in this case Doug Hellmann's amazing The Python Standard Library by Example. The product detail pages include the ability to add pros and cons and attach said products to comparison grids and specialized lists like 'my wishlist' and 'my possessions'.
Speaking of wishlists, check out my own:
In order to add items, like footy pajamas, I click on the 'add' button and paste the Amazon (or BestBuy) URL into the form:
At this time we just handle Amazon USA and BestBuy USA. In the future we plan on adding more affiliate providers, including non-USA providers to support our non-USA friends.
There's a lot more than that...
In addition to weekly infographics, comparison grids, lists, and products, Consumer Notebook also awards points, coins, badges, and a growing privilege set to participating users. We even implemented an energy bar which regenerates over time, designed to match the pace of human users and serve as one of the brakes on scripts and bots.

Technology
I built this with Audrey Roy using Python, Django, jQuery, PostgreSQL, Memcached, and RabbitMQ. I'll be blogging in depth about the technical side in an upcoming post.

Genesis
It was the summer of 2010 and we were brainstorming ideas for a coding contest called Django Dash. The one we settled on was a listing and comparison site for Django called Django Packages. The result has been a very useful tool for the Django community. Eventually, with the help of several dozen people, we turned the code into the Open Comparison framework and launched Pyramid and Plone implementations. Time permitting this year, we plan to do Python, Flask, Twisted, Node, jQuery, and other implementations.
Since then we've wanted to do something similar, but in the context of products. And we wanted to do it right - elegant design combined with an ad-free space. So we cooked up Consumer Notebook, launching today!
We'll be adding features and enhancements in the months to come. We've acquired a community manager, and even have a blog. We would love for you to check out the site, share it with your friends and family, and send us your commentary, suggestions, and advice.
Friday, December 9, 2011
My BaseModel
When I build projects in Django I like to have a 'core' app with all my common bits in it, including a BaseModel. In that BaseModel I'll define the most basic fields possible, in this case a simple pair of created/modified fields built using custom django-extension fields.
# core/models.py
from django.db import models
from django.utils.translation import ugettext_lazy as _

from core.fields import CreationDateTimeField, ModificationDateTimeField

class BaseModel(models.Model):
    """ Base abstract base class to give creation and modified times """
    created = CreationDateTimeField(_('created'))
    modified = ModificationDateTimeField(_('modified'))

    class Meta:
        abstract = True
You'll notice I also have core.fields defined. That is because (unless things have changed), django-extensions doesn't work with South out of the box. Hence the file below where I extend those fields to play nicely with my migration tool of choice.
# core/fields.py
from django_extensions.db.fields import CreationDateTimeField, ModificationDateTimeField

class CreationDateTimeField(CreationDateTimeField):

    def south_field_triple(self):
        "Returns a suitable description of this field for South."
        # We'll just introspect ourselves, since we inherit.
        from south.modelsinspector import introspector
        field_class = "django.db.models.fields.DateTimeField"
        args, kwargs = introspector(self)
        return (field_class, args, kwargs)

class ModificationDateTimeField(ModificationDateTimeField):

    def south_field_triple(self):
        "Returns a suitable description of this field for South."
        # We'll just introspect ourselves, since we inherit.
        from south.modelsinspector import introspector
        field_class = "django.db.models.fields.DateTimeField"
        args, kwargs = introspector(self)
        return (field_class, args, kwargs)
Unfortunately, this all shows up as red marks when I run coverage.py reports. To deal with that I added in some tests. However, I'll readily admit I'm not super pleased with the tests below, but they are better than nothing, right?
# core/tests/test_fields.py
from django.test import TestCase

from core.fields import CreationDateTimeField, ModificationDateTimeField

class TestFields(TestCase):

    def test_create_override(self):
        field = CreationDateTimeField()
        triple = field.south_field_triple()
        self.assertEquals(triple[0], 'django.db.models.fields.DateTimeField')
        self.assertEquals(triple[1], list())
        self.assertEquals(triple[2], {'default': 'datetime.datetime.now', 'blank': 'True'})

    def test_modify_override(self):
        field = ModificationDateTimeField()
        triple = field.south_field_triple()
        self.assertEquals(triple[0], 'django.db.models.fields.DateTimeField')
        self.assertEquals(triple[1], list())
        self.assertEquals(triple[2], {'default': 'datetime.datetime.now', 'blank': 'True'})
Closing Thoughts
My pattern is also that if I need more stuff in this BaseModel, I extend it with another abstract class instead of changing it. That way I can be sure at least this part works really well and any additions are isolated in another class.

I'll reiterate that I'm not happy with the tests. I'm open to suggestions.
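A hypothetical illustration of that pattern - the new fields go into another abstract class layered on top of BaseModel, which stays untouched:

# core/models.py (continued) - TitledModel is my own example here,
# not something from the original post
class TitledModel(BaseModel):
    """ BaseModel plus a title, still abstract """
    title = models.CharField(_('title'), max_length=100)

    class Meta:
        abstract = True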
I pretty much got the BaseModel from Frank Wiles of RevSys back in the summer of 2010. What I added was sticking all the common bits into the core app, getting the South migration to play more nicely, and adding tests.
But much of this is moot!
Note: I added this segment several days after my original posting because of the stuff in the comments. Thanks Jannis Leidel and someone named John - this is part of why I post.

Jannis and John both pointed out that django_extensions now has a TimeStampedModel that does what my BaseModel does. They also pointed out that django_extensions comes with built-in South migrations for its CreationDateTimeField and ModificationDateTimeField fields.
Which means, thanks to them, we can safely just do this and not worry about migrations:
# core/models.py
from django.db import models
from django.utils.translation import ugettext_lazy as _

from django_extensions.db.fields import CreationDateTimeField, ModificationDateTimeField

class BaseModel(models.Model):
    """ Base abstract base class to give creation and modified times """
    created = CreationDateTimeField(_('created'))
    modified = ModificationDateTimeField(_('modified'))

    class Meta:
        abstract = True
Wednesday, December 7, 2011
Made Up Statistics
At DjangoCon my good friend Miguel Araujo and I presented on Advanced Django Form Usage. Slide 18 of that talk mentioned some made up statistics. Here they are for reference:
- 91% of Django projects use ModelForms.
- 80% ModelForms require trivial logic.
- 20% ModelForms require complex logic.
With that out of the way, I'm going to make a bar graph out of my fictional data:
You'll notice that my bar titles could be stronger. I actually did that on purpose in case anyone tries to use that chart in real life. In any case, if you thought that was interesting, then read on. I have many more made-up statistics. For example, here are more numbers I've cooked up:
Pydanny Made Up DevOps Statistics
DevOps is the new hotness. I know because every other Python meetup features someone speaking on it - just like every other Ruby, Perl, and PHP meetup. Anyway... numbers:
- 24.3% Python developers doing DevOps think they could have launched a PaaS (aka Heroku clone) before it got crowded.
- 46.3% Python developers doing DevOps spend all their time writing Chef/Puppet scripts and yet still claim to be Python developers.
- 14% Python developers are worried about so much of the backend being done in Ruby.
- 54% Python developers are just happy that there are many options now and don't care about the internal machinery that much.
This time, because I'm worried about the data being taken seriously, I've titled the bar chart in such a way that no one will reference it in anything important:
Pydanny Made Up Python Environment Statistics
Following the obvious logic flow (to me anyway) of DevOps to something else, let's go into Python environments, also known as the VirtualEnv vs Buildout debate, which adds up to an even 100% (making it good pie chart material):
- 77% of Python Developers prefer VirtualEnv.
- 13% of Python Developers prefer Buildout.
- 7% of Python developers rolled their own solution and wish they could switch over.
- 3% of Python developers rolled their own solution and are fiendishly delighted with how they have guaranteed their own job security forever. I know who some of you are and I can say with some confidence that when the Zombie apocalypse happens, no one is going to invite you into their fortified compounds. We hate you that much.
Pydanny Made Up Template Debate Statistics
The made up statistics in this post frequently touch on contentious topics. So let me add another controversial topic, this time the never ending template debate in Python:
- 70% python developers prefer non-XML templates
- 25% python developers prefer XML templates
- 5% python developers wonder why we don't just use the str.format() method and be done with it
- 50% python developers strongly disagree with my Stupid Template Languages blog post from last year.
Pydanny Made Up Python Web Optimization Statistics
I sometimes get asked how to best optimize a Django site. My answer is 'cache and then cache some more' but there are those who disagree with me and start switching out Django internals before doing anything silly like looking at I/O. My bet is this same thing happens with other frameworks such as Pyramid.
- 20% developers argue switching template languages.
- 80% developers argue using caching and load balancing.
- 100% Django/Pyramid/Flask/etc core developers argue using caching and load balancing.
Of all the made up statistics in this blog post, I suspect this is the one closest to the truth of things.
Update: Alex Gaynor and Audrey Roy pointed out that the original line graph for this data was not appropriate. My weak defense was that I'm trying not to make things too serious but they stated that the line graph was so inappropriate it distracted from the rest of the post. Thanks for the advice!
Pydanny Made Up Framework Debate Statistics
Alright, let's conclude this article with some statistics I cooked up about frameworks in Python. I'm going to do more then just mention web frameworks, dabbling into other awesome things that the Python community has given us.
- 23.6% of us get web.py and web2py confused with each other.
- 42% Python developers think Pyramid/Flask have awesome names that don't get mispronounced the same way Django does.
- 28% Python developers wish they could find a way to get some SciPy into their projects.
- 22% Python developers wish there was a PEP-8 wrapper for Twisted.
- 49% Twisted developers wish that Python had accepted their standard instead of PEP-8.
- 90% Python developers wonder what they were drinking when they renamed it to BlueBream and wonder if it is sold over the counter in their municipality.
No chart? Getting this one to look meaningful was turning into a herculean effort. I invite others to render this data into something that looks attractive and doesn't lose meaning. Come up with something impressive and I'll put it into a follow-up blog post.
Sunday, December 4, 2011
The Story of Live-Noting
Like a lot of people, I've got this thing I do when I attend conferences, meetups, classes, and tutorials: I take notes. My open source based ones are mostly written in reStructuredText, and I've kept them in a particular folder since at least 2006.
Putting notes in a DVCS
On September 13, 2009, I uploaded these notes to Github.com. I did that because I wasn't pleased with the workflow I established of moving items to Dropbox for backup. I use DVCS all the time and I figured why not just put my notes where I put my code? So I added my notes as a Github repo.

DVCS Notes Based Management System?
For a while I tried to use the Github folder README.rst trick to make a navigation system for my notes. But Github isn't designed for making a README into a dynamic custom content navigator, and it would make a silly feature request. I would rather the Github team work on Mercurial integration or other practical things before they honored a request to turn their system into my own custom Notes Management System. Eventually I just gave up on it and moved on.

Sphinx + Read The Docs!
In early July of 2011 I had a wicked fun thought. What if I turned my notes into a Sphinx project and posted it on readthedocs.org? Most of my content is in reStructuredText and I've gotten really fast at rolling out Sphinx documentation. The 'hard' part would be converting the few README.rst files into index.rst files, but on the flip side I could use fancy Sphinx directives.

I'm not exactly sure when I started down this path, but this commit log entry leads me to think I had it working on or around July 8th. What that would mean is that every time I pushed up a change in my notes, within minutes readthedocs.org would publish the content to the world in lovely HTML markup.
The result?
Pydanny Event Notes
Here's a screenshot of the front page:
PyCon Australia 2011 Test Drive
For the 2011 PyCon Australia I gave my new process a serious whirl. I found that if I created the page before the talk, entered some basic data like author and title, and tied it to the index, then I could constantly check the quality of my output while taking my notes. It made my notes seem a bit more exciting and alive. I even tweeted about it cause I thought it was fun, and people around the world seemed to enjoy the effort I was putting into my notes.

Because I was committing constantly in order to get updates on readthedocs.org as soon as possible, I also adopted the habit of super-short commit messages. That's because the content I'm writing overrides the need for verbose comments. So when you see me writing "moar" it's because every minute or so I'm doing something like:
$ git commit -am "moar"
$ git push
Kiwi PyCon 2011
I did my rapid note taking again at Kiwi PyCon and it was fun. The downside was that sometimes I get rather critical in my notes and I had a couple speakers come up to me later to clarify their positions. This makes it a bit challenging because I want to put down my thoughts, but if my thoughts impact another person, what should I do? Especially since if my negative notes on someone turn up in a search it can negatively impact the speaker way beyond a single talk. This is now always on my mind when I take notes, and I'm trying to figure out a good way to handle this going forward.

In essence, I don't want to constrain what I write but I also don't want to write something that will haunt someone else later. Even with a caveat and all that stuff, it can still be problematic. There is a difference between me ranting about something and me taking notes, and the written word is such that things are all too often taken out of context.
Food for thought indeed.
DjangoCon 2011 and the invention of the term 'live-noting'
At the start of DjangoCon 2011 someone tweeted that they were planning to 'live-blog' the event. Suddenly I realized that what I was doing had a name for it, and that was 'live-noting'. So I tweeted that was what I was doing and it seemed to catch on.

Not only that, but I got asked if I would accept pull requests. After a good two seconds of deep thought, I responded that I would only consider corrections and clarifications, not new material. I received not just one, but two pull requests from good friends and left the conference pretty happy.
On top of that, I managed to get featured on the front page of http://readthedocs.org! (Thanks Eric)
Kenneth Love also took notes in a similar fashion: readthedocs.org/docs/djangocon-2011-notes
PyCodeConf 2011
I had the excellent fortune of being an invited speaker to Github's PyCodeConf. While I gave my talk, my lovely fiancée, Audrey, took notes of my talk and submitted a pull request. Her contribution was the first time I accepted content I did not write, and I'll say right now she's the only one for whom I will accept such content. On the other hand, if you take notes when I present, let me know and I'll link to them from my own notes.

Josh Bohde also took notes at the event in a similar fashion at readthedocs.org/projects/joshbohde-event-notes, and even as I write this post he shares the featuring of our notes on the frontispiece of readthedocs.org:
Closing Thoughts
I often use my notes as reference, and if you follow the commit logs you may even see me comment on or clean up things I wrote down years ago.

The graphs and stats of this effort are really interesting. Fortran? And a total of five contributors!
All of this makes taking notes a lot more fun. I enjoy finding ways to enhance and improve my process, and find it exciting that others are following a similar pattern of effort. My hope is to make 2012 the Year of PyCon, where I find a way to go to a Python related conference on six continents (Antarctica is too cold for my tastes) and take notes everywhere.
Going forward, should I document how I built this out? Would my steps and patterns be useful for others?
Friday, November 4, 2011
Redux: Python Things I Don't Like
Back in May of 2009, I wrote about Eight Things I don't like about Python. It was my attempt to come up with things I don't like about my programming language of choice. Consider this my update of that post.
1. Division sucks in Python
In Python 3 this is fixed so that 2 / 3 = 0.6666666666666666, but in Python 2.7.x you have 2 / 3 = 0. You can fix that in Python 2.7.x by doing a from __future__ import division before your division call. Can anyone tell me if a version of 2.7.x will natively support 2 / 3 = 0.6666666666666666 without that import?

Note: Chris Neugebauer pointed out that changing division in Python 2.7.x would break backwards compatibility. However, that doesn't change that I don't like it in Python 2.7.x.
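For reference, the behavior in a quick Python 2.7 session:

>>> 2 / 3
0
>>> from __future__ import division
>>> 2 / 3
0.6666666666666666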
2. TKinter blows
Honestly, it doesn't really matter to me anymore. I either use command-line scripts or things delivered to the web. Also, thanks to Brett Cannon, I know if I need to make TKinter look good, I can use TK Themed Widgets right out of the standard library.

3. Lambdas make it easy to obfuscate code
I'm known for not liking lambdas in Python. These days, I do know of use cases for lambdas, but those are few and far between. I might even try to turn that into a blog post this month - use cases for lambdas in Python. Fortunately for me, these days I seem to work with people who mostly agree with me on this subject.

4. Sorting objects by attributes is annoying
This is still annoying for me. As I said, "... the snippet of code is trivial. Still, couldn't sorting objects by attributes or dictionaries by elements be made a bit easier? sort and sorted should have this built right in. I still have to look this up each and every time."

I've thought of proposing something easier as a PEP. Imagine that! Me submitting a PEP!
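For the record, this is the incantation I have to look up every time (with some made-up data):

from collections import namedtuple
from operator import attrgetter, itemgetter

Book = namedtuple('Book', ['title', 'price'])
books = [Book('Into Python', 30), Book('Always Django', 25)]

# objects sorted by an attribute
print(sorted(books, key=attrgetter('price')))

# dictionaries sorted by an element
rows = [{'title': 'Into Python', 'price': 30}, {'title': 'Always Django', 'price': 25}]
print(sorted(rows, key=itemgetter('price')))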
5. Regex should be a built-in function
Before I got to do Python full-time I was a go-to person with regular expressions. Languages without them were weak in my opinion. Since then (2006-ish) my skills have faded somewhat in regards to regular expressions. And you know what? It hasn't been a problem. Python's string functions are fast and useful, and when I really need regular expressions, I import the library and do some research. I'm considering this one closed.

6. Reload could be less annoying
Reload only works on modules. I want to be able to do something like reload(my_module), reload(my_class), reload(my_function), or even reload(my_variable):

>>> from my_module import MyClass, my_function, my_variable
>>> mc = MyClass(my_variable)
>>> mc
5
# I go change something in my_module.MyClass and save the file
>>> reload(MyClass)  # reload just MyClass
>>> mc = MyClass(my_variable)
>>> mc
10

My current fix is to use unittest as my shell as much as possible. And that is probably a good thing.
7. Help doesn't let me skip over the '__' methods
As I said way back when, "Python's introspection and documentation features make me happy. And yet when I have to scroll past __and__, __or__, and __barf__ each time I type help(myobject), I get just a tiny bit cranky. I want help to accept an optional boolean that defaults to True. If you set it to False you skip anything with double underscores."

The See project is one solution to the issue. A different approach I've used is the Sphinx autodoc feature, but Sphinx is a lot of work and doesn't cover every contingency.
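A rough sketch of the behavior I want, written as a little helper rather than a change to help() itself (entirely my own illustration):

def skinny_dir(obj, dunders=False):
    """Like dir(), but hide the __dunder__ names unless asked for them."""
    return [name for name in dir(obj)
            if dunders or not name.startswith('__')]

print(skinny_dir(list))  # no __add__, __or__, or friends in sight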
8. Not enough female Pythonistas
These days I know a lot of female Python developers. There is my own fiancée, Audrey Roy. Face-to-face I've met and talked to Christine Cheung, Jackie Kazil, Leah Culver, Katharine Jarmul, Katie Cunningham, Barbara Shaurette, Esther Nam, Sandy Strong, Sophia Viklund, Jessica Stanton, Aurynn Shaw, Brenda Wallace, Jen Zajac, and many more I know I'm missing. And there are even more with whom I've had in-depth online conversations.

So why didn't I put a strike-through on this one? Because the numbers still aren't good enough. I know a lot of female Pythonistas, but how many do you know? And even if you know a decent number, what percentage of a meetup group you attend are women?
I can say that things are improving, but they could be better - for women or minorities. Find ways to pitch in, be it PyLadies events, PyStar workshops, or what have you.
One last note on this subject: I've heard some unsubstantiated statements that the .NET world has a higher female-to-male ratio than the Open Source world. Are we going to take that kind of thing sitting down?
Wednesday, November 2, 2011
Loving the bunch class
Ever play with a bunch class? I love 'em and make them protected or unprotected. I started using them early in my Python career, although it wasn't until about 2 years ago that I learned what they were called and the best way to code them. In any case, here is a simple, unprotected Bunch class.

Warning
This is me playing around with things in Python. It's not anything I use in real projects (except maybe the odd test). Please don't use these in anything important or you'll regret it.
# Simple unprotected Python bunch class
class Bunch(object):

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

bunch = Bunch(name='Loving the bunch class')
print(bunch.name)
You can also make protected ones, that don't let pesky developers like me overwrite attributes, methods, and properties by accident:
# Simple protected Python bunch class
class ProtectedBunch(object):
    """ Use this when you don't want to overwrite
        existing methods and data """

    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            if k not in self.__dict__:
                self.__dict__[k] = v
You can also write them to raise errors when a key is in self.__dict__. Or perhaps merely publish a warning. There are many ways to customize, but generally you want to keep these things as simple as possible. Anyway, let's get back to the main topic...
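Here's a sketch of that noisier variant (my own illustration, not from the original post; I check with hasattr so methods and properties are covered too):

import warnings

class StrictBunch(object):
    """ Bunch that complains instead of silently skipping collisions """

    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            if hasattr(self, k):
                # or: raise AttributeError("%r already exists" % k)
                warnings.warn("%r already exists, not overwriting" % k)
                continue
            setattr(self, k, v)

bunch = StrictBunch(name='Loving the bunch class')
print(bunch.name)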
In the early days of my experiences with Python I found a small, nagging issue with dictionaries and objects. The notation wasn't as handy as what you got with JavaScript and some other languages I was using at the time. For example:
// JavaScript object notation
o = {};
o.name = 'Loving the bunch class';
o.name;     // Calling with 'dot' notation
o['name'];  // Calling with 'bracket' notation
Unfortunately, in Python you can't do this with a normal bunch class:
# Python bunch class failing on bracket notation
class Bunch(object):

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

bunch = Bunch(name='Loving the bunch class')
print(bunch.name)
print(bunch['name'])

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'Bunch' object is not subscriptable
The quick answer is a little trick I found in the comments of a recipe by Alex Martelli that gives you the ability to do:
# Fancy dictionary/object trick
class Buncher(dict):
    """ Warning: DON'T USE THIS IN REAL PROJECTS """

    def __init__(self, **kw):
        dict.__init__(self, kw)
        self.__dict__.update(kw)

bunch = Buncher(name='Loving the bunch class')
print(bunch.name)
print(bunch['name'])
I'm not the only one who likes Bunch classes. On PyPI I found a really complete implementation.
Of course, in a lot of cases you probably don't want this 'weight of code', right? Dictionaries being lighter than full objects and all that. Nevertheless, it's fun for noodling and playing around with code. Still, I'm thinking it might be a fun little project to take a group of bunch implementations and do performance checks on them versus each other and dictionaries. Maybe the 'performance hit' isn't so bad.
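A rough sketch of the kind of comparison I have in mind, using timeit (the numbers will vary wildly; this is just noodling):

import timeit

setup = """
class Bunch(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
"""

# attribute access on a Bunch versus key access on a plain dict
print(timeit.timeit("Bunch(name='x').name", setup=setup))
print(timeit.timeit("{'name': 'x'}['name']"))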
I should also dig into things like defaultdict and other constructs to learn more. Part of the fun of any programming language is the depth of even the 'simplest' components of the language.
Friday, October 14, 2011
PyCodeConf 2011 Report
As my fiancee said, "PyCodeConf is a new kind of Python conference with a radically different format. Speakers are invited to speak about whatever they desire relating to the theme ("The Future of Python"), in front of a room of round tables. In between talks there are long breaks to encourage discussion. As a result, talks are edgier, and you really get to know people and possibly shape the future together."
This summer (2011) I was invited by Chris Wanstrath to be an invited speaker at the first ever PyCodeConf, to be held in Miami, Florida. I've been using Git and Github since early 2009, and more importantly, I've known Chris since DjangoCon 2009. I've always appreciated his interest in not just providing tools for the community, but also his efforts across languages and platforms to improve the lives of developers and those who support developers. And that he and his partners seem to make a pretty penny at it and share what they make (drink-ups and now conferences) is only a good thing in my opinion.
So I accepted Chris' invitation. :-)
The theme for the conference was the future of Python, and I submitted a talk proposal about Collaboration. Chris helped me figure out my talk topic, which was awfully nice of him. Shortly afterwards my fiancée, Audrey Roy, received her own invitation to speak.

Between the time that Chris invited me to speak and the start of the conference, Chris Williams of JS Conf got involved.
Alright, let's review things...
Accommodations
Everyone stayed in the Epic Hotel, in downtown Miami. The rooms fit the name of the hotel, being Epic in size and having amazing stuff in them. The rooms had free/good internet if you signed up for a hotel mailing list. Since we rehearsed and polished talks the night before the conference we ordered room service and were pleased with what we ate. The hotel had a heated outdoor pool on the 14th floor, which I'll get into later. In any case, the only time I've been in a comparable hotel was the incredible arrangements provided by the PyCon New Zealand folks who put us up in the Museum Hotel for a few days.
Sure, $189/night is high, but 3 nights when you split it with 2 or more people makes it not so bad.
Result: Superb
Conference Meals
If you are serving me food, you can't go wrong with salmon, steak, good cheese, fresh vegetables, coffee, and juice. I can report that PyCodeConf did quite well in this regard.
Outside of the conference I really enjoyed the food. Andiamo was a crazy good pizza place. I also got some really nice grouper which you can only seem to get in Florida.
Result: Superb
The Conference Room
The conference took place on the 14th floor of the Epic Hotel. This was a single track conference, with all the talks were given in the same room. The room was large, but everyone had a comfortable seat at large round tables. That was a nice touch, because it encouraged you to socialize with everyone nearby.
That worked out for the most part, except for a couple of developers sitting at a table with their backs turned to the speakers, talking loudly while pair programming or something. I asked them to quiet down but 30 seconds later they were back at it. We moved away, but in retrospect, considering their rudeness, I should have politely asked them to take it outside.
Anyway, the acoustics were good, the temperature was pleasant, and the seats were comfortable.
Result: Superb
Speakers
Speaker selection was done magnificently well. There wasn't a dud within the lot of speakers. I normally expect that in any conference you'll get at least one dud talk per day, and PyCodeConf didn't have that problem (unless my talk sucked).
Jesse Noller opened things up with a great, encouraging talk. Raymond Hettinger gave a talk on Python basics that was so full of nuance that I'm terrified of attending an advanced talk by him, Alex Gaynor filled us with hope for PyPy, Tracy Osborn taught us how to bootstrap entrepreneurial projects, Travis Oliphant wants Python core and PyPy to collaborate more with the scientific community, and Audrey Roy gave up some of her community building secrets. David Beazly explained the issues of the GIL in terms mere mortals such as myself can understand, Gary Bernhardt gave an amazing talk comparing Python and Ruby, and Leah Culver made Django + backbone.js look easy - but if you talk to her you know whatever she does is sophisticated and not for beginners. Dustin Sallings sold me on a neat idea for testing to help catch edge cases, and Armin Ronacher opened my eyes on WSGI.
The wonderful thing about these talks is that since everyone knew the upcoming talks, or had seen the previous ones, we could relate to each other. So David Beazly, Travis Oliphant, and Avery Pennarun raised interesting concerns about PyPy that everyone got the chance to hear about. They weren't show-stopping issues, just raising awareness about things that Alex didn't cover in his talk.
I live-noted the event as much as I could, with notable gaps in Leah's talk (she talks fast, is very technical, and I wanted to give her my full attention) as well as Armin's (his talk shocked me a bit - I'm still a WSGI newbie). You can see my efforts at my PyCodeConf live-notes.
Speaking of live-noting, Josh Bohde also live-noted the event and captured a ton of stuff I missed.
The gaps between talks were also a nice 15 minutes. That meant you could stretch your legs, get a drink, and talk to people.
Result: Superb
Parties
We (Audrey and I) missed the first party (hosted by New Relic at a place called DRB) on account of preparing for our talks. We always do our absolute best on talks, and both like to practice a lot. Also, Mark Pilgrim's disappearance had touched me and I wanted to talk about it. We heard it was a great party, so we'll assume that it was. :)
The next evening the party (hosted by Heroku) was on the 14th floor, which meant it was a pool party! There was great food, good drink, and a latin jazz band playing. One of the pools was heated, so most people stayed in there and drank many watermelon mojitos served by the staff. The pool was a huge hit because it was comfortable and people just talked freely. No laptops, no phones, just talking. Chris Williams served us drinks himself, Chris Wanstrath got wet, and everyone just relaxed. I have to say, a heated pool party is something EVERY conference should have. MOAR POOL PARTEEZ PLEEZE!
The final night's party was at the News Lounge and was hosted by Github and Droptype. The drinks and people were awesome, and I have good memories of being in a circle listening to Chris Williams and Audrey Roy talk. I did go beyond tipsy, overdid the Capoeira, and tried to convince Chris Wanstrath to give up the whole DVCS hosting thing to do wedding planning. There was also a bunch of us getting kicked out of a Karaoke bar because of the antics of a Python core developer. Ha ha ha. It was a crazy night that took me two days to recover from.
I'm glossing over some important discussions that happened while I was still sober at these two parties, and maybe in the future if things play out right I'll go over them.
Result: Superb
People
Part of attending conferences is to meet old friends and make new ones. I got to spend time with Mark Ramm, Jesse Noller, Nick Coghlan, Alex Gaynor, Ben Firshman, Armin Ronacher, Raymond Hettinger, Rachel Hettinger, Chris Wanstrath, and many other excellent people. I also got the chance to meet and befriend Kenneth Reitz, Chris Williams, David Cramer, Wayne Witzel, and Leah Culver. I'm missing at least a dozen more. It was great to put faces to people, and in some cases, hear their side of a story.
Also, I got to gush at programming heroes like David Beazly and Travis Oliphant like a total fanboy.
Believe it or not, I got a bit shy. I'm kicking myself over not introducing myself to more people. Next time!
Result: Superb
Summary
The conference was amazing. Like all conferences it had its own character and fun. Because it was a purely commercial conference I was a bit worried going into it, since I've heard about the corporate feel of these things. Those fears were completely mitigated by the open attitude and decency of the conference organizers and sponsors. I look forward to attending PyCodeConf again in the future.
If you've got good senior technical staff and you want them to benefit from a conference, this is a good place to send them.
Overall Result: Superb
Sunday, October 9, 2011
Conference Talks I want to see
I'm writing this the day after Github's pycodeconf ended. That was an amazing conference, and I'll be blogging it soon (I'll also be writing about PyCon Australia, PyCon New Zealand, and DjangoCon US). With all this conference experience very current in my head, things I've seen and done at them, and the deadline for PyCon US submissions coming up, here are some talks I really want to see happen in the next six months. If not at PyCon US, then please consider these for other forthcoming events!
Note: I couldn't do my preferred 'linkify' as well as I'd have liked thanks to bad hotel internet. I'll clean it up later.
Advanced SQL Alchemy Usage
I think the uber-powerful SQL Alchemy ORM needs the same sort of treatment Miguel Araujo and I gave on Advanced Django Forms Usage. Not a 30-minute tutorial, overview, or 'State of' talk, but tricks and patterns from someone who has used it frequently on more than one project. Multiple projects is important because the speaker should have had the chance to try multiple approaches. Start with something simple like a TimeStampModel that all model classes might inherit from, then go into deeper and more complex technical detail. Finish the talk with something crazy hard from SQL Alchemy that is hard to explain. If that causes you to open a bug/documentation ticket, then you'll know that you've done the talk right.
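To make the opener concrete, here is a minimal sketch of what I have in mind, assuming SQL Alchemy's declarative extension. The class and column names (TimeStampMixin, Article, created, modified) are my own illustrative choices rather than anything from a real project; in SQL Alchemy this works most naturally as a mixin rather than a base model:

    import datetime

    from sqlalchemy import Column, DateTime, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class TimeStampMixin(object):
        """ Any model inheriting this mixin gets created/modified columns. """
        created = Column(DateTime, default=datetime.datetime.utcnow, nullable=False)
        modified = Column(DateTime, default=datetime.datetime.utcnow,
                          onupdate=datetime.datetime.utcnow, nullable=False)


    class Article(TimeStampMixin, Base):
        """ Example model that picks up the timestamp columns for free. """
        __tablename__ = "articles"
        id = Column(Integer, primary_key=True)
        title = Column(String(100))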
Advanced Django Models Usage
Following the same pattern as my SQL Alchemy idea above, start with something simple like a TimeStampModel (including South migration of fields), then go into complex lookups with Q objects, good patterns for Managers, Aggregation, Transactions, and then finish it with the craziest, hardest thing you can find. When putting together the closing material causes you to open tickets for broken core code/documentation, then you know you've done it right.
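Again, just to anchor the opener, here is a minimal sketch of the TimeStampModel idea; the Article model and its field names are hypothetical, used only for illustration:

    from django.db import models


    class TimeStampModel(models.Model):
        """ Abstract base class with self-updating created/modified fields. """
        created = models.DateTimeField(auto_now_add=True)
        modified = models.DateTimeField(auto_now=True)

        class Meta:
            abstract = True


    class Article(TimeStampModel):
        """ Any model inheriting from TimeStampModel gets the fields for free. """
        title = models.CharField(max_length=100)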
Python Code Obfuscation Contest
In this certain-to-be-controversial talk idea, the speaker would solicit Pythonistas to submit a single arcane Python code module that would have to display the text of "Although that way may not be obvious at first unless you're Dutch." There would be an 'Expert' category which would forbid the eval/exec functions. The "Anything Goes" category would allow use of eval/exec. The conference talk would be where the speaker announces the winners and comments on the brilliant insanity of the submissions.
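Purely for illustration (and only my own guess at what an entry might look like), here is the sort of thing I'd expect in the 'Expert' category - no eval or exec, just abuse of the standard library's this module, which keeps the Zen ROT13-encoded in this.s along with its decode table in this.d:

    import this  # importing also prints the full Zen, which is part of the joke

    zen = "".join(this.d.get(c, c) for c in this.s)
    print(zen.splitlines()[15])
    # Although that way may not be obvious at first unless you're Dutch.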
Django + Flask + Pyramid: A demonstration of useful things you can do with WSGI
At PyCodeConf Armin Ronacher showed how, with WSGI, he can run Django, Flask, and Pyramid all from the same server on the same domain. This surprised a lot of people, including me, and I want to see more of what Armin was talking about. I don't want any theory. I don't want anything obscure. I just want meaty bits I can implement the day after I hear the talk.
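I don't know exactly how Armin wires his setup together, so treat this as my own hedged sketch of the underlying idea: Django, Flask, and Pyramid each hand you a plain WSGI application object, and a tiny dispatcher can route between them by path prefix from a single server. The app objects mentioned in the comments are placeholders for your own projects:

    class PathDispatcher(object):
        """ Route each request to a WSGI app based on its URL prefix. """

        def __init__(self, default_app, mounts):
            self.default_app = default_app
            self.mounts = mounts  # e.g. {"/flask": flask_app, "/pyramid": pyramid_app}

        def __call__(self, environ, start_response):
            path = environ.get("PATH_INFO", "")
            for prefix, app in self.mounts.items():
                if path.startswith(prefix):
                    # Shift the prefix onto SCRIPT_NAME so the mounted app
                    # builds its URLs relative to the mount point.
                    environ["SCRIPT_NAME"] = environ.get("SCRIPT_NAME", "") + prefix
                    environ["PATH_INFO"] = path[len(prefix):] or "/"
                    return app(environ, start_response)
            return self.default_app(environ, start_response)

    # application = PathDispatcher(django_app, {"/flask": flask_app, "/pyramid": pyramid_app})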
Zen of Python
Richard Jones gave his version of the talk at PyCon AU, and I want to hear other opinions about it. I'm happy to hear an expert give his view, and I would also be delighted to hear how a beginner (or relative beginner) feels about it.
Websites and OO Design Concepts: A Tutorial
For beginners, I would love to see a talk that goes through a list of OO principles and, as each one is discussed, presents examples designed in the context of a web site: how to do things right, plus identified anti-patterns. The web angle would be a good way to get the incoming Python web crowd to attend and identify with the issues raised.
Friday, September 23, 2011
Profiles: Breaking Normalization
In the summer of 2010 I either saw this pattern or cooked it up myself. It is specific to the Django profiles system and helps me get around some of the limitations/features of django.contrib.auth. I like to do it on my own projects because it makes so many things (like performance) so much simpler. The idea is to replicate some of the fields and methods of the django.contrib.auth.models.User model in your user profile(s) objects. I tend to do this on the email, first_name, and last_name fields and the get_full_name method. Sometimes I also do it on the username field, but then I ensure that the duplicated username is un-editable in any context.
Sure, this breaks normalization, but the scale of this break is tiny. Duplicating four fields of up to 30 characters each, for a total of 120 characters per record, is nothing in terms of data compared to the mess of doing lots of profile-to-user joins on very large data sets.
One more thing, I've found that most users don't care about or for the division between their accounts and profiles. They are more than happy with a single form, and if they aren't, well you can still use this profile model to build both account and profile forms.
Alright, enough talking, let me show you how my Profile models tend to look:
    from django.contrib.auth.models import User
    from django.db import models
    from django.utils.translation import ugettext_lazy as _


    class Profile(models.Model):
        """ Normalization breaking profile model authored by Daniel Greenfeld """

        user = models.OneToOneField(User)
        email = models.EmailField(_("Email"), help_text=_("Never given out!"), max_length=30)
        first_name = models.CharField(_("First Name"), max_length=30)
        last_name = models.CharField(_("Last Name"), max_length=30)

        # username field notes:
        #   used to improve speed, not editable!
        #   Never changed after original auth.User and profiles.Profile creation!
        username = models.CharField(_("User Name"), max_length=30, editable=False)

        def save(self, **kwargs):
            """ Override save to always populate changes to the auth.User model """
            user_obj = User.objects.get(username=self.user.username)
            user_obj.first_name = self.first_name
            user_obj.last_name = self.last_name
            user_obj.email = self.email
            user_obj.save()
            super(Profile, self).save(**kwargs)

        def get_full_name(self):
            """ Convenience duplication of the auth.User method """
            return "{0} {1}".format(self.first_name, self.last_name)

        @models.permalink
        def get_absolute_url(self):
            return ("profile_detail", (), {"username": self.username})

        def __unicode__(self):
            return self.username
All of this is good, but you have to be careful with emails. Django doesn't let you duplicate existing emails in the django.contrib.auth.models.User model, so we want to catch that early and display an elegant error message. Hence this Profile form:
    from django import forms
    from django.contrib.auth.models import User
    from django.utils.translation import ugettext_lazy as _

    from profiles.models import Profile


    class ProfileForm(forms.ModelForm):
        """ Email validation form authored by Daniel Greenfeld """

        def clean_email(self):
            """ Custom email clean method to make sure the user doesn't
                use the same email as someone else """
            email = self.cleaned_data.get("email", "").strip()
            if User.objects.filter(email=email).exclude(username=self.instance.user.username):
                self._errors["email"] = self.error_class(
                    ["%s is already in use in the system" % email])
                return ""
            return email

        class Meta:
            fields = (
                'first_name',
                'last_name',
                'email',
            )
            model = Profile
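One more piece of wiring that isn't part of the pattern above, just an assumption on my part about how you might hook it up: a post_save signal so every new auth.User automatically gets a Profile row, plus Django's AUTH_PROFILE_MODULE setting so user.get_profile() returns it.

    from django.contrib.auth.models import User
    from django.db.models.signals import post_save

    from profiles.models import Profile


    def create_profile(sender, instance, created, **kwargs):
        """ Create the denormalized profile right after the User row appears. """
        if created:
            Profile.objects.create(
                user=instance,
                username=instance.username,
                email=instance.email,
                first_name=instance.first_name,
                last_name=instance.last_name,
            )

    post_save.connect(create_profile, sender=User)

    # settings.py
    # AUTH_PROFILE_MODULE = "profiles.Profile"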
Wednesday, September 14, 2011
History of my most used shell commands
I ran this a few years back and I'm running it again today.
What is interesting is that compared to the older history, git has replaced svn, pip has replaced easy_install, and virtualenv has now completely subsumed buildout. Oh, how the mighty have fallen!
    $ history | awk '{a[$2]++ } END{for(i in a){print a[i] " " i}}'|sort -rn |head -n 20
    209 git
    123 python
    34 ls
    31 mate
    18 cd
    14 pwd
    9 hg
    8 touch
    7 rm
    6 cp
    5 pip
    5 mv
    5 django-admin.py
    4 mkvirtualenv
    3 mysql
    3 mkdir
    3 bash
    2 deactivate
    2 add2virtualenv
    1 workon
Tuesday, September 13, 2011
Quick conferences report: Presentations
My lovely Fiancée, Audrey Roy, was invited to be the opening keynote speaker at both PyCon Australia on Diversity in Python (video) and PyCon New Zealand on Python on the Web.
As for me, I managed to get talks into both of those conferences AND DjangoCon US. I co-presented three of them, and I share all credit for success with my cohorts. The talks I gave at the conferences were (I'll post videos when they go up):
Confessions of Joe Developer (PyCon Australia, DjangoCon US)
The genesis of this talk was a lightning talk I gave at the Hollywood Hackathon. It is a talk about admitting that we mere mortals need to ask questions, take notes, and follow good practices in general. I gave it again at LA Django this summer, extending it to a full length talk complete with lots of technical content. At PyCon Australia I toned down the technical content because I was nervous, and while the response was positive, it could have been much better. So for DjangoCon I ramped the tech-talk back up and it worked much better. I've now given the talk 4 times, and I'm leaning towards retiring it.
Python Worst Practices (PyCon New Zealand)
This talk grew out of a SoCal Piggies lightning talk which I gave for the purpose of humor. Often we as Python developers are so smug about the clarity of the language that we don't realize just how easily we can obfuscate code. In fact, I contend that Python is fully capable of a code obfuscation contest. This talk rejects a lot of crazy practices I've either done myself or had to debug from other people's work. For New Zealand I added a ton of content and tested things pretty diligently. The variable naming pages stumped some people I really respect and I was quite happy with that result.
Django Packages Thunderdome (co-presented with Audrey Roy, DjangoCon US)
Audrey did most of the work for this presentation. In this talk I helped review a horde of Django packages across 7 different categories. It was nerve-wracking because every part of our talk would get judged - but Audrey kept things really positive and made it clear we were providing constructive criticism. I think she got her message across to most people, and more importantly, it got a lot of people thinking about what ought to be normal community standards. I'll probably blog on those community thoughts and statements later, but I think Audrey (with help from me) accomplished what she aimed to do.
Advanced Django Form Usage (co-presented with Miguel Araujo)
Some time ago Miguel befriended me and helped resurrect the django-uni-form project. He graciously agreed to help me present on Django Forms and we decided to make the talk as sophisticated as possible. Previous Django form talks have been good, but focused on the fundamentals, and we wanted to do something really different. This talk was hard because Miguel and I were on opposite sides of the planet, so we did a lot of github pulls and pushes. In both doing research and presenting, Miguel did an unbelievably good job and I hope he does more of this in the future. The response was extremely positive and I'm certain that our plan of getting our notes/work/transcript into Django core is well on its way.
Ultimate Django Tutorial Workshop (DjangoCon US)
I got about 10 professional Django experts in a room, including Django core developers, and had them help me coach nearly 20 people through a modified version of the Django tutorial. Students seemed to learn tons, lots of socializing happened thanks to some happy accidents, and the experts got a chance to really see where the Django tutorial needs work. PyLadies organizer Esther Nam spent her sprint days working on something that ties the slides into the Django Tutorial - and for now I'm holding off on sharing my work until she says her work is done.
Summary
These were amazing opportunities to speak and will hopefully make a difference. I wouldn't have traded all of this for the world. It was a lot of work, and I doubt I'll ever go quite at this pace again. My plan is to do fewer talks and make them much better.
Sunday, September 4, 2011
Responses to Github is my resume
Shortly after I posted Github is my resume the responses started coming in. They seemed to fall into these categories:
"Github is a portfolio, not a resume!"
I think this is rather valid, being a much more accurate description of the role that Github and other social coding sites are having in getting developer jobs these days. Two of the more choice responses in this category were posts by Gina Trapani and Andy Lester.
"In X years of hiring, I've never requested source code along with the resume!"
This comment raised the issue that personality, location, writing skills, etc. were important. I agree that being able to not annoy your team into losing productivity is important, but it doesn't negate the frequent desire to be able to review the work of potential hires. Ignore the code at your own risk.
"Using only binary for calculations, how many ping pong balls fit in your car?"
A couple people said they prefer to ask programming questions or challenging problems in interviews to seeing portfolios of code. Personally, I think a few programming questions are okay, but in my opinion 'challenging problems' all too often means sticking your interviewees with puzzles and trick questions that have nothing to do with the day-to-day work of being a developer.
"Github is a portfolio, not a resume!"
I think this is rather valid, being a much more accurate description of the role that Github and other social coding sites are having in getting developer jobs these days. Two of the more choice responses in this category were posts by Gini Trapini and Andy Lester.
"In X years of hiring, I've never requested source code along with the resume!"
This comment raised the issue that personality, location, writing skills, etc were important. I agree that being able to not annoy your team into losing productivity is important, but it doesn't negate the frequent desire to be able to review the work of potential hires. Ignore the code at your own risk.
"Using only binary for calculations, how many ping pong balls fit in your car?"
A couple people said they prefer to ask programming questions or challenging problems in interviews to seeing portfolios of code. Personally, I think a few programming questions are okay but in my opinion 'challenging problems' all too often means sticking your interviewees with puzzles and trick questions that all too often have nothing to do with the day-to-day work of being a developer.
Tuesday, August 23, 2011
Github is my resume
I remember the first time I heard that statement - a couple years back Eric Florenzano said it to me on Twitter when I posted my resume publicly and asked for opinions. At the time I laughed at his statement, because it felt like naive arrogance to ditch the idea of a resume and 'traditional' social networking like Facebook and LinkedIn. How wrong I was...
Before I go any further, this isn't to say that education, job history, and references aren't important in getting jobs that utilize a lot of Python. They are important, but I think they go more towards shaping you as a person than getting a job. So if you want access to Python jobs (and possibly other open source languages), you need to be able to show working code. Why is this the case? I can think of several reasons:
- It is much harder to forge your style of code, comments, tests, and docs in a repo than it is to make false claims on a resume.
- Development team managers don't take LinkedIn references seriously because of how often we see them gamed.
- Code gives us a body of work employers (including me) can use in order to help evaluate your skill and ability levels.
Python employers want to review your code in a public repo.
That puts the pressure on you doesn't it? Now you've got to show working code. One extremely unethical way to do that is to copy/paste other people's code into your own repo and claim it as your own. The problem with that is real reviewers know good code doesn't just magically appear in gigantic chunks. Which I'll sum up with another statement:
Python employers are smart enough to read your commit log.
So as a beginner, what can you do? A lot of shops will want to see your code, but if you put up your early code, doesn't that mean they'll see your ugly, mistake-ridden work? Yes they will - but if you keep at it with tutorial examples you are working through, whatever pet project you cook up, or even patches you submit to various existing projects, they'll see how your code improves. I am much more inclined to hire a person able/willing to learn than a jaded expert who doesn't want to grow - which is why I always try to think like an eternal beginner. Which brings me to my third statement:
Python employers are willing to hire bright, hungry developers willing to learn.
Getting away from employment, let's talk a little about the Python development community. This community is a meritocracy with amazing foresight. Passion for code and/or natural talent is often recognized before skill is achieved - but only if you show the community you are learning. Get your code onto Github, or BitBucket, or SourceForge so it is seen, and keep at it! Try to commit every day and if that isn't possible, then once a week!
Because if you write code every day or every week, over time your code will get better, you'll also be able to demonstrate a consistent body of work, and your passion for software development will be obvious. Also, try to comment your code as much as possible.
One good trick is to put your ongoing notes in a repo. I do it myself at https://github.com/pydanny/pydanny-event-notes. My early notes are very, very different from my later notes. Often embarrassingly so, but to a Python employer I'm pretty certain they are a useful reference into just how I think.
Github, not LinkedIn
LinkedIn (and Facebook, Google Plus, et al) are places to define your profile and nothing more. That profile should include a link to your code. Python employers will be looking for links to your code, not for any sort of networking you do on those sites. Employers get annoyed by 'developers' who excessively network but have no links to code samples on Github or other similar sites. If there is no code to find, it means we can't see your work, your thought processes, or your passion.
One common technique you see from a lot of Python developers is posting quick links to their projects and efforts on Github using various social networks. You can and should do the same.
You make connections by showing you want to learn
Sunday, July 31, 2011
The Ultimate Django Tutorial Workshop
That is a big statement to make as a title of a class/workshop blog post. However, in this case I believe I'm fully justified because this is going to be awesome. Here's why:
1. The teachers are beyond incredible
In the course description it says I'm the teacher and I have lab assistants. In retrospect, what I should have said is, "Daniel Greenfeld is organizing a workshop taught by the people he respects and admires".
Think I'm kidding? Look at just some of the names of people I've got lined up to participate:
- Jacob Kaplan-Moss, Benevolent Dictator For Life of Django
- Russell Keith-Magee, President of the Django Software Foundation
- Audrey Roy
- Jacob Burch
- Katharine Jarmul
- Corey Bertram
- Sandy Strong
- Jonas Obrist
- Christine Cheung
- Shimon Rura
2. The teacher to student ratio is going to be really small
This is not going to be a room with a few instructors and umpteen students in it. If the class size gets big, I'm going to bring in more teachers. I'll cajole, plead, and do whatever I must to get them in the room. I don't want anyone left behind!
I want a ratio of 5 students to each teacher.
3. Class implemented with a lot of lessons learned
I've taught a bunch. So have a number of the instructors I've lined up. We know which parts of the tutorial are important to focus on, and which parts should be visited by students later on their own. This means you learn the critically important parts that get you kick-started as a Django developer.
One thing we'll try to squeeze in is deployment to one of the new Django hosts such as Djangozoom.com, Gondor.io, and ep.io. In fact, Shimon Rura, one of the co-founders of Djangozoom, is participating as an instructor.
4. We're all volunteers
All the proceeds earned by the instructors for this course will be going to the Pyladies Sponsorship program. That is important for two reasons:
- Your attendance will help Pyladies sponsor more women to learn Python in the future.
- The teachers are doing this because they want to do it. They want you to learn Django.
Officially the tutorial ends at 12:30PM and we should be done. Sometimes though we stumble on things and we don't finish with the rest of the class (like me in my last C programming class). But after a lunch break I'm planning on grabbing some space and working through the rest of the tutorial with anyone who didn't complete the class.
6. The tutorial opens DjangoCon
The tutorial starts on Monday, September 5, 2011 at 9:30 AM at the Hilton Portland and Executive Tower at 921 SW Sixth Avenue in Portland, Oregon, USA. If you do plan on attending DjangoCon and are new to the framework, what a great way to get started!
7. You don't have to attend DjangoCon itself to take the tutorial
Tickets for the event are being sold separately from the conference. So if you can't take off more than one day of school or work, this is a great way to capitalize on DjangoCon.
Convinced? Here is what you need to know and do to get signed up:
- Get a laptop running Windows 7, Mac OS X 10.5 or higher, or Ubuntu.
- If there is no Python installed, install Python 2.7.1. DO NOT INSTALL PYTHON 3!!!
- Make sure you have a grounding in Python. If you are new to Python you need to have finished at least half the chapters in learnpythonthehardway.org before you attend. If you come to this event with no prior Python experience you will be left behind.
- Buy a ticket!
Sunday, July 17, 2011
Amtrak Review
Audrey and I got invited to a wedding in the Pacific Northwest. And mostly to try something new, we decided to take Amtrak's Coast Starlight from Los Angeles' Union Station to Seattle, Washington's King Street Station. There was some incredible awesomeness about the trip, and a lot that... wasn't so awesome.
No Internet (Bad)
Thanks to encroaching deadlines we planned to take advantage of Amtrak's wireless Internet, otherwise we would have flown. Each direction took 36 hours, which is basically two days. Which meant four working days on the train. Sure, we would have liked to have sat back and just enjoyed the journey, but for us that wasn't an option on this particular trip.
Unfortunately, both the ride there and the ride back lacked internet. In the first case the car with it wasn't part of the train. On the way back the car was part of the train but the Internet was nonfunctional. Which cost us nearly 4 days of work.
We did manage to use our cell phones for tethering, but coverage on rail lines is not that good. Each time we hit a town we connected and caught as much as we could. It wasn't ideal, but it worked. We're still trying to dig out from under work time lost.
What you should know is that more experienced Amtrak passengers all said that they've never had working Internet. There is always a problem on long journeys on the Coast Starlight. Which means don't use the train for business.
The Room (Okay)
We aren't big people and are fairly flexible and athletic. So the roomette we got was quite snuggly. Larger people or those unable to maneuver in tight spaces will probably be uncomfortable.
However, I think if I did this again I would get a bigger room. They have restrooms built in, but I think I would use external bathrooms so as to keep the room smelling nicer.
Food (Okay)
The breakfast eggs and dinner steak rocked. So did the ribs. The coffee was good. Everything else was mediocre at best.
The juices they served tasted like they lacked any connection to natural substances.
Room Attendants, Conductors, and Lounge Car attendant (Good)
The room attendants were amazing. They worked their butts off and got no sleep. I made sure to tip them well. The conductors were also extremely helpful. The first lounge car attendant, a guy named 'CJ', was incredible - he lacked a real lounge car and simply made do.
Diner Car Service (Unacceptably Bad)
The service in the Dining cars was uniformly bad. They were rude, obnoxious, and did their absolute best to avoid eye contact. One meal, when we waited two freaking hours for our food while others got seated and finished after we ordered, was unbelievable. If we asked about our food or drink refills we got snapped at. In retrospect, we should have gotten up and left - or filed a formal complaint.
After several attempts to weather the bad service we took all meals in our private roomette.
Scenery (Outstanding)
This is the part of things that was incredible. The Coast Starlight goes through some amazing scenery, from the beaches of Southern California to the incredible forests and mountains of the Pacific Northwest. Pictures just don't do it justice - you have to see it sometime for yourself.
Conclusion (Okay)
The lack of Internet was annoying. The abominable dining car service was infuriating and Amtrak should give their dining car servers some basic lessons in proper restaurant hospitality.
Those things said, because the scenery was just that lovely, I might consider taking another multi-day train ride at a time when I'm not trying to hit deadlines - and I would avoid the dining car at all costs.
Python and Django class/hackathon!
The Los Angeles Python community (LA Django and LA PyLadies) is meeting in Santa Monica on Saturday, July 23rd to teach Django and hack on all things Python. The day will start with a Django class based on the official Django tutorial, then turn into a general hackathon, and finish up with lightning talks.
Leading the event is noted Pythonista Katharine Jarmul. As Katharine is giving the talk on web scraping at DjangoCon US, I'm hoping we can get her to give a lightning talk on the subject.
Learning Django
Sandy Strong will lead the effort to teach people the fundamentals of Django. Besides all things Django and devops, Sandy is presenting the testing talk at DjangoCon US. And if that isn't good enough for you, she won't be alone teaching - there will be a bunch of developers experienced with Django on hand to provide her with support.
Even if you already know Django, please come and hang out for the first half! You can either help out others or work on your own project.
Hacking Python and Django
The second half of the day will be about working on whatever you want. If you are new to Django and want to finish the tutorial, go right ahead. Or you can work on your own pet Django or Python project. In fact, I know that there will be work on the nascent Pyramid project intended to represent the entire Los Angeles Python community.
Lightning Talks
We'll finish with lightning talks. Several people who attended the day will get the chance to talk for 5 minutes or so about a project, tool, or cause they wanted to share. If they go too long we start applauding until they step down.
Social Hour
After another awesome day of Python in LA, everyone will cool down by hanging and chatting over drinks. If you're lucky, maybe you'll get to see me do a drunken one-handed cartwheel where I don't spill a drop of what I'm holding.
My role
I'll be there in my normal role of setting up tables and chairs, helping during the class portion, and hacking on some Packaginator stuff in preparation for the forthcoming August/September Packaginator sprints at PyCon AU, Kiwi Pycon, and DjangoCon US.
Sponsors
This is all possible thanks to the sponsorship of Mahalo, Cars.com, and the Python Software Foundation.
Sign up!
Tickets are selling out really fast! Sign up now!
Tuesday, July 12, 2011
Normalization noitazilamroN
Since pretty much the start of my career as a developer back in the 1990s, one skill I've carried from job to job has been an understanding of relational databases. Over the years I've worked with Foxpro, Access, Oracle, SQL Server, MySQL, Sqlite, and now PostgreSQL.
Interestingly enough, database normalization comes instinctively to me. I knew about complex SQL joins and unions and subqueries before I read anything about normalization. As I read up on normalization, it was rather exciting to discover that my natural instinct during database design was to hit the fourth or fifth normal form without thinking about it. And since for most of my pre-Python career the number of records I dealt with was measured in the tens of thousands, normalization was a great tool. I was aware that my record sets were smallish, and good database design kept my stuff running fast.
Relational Databases are not a panacea that lets you overcome bad code.
It surprises me how many developers I've encountered over the years who complained about the performance issues of normalized data but didn't understand normalization. Instead, they refused to follow any sort of standard, every table seemed to duplicate data, and every query required complex joins for trivial data calls. And usually with sets of records in the count of tens of thousands, not millions or billions. The end result is projects that were/are unmaintainable and slow, with or without normalization.
NoSQL is not a panacea that lets you overcome bad code.
Which brings me to the current state of things. NoSQL is a big thing, with its advantages being touted in the arenas of speed, reliability, flexible architecture, avoidance of object-relational impedance mismatch, and just plain ease of development. I've spent a year spinning an XML database stapled on top of MS SQL Server, years using ZODB, and a woefully short time working on MongoDB projects. The sad truth about XML, ZODB, and MongoDB is that, just like relational databases, they have problems. And just as with relational databases, the worst of it stemmed not from any issues with the data systems, but from developers and engineers. Like any other tool, you can make terrible mistakes that lead to unmaintainable projects.
So for now, like most of the developers I know, what I like to do is as follows:
- Create a well-normalized database, preferably using PostgreSQL.
- Cache predicted slowdown areas in Redis.
- Use data analysis to spot database bottlenecks and break normalization via specific non-normalized tables.
- Use a queue system like Celery or even cron jobs to populate the non-normalized table so the user never sees anything slow (see the sketch after this list).
- Cache the results of queries against the specific non-normalized tables in Redis.
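Here is a hedged sketch of the fourth and fifth bullets, assuming Celery 2.x-style tasks and the redis-py client. The SalesSummary model, its rebuild() manager method, the cache key, and the timeout are all made-up names for illustration, not from a real project:

    import json

    import redis
    from celery.task import task

    from reports.models import SalesSummary  # hypothetical non-normalized table

    REDIS = redis.StrictRedis(host="localhost", port=6379, db=0)
    CACHE_KEY = "sales-summary"


    @task
    def rebuild_sales_summary():
        """ Runs off the request path (celerybeat or cron) to repopulate the table. """
        SalesSummary.objects.rebuild()  # hypothetical manager method doing the heavy SQL
        REDIS.delete(CACHE_KEY)         # force the next read to cache fresh data


    def get_sales_summary():
        """ Read-through cache so users never wait on the denormalization query. """
        cached = REDIS.get(CACHE_KEY)
        if cached is not None:
            return json.loads(cached)
        data = list(SalesSummary.objects.values("product", "total"))
        REDIS.set(CACHE_KEY, json.dumps(data))
        REDIS.expire(CACHE_KEY, 300)  # keep the cached copy for five minutes
        return data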
The end result is something with the rigidity of a relational database but with the delivery speed of a key/value database. Since I work a lot in Django this means I get the advantage of most of the Django Packages ecosystem (at this time you lose much of the ecosystem if you go pure NoSQL). You can do the same in Pyramid, Rails, or whatever. Maybe it's a bit conservative, but it works just fine.