Wednesday, October 29, 2008

Training failures

I remember reading once that until the 1920s you could take swimming lessons without ever entering the water. You could eventually teach swimming classes yourself without ever having swum! You would lie with your stomach on a stool and practice various strokes and techniques. These days we realize how stupid that approach is: while exercises to learn fundamentals can be taught outside the real environment, until you actually do something you don't really know it.

Yet we still use this method in the computer age and, worse, in self-defense classes.

At some point in the '90s, not enough noise was made about the fact that Cisco certifications could be had without ever touching a computer. Heavy books were studied, expensive classes were taken, and a paper exam was filled out. The repercussions were felt up and down company wallets and finally in Cisco's sales, so things changed. Nevertheless, my first IT job had a certified Cisco engineer who wasn't just bad, he was disastrous. He lasted long enough to cause a catastrophe and was booted before more damage could be done.

Does this still happen today in the IT industry? Heck yeah. Finding computer security experts who can do manual penetration tests of even the most basic cross-site scripting variety seems to be harder than finding people with degrees in IT security policy. They have paper degrees and no experience in what they are defending against! How can you defend against something you don't know how to do? Easy: you can't!
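To make the point concrete, here is a minimal sketch of the kind of hands-on check I mean: a naive reflected-XSS probe. The payload, helper name, and sample responses are all invented for illustration; a real test would cover many payloads, encodings, and injection contexts.

```python
# A naive reflected-XSS check: inject a marker payload and see whether
# the response echoes it back without escaping. Illustration only.
PAYLOAD = '<script>alert("xss-probe")</script>'

def is_reflected_unescaped(response_body, payload=PAYLOAD):
    """Return True if the payload appears verbatim (unescaped) in the body."""
    return payload in response_body

# Simulated server responses for a search page:
vulnerable = '<p>You searched for: <script>alert("xss-probe")</script></p>'
escaped = ('<p>You searched for: '
           '&lt;script&gt;alert(&quot;xss-probe&quot;)&lt;/script&gt;</p>')
```

Someone who has never built a probe like this, even a toy one, is defending against an attack they have never seen happen.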

Does this happen in the self-defense world? Oh yeah. People learn knife, stick, and rape defense without knowing the methods of attack used. Then, feeling empowered, they ignore their instincts about not going into scary places and get themselves promptly assaulted. If you want to learn how effective your self-defense is, take this simple test:
  • Go buy a cheap, white t-shirt.
  • Get a brand new, reasonably athletic student who isn't used to attacking the way everyone in your dojo/kwoon/studio has been taught to attack.
  • Give the newbie a big, black marker.
  • Have them try to mark your arms and t-shirt before you can take the marker away. Tell them to go as fast as possible.
I've seen very skilled karate, kung-fu, hapkido, eskrima, and jiu-jitsu (Brazilian and Japanese) people all get marked up. The smart, experienced people will ask for or improvise a longer weapon and won't take it hard when they get marked.

Moral of the story: academic understanding is not a replacement for real-world experience or properly designed simulated experiences.

Wednesday, October 22, 2008

Morning brainstorm about FeedFeeder v2

I've been working on a .plan for FeedFeeder v2, but for some reason things were not really coming together. Something seemed off. In retrospect, what was off was that my proposed solution didn't immediately correct the current problem with the otherwise excellent current version of FeedFeeder. And that problem is that any anomalous feed forces you to write and deploy code (i.e., plugins) to correct the anomaly.

Sure, the Van Rees brothers had agreed that a future stage would correct the problem via a TTW (through-the-web) function, and we would even consider a handy AJAX-powered GUI to make it intuitive. However, the issue is that this would happen at a future stage, not at a stage that worked with my current use case: the customer gives me feeds that they want working on nasascience.nasa.gov today. On the financial side, how could I get NASA to pay for work on FeedFeeder v2 if it doesn't correct our current issues out of the box?

Well, this morning the answer came to me. The solution was rather clear and simple. Rather than a sophisticated plug-in system, what about a definition system? Currently FeedFeeder provides two content types:
  • FeedFolder:
    • includes a field listing the feeds consumed by this folder
    • and is a container for holding feed items

  • FeedItem:
    • individual feed content items provided by the feeds defined in the FeedFolder
My solution proposes adding a third content type called 'FeedDefinition' to handle defining of feeds:
  • FeedFolder:
    • a container for holding feed definitions and feed items

  • FeedItem:
    • individual feed content items provided by the feeds defined in the FeedFolder's FeedDefinitions

  • FeedDefinition:
    • Defines the source of a feed and how to handle the feed
A FeedDefinition would likely include the following fields in addition to the defaults:
  • Source:
    • URI of the feed source

  • FeedTitle:
    • default: standard
    • otherwise define location of feed title based on FeedParser output

  • FeedDescription:
    • default: standard
    • otherwise define location of feed description based on FeedParser output

  • ItemTitle:
    • default: standard
    • otherwise define location of item title based on FeedParser output

  • ItemDescription:
    • default: standard
    • otherwise define location of item description based on FeedParser output

  • Replacements:
    • default: empty
    • lines field that shows what text needs to be replaced with other values.
    • example: 'www.nasa.gov -> nasawww-origin1.hq.nasa.gov'
When a FeedFolder has its update_feed_item action triggered, it would:
  1. Iterate through its FeedDefinition children.
  2. Fetch and parse each feed according to the rules in its FeedDefinition.
  3. Add the parsed entries to the FeedFolder as FeedItems.
A supplementary view for FeedFolder would be provided that does not display the FeedDefinitions.
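The Replacements field above can be sketched in plain Python. This is a hypothetical illustration of the proposed design, not real FeedFeeder code; the class, helper, and sample URL are invented, though the replacement rule itself is the example from the field list.

```python
from dataclasses import dataclass, field

@dataclass
class FeedDefinition:
    """Hypothetical sketch of the proposed content type."""
    source: str                                        # URI of the feed source
    replacements: list = field(default_factory=list)   # lines like 'old -> new'

def apply_replacements(text, replacement_lines):
    """Apply each 'old -> new' rule from the Replacements lines field."""
    for line in replacement_lines:
        old, _, new = line.partition('->')
        text = text.replace(old.strip(), new.strip())
    return text

# Example definition using the replacement rule from the field list above
# (the feed URL is made up for illustration):
nasa = FeedDefinition(
    source='http://www.nasa.gov/example.rss',
    replacements=['www.nasa.gov -> nasawww-origin1.hq.nasa.gov'],
)
link = apply_replacements('http://www.nasa.gov/topics/earth/', nasa.replacements)
```

During step 2 of the update flow, each fetched item would pass through something like apply_replacements before being saved as a FeedItem.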

Comments? Thoughts? Anyone think I should move my blog to a place that handles comments better?

Monday, October 20, 2008

FeedFeeder 2 plan

I started working on it last week: basically an outline of how things will be set up, and our goals. I meant to finish it and actually start coding on Sunday, but then spent a good chunk of the day working on the house, raking leaves, and cycling.

Sometimes I wish I could be one of the guys who can code from wake up to bed time. :P

Normalization, Non-Normalization, Denormalization

I don't do much raw SQL anymore, thanks to tools like SQLAlchemy and the rather proprietary, object-oriented ZODB. However, when I do interact with SQL databases I always go for fifth normal form, because it just seems right. I've dealt with more than enough non-normalized databases in my time to feel completely justified in this response to bad design.

The worst cases of non-normalized data in my experience have been with financial transactions or user data. Once I dealt with a financial database that tracked the amounts in a pool of money in the same table as the historical transactions against that pool. Sound confusing? You bet, especially since determining which money was real and which was historic was done in an amazing piece of undocumented spaghetti code.

I've admittedly created two monsters of this sort myself. One was my first database design in a professional environment; as the project went on I realized my mistakes and tried to fix them. The other was a database design where I took an application running by itself and tried to create an environment where multiple people could have instances of it. It worked, but it was really hacked together. I didn't know about source control back then, so going back and doing it right was impossible.

So now you understand why I like normalized databases. Of course, once I get something working in fifth normal form, I start considering breaking the rules, and I do so in a systematic way. This is called denormalization. The art of denormalization is knowing when to break the rules of normalization to improve performance and make life easier for anyone touching the project. The key is that when you do, your breakage is clearly identified in developer documentation.

Some places I've found are good for denormalization include financial transactions, report helpers, and tables that track the history of another record.
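Here is a minimal sketch of that kind of deliberate denormalization, using sqlite3 so it stays self-contained. The schema and names are invented: the pool row carries a running balance that is derivable from the transactions table (a normalization violation), kept in sync deliberately and documented as such.

```python
import sqlite3

# Sketch: normalized transactions plus one deliberately denormalized
# column. Table and column names are invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE pool (id INTEGER PRIMARY KEY, name TEXT,
                       balance NUMERIC);  -- DENORMALIZED: derivable from txn
    CREATE TABLE txn  (id INTEGER PRIMARY KEY, pool_id INTEGER,
                       amount NUMERIC,
                       FOREIGN KEY (pool_id) REFERENCES pool(id));
""")
con.execute("INSERT INTO pool VALUES (1, 'grants', 0)")
for amount in (100, -25, 40):
    con.execute("INSERT INTO txn (pool_id, amount) VALUES (1, ?)", (amount,))
    # Keep the denormalized balance in sync in the same unit of work.
    con.execute("UPDATE pool SET balance = balance + ? WHERE id = 1", (amount,))
con.commit()
balance, = con.execute("SELECT balance FROM pool WHERE id = 1").fetchone()
total, = con.execute("SELECT SUM(amount) FROM txn").fetchone()
```

The difference from the non-normalized horror story above is that the redundancy is intentional, kept consistent at write time, and flagged in a comment instead of hidden in spaghetti code.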

It's shocking, though, how often I've run into people who equate non-normalized databases with denormalized databases. Sometimes you get a few newbies who like the sound of 'denormalization' as a word, but in my experience it is normally due to some 'senior developer' who hasn't read a coding book since 1997. You know who I am talking about.

Tuesday, October 14, 2008

Can't seem to use Zope debug mode anymore

It drops me out very quickly on my Mac OS X laptop running 10.5. Going to go home and try it on Ubuntu. Grrr... this is a royal pain.

Update: This was because I had a broken version of readline on my Mac. I ran sudo easy_install readline and now it works.

Sunday, October 12, 2008

Help me with zc.testbrowser

I like zc.testbrowser. Toss in some BeautifulSoup to increase the accuracy of some tests and it's a monstrously useful way to run tests. However...

For the life of me I can't get it to properly handle select fields (single- or multi-select). Once I get the control, I can't seem to set its options as selected.

Any help would be appreciated. This ate way too much of my time. What should have been a trivial test has caused me no end of frustration. The documentation is pretty good, and yet it doesn't seem to cover how to do this sort of thing.

In any case, once answered I plan to put the response in the zc.testbrowser reference card I am cooking up.

Update: Fixed the problem with some help from Aaron Van Derlip. Basically, since zc.testbrowser doesn't do JavaScript, sometimes you have to submit forms and follow links the hard way. I'll be putting that into my upcoming reference card.
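For anyone hitting the same wall, the "hard way" can be sketched with the standard library: build the submission URL yourself instead of relying on the browser to fire a JavaScript-dependent submit. The form action and field names here are invented for illustration.

```python
from urllib.parse import urlencode

# Since zc.testbrowser doesn't run JavaScript, construct the GET request
# a real browser would make on form submission. Names are hypothetical.
def build_submission(action, fields):
    """Build the URL a browser would request when submitting a GET form."""
    # doseq=True expands list values (multi-selects) into repeated params.
    return action + "?" + urlencode(fields, doseq=True)

url = build_submission("/search", {"category": ["plone", "zope"],
                                   "q": "testbrowser"})
# then hand the result to the browser, e.g. browser.open(url)
```

The same idea works for POST forms by sending the encoded body directly rather than clicking the submit control.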

Saturday, October 11, 2008

Day 2 of the 2008 Plone Conference

I was very rested by the time I arrived. I had slept well.

So you want to be a consultant by panel

No, I'm not looking to leave NASA these days. However, I do some side consulting, and even in my day job there are good project control lessons you can learn from the consulting crowd. For example, ideas on recruiting, customer relations, and how to handle billing when you need time to boost skills. There were five Plone board members on the panel, and much wisdom was shared. The board members there:
  • Nate Aune
  • Geir Baekholt
  • Calvin Hendryx-Parker
  • Jon Stahl
  • Matt Hamilton
What makes a great development team by Mike Robinson

He can code, he can manage, he can cycle, he has the most awesome Irish accent, and he taught us about agile programming. He explained why and how it works in great fashion. I'm completely sold, and have been for some time, but he gives great arguments for it, or at least for evaluating how best to handle this sort of thing.

Future of the Plone user experience by Alexander Limi

Limi has strong opinions, and what he said may not reflect what ends up happening. He wants Deliverance, strong media handling, more widgets, better Kupu, z3c.form improvements, better validators, and an easier way to handle templates. All good stuff.

And he had a much, much better version of what I've been aiming at with my customer editor view in NASA Science. Great minds think alike, although I must admit his method is much better than my own.

Simplifying Plone by Martin Aspeli

He wants to do it with a chainsaw and make Plone more approachable from a developer's point of view.

Plone has a number of embarrassments, with issues ranging from rich media support to import/export problems with the database. It is hard to learn, and skinning is a challenge. He thanked Lennart for pointing out what Zope did wrong, and noted how developers expect things to be easy.

He wants to revamp much of this for Plone 4. Some quick bullets:
  • Follow the guiding lights shown by some of the other Python web frameworks.
  • Make learning follow a constant series of small humps, not huge ones followed by a plateau followed by an insurmountable wall.
  • Make certain things really easy to do (logo, branding, content types, etc.).
  • Create one true way and remove the other ways.
  • Embrace through-the-web development, but allow filesystem round-tripping for deployment and collaboration.
NASA Science case study by Katie Cunningham and Daniel Greenfeld

Yes! I helped present!

So we presented and apparently did very well. We had a few luminaries in the room, including a couple of Plone board members. I think we nailed all the points we wanted to make, which was a very awesome thing to do. We plan to send the slides off to Alex Clark shortly so we can have them online for everyone to see.

Please go to NASA Science and leave feedback asking to release Umlizer to the world!

Evening Agile Development workshop by Mike Robinson

Two more hours of the awesome Mike Robinson ended the day for me. He gave a rock-solid lecture and then we played a game to support his statements. It was a fun game and learning was had by all. That said, I think this would have been better done as part of a day-long class, not at the end of a long day of conferencing.

General Socializing

Where to begin? I had great fun with so many incredibly awesome people. The quick and dirty list:
  • Vernon Chapman
  • Tarek Ziadé
  • Alex Clark
  • Amy Clark
  • Matt Bowen
  • Jon Stahl
  • Nate Aune
  • Katie Cunningham
  • Gary Burner
  • Joel Burton
  • My whole agile development "team"
If I missed you, let me know!

Thursday, October 9, 2008

Day 1 of the 2008 Plone Conference

The 2008 Plone conference actually started on Wednesday; Monday and Tuesday were training days. I'll be blogging about those later. Unlike the last conference, held in Italy, this one is here in Washington, DC. Since I am not being a tourist, I'm doing most of my posts here on my technical blog.

First off, I had suffered from my periodic insomnia Tuesday night, so this day was a matter of a lot of water, light eating, and trying to keep cool in order to stay alert.

Registration

I had volunteered to work the registration desk as part of my effort to contribute to the conference. I arrived at 7 am. Others followed shortly, and soon enough Plonistas began to arrive; in no special order I got to meet or badge quite a few people as they came in. I was not alone as a volunteer for registration! Also there were:
  • Emma Campbell
  • JoAnna Springsteen
  • Steve?
  • Paul Boos
  • Amy Clark
  • Matt Bowen
Keynotes

I drifted in and out a little, but mostly I was focused on registration. I figured I could just catch up on plone.tv. Not perfectly ideal, but volunteering is good for the soul. ;) Then it was off to the talks!

Feed the Masses by Paul Bugni

This was about Vice, a Plone 3.x syndication tool that lets you output content types as RSS 2.0, Atom, and even RSS 1.0. It handles recursion and enclosures and much more. It looks like a wonderful tool for our efforts, but it's not quite production-ready yet. I may spring for it.

I like the Zope Component Architecture approach! Awesome!

I played timer guy for the nefarious Matt Bowen.

Theming a Plone Site from Start to Finish by Rob Porter

Rob Porter of WebLion presented on material I had just taken a 2 day class on, so in retrospect I should not have gone. On the other hand, I got a few gems and perhaps getting a reinforcement on the class was good. Again I played timer guy for the nefarious Matt Bowen.

Software in the Cloud Speech violates freedom by Bradley Kuhn

Awesome speech! The cloud is dangerous because, really, who owns our emails and tweets when we use Gmail and Twitter?

Usability by Ginger Butcher and Katie Cunningham

I did not go but visited. I have to say that I was impressed watching those two for just a few minutes. They NAILED it. They talked well and energetically, knew their material, gave good examples, and were having as much fun as two little girls in a sandbox. They did an awesome job and got kudos from Alexander Limi himself.

Sunday, October 5, 2008

Plone Conference 2008 starts tomorrow!

I posted my bit of melancholy about the location on my personal blog under the ploneconf2008 label. Expect to see my more technical side here under my pydanny ploneconf2008 label. Hope that makes sense!

What will I be doing there? Let me see...

First two days will be Joel Burton's class on Plone Theming. Why theming? Because our team needs more skill in it!

I'll be volunteering to help out with the conference in the role of registration helper and general gopher.

I'll be helping our NASA Science team present on our big project, NASA Science!

I'll be attending talks and taking notes until my fingers bleed.

I'll be working in sprints.

I'm not sure about my social activities. We have a tight house budget.

Wednesday, October 1, 2008

Why did we use Plone for nasascience.nasa.gov?

I've had a few questions over time as to some of the specifics about http://nasascience.nasa.gov and I'm going to start providing answers. Here we go with the first question...

Way back on August 7th, John Kavanaugh asked:
Why didn't NASA use the eTouch CMS that the rest of NASA's portal runs on?
To explain what John is talking about: he means the NASA portal (http://www.nasa.gov), which is based on a commercial Java-based CMS called eTouch. The portal contains a lot of content and also serves as a nexus for the myriad NASA sites hosted by various NASA centers (JPL, Goddard, etc.) and organizations (Science Mission Directorate, Space Operations, etc.).

On to answers!

First off, from what I understand, the Science Mission Directorate (SMD) started developing the Plone version of http://nasascience.nasa.gov before the portal was announced internally at NASA. By the time eTouch became known, we were well down the path of development, including integrating 1200+ articles and images. For various reasons the new portal launched first.

Second, SMD had a lot of very precise requirements and wanted a lot of specific customizations in the project. Add in the fact that we had already started on data migration and site design, and SMD felt that staying the course was the right way to go.

Third, and this comes from the business people: the portal project and our project come from two pools of money. The portal's pool is for providing a gateway for the public into NASA, and our pool is for providing a gateway to science at NASA. These are similar yet distinct concepts, and this allows each group to focus on what it does best. We link to each other, and that makes perfect sense.

Fourth, we are working with the portal group to improve NASA Science. They are great folks and we applaud their work! I can't go into specific details about what we'll be doing, but I will state that most of it is infrastructure and support.