I’m just starting to learn Python – it’s been about a week now – and my first script was a port of my bash shell script that takes an image list from standard input and sends it to sips (the fast QuickTime image conversion built into OS X). It was easy enough, and I’m liking the language for sure.

When I started my second Python script, I thought I was setting the bar too high: I wanted to parallelize this conversion operation so it uses all CPU cores by spawning a sips instance for each core. sips isn’t multithreaded, so this is the only way to get it to use all the hyper-threaded cores in my 12-core Westmere Mac Pro. After getting pointed in the right direction by Python pipeline TD Luke Olson, I was told that Python 3.2 makes this type of thing much easier, and he was right. If you’ve heard of Grand Central Dispatch, the thread-pooling technology in Mac OS X 10.6 and above, Python 3.2’s ProcessPoolExecutor works much the same way, at least as far as ease of implementation is concerned. I’m a guy who knows ridiculously little about programming, but after a day of tinkering I’m now up and running with a parallel conversion script:
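To give a feel for what this looks like, here’s a minimal sketch of the approach: a worker function that shells out to sips, fanned across CPU cores with ProcessPoolExecutor. The function names and the sips flags are my own illustrative choices, not the actual script, and I include a stand-in worker so the pool can be exercised on machines without sips.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor

def sips_convert(path):
    # Shell out to sips for one image. The flags here are illustrative;
    # real options depend on the conversion you want.
    subprocess.run(["sips", "-s", "format", "jpeg", path,
                    "--out", path + ".jpg"], check=True)
    return path

def fake_convert(path):
    # Stand-in worker so the sketch can be demoed without sips installed.
    return path + ".jpg"

def convert_all(paths, worker=sips_convert, workers=None):
    # One worker process per logical core by default (max_workers=None),
    # which is what lets a single-threaded tool like sips saturate the CPU.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(worker, paths))

if __name__ == "__main__":
    # Demo with the stand-in; swap in sips_convert on a Mac.
    print(convert_all(["IMG_1.png", "IMG_2.png"], worker=fake_convert))
    # → ['IMG_1.png.jpg', 'IMG_2.png.jpg']
```

pool.map hands results back in the same order the paths went in, which keeps the output predictable even though the conversions run in parallel.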
You can tell from the video that there’s some weirdness to the concurrency – it seems to finish the whole batch of 24 before starting the next one, and I don’t know enough about how it works yet to say whether that’s a limitation of process pooling in Python 3.2 or a mistake in my script. Anyway, it’s an exciting start, and I’ll be rolling these scripts into Automator actions and free Python scripts to post once I sort out the name filtering.
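I can’t diagnose the script from a video, but one common cause of that finish-the-batch-then-continue pattern is slicing the file list into core-sized chunks and waiting for each chunk to drain before submitting the next. The pool itself doesn’t require that: if you submit every job up front, the executor keeps all workers fed, and as_completed hands back each result the moment it finishes. A sketch under those assumptions (the task function here is a made-up stand-in for the per-image work):

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def convert_one(n):
    # Hypothetical stand-in for the per-image sips call.
    return n * n

def run_all(items):
    with ProcessPoolExecutor() as pool:
        # Submit everything at once; no worker sits idle waiting
        # for the rest of a 24-item batch to finish.
        futures = [pool.submit(convert_one, i) for i in items]
        # as_completed yields futures in completion order, not submit order.
        return [f.result() for f in as_completed(futures)]

if __name__ == "__main__":
    print(sorted(run_all(range(8))))  # completion order varies, so sort
    # → [0, 1, 4, 9, 16, 25, 36, 49]
```

The trade-off versus pool.map is that results arrive out of order, so you sort or tag them afterward if order matters.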
Update: here’s the post with links to the scripts and Automator service