I bought and started downloading the upgrade to Ableton Live 9 after it was released this morning and had a chance to play with it after work. (Happy birthday to me!) I’ve used Live since version 4, and it has been my first choice for producing electronic and electroacoustic music in the last few years. So far, Live 9 seems like a good upgrade overall, but I think the killer feature for my workflow is its ability to take polyphonic audio and transcribe a MIDI sequence for further arrangement or manipulation.
I was skeptical that this would work all that well (even monophonic audio-to-MIDI converters have been more “interesting” than “useful” in the past), but I plugged in a guitar1 and hastily recorded the hook from the Isley Brothers’ “Footsteps in the Dark”2. The results from using Live’s MIDI extraction were really quite impressive. This audio clip consists of the hook three times: first, with my original guitar recording, then crossfading between that and a synthesized electric piano driven by an extracted MIDI sequence, and then finally just the synthesized piano.
While this output is usable and surprisingly faithful, it’s not perfect. Live misses the appoggiaturas, but that’s absolutely forgivable (especially since I suspect that discarding appoggiaturas is the result of a tradeoff that also discards, e.g., the vast majority of fret noise). I’m optimistic that it’s good enough to use for transcribing reasonable recordings of many real instruments. As a terrible keyboardist (even worse, one who owns a crummy keyboard controller), I’m excited about the prospect of using my fretted stringed instruments as input devices for generic musical ideas (and not merely as things that make fretted-stringed-instrument and processed-fretted-stringed-instrument sounds).
1 Specifically, I was running both pickups of a stock Epiphone Wildkat into a Tech21 Blonde pedal with neutral but realistic settings and then recording that through a passive direct box.

2 It was a good day, but I’m not sure whether the Lakers even played the SuperSonics, let alone beat them.
I’ve long been interested in the Disquiet Junto music-making assignments, and have even started efforts at some, but hadn’t gotten anything actually presentable until this week’s assignment: “Produce an original piece of music that fits the genre ‘dirty minimalism,’” which benefited from just the right combination of inspiration and a few rare blocks of contiguous free time.
My song, “the concept ‘horse’ is a concept easily understood” is available on SoundCloud or as an AAC download.
My basic idea was to do something in a rock or post-rock idiom that nonetheless has many of the hallmarks of what I consider minimalism: ostinati, shifting polyrhythms, and gradual introduction of pitch classes, new timbres, and layered sounds. Of course, minimalism-influenced rock is nothing new, but most such work (like this track) is necessarily temporally compressed when compared to minimalist concert music. Bitcrushing, phasing, frequency-shifting, and general sloppy technique (which I’ve been working on perfecting for some time) contribute to the “dirty” aspect. (I considered introducing a sparse bowed-guitar solo over the top of this — after all, I like those a lot — but didn’t think I’d have time to introduce it and maintain the minimalist aesthetic.)
Like many computer scientists of a certain age, I spent a nontrivial part of my graduate career going to academic talks about “sensor networks,” which were apparently an extremely hot research area in the early-middle aughts, suitable for distinguished lectures early on, faculty candidate talks later, and targeted research areas for not-particularly-prestigious academic job postings long after the rest of the field had moved on. The basic idea was that you’d have these enormous collections of tiny, autonomous primitive computers that were capable of self-organizing and reporting interesting-in-the-aggregate results to somewhat more powerful computers.
Sensor-network work touched on a lot of interesting challenges in systems and engineering, and the talks were always fascinating from a bottom-up perspective. But the field always struck me as a solution in search of a problem. The motivation for these talks, perhaps due to the political and funding climates of the time, was always wrapped up in vague ties to national security. Depending on the host institution of the researcher presenting, the always-implausible hypothetical applications for this technology might involve placing one sensor in every cubic meter of the San Francisco Bay in order to detect bioterror attacks, or perhaps placing one sensor anywhere in the Charles River in order to verify that you should not place yourself in the Charles River.
If only these talks had led off by introducing ANT+, I would have maintained my interest far more readily. ANT+ is a real-world sensor network technology with a useful application (collecting data from fitness equipment like heart rate monitors and bike computers) that has to meet nontrivial engineering challenges (for example, working when everyone in a race is using them simultaneously). As a practical bonus, having a single standard for these devices means that I can track my bike computer or heart-rate monitor from a watch or from a cell-phone dongle — and that I can add additional data sources easily without being locked in to a single manufacturer.
Every computer scientist I know has given at least one talk with some ridiculous big-picture claim as the ostensible motivation, even though everyone in the room knows that — to use an example from one of my own talks — program analysis is interesting to computer scientists, on some level, whether or not making good software is uniquely hard or expensive among engineering disciplines. Not every research focus lends itself to a motivation that is both likely to occur in the real world and compelling to nonspecialists. But I’m inclined to believe that the field would be well-served by devoting more effort to finding such motivations, and the problems that they imply.
Lately, I’ve become increasingly concerned about the hyperspecific targeted advertising I receive while using the internet. This ranges from terrifyingly creepy, as in Google ads related to something I just received a message about in my gmail account, to comically incompetent, as in Facebook ads for multilevel marketing schemes that merely include numerous personal details about my life (e.g. “31-year old bald Madison dad who married a far better woman than he deserves nine years ago today and took way too long to complete a terminal degree makes $10,000 in his spare time”).
Amazingly, there’s something still worse than the ever-metastasizing clump of evidence that my personal information and attention are the real products on offer from currently-fashionable internet companies. Here I refer to the completely untargeted ad, as in the regular promotional emails that Borders sends me. I have never purchased a hardcover bestseller from Borders; in recent memory, I think I have exclusively bought technical books, books about photographic lighting, DK Eyewitness Books about robots and knights, and audio recordings of Baroque and Renaissance music. (We’ll construe the latter broadly enough to include DJ Shadow’s Endtroducing, which I also bought at a Borders.) Borders knows my purchase history because it is tied to my “Borders Rewards” card, which is the only reason that they have my email address in the first place.
Borders certainly has enough data about me to send me sensible recommendations, or even targeted promotions that I would be likely to exploit. Instead, they drop the same impersonal, coordinated, and clumsy marketing on (I presume) every email address they have. So instead of getting notifications of a new Pragmatic Programmers book or Fretwork album, I get messages from Borders touting the sort of crap I’d never buy: e.g. Dan Brown books, political memoirs, the Twilight series, and Eat Pray Love and Diary of a Wimpy Kid, whatever those are.
Targeted marketing, when done well, has the advantage of being relevant and potentially useful. Advertising that makes me weep for my lost privacy is disheartening, but it doesn’t necessarily represent a waste of my time. I can’t say the same, on either count, for lazy, carpet-bombed marketing that reveals that the sender has absolutely not mined my purchases, browsing history, or “friend” network to the extent of its ability. As a concrete example, consider Amazon and Borders. Amazon’s aggressive daily emails encouraging me to buy every product tangentially related to something I looked at yesterday are bad for my wallet (and my soul), but they are at least sometimes interesting. When I see an email from Borders in my inbox, on the other hand, I typically delete it unread.
I recently spent a week on Lac Seul in northwestern Ontario walleye fishing with my father-in-law and some friends. When compared to the sorts of places in which I have spent most of my life, Lac Seul is notable for having visible stars at night, total freedom from wired or wireless communication networks, and an extremely favorable walleye-to-human ratio. It is also quite photogenic.
Click on any image for details, coordinates, and larger versions, or click here to see all of my published images from Lac Seul (sixteen public images, with more if you’re one of my flickr contacts). I have included notes on equipment (for photo nerds planning similar trips) after the jump.
But since getting such a phone — and, so I thought, using its data capabilities fairly heavily — I have never used more than 200 megabytes of cell network data in a month; Andrea has never used more than 100 megabytes. In the last seven months (charted below from my online AT&T bill), I haven’t even come close to 200 megabytes. If we choose to switch from “unlimited” bandwidth to the new AT&T plans, we will save $30 per month. (We also have 2.5 days of “rollover minutes” for voice, but I suspect that we will have to continue to subsidize heavy voice users to some extent.)
I tacked a 15-mile detour onto my 3.5-mile commute home in honor of yesterday’s dailyshoot assignment ("Make a photo that represents your mode of transportation today.") You can tell by looking at the chainrings that my mode of transportation is probably sad that I primarily use it on roads and paved trails.
Click the photo for the flickr page with more sizes (I recommend the larger ones to see chain detail), metadata, and flash nerd info.
I took this picture for Daily Shoot #161, which suggested making a photo with side lighting. The etched crystal worked pretty well. (Click on the photo for its flickr page, with links to larger versions.)
When I was little, my dad would sometimes entertain me with miscellaneous software running on the VAX he had in his lab. One of the cooler things was a random maze generator, which probably came from the DECUS archives (we’ve both long since forgotten any details about the author or language) and would fill a whole page with twisting paths. This seemed pretty close to magic to me at the time.
Now that my son is old enough to enjoy mazes, I thought I’d replicate this old program so he could have random mazes of varying intricacy to solve. As far as I can tell, most common maze generation algorithms treat the grid of the maze as an undirected graph with edges between neighboring cells, build a spanning tree, and then place passageways between cells that are adjacent in the spanning tree. The end result is guaranteed to contain a path from the start to any other cell in the maze, because the spanning tree must reach every node. (It’s a little sad that after spending more than a few years of grad school thinking about problems that boil down to graph reachability, this technique seems less like “magic” and more like “obvious.”)
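The spanning-tree construction described above can be sketched in a few lines. Here is a minimal Python illustration (not the willb-mazegen code itself, which is a Ruby gem), using randomized depth-first search to carve a spanning tree over a rectangular grid:

```python
import random

def generate_maze(width, height, seed=None):
    """Generate a maze by building a random spanning tree over the grid.

    Cells are (x, y) tuples; the result maps each cell to the set of
    neighboring cells it has an open passage to. Because the carved
    passages are exactly the edges of a spanning tree, every cell is
    reachable from every other cell.
    """
    rng = random.Random(seed)
    passages = {(x, y): set() for x in range(width) for y in range(height)}
    start = (0, 0)
    visited = {start}
    stack = [start]
    # Randomized DFS: each time we step into an unvisited neighbor, carve
    # a passage; each carve adds one spanning-tree edge.
    while stack:
        x, y = stack[-1]
        unvisited = [(nx, ny)
                     for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
                     if (nx, ny) in passages and (nx, ny) not in visited]
        if unvisited:
            nxt = rng.choice(unvisited)
            passages[(x, y)].add(nxt)
            passages[nxt].add((x, y))
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()  # dead end: backtrack
    return passages
```

Since the passages form a spanning tree, any two cells can serve as the maze’s start and finish, and the solution path between them is unique.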
I was able to write a program to build mazes very quickly, and have cleaned it up a bit for public release. You can download the gem package here (use sudo gem install willb-mazegen), or browse the source on github. It will make square-grid mazes to fill a sheet of letter paper, and can make a document consisting of one or many mazes. The code is fairly readable but not particularly fast; still, you should be able to generate mazes at least as quickly as a small human can solve them. In the future, I would like to improve performance, optimize the generated PDF (it is currently pretty inefficient), and allow for mazes on non-square grids. (The technique itself is sufficiently general to allow for, e.g., hexagonal or triangular grid mazes, but other aspects of the code would have to change to enable this.)
If you’re just interested in seeing some mazes, here’s a set of twenty preschool-difficulty mazes to download: mazes.pdf. Or you can try this comically ridiculous example (but be warned that it is almost an 8 MB file!): hugemaze.pdf.
Like a lot of other fussy nerds, I typically use properly spaced small capital letters when typesetting acronyms. The reason for doing so is simple: large capital letters are designed to appear next to lowercase letters, and are not designed to appear in sequence. As a consequence, strings of large capitals, as might appear in an acronym, are jarring to the reader and can disrupt the color of a page. Small capital letters, on the other hand, are designed to appear next to other small capital letters.
I didn’t think that setting acronyms in this way was controversial, but yesterday John Gruber linked to Toronto author Joe Clark’s mildly-amusing but wrongheaded tirade against the use of small caps in typesetting acronyms. Roughly, Clark’s argument is that:
Small caps fare poorly when applied in a host of pathological cases (like camel-case abbreviations, portmanteaus, or other similarly wretched feats of orthographic gymnastics), and
Only (putatively) pedantic commentators like Robert Bringhurst insist upon using small caps for acronyms, anyway.
I believe that the first claim is the best part of his argument. Indeed, small caps can be applied in the service of careless typography just as well as ordinary Roman capital and lowercase letters. If someone were advocating the universal application of small caps as a panacea, then Clark would really have a point. However, I’ve not seen any well-regarded commentators recommend slavish devotion to small caps, even when amateurish settings result (Bringhurst certainly does not). The second claim strikes me as irrelevant, and I’m disinclined to address it further here.1 Judging by his writing elsewhere, Clark takes some delight in the “fusillade of defamatory comments on pipsqueak blogs” that appear in response to ad hominem attacks on Bringhurst; I like Bringhurst’s work a great deal, but decline to join the fusillade.
Of course, it’s far easier to point out the flaws of others than it is to identify something that actually works, and where Clark’s argument really falls apart is in his proposed solution, which we’ll get to after a bit of background. Recall that real small capitals must be designed separately from large capitals; thus, not every typeface has them. You’ve probably seen “fake small caps” before, which are simply regular large capitals that have been automatically compacted by a word processor.2 Fake small caps look terrible, and Clark himself points this out in his piece (as well as elsewhere on his site). It is thus at least a little ironic that Clark’s recommended solution to the problem of setting acronyms involves making your own fake small caps and then setting them properly spaced: “What works nicely, though? Knock the size down a point, add a few units of tracking, and equalize spacing.”
Last night, I received my preordered copy of Star Trek on Blu-ray. I noticed that this disc included a “digital copy,” which is some code to activate a DRM-enfeebled file that you can install on your computer. I’ve never owned a disc with this feature, and it has always struck me as mildly bogus. Upon seeing it on the disc box, though, I thought it seemed like a nice convenience — after all, we don’t have a portable Blu-ray player, but we do have portable computers. Then I got to the fine print.
Of course, you’ll need to activate the “digital copy,” and you can only do this once (although, if you link it to an iTunes account, you can play it back on any computer that is authorized for that account). Apparently, the digital copy cannot be activated after November 10, 2010 (that’s 51 weeks after the release date of the disc). So buy now, kids! In addition, an all-caps, condensed barrage of text informs me that:
THE DIGITAL COPY CONTAINS A COPY OF THE MOTION PICTURE ONLY, WITHOUT DVD SPECIAL FEATURES, IN STANDARD DEFINITION FORMAT WITH ENGLISH LANGUAGE TRACK IN STEREO ONLY AND IS NOT CLOSED-CAPTIONED OR SUBTITLED.
What a convenience, indeed! If you’re willing to overlook the minor omissions of the digital copy: namely, special features, 3/4 of the resolution on the Blu-ray disc, four audio channels, any concessions to the hearing- (or volume-) impaired, and the flexibility to install it a year from the date of release, then it’s quite a deal. In fact, the only glaring shortcoming of the digital copy is that it doesn’t include a spring-loaded boxing glove with which to punch the viewer in the groin immediately upon installation.
I’ve had an iPhone for about 16 months now, and I’m pretty happy with every aspect of it that has nothing to do with the wireless carrier. However, some minor complaints are inevitable even in such a well-designed device. Consider, for example, the user interface displayed upon receiving a call. When the phone is asleep, the incoming-call UI looks like this:
To answer the call, you drag the green box from the left side of the phone to the right, just as you would do ordinarily to activate the phone’s screen. However, if your phone is awake — maybe you’re using it when someone called, or you recently put it in your pocket without explicitly putting it to sleep — the interface is different:
Now, years of computer use have conditioned most people to expect the affirmative option on the right in graphical interfaces. But even a few days of iPhone use are sufficient to condition one to drag from left-to-right in order to wake the phone or answer a call. I wonder how often one has to send one’s wife straight to voicemail before one develops the necessary reflexes for the more-complex behavior demanded by this pair of interfaces.
I especially dig some of the microcopy on these tapes (e.g. “Use to record (SAVE) computer programs or data”), and am delighted by the reminder of an era in which it seemed that there might be perceptible differences between the products of competing consumer brands in the mass-produced analog media arena. (Although the various claims of high fidelity for tape manufacturers were almost surely nonsensical, it is interesting to recall that — within living memory — the perception of fidelity was still a selling point; it certainly isn’t so today.)
Today, John Gruber links to Real Networks’ announcement that they intend to ship an iPhone client for their Rhapsody music service; note that any concrete app that provides access to a music service probably violates Apple’s iPhone SDK terms in several ways. While I’m sure this announcement is great news for the eight or ten people who use Rhapsody, the interesting part is Gruber’s gloss on the link, in which he suggests that Apple will likely approve the app if it meets technical muster — even though the app probably violates the developer agreement — in order to avoid the appearance of anticompetitive behavior with regard to the iTunes music store.
If Gruber is right — and he certainly sounds plausible here — I wonder whether a de facto special class of iPhone applications will emerge during the review process: those that are potentially-controversial and well-publicized enough to require more careful examination or more flexible approval constraints. (Real probably assumes that this is already the case, or they wouldn’t be taking their case to the court of public opinion by pre-announcing this product before it is available.) Such a policy would certainly minimize Apple’s entanglements with irritable executive-branch agencies, but introducing yet more inconsistency and privileging some applications over others likely wouldn’t serve consumers or developers any better than the current policy.
Here’s a little script I wrote that provides a command-line interface to the geobytes.com API, allowing you to look up the geographic location of an IP address. (The JSON interface to geobytes.com seems to be flaky on occasion.) I’ve tested it with python 2.5 and python 2.6.
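A sketch in the same spirit looks roughly like the following. (This is not the original script, which targeted Python 2; this version is Python 3, and both the endpoint URL and the response field names are assumptions about the geobytes API, so treat them as placeholders.)

```python
#!/usr/bin/env python3
"""Look up the rough geographic location of an IP address via geobytes.com.

Sketch only: the endpoint URL and response keys below are assumptions
about the geobytes JSON interface and may need adjusting.
"""
import json
import sys
import urllib.request

# Assumed geobytes endpoint; substitute the real one if it differs.
GEOBYTES_URL = "http://getcitydetails.geobytes.com/GetCityDetails?fqcn={}"

def format_location(details):
    """Render the interesting fields from a geobytes-style response dict.

    The field names here ("geobytescity", etc.) are assumptions; missing
    fields are shown as "?".
    """
    fields = ("geobytescity", "geobytesregion", "geobytescountry")
    return ", ".join(details.get(f, "?") for f in fields)

def lookup(ip):
    """Fetch and decode the JSON response for the given IP address."""
    with urllib.request.urlopen(GEOBYTES_URL.format(ip), timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    print(format_location(lookup(sys.argv[1])))
```

Usage would be along the lines of `./iplookup.py 8.8.8.8`, printing a city/region/country line, subject to the API’s occasional flakiness mentioned above.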
Never mind the bollocks and never mind the provenance. Imagine you’re an auction house selling off a flyer for a Sex Pistols show from 1978. Now imagine that said flyer has a huge chunk of Comic Sans (designed by V. Connare in 1994) in it. The thread on typophile that presents this sad case is the sort of thing that you’ll love if you’re the same kind of nerd I am.
Unlike the infamous forged Killian documents, which were clearly the product of a delusional and careless conspiracy theorist’s extended reverie, some effort clearly went into this Sex Pistols forgery. The creator of this fake flyer didn’t merely dump some text into Microsoft Word’s AutoAnarchy Wizard (see below); he or she was obviously concerned with aping at least the most basic characteristics of the form. The fact that the flyer included four consecutive characters in Comic Sans makes me wonder if the creator wanted to be caught, whether he or she intended such flagrant anachronism to be a John Lydon-style two-finger salute to the sorts of people wealthy enough to buy old punk rock flyers at auction.
Like almost all sophisticated and clever people, I am delighted every time Facebook announces a new, easy-to-abuse feature that might at its best enable some of its users to become sharecroppers of a trivially larger chunk of the AOL of the aughts. If you’d prefer to see a rather dimmer view, then you’ll want to read Anil Dash, who wrote an amusing article speculating on the immediate aftermath of the “user-specified URL” feature rollout on Facebook; it is chock-full of goodness like this:
LinkedIn posts a thinly-veiled but very smart update on their company blog that happens to mention in passing that they’ve had friendly usernames as an option for URLs for years, and that it’s more likely you want to show your professional profile to the world as the first Google result for your name. The post omits any mention that you can also register a real domain name that you can own, instead of just having another URL on LinkedIn.
Here’s some exciting news from Brent Simmons that may signal the end of my syndication lament. Being able to use Google Reader’s web interface when I need a web interface and NetNewsWire’s application interface on my Mac sounds almost perfect — although mediocre sync performance could kill the utility of Google Reader syncing, as it does with NewsGator sync. (It’s not clear whether syncing with NewsGator is slow, whether NewsGator simply is slow in processing new items, or both. But one very nice thing about Google Reader is that it seems to lag very little when compared to NewsGator.)
In another update to the original syndication lament, the newest version of Byline is much better overall (certainly more stable) than prior versions, although it still takes ages to sync and regularly forgets that it has done so.
Today I had the chance to take another picture of a very small thing. I’m mostly happy with this even in spite of somewhat hazardous conditions. Most notably, WT was always trying to move the bug; you can see evidence of his interloping — little flecks of FieldTurf left over from this morning’s trip to Keva — if you view full-size. (You can also see that I’m front-focused by almost the width of the bug.)
I am pretty frustrated with the feed reader landscape. My requirements are not that esoteric; I need a feed reader that supports:
Flagging, tagging, starring, or otherwise annotating items (organizing feeds is nice but not crucial),
Offline reading (for airplane/car/whatever use) and archiving (for future reference),
Synchronization among multiple devices, and
Cross-platform operation (at least Mac and iPhone; Linux or web-based clients are a huge win here).
Finally, stability and scalability are important. I currently subscribe to 295 feeds (although many of them are low-traffic, like tracking price changes at Amazon.com, the police blotter in Madison, or machine and network statuses), which I suspect is easily within the realm of what a feed reader should be specified and designed to handle.
Alan Jacobs rightly points out that there are basically two games in town: Google Reader and NetNewsWire. And, indeed, I have at times been only somewhat dissatisfied with either of these. Each has certain commendable points: Google Reader has one of the most usable web interfaces I’ve seen for a feed reader; NetNewsWire’s built-in browser (and intelligent handling of tabs, with, e.g., undo for tab close), like the rest of its interface, is superb, and its integration with Instapaper and with my weblog client of choice is a great feature. NetNewsWire also benefits from an accessible, responsive, and friendly developer (unlike certain other feed readers).
However, each of these falls short in some crucial way. Google Reader is not accessible offline and cannot sync with anything reliable that is. (I have been using Byline on the iPhone for about a month, but it is crashy, prone to leaving my phone in a state of hot, churning insomnia, and consistently logs me out of Reader and/or forgets what it has downloaded. I know of no offline Google Reader client for the Mac, but I haven’t looked very hard.) NetNewsWire only supports synchronization (or any iPhone use) through the NewsGator servers, which lag catastrophically behind the real world for many feeds, especially low-traffic or unpopular feeds. (This is certainly NewsGator’s prerogative, and they are justified in devoting limited resources to feeds that reach more users. However, many feeds that only I subscribe to have lagged 12 or more hours on NewsGator. Since many of these feeds contain time-sensitive data, this lag renders them useless.) In addition, the only way to get at NetNewsWire feeds from Linux is via the NewsGator web interface, which is far clunkier than Google Reader.
If you, dear reader, are happy with how you consume feeds, I’d like to hear about it. Until then, I’m considering the drastic and foolish act of adding a project to my overfull spare-time queue.
Due to some hardware trouble with my main work machine, I’m presently working in a virtual machine on my personal computer. After a few false starts, I found a pretty straightforward method to clone my work computer into a virtual machine image, so that I am able to work in the exact same environment I would have on my physical work computer. Here’s how to do it:
Clone the drive using dd (the following example assumes your drive is /dev/sda and you have an external drive mounted at /media/removable):
Use qemu-img to convert the raw bits of the drive to an image in the appropriate format for the virtual machine monitor you want to use (QEMU or VMWare):
Create a new virtual machine that uses this drive image, using the interface for your preferred virtual machine monitor.
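Concretely, the first two steps might look like the following (the device path, mount point, and file names are illustrative, and the output format flag depends on which monitor you’re targeting):

```shell
# Step 1: clone the physical drive to a raw image on the external disk.
# Run this from a live environment so the source filesystems are quiescent.
sudo dd if=/dev/sda of=/media/removable/work.img bs=4M conv=noerror,sync

# Step 2: convert the raw image to the format your virtual machine
# monitor expects: qcow2 for QEMU/KVM, or vmdk for VMware.
qemu-img convert -O qcow2 /media/removable/work.img /media/removable/work.qcow2
# (or: qemu-img convert -O vmdk /media/removable/work.img /media/removable/work.vmdk)
```

The conv=noerror,sync operands keep dd going past read errors while padding the affected blocks, which is usually what you want when imaging a drive you suspect of hardware trouble.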
I was able to image and convert a 100 GB drive in around six hours. My drive was an LVM volume and the home partition was encrypted with LUKS; I was delighted to see that qemu-img handled these oddball features of my drive flawlessly. (I can’t think of a technical reason why these wouldn’t be supported, but I’m nonetheless inclined to be pleasantly surprised when things work as they should out of the box.)