Wednesday, April 23, 2008

Update from America; shit I won't be patenting

posted by Joe Ardent @ 1:14 PM

So, we've been in San Francisco for a little over three weeks. It feels like we've been here for much longer, but really, three weeks is not that long a period of time. As expected, we miss our Wellington friends terribly, but we've been muddling through.

We've been looking for a place to live, and it's been really difficult. There were a couple of places we really wanted, but one fell through (not unexpectedly), and the other was a total blue-balling con job. That is, we were told the place was ours to refuse, we said we wanted it almost immediately, and then we were rejected with no explanation.

The job front has also been frustrating. Kris thought there was a job waiting for her, but it turns out that it's not happening for a few months. I've interviewed at a couple places, one of which does not have a good position for me (this was expected), and the other of which would be perfect, but they have institutional issues which are preventing them from making an offer at the moment; I've been told to "sit tight". This is mildly irritating, but not overwhelmingly so. It's also possible that our employment status is a turn-off for prospective landlords, though we are flush with cash. We have rental packets that include a bank statement to show that we can pay their dang rent.

WARNING: extreme geekery follows.

OK! Enough of the bitching. I've been meaning to post something about some ideas I have for human-computer interfaces for a while, but since I'm looking for work, the issue has gotten a little pressing: I want to get this out, so that the ideas are not appropriated by any future employers. This also stands as a public record of the ideas, in case there are patent issues; I want this stuff in the public domain.

First, some background. There are a couple of consumer-class EEG machines due to hit the market soon, one from NeuroSky and one from Emotiv Systems. EEGs read brainwaves, and are not exactly new technology. What makes these devices special is their price, which should eventually be less than $200, and the fact that you don't need to carefully place the electrodes on your skull using conductive gel. You just put on a headband or helmet or something like that, and you're good to go. That lowers the barrier to use significantly.

The second bit of background has to do with how I believe the mind works, and especially with how it interfaces with our senses. You may have read about people who developed a magnetic sense by implanting small magnets in their fingers. There was a guy who wore a belt that imparted an absolute directional sense, and let's not forget the phenomenon of seeing with your tongue, where a grid of mechanical or electrical pixels is placed on the tongue and driven by a camera. After about half an hour of use, your brain just starts interpreting that input as visual data, and you can actually navigate around and recognize objects while blindfolded.

All these things point to a cognitive theory popularized by a guy named Jeff Hawkins, who wrote a book called On Intelligence. In it, he articulates and defends a theory of cognition: the cortex (the wrinkly part of the brain, the part that makes mammals so smart) operates with one fundamental algorithm. It recognizes temporal patterns of pulses from the sensory system and from itself. So, as long as a certain stimulus produces a characteristic, repeated pattern, the brain will learn to interpret that pattern as a true sense. There's no difference between a pattern of pulses coming from the tongue and a pattern of pulses coming from your eyes; as long as they are consistent and driven by photon-sensing instruments, your brain can use them to see. (This is a pretty drastic simplification of Hawkins' thesis: I'm omitting the hierarchical nature of the patterns, but it's good enough context for what follows.)
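To make that concrete, here's a toy of the "one fundamental algorithm" idea in Python. This is nothing like a real cortical model (no hierarchy, no timing), just a first-order learner that picks up whatever repeated temporal pattern you feed it, indifferent to which "sense" the pulses came from:

```python
from collections import Counter, defaultdict

class SequenceLearner:
    """Toy temporal-pattern learner: counts which pulse follows
    which, then predicts the most likely next pulse. A cartoon of
    the idea that the cortex learns repeated temporal patterns."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, pulses):
        # Record every (current pulse -> next pulse) pair we observe.
        for cur, nxt in zip(pulses, pulses[1:]):
            self.transitions[cur][nxt] += 1

    def predict(self, pulse):
        # Predict whatever most often followed this pulse; None if
        # we've never seen it before.
        followers = self.transitions.get(pulse)
        if not followers:
            return None
        return followers.most_common(1)[0][0]

# Feed it a repeated "sensory" pattern; it learns the sequence
# whether the pulses came from eyes, tongue, or a direction belt.
learner = SequenceLearner()
learner.train(list("ABCABCABC"))
print(learner.predict("A"))  # B
print(learner.predict("C"))  # A
```

The point of the toy is the indifference: swap in any consistent stream of symbols and the same code learns it, which is the property that makes the tongue-vision trick work.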

OK, so, what does this have to do with using a computer? Plenty. If a computer reads your brainstate as you interact with it, it can learn to respond to your intentions without you having to prompt it. Imagine the following scenario:

You are using a speech-to-text program to write a letter (or some software, or anything). The program transcribes the wrong homophone (say, "your" instead of "you're"). You see this happen, and as you do, your brainstate indicates that you're displeased/irritated/distracted. The computer then automatically erases the wrong homophone and switches to correction mode. You select the right word, and it automatically switches back to transcription mode. The key idea here is that the computer can use your brainstate to automatically switch modes in the tool currently being used, e.g., from transcription mode to command mode and back. This is a very powerful notion, and in a sense, is a true "computer implant": the loop between intention and action is very tight, and since all your brain knows is pulses of data, there's no qualitative difference between that and the intention/action sequence that goes with, say, recalling an incident from episodic memory.
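Here's a minimal sketch of that mode-switching loop. Everything about the signal is invented: I'm pretending the headset hands us a single "frustration" number between 0 and 1, which is roughly the kind of scalar "mental state" reading these consumer devices are supposed to expose:

```python
# Made-up scale: 0.0 = calm, 1.0 = very annoyed. The thresholds are
# arbitrary; a real system would calibrate them per user.
FRUSTRATED = 0.7
CALM = 0.3

def next_mode(mode, frustration):
    """Switch the tool's mode from brainstate alone: a spike of
    displeasure flips us into correction mode, and settling back
    down drops us into transcription -- no explicit command."""
    if mode == "transcribe" and frustration >= FRUSTRATED:
        return "correct"
    if mode == "correct" and frustration <= CALM:
        return "transcribe"
    return mode

# Simulated trace: dictating along calmly, the wrong homophone
# appears (spike at 0.9), the user fixes it and relaxes again.
trace = [0.1, 0.2, 0.9, 0.8, 0.2]
mode, modes = "transcribe", []
for f in trace:
    mode = next_mode(mode, f)
    modes.append(mode)
print(modes)  # ['transcribe', 'transcribe', 'correct', 'correct', 'transcribe']
```

Note the two thresholds: the gap between them is there so ordinary jitter in the signal doesn't flap the editor between modes.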

You can close the loop even tighter if you enlist more senses into the interface. Imagine a system that uses your eyes to control the mouse pointer. When you want to "click" on something, the desire alone causes the action to occur. I have a dream of a software development environment like that, one that recognizes your brainstate and does things like bring up the definition of a variable or function, in conjunction with cursor position. Kind of like the Emacs Code Browser, but again, with modality and other actions controlled via the brain interface, always modulated by the current context. There's no reason not to combine the speech recognition with the code browser; the more of these capabilities you use together, the more profound the immersion and the more powerful the cybernetic system that is the human/computer partnership. Eventually, the computer will act as a true enhanced memory and cognition aid, all via non-invasive feedback.
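A sketch of what "controlled by the brain interface, modulated by context" might mean in the editor. The contexts, action names, and intent signal here are all invented for illustration:

```python
# Hypothetical action table: what a strong "I want that" signal
# means depends on what the eyes/cursor are resting on right now.
ACTIONS = {
    "identifier": "show-definition",
    "error-squiggle": "show-diagnostics",
    "scrollbar": "jump-to-mark",
}

def on_intent(context, intent_strength, threshold=0.8):
    """Fire an action the moment the measured intent crosses the
    threshold; the same desire-to-act does different things over a
    variable name than over an error marker or the scrollbar."""
    if intent_strength < threshold:
        return None  # just looking, not a deliberate act
    return ACTIONS.get(context, "default-click")

print(on_intent("identifier", 0.93))  # show-definition
print(on_intent("scrollbar", 0.55))   # None
```

The table is the interesting part: adding speech recognition or the code browser is just adding rows, which is why combining capabilities compounds instead of complicating.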

OK, last usage scenario for this tech. There's a scene in Neal Stephenson's book The Diamond Age, which concerns a society with a crude type of advanced nanotechnology (that's not really a contradiction; non-crude advanced nano leads to the end of material scarcity, and it's hard to write a story people can relate to in that universe, though it's not impossible). Anyway, at one point, one of the main characters is fitted with augmented-reality goggles that he can't take off. The goggles read his brainstate and subtly change the image of what he's seeing based on it. As he talks to a woman, the goggles make small random changes to her visual representation, keeping the ones that result in him feeling better/more attracted. What he experiences is that this woman just keeps looking hotter and hotter, until he eventually realizes what's going on. Just FYI, nothing bad happens.

With that in mind, imagine that you have a computer display driven by a program that generates random shapes/colors/etc. By measuring your pleasure or displeasure, as well as knowing where on the screen you're looking, the computer would be able to keep the random fluctuations that please you and get rid of the ones that don't. It would be like finding faces in clouds, but clouds that sensed what you were looking for and kept the patterns that fit.
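Mechanically, that's just hill-climbing with the viewer's brainstate as the fitness function. A sketch, with the "display" reduced to a list of cells and the EEG reading faked by a function that pretends the viewer is happiest when every cell is bright:

```python
import random

def pleasure(image):
    # Stand-in for the EEG reading: in reality this would be the
    # viewer's measured (dis)pleasure while looking at the display.
    # Here we pretend brighter cells (bigger numbers) please more.
    return sum(image)

def evolve(image, steps=500, rng=None):
    """Hill-climb on the viewer's pleasure: make one small random
    change per step and keep it only if the measured pleasure went
    up -- clouds that hold onto the shapes you were looking for."""
    rng = rng or random.Random(0)  # seeded so the demo repeats
    score = pleasure(image)
    for _ in range(steps):
        candidate = list(image)
        candidate[rng.randrange(len(candidate))] = rng.randint(0, 9)
        new_score = pleasure(candidate)
        if new_score > score:  # keep pleasing fluctuations only
            image, score = candidate, new_score
    return image

start = [0] * 8  # a blank 8-cell "display"
final = evolve(start)
print(final)     # cells drift toward whatever pleases the viewer
```

The program never needs to know what you're looking for; the keep-or-discard rule extracts it from you one fluctuation at a time, which is exactly the goggles trick from the book.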

OK, still with me? I apologize for the long ramble about non-personal stuff. I wanted to get these ideas out in the open, as I said. I actually registered a domain for this stuff, which I'll eventually use to share my implementations of these ideas. But I've not set up a server to actually serve any content, or even configured the DNS for it. Maybe with all this free time I have, I will do that soon!

Saturday, April 12, 2008

Home Again

posted by Kris Ardent @ 9:27 AM

So, we are back in America. And it hasn't disappointed at all. After spending 18 days in Thailand (amazing) and a week in Chicago (aah, so relaxing), we're back in San Francisco for good. And it's so good to be here. We're spending our time working on the details: car, housing, jobs and seeing a lot of old friends.

The weather has been lovely here. A little colder than I expected at first, but yesterday and today are perfection. Warm, sunny, a little breezy. Just heading into another summer after a lovely one in New Zealand. We did it right this time, after learning from our mistake when we moved there. Two winters is no way to go. Take my word for it. Endless summer! That is the way!

Aimee and I are catching up, watching American Idol and the terrible The Bachelor. It's so good to be here and do stupid things, like shop at Whole Foods and Berkeley Bowl and in Chinatown. The first week here, we bought a half duck and used our newly acquired Thai cooking skills to make Red Curry with Roasted Duck and Som Tam (Spicy Green Papaya Salad) for Nikhila and Jeff, who are graciously caring for The Cheat and The Sneak while they recover from their travels.

We bought a car. It's a 2006 Scion xB. It's tiny and full of pep, and it gets great gas mileage. And we can park it on a postage stamp, it's so small and nimble. It's the nicest car either one of us has ever owned, and we're really enjoying zipping around the city. The only downside is that it has zero moon roofs (and you know how I loves me some moon roofs).

The job front is looking good. I already have a lead on a personal chef client, and an interview set up on Monday for a part-time administrative job with a company that works with nonprofits and philanthropic organizations to conduct needs assessment and evaluate their impact. It's feel-good work, good pay, and flexible hours from home, and the job description reads like it was written for me. So I'm excited, putting all my eggs in one basket, waiting to see if that basket blows up. Cross your fingers.

I start working for the San Francisco Food Bank on Thursday, and I'm most excited about this position. I'm going to start out as a teller, and work my way up to Vice President of New Accounts. Get it? Food bank teller? That part is a joke, although I secretly wish that could be my title. Maybe I'll get a nametag made for myself, to wear as I stack cans and assemble bags, to further enjoy my joke.

We're looking at a couple apartments today, trying to find the perfect place to settle down for at least a year. It's so hard though! There seem to be a lot of options, but each one is just shy of perfect. Staying in such a lovely temporary place isn't helping either. If we were sleeping in the car, it might be easier to settle on a rental. But Cary's place is a delightful mansion, with a motley crew of lovely people. O life, why must you be so difficult?!

We've got a lot of plans in the coming weeks, aside from job and house hunting. Tonight is Yuri's Night, celebrating Yuri Gagarin's first flight into space. There's a big party, with exhibits, lectures, art installations and music late into the night. Then it's up at 9 tomorrow for a trip to Napa with Aimee and her sister. We're going to visit Copia, The American Center for Wine, Food & the Arts. How could that not be cool?! I've never been, but I have high hopes.

Next weekend, we see Point Break Live!, which is a stage adaptation of the 1991 Keanu Reeves/Patrick Swayze "classic" in which Reeves plays an undercover FBI agent (FBI Special Agent John 'Johnny' Utah) charged with busting a gang of pseudo-Buddhist bank-robbing surfers. Yes, that's right. And each night, a different audience member is selected (after an extensive audition process) to play the role of Johnny, reading all his lines from cue cards, just as Keanu did in the original film. I can't fucking wait.

I love you, San Francisco. You rule.