Cyborg Meeting Minutes, July 13, 2009
This was a combination of meeting and hack session between David Allen, Skory, Eric, and Rachel.
Hack Session 1
Eric brought Sensebridge/North Paw blurbs for the Sensebridge website. After some group editing, it was decided that Skory would spend more time polishing them later, and would also write a blurb about Noisebridge.
We first talked about Eric's echolocation kit idea and how it might work. Eric explained the different approaches used to shift sound from the inaudible spectrum into the audible spectrum. The first, digital division, takes an 80 kHz range of inaudible frequencies and compacts it into an 8 kHz band of audible frequencies. This results in a loss of audio features such as amplitude, and the output is best compared to the ticking of a Geiger counter. Another approach is heterodyning, which shifts one 8 kHz band at a time from inaudible to audible; this works well for an expected sound, but inaudible sounds outside the chosen band are missed. There are also several computationally complex methods that can transform the major sounds of an 80 kHz band into an 8 kHz band, but they aren't commonly available.
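As a rough illustration of the heterodyne approach (not any specific kit design), the sketch below multiplies a hypothetical 41 kHz ultrasonic tone by a 40 kHz local oscillator; the product contains sum and difference frequencies, and a crude low-pass filter keeps the audible 1 kHz difference tone. All numbers here (sample rate, frequencies, filter) are illustrative assumptions.

```python
import numpy as np

fs = 192_000                     # sample rate high enough for ultrasound (assumption)
t = np.arange(0, 0.1, 1 / fs)

# Hypothetical 41 kHz ultrasonic echo, inaudible to humans
ultrasonic = np.sin(2 * np.pi * 41_000 * t)

# Heterodyne: multiply by a local oscillator near the band of interest
f_lo = 40_000
mixed = ultrasonic * np.cos(2 * np.pi * f_lo * t)

# The product contains sum (81 kHz) and difference (1 kHz) components;
# a crude moving-average low-pass keeps the audible 1 kHz difference tone
kernel = np.ones(48) / 48        # first null near 4 kHz at fs = 192 kHz (rough)
audible = np.convolve(mixed, kernel, mode="same")

# The dominant frequency of the filtered output is the audible difference tone
spectrum = np.abs(np.fft.rfft(audible))
freqs = np.fft.rfftfreq(len(audible), 1 / fs)
print(round(freqs[np.argmax(spectrum)]))  # → 1000
```

Note the trade-off mentioned above: only sounds near the 40 kHz local oscillator land in the audible output, so an echo at, say, 60 kHz would be lost unless the oscillator is retuned.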
We also talked about what data bat/human brains need in order to echolocate. When done naturally, the brain knows it is about to create the sound and has the physical sensation of creating it at that instant, but we might be able to get by with only the sound and its echo. This leaves several possible approaches, and we need to choose one and do it:
- emit an inaudible sound and shift it to audible both at the moment of release and at the moment of return;
- emit an audible sound;
- emit an inaudible sound with a simultaneous somatosensory stimulus, and shift the echo into audible sound;
- emit an inaudible sound shifted to an audible sound, with a simultaneous somatosensory stimulus, then shift the echo to an audible sound as well; or
- emit an inaudible sound with a simultaneous somatosensory stimulus, then provide another somatosensory stimulus when the echo returns.
We also talked about a monocle that would map invisible light spectra into the visible spectrum as false-color images, and what effects that might have on the brain. The big question is how the brain would interpret seeing true color in one eye and false color in the other. Would it create new colors, map them onto existing colors, or simply see each eye's image as the color it appears to be?
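To make "false color" concrete, here is a minimal sketch of the mapping step such a monocle would need: an invisible-band intensity frame (here a made-up 4x4 near-infrared array) is assigned visible hues, thermal-camera style. The red-to-blue ramp is an arbitrary illustrative choice, not a proposal for the actual device.

```python
import numpy as np

# Hypothetical 4x4 near-infrared intensity frame, values in [0, 1]
ir = np.linspace(0, 1, 16).reshape(4, 4)

# False-color mapping: send invisible intensity to a red-to-blue ramp
# (the mapping itself is arbitrary; that's what makes the color "false")
rgb = np.zeros(ir.shape + (3,))
rgb[..., 0] = ir          # strongest IR renders red
rgb[..., 2] = 1.0 - ir    # weakest IR renders blue

print(rgb[0, 0])   # → [0. 0. 1.]  (weakest pixel, pure blue)
print(rgb[3, 3])   # → [1. 0. 0.]  (strongest pixel, pure red)
```

Whatever ramp is chosen, the open question from the discussion remains: the device decides what color the IR "looks like", and it is the wearer's brain that has to reconcile that with the true color seen by the other eye.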