Thursday, February 12, 2009

Organic structs

Thought: Programming with organic cells...

Saturday, December 6, 2008

Whiteboard and Hyperlinks


TODO: Write about idea

Thursday, August 28, 2008

Java with generics...

Programming in Java without generics feels like juggling with a bunch of greasy balls.

Wednesday, January 23, 2008

Andre's secret weapon

is monkeys on typewriters.

Monday, January 21, 2008

Mysticism of Creativity

The more I study creativity, the more the magic of it begins to melt away. When I first started reading about creativity, I thought of it as pulling things that don't exist out of thin air and somehow giving birth to a new concept. I saw exercises for improving creativity as mystic rituals people did to summon ideas from a netherworld. The process always seemed magical.

Now, creativity just seems like a way of creating connections between what we already know, or reframing things we already know from a different perspective. I'm starting to feel like creativity is a process that can be cultivated in anyone. I can completely see creativity being built into AI once a sense of semantics is built into an AI engine.

Sunday, December 16, 2007

Tip of the tongue phenomenon

Feature idea - what if Calico used speech recognition to map words that were said while something was being drawn? It wouldn't have to be accurate in the least. If only a few words can be captured, it would key people in to what was being discussed. For example, after a discussion is done, you could use a lasso to circle a set of figures, and a list of words that were mentioned while those were being drawn would appear in a transient floating box. It wouldn't necessarily be sentences, just key words. The speech recognition software could have a high error rate, even a 50 percent miss rate, and still be extremely useful. It would need calibration to test whether the capture window should cover only the time something is being drawn, or also include everything said a minute before and after. It could also be calibrated to pick out "important" words, or all words. This could be done retroactively by assigning a time stamp to every drawn object and to every recorded word, then mapping them to each other afterwards.
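The retroactive mapping could be sketched like this, assuming each drawn object keeps a start and end timestamp and each recognized word a single timestamp (the names and structures here are hypothetical, not actual Calico code):

```python
# Hypothetical sketch: map recognized words to a drawn object by
# timestamp. Not Calico's actual data model -- structures are assumed.

def words_for_object(obj_start, obj_end, words, pad=60.0):
    """Return words spoken while an object was drawn, padded by
    `pad` seconds before and after (the window to calibrate)."""
    return [w for (w, t) in words
            if obj_start - pad <= t <= obj_end + pad]

# words: (recognized_word, timestamp_in_seconds) pairs
words = [("whiteboard", 2.0), ("server", 65.0), ("cache", 200.0)]

# An object drawn from t=50 to t=80 picks up nearby speech.
print(words_for_object(50.0, 80.0, words))  # ['whiteboard', 'server']
```

Tuning `pad` down to 0 would capture only words said strictly while the object was being drawn, which is exactly the calibration question above.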

What makes this useful and possible is the tip of the tongue phenomenon. People are much better at recognition than they are at recall. While reviewing a session, the words would be similar enough to trigger recognition of events. A psych study was done on tip of the tongue events, and people recalled items that were close either phonetically or semantically. So, even if the voice recognition software completely butchered a word, it would still be close enough to clue the reviewer in to an idea that was at the tip of their tongue.

I would like to perform a study eventually using Calico and this concept. The study would involve two sessions: one session using the board, and one session a few days later that reviewed the first, allowing an incubation period for ideas. The latter session would involve questions reviewing the first drawing session, having participants look at the key words, and measuring how many new ideas are spawned by looking at the old ones. Also, a search feature would send the user to the drawn elements associated with a given key word.
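That search feature could be backed by a simple inverted index from recognized words to element IDs (again a hypothetical sketch, not Calico's actual design):

```python
from collections import defaultdict

# Hypothetical sketch: inverted index from recognized key words to the
# IDs of the drawn elements they were captured alongside.

def build_index(element_words):
    """element_words: {element_id: [words]} -> {word: set of ids}."""
    index = defaultdict(set)
    for element_id, words in element_words.items():
        for word in words:
            index[word.lower()].add(element_id)
    return index

index = build_index({
    "sketch-1": ["server", "cache"],
    "sketch-2": ["cache", "whiteboard"],
})

# Searching a key word jumps to every element it was spoken over.
print(sorted(index["cache"]))  # ['sketch-1', 'sketch-2']
```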

I don't want to develop this feature myself, but if another person joins the team, I'd like to have them work on this.


Saturday, October 13, 2007

Multitouch up and Running!

After the hardware had a few misadventures around the world, I finally got the multitouch setup up and running. I'm using multitouch techniques based on the guide put out by the NUI Group, and had my hardware mostly made by Harry van der Veen (I owe them a lot of thanks!).

My setup is pretty crude, but it worked for a good test run. The acrylic screen is only 15 inches, but I was pretty surprised at how close I could get the projector. My home projector only needed to be about 3 1/2 feet away.

Here's the setup I was working with.

I had the screen set up vertically, but my fingers didn't apply enough pressure for the FTIR setup. I had to press harder than was comfortable, but I think gravity would make this much easier once the screen is horizontal.

The hardware responded a bit slowly once I got it running. You can see a few points were missed in the picture below.
I'm using a camera that advertises 25 fps, so maybe a faster camera will give better results. Software that used point interpolation might also react better. As it was, though, the recognition was pretty slow. (Btw, the hanging projection paper gives the image a ripple effect, another reason to go to a horizontal setup =D ).

Overall though, the project is looking good. The touchlib framework acts as a server that broadcasts touch events, so the multitouch setup is compatible with any programming language. You just need to connect to the proper ports (3000/3333), and any C++, Java, or Flash application can work with it. Technically you can even interact with other computers across the network, but if the reaction time is more than a fraction of a second it'll feel wrong.
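As a rough illustration of the connection pattern (a loopback sketch in Python, not tied to touchlib's actual message format; in the real setup you'd listen on port 3333, but this demo binds an ephemeral port and sends itself a stand-in datagram so it's self-contained):

```python
import socket

# Loopback sketch: any language that can open a UDP socket can receive
# touch events from the tracker. The real payload on port 3333 is not
# this made-up text format -- this just shows the send/receive pattern.

listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))          # real setup: port 3333
port = listener.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"finger_down x=0.4 y=0.7", ("127.0.0.1", port))

data, _ = listener.recvfrom(1024)
print(data.decode())  # finger_down x=0.4 y=0.7

listener.close()
sender.close()
```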

Next steps are to build a cage and look into getting a faster IR camera.