
Communication Setup for Blinded

15-May-10 | 19,015 views

Li and I worked on a group of robotic/kinetic creatures to create an ironic and self-balanced networked system called Blinded.

Very detailed documentation of the construction iterations is up on Li’s website, so I’ll just write a bit about the communication between the creatures.

For the first iteration we had only one creature fully assembled, so I built a quick “single character” communication protocol that allows the creature to be remotely controlled from a computer. I have two pre-paired Series 1 XBees, which are perfect for this basic cable-replacement purpose.

The Arduino on the creature side expects specific single-character commands on the serial channel via the XBee. The available instructions are:

1. L – turn left
2. R – turn right
3. F – step forward
4. anything else – stop

At the same time, for monitoring purposes, the Arduino reports whether the creature is on its target by comparing its left and right sensors and their historic readings. The status is written to the serial channel simply as 1 (on target) or 0 (target lost).
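A minimal Processing sketch for the computer side could look like the following (the serial port index and the 9600 baud rate are assumptions about the local setup; any serial terminal works just as well). The arrow keys send the single-character commands, and incoming bytes are printed as the on-target status:

import processing.serial.*;

Serial xbee;

void setup()
{
  // assumes the XBee Explorer shows up as the first serial port
  xbee = new Serial(this, Serial.list()[0], 9600);
}

void draw()
{
  // echo the 1/0 on-target status coming back from the creature
  while (xbee.available() > 0)
  {
    print((char)xbee.read());
  }
}

void keyPressed()
{
  if (key == CODED && keyCode == LEFT)       xbee.write('L'); // turn left
  else if (key == CODED && keyCode == RIGHT) xbee.write('R'); // turn right
  else if (key == CODED && keyCode == UP)    xbee.write('F'); // step forward
  else                                       xbee.write('X'); // anything else stops the creature
}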

Because we had only one creature ready back then, the pursuit status did not make much sense yet, so the computer mainly served as a remote controller. The controller setup is pretty simple, though: it can be just an XBee Explorer board, and any serial terminal program can be used to send commands to the remote creature. The following video shows one of our field tests in the park.

After more creatures were added to the group, the situation became much more complicated, and I had to put together a Processing sketch to coordinate input from multiple creatures. I did not use the XBee library for Processing, although it looked pretty promising in the examples and takes care of a lot of the heavy lifting; I dug into its source code a bit and got the feeling that it might be a pain to use with Series 2 XBees. So I just wrote some very simple parsing functions to take apart I/O packets and make sure I have access to any digital or analog pin without too much hacking.
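The core of one of those parsing functions looks roughly like this (a simplified sketch: it assumes the bytes of a single ZigBee I/O data sample frame have already been read into an array starting at the frame type, and it skips checksum verification):

// payload[0] is the frame type, i.e. everything after the start byte and the two length bytes
void parseIOSample(int[] payload)
{
  if (payload[0] != 0x92) return; // only interested in ZigBee I/O data sample frames

  // bytes 1-8: 64-bit address of the sending XBee
  String source = "";
  for (int i = 1; i <= 8; i++) source += hex(payload[i], 2);

  int digitalMask = (payload[13] << 8) | payload[14]; // which DIO pins were sampled
  int analogMask  = payload[15];                      // which AD pins were sampled

  int offset = 16;
  if (digitalMask != 0)
  {
    int digitalSamples = (payload[offset] << 8) | payload[offset + 1];
    offset += 2;
    println(source + " DIO = " + binary(digitalSamples, 16));
  }

  // each enabled analog channel contributes a two-byte sample
  for (int ch = 0; ch < 4; ch++)
  {
    if ((analogMask & (1 << ch)) != 0)
    {
      int value = (payload[offset] << 8) | payload[offset + 1];
      offset += 2;
      println(source + " AD" + ch + " = " + value);
    }
  }
}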

For the physical setup, I switched all remote XBees to Series 2 and flashed their firmware to Router AT mode. The base station that stays with the computer is in Coordinator API mode. I used 4 pins on each remote XBee: 2 for digital output (remote commands) and 2 for analog input (transferring sensor values to the remote computer/coordinator). Each remote XBee still needs to handle status reporting and command receiving, and the Arduino on the creature remains the local intelligence that parses the sensor information and actually controls the motion.

Commands for XBee configuration (set on each remote radio):

ATD0 4 (DIO0 as digital output, low)
ATD1 4 (DIO1 as digital output, low)
ATD2 2 (AD2 as analog input)
ATD3 2 (AD3 as analog input)

At runtime, the computer can set the digital pin state of a remote XBee from the Processing sketch by sending a Remote AT Command frame. Code snippet for setting a remote pin:

void setRemotePin(byte []address, String command, int value)
{
  if (address.length != 8)
  {
    println("invalid address");
    return;
  }
  if (command.length() != 2)
  {
    println("invalid command");
    return;
  }
  serialSelector.port.write(0x7E); // start byte
  serialSelector.port.write(0x0); // high part of length (always zero)
  serialSelector.port.write(0x10); // low part of length (the number of bytes that follow, not including checksum)
  serialSelector.port.write(0x17); // 0x17 is a remote AT command
  serialSelector.port.write(0x0); // frame id set to zero for no reply
  // ID of recipient, or use 0xFFFF for broadcast

  for (int i = 0;i < 8; i++)
  {
     serialSelector.port.write(address[i]);
  }

 /* broadcast */
 /*
  serialSelector.port.write(00);
  serialSelector.port.write(00);
  serialSelector.port.write(00);
  serialSelector.port.write(00);
  serialSelector.port.write(00);
  serialSelector.port.write(00);
  serialSelector.port.write(0xFF); // 0xFF for broadcast
  serialSelector.port.write(0xFF); // 0xFF for broadcast
  */

  // 16 bit of recipient or 0xFFFE if unknown
  serialSelector.port.write(0xFF);
  serialSelector.port.write(0xFE);
  serialSelector.port.write(0x02); // 0x02 to apply changes immediately on remote
  // command name in ASCII characters
  serialSelector.port.write(command.charAt(0));
  serialSelector.port.write(command.charAt(1));
  // command data in as many bytes as needed
  serialSelector.port.write(value);
  // checksum is all bytes after length bytes
  int sum = 0x17 + 0xFF + 0xFE + 0x02 + int(command.charAt(0)) + int(command.charAt(1)) + value;
 
  for (int i = 0;i < 8; i++)
  {
     sum += address[i];
  }

  serialSelector.port.write( 0xFF - ( sum & 0xFF)  ); // calculate the proper checksum
  //delay(10); // safety pause to avoid overwhelming the serial port (if this function is not implemented properly)
}
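As a usage example (the 64-bit address below is a placeholder, not one of our actual XBee serial numbers): setting a D parameter to 0x5 drives the remote pin high, and 0x4 sets it back to low, matching the ATD configuration above.

// placeholder 64-bit serial number of one remote XBee, most significant byte first
byte[] creatureAddress = { 0x00, 0x13, (byte)0xA2, 0x00, 0x40, 0x33, 0x00, 0x01 };

void keyPressed()
{
  if (key == '1') setRemotePin(creatureAddress, "D0", 0x5); // drive DIO0 high
  if (key == '0') setRemotePin(creatureAddress, "D0", 0x4); // DIO0 back to low
}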

Contour Study

22-Apr-10 | 17,700 views

These probably won’t be posted to the actual im-personal archive site, since I would like to keep that as an archive only for self-supervised creations. The brush is working very well even though it’s just training on direct output from blob detection; the brush itself takes care of the motion dynamics.

It could be just a side project, but the contour training of im-personal seems to be producing some interesting visuals. Click the image for better resolution.

Imaginary Sociable Objects

23-Mar-10 | 20,138 views

Nanorobots!

Click to read more about Local Network for Group Intelligence

(a collaboration with Adi Marom)

Face Reader

23-Mar-10 | 13,575 views

Think of a question you have about the world, the simpler the better. Create an experiment that seeks the answer. Design a pilot study that includes gathering data. Consider having a control group and an experimental group to compare. Use your data to try and see something that might be invisible to the casual observer.

I wondered whether people actually smile or cry on the streets, or whether they would explicitly express their feelings in a public space. So I wanted to put sensors out on the streets to track individual people who are not in groups, and see if their emotions are easily detectable from their facial expressions.

The best-known active emotion sensors are our own eyes; we’ve been trained since birth to read other people’s facial expressions and to respond to them properly. So I decided to walk outside with my own eyes as the sensor to “scan” the street. I set up some basic rules as follows:

  • record only people who walk past me from the front so that I can read their face
  • log only people who are alone
  • it’s okay to miss a person or two, the sample rate is determined by how fast I write the data down
  • avoid crowds and busy streets

It would be great if different aspects of information about the people could be logged together with their facial expressions, which might reveal a lot more about why (or why not) and how people show their feelings to strangers. In this case, with time and other constraints, I decided to record just people’s gender and a rough estimate of their age for cross-referencing. By myself it’s kind of hard to do all these things at once; I started by thinking of using devices like a vehicle traffic counter, but ended up with a simplified logging strategy that also helped me organize the visualization while I was logging.

The data is logged with the following rules:

  • each circle represents a person
  • the vertical position of the circle represents the facial expression: middle for neutral, the top for a big laugh, and the bottom for sad/crying
  • horizontal position is not relevant, although it does loosely represent the time it was logged
  • size of the circle represents age, a bigger circle is an older person
  • circle pattern represents gender, filled circles are female

The completely human-powered data visualization is shown below. The data was collected on an approximately two-hour walk, including two subway trips.
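For what it’s worth, the same encoding is easy to express in Processing if I ever want to redraw the log digitally (a sketch of the mapping only; the actual field version was drawn by hand):

// one circle per logged person
// expression: -1.0 (crying) .. 0 (neutral) .. 1.0 (big laugh); age in years; female as boolean
void drawPerson(float x, float expression, float age, boolean female)
{
  float y = map(expression, -1, 1, height - 20, 20); // top = big laugh, bottom = sad/crying
  float diameter = map(age, 0, 80, 5, 40);           // bigger circle = older person
  if (female) fill(0); else noFill();                // filled circles are female
  stroke(0);
  ellipse(x, y, diameter, diameter);                 // horizontal position loosely follows logging time
}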

Sensor Network: Data Logging and Visualization

01-Mar-10 | 26,406 views

I’m working with Sebastian and Michael as the “visualization” team for the sensor network assignment in this week’s Sociable Objects Workshop. It’s very challenging for the whole class to work on one single project, but so far it has gone pretty well. Tasks are split and assigned, and we have also found ways to work around dependencies and similar constraints.

Since we are focusing just on the visualization part, meaning that we take over only after all the data is aggregated at the coordinator XBee, our main effort will be data logging, mining and visualization. So the first thing we built is a pseudo data generator that feeds data in the same format the coordinator will produce. We talked to the base station team and agreed that we will be getting raw API I/O RX packets, and we will take care of the parsing and data storage.

I put together some simple Arduino code to feed pseudo data. It can easily be replaced by a real coordinator, and ideally we will not have to change the Processing code on the other side, which is listening to the serial channel.

#define ADDRESS_COUNT 9
int addresses[9] = {
  0x0001,
  0x0002,
  0x0003,
  0x0004,
  0x0005,
  0x0006,
  0x0007,
  0x0008,
  0x0009,
};

void setup()
{
  Serial.begin(9600);
  randomSeed(analogRead(0));
}

void loop()
{
  for (int i = 0;i < ADDRESS_COUNT;i++)
  {
    if (random(1000) > 990)
    {
      if (random(1000) > 500)
      {
        sendPseudoPackage(addresses[i], 1);
      }
      else
      {
        sendPseudoPackage(addresses[i], 0);
      }
    }

  }
  delay(50);
}


void sendPseudoPackage(int address, int value) {  // value is the digital sample for DIO0: 1 = on, 0 = off
  Serial.print(0x7E, BYTE); // start byte
  Serial.print(0x0, BYTE); // high part of length (always zero)
  Serial.print(0x12, BYTE); // low part of length: 18 bytes follow (frame type through digital samples), not including checksum
  Serial.print(0x92, BYTE); // 0x92 is I/O RX Packet

  //05-12:  64-bit address
  Serial.print(0x00,BYTE);
  Serial.print(0x13,BYTE);

  Serial.print(0xA2,BYTE);
  Serial.print(0x00,BYTE);

  Serial.print(0x40,BYTE);
  Serial.print(0x33,BYTE);

  int low = address & 0xFF;
  int high = (address >> 8) & 0xFF;

  Serial.print(high, BYTE);
  Serial.print(low, BYTE);


  //13-14:  16 bit of recipient or 0xFFFE if unknown
  Serial.print(0xFF, BYTE);
  Serial.print(0xFE, BYTE);

  //15:  Receive Option
  Serial.print(0x01, BYTE);

  //16:  Num Samples
  Serial.print(0x01, BYTE);

  //17-18 Digital Channel Mask
  Serial.print(0x00, BYTE);
  Serial.print(0x01, BYTE); //only Digital I/O 0

  //19:  Analog Channel Mask
  Serial.print(0x00, BYTE);

  //20-21:  Digital Samples
  Serial.print(0x00, BYTE);
  Serial.print(value, BYTE);

  //no analog samples

  // checksum is all bytes after length bytes
  long sum = 0xFF; //fake
  Serial.print( 0xFF - ( sum & 0xFF) , BYTE ); // calculate the proper checksum
  delay(10); // safety pause to avoid overwhelming the serial port (if this function is not implemented properly)
}

Then I wrote a basic Processing sketch which filters the data and stores only changes to the database. The sketch itself serves as a sort of realtime visualization of the input, so we get a feel for how often the data changes and can potentially detect sensor failures if one sensor has not been active for a very long time.
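Conceptually the filter is just a lookup of the last known value per address, something like this sketch (println() stands in for whatever actually writes to the database):

import java.util.HashMap;

HashMap lastValue = new HashMap();

// called for every parsed I/O sample; only actual state changes get logged
void filterSample(String address, int value)
{
  Integer previous = (Integer) lastValue.get(address);
  if (previous == null || previous.intValue() != value)
  {
    lastValue.put(address, new Integer(value));
    println(millis() + " " + address + " -> " + value); // here the real sketch stores the change
  }
}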

The Processing sketch sends data to a website that handles the data logging. The website also provides a query API that returns sensor history in JSON format, with functions like filtering by a certain address. The site itself is lightweight, and it is fairly easy to add new functionality.

The plan is to meet tomorrow and start building a web interface on top of the API for the final visualization. Hopefully we can also verify that the pseudo data is identical to the real format, and maybe get a test run with the real setup.

Stay tuned!

UPDATE: A list of available API functions:

url: http://itp.nyu.edu/~lx243/sensornet/index.php/api/

all (meaning that the url will look like http://itp.nyu.edu/~lx243/sensornet/index.php/api/all)

returning all records in json format

find/:address

returning all records from the specified address
address should be a string of the 64-bit address, e.g. 0013A20040330001

find_today/:address

find_by_day/:address/:day

day should be in the format of yyyymmdd, e.g. 20100301

status/:address

returning the latest records from the specified address

status_at/:address/:time

returning the latest records as of the specified time
time should be in the format of yyyymmddHHMMSS, e.g. 20100301140000

changes/:address

returning all changes at the specified address

changes_today/:address

returning changes that happened today at the specified address

changes_by_day/:address/:day

returning changes that happened on the specified date at the specified address
day should be in the format of yyyymmdd, e.g. 20100301

stats/:address

returning the open time, close time, number of changes, and total active time of the sensor

stats_today/:address

returning the above stats for today from the specified address

stats_by_day/:address/:day

returning the above stats from the specified address on the specified day
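A quick way to sanity-check an endpoint from Processing is simply loadStrings() against one of the URLs above (the address here is the example one from the format description):

// fetch the latest record for one sensor and print the raw JSON response
String[] response = loadStrings("http://itp.nyu.edu/~lx243/sensornet/index.php/api/status/0013A20040330001");
for (int i = 0; i < response.length; i++)
{
  println(response[i]);
}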

UPDATE 2: Visualization

It took a while for Sebastian and me to get processing.js to talk to the API and receive realtime updates. We used jQuery as a bridge to retrieve the data and feed it on the page to processing.js. But eventually all is fine:

Using XBee Without Arduino

26-Feb-10 | 30,550 views

I worked with Adi on the “Romantic Lighting Sensor” lab for the Sociable Objects Workshop. Since we did not have a photocell at hand, we just used a normal potentiometer as the remote sensor. The code is from Rob Faludi’s course code sample page.

We had some problems at first with the receiver side, and we didn’t find an effective way to debug it. It was pretty clear that the transmitter was working fine, since when I used just an XBee Explorer board as the receiver I could see the data coming in on the serial channel directly. However, I could not get the code to work when I plugged the XBee back into the Arduino circuit.

I ended up using a Processing sketch to print out the serial data from the Arduino, since I could not see it directly by printing to the serial terminal from the Arduino code. So, lessons learned:

1. the resistor used to scale the sensor output to fit the XBee’s 3.3V limit has to be handled carefully, otherwise you do not get to use the full range of the sensor output, and it becomes harder to figure out what range to look for in the receiver code,

2. a Processing sketch is necessary for debugging. It gives a lot of confidence when you can see what is in the data stream, at least for me (a minimal version is sketched below).
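The debugging sketch does not need to be anything more than this (the serial port index and baud rate are assumptions about the local setup):

import processing.serial.*;

Serial port;

void setup()
{
  // assumes the Arduino (or the Explorer board) is the first serial port
  port = new Serial(this, Serial.list()[0], 9600);
}

void draw()
{
  // dump every incoming byte in hex so the raw packet structure is visible
  while (port.available() > 0)
  {
    print(hex(port.read(), 2) + " ");
  }
}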


simple remote xbee sensor


with feedback

“im-personal”: components and features

24-Feb-10 | 33,769 views

Overview

My thesis project is a software program that creates generative drawings on its own, one drawing per day, and updates its own drawing blog.

The program is built on the reverse of most generative computer graphics or simulation approaches that aim for realistic graphics; instead of creating complex visuals based on intensive computational power, it tries to simulate human drawing through a very low-level approach of recreating emotion and gesture.

Rationale

I chose not to build another program that celebrates visual complexity because it comes down to the question of the value of a work of art, which in my opinion has a lot to do with the mastery of skills and the effort put into it. The way I see it, people appreciate complex visuals in computer graphics because those visuals are extremely hard to achieve by manual effort. However, the same is not always true for computers, and that is why I am not very excited about computer-generated complex abstracts. The real effort would be assigning emotion to a fully automated process and achieving humanity in the drawings produced.

In short, I am building this program to do something that is extremely hard for a computer, giving up the significance that could easily be achieved (compared to human effort) by a computer program. I would like to show the slow and painful process of this computer-generated drawing and its evolution towards its own aesthetics.

There are an infinite number of ways to build a program that draws on its own, and I am building this one to mimic my own way of drawing/doodling. From the way the brush moves on the canvas and the shape of the brush strokes, to the composition preferences and the training materials fed to the program for inspiration, the entire implementation follows my own subjective choices. However, the machine runs all by itself, without constant intervention even from me. Ideally it could live as part of my ghost in a different shell. The program could be, virtually, an extension of the body. Would it be an even closer way to communicate directly through my ghost?

Goals

The goal of this project is to create a new genre of computer graphics that expresses humanity by mimicking the doodling experience and generating drawings based on emotional factors and gestures.

It is not the goal of the project to reproduce traditional painting through simulation of painting tools and media. Drawings produced by the program should be visually recognizable as computer-generated artwork, while the content of the drawing can be subjective, emotional and less “visually mathematical”.

Technically, I would like to show the process of how a computer program makes sense of visual input and produces creative results out of it. Ideally it should also be able to publish its evolution, from creating basic shapes and curves to being able to produce complex compositions (but not necessarily always doing so).

Audience and Location

The project expects a wide audience: essentially anyone who has prior experience using a computer and is interested in computer-generated graphics. It would also probably draw the attention of people with a particular interest in drawing.

The project itself is a very personal expression, meaning that I decided to design the algorithm independently, without too many concerns about the potential audience. While I am not building the project for a particular target audience, I do expect different responses from different groups of people, and there are several I am especially interested in: a) generative graphics artists, b) illustrators, c) artificial intelligence scholars or engineers.

In terms of presentation, I would like the piece to be accessible online, including the whole archive of the drawings it has created. It is meant to be experienced alone rather than in a group. While it is best viewed through a web browser or a downloadable application running on one’s own computer, the project could possibly still be suitable for public spaces like galleries or museums.

For now it seems that a website would provide the most complete experience for this project, given its non-linear nature and the technical difficulty of replicating its infrastructure. If I have enough time to release the application itself to the public or make it easily deployable to other people’s websites, so that copies of this project can evolve independently, the final presentation of this decentralized artificial intelligence would have to be on the internet, or at least use the internet to aggregate the productions/artworks from all of the copies.

Core Features and Functionality

The core of the project is a piece of software that generates drawings. Every time the program launches, it creates one drawing and terminates itself. The drawing takes inspiration from the program’s external visual memories, mixed with some “live” input that adds emotional factors. The drawing process is monitored and supervised by composition and aesthetics assessment algorithms. The program determines the scale of the drawing and the moment to stop drawing.

There will be a few peripheral components to assist the core drawing module.

a) Visual Memory Storage, an important basis for building up the aesthetic preferences of the program, as well as its raw material for inspiration. The memory is prioritized by an arbitrary “impression” algorithm that puts the program’s favorite visuals on top.

b) Interface for Visual Memory, which allows the program to accept multiple sources of external graphical input. A source could be a streaming video feed from a live webcam on justin.tv, or the favorite-image feed from my ffffound.com image bookmarking account. The sources can be used as either positive or negative training material to build up the program’s aesthetic preferences. The interface should make it easy to configure which input feeds the program is listening to and whether or not they are used as training material. Ideally it would be easily accessible on the web.

c) Aesthetic Preference Database, a set of rules that the program collectively builds up over time through the process of learning, drawing and self-assessment. The database is consulted during the creation of each drawing, and is updated through training, drawing, and feedback on the published drawings.

d) Coordination Scripts that launch the program and publish its creations to a website. Since the program itself is designed to be modular for future portability, some coordination scripts are required to keep the whole thing running. They monitor the visual feeds and compile any updates into the visual memory, launch the drawing module at a preconfigured time, publish the drawing created in each iteration, and collect the feedback on the drawing to fold back into the preference database.

e) A website, technically a blog, that showcases the daily creation of the program.

People will be able to browse through the drawings the program has created over time. They will probably not be able to search, since there will not be enough metadata associated with the drawings for filtering. If there has to be a filter or category, it could be based on timeline, emotion, or arbitrary themes (if the program evolves to name its creations).

For individual drawings, people will be able to:

a) watch a playback of the drawing process;
b) see the inspiration of the drawing, or the pictures that the program referred to in the creation;
c) rate the drawing, the result might or might not be used as training material;
d) (optional) order a print of the drawing;

This project is not intended to be instantly interactive, because I would like to show the slow and painful process of creating a drawing, not to draw people’s attention to the coolness of computer-generated graphics. I would like each drawing the program makes to take a while for people to digest. I would like people to notice and appreciate the drawings themselves as works of art, without knowing that they were created by a computer. That is already a very challenging task in both AI and fine art.

The software program shares my attitude toward computer-generated graphic art. While it is capable of creating complex generative visuals, it chooses (as I choose to make it) to take the hard way and to apply its own (as my own) preference for visual beauty to its creations. It is a very personal critique of current computer-generated art.

The core drawing module will be written in either Processing or openFrameworks, depending on my research into the portability of each option. I would like the program to be executable as a command line tool that does not require user intervention during the drawing process, so that it can easily be automated once it is deployed to a Linux server.

For visual memory storage, CouchDB seems a competitive choice for my data structure, given its document-oriented concept and flexibility in data types. MySQL is a safe second choice, since I’m more familiar with it.

The housekeeping scripts will be a collection of tools written in a combination of shell script, Python and PHP. Most likely I will build the website in PHP or just hack around the WordPress CMS framework.

As a software program, it does not have a user interface. It should not even require input. It can simply be described as an application that generates one drawing at a time.

The showcase website, as the final presentation of the program, does have a user interface. I have in mind a blog site with a clean, minimalist design, highlighting the latest drawing created by the program. It should also be easy to navigate through the archive of drawings to see how the algorithm evolves.

Success Measures / Future Plans

The goal of this project is to achieve machine intelligence in artwork, meaning that it should essentially create drawings that are visually or emotionally appealing to viewers. Although making the evolution of a machine intelligence public is just as important a concept, I would like the project to go beyond being merely conceptual. The diversity of the drawings it creates, and whether or not people appreciate those drawings, will also be very important and valuable information for me.

For thesis, of course I would like to build the whole project to its fullest. However, given the time constraints and technical challenges, I can still mark some of the components as lower priority than others.

The core drawing module is the first priority, but that does not mean the evolving algorithm needs to be perfect. The most important thing is the self-driven composition process.

Second in importance is the infrastructure that automates the program and publishes the drawings. Even if the algorithm is not perfect, it still makes a lot of sense if the whole evolution process is online and accessible to all.

Next is the program’s external interface to the outside world, which makes it easier to feed in training material, supervise the training process and apply the feedback.

In the future I will be most interested in making it portable enough that there could be multiple instances of this program evolving individually, so people could observe how different they become over time.

“im-personal”: use cases

24-Feb-10 | 26,482 views

I still need a name for my drawing machine. My class felt the “machine” in the name was confusing, but I keep thinking of the whole application as a machine, since it will be made of loosely coupled components assembled the way machines are. Anyway, a name is still necessary, because it will make the project itself a lot more personal. The working title for now is “im-personal”, because it is an impersonal machine but it is crying out “i am personal”, and its creations and its own visual memories really are personal.

 
Collect training materials

 
Supervise training process


Training personal drawing style


A public online gallery showcases all creations

Thesis Workplan

24-Feb-10 | 17,124 views

is here: http://bit.ly/thesis-workplan

Thesis Idea Switch

24-Feb-10 | 20,453 views

So this is happening to me, and I guess sooner is better than later. I would like to have a specific idea to develop (and code!), so I am turning back to the self-evolving drawing machine idea. I realized that, given the very limited time, it is nearly impossible to complete my research on online privacy and develop a great idea suitable for a thesis project. I will save all the research I have already done on this topic as a theme for a future project, and I am still working through the readings I found.

Getting back to the drawing machine that learns from human drawing: I finished the brush stroke and drawing API encapsulation last semester. From now until the end of the semester, I would like to finish the genetic algorithm for evolving the machine’s aesthetics, as well as the publishing module, so that I can at least set up a solid basis for this project. Even if I cannot finish all the features I planned, I will still have a running platform on which to improve the underlying intelligence.