Ordi

From Noisebridge

Revision as of 18:17, 1 May 2017

Project Ordi

This project is planned as a photobooth where a computer takes your picture and then draws it with a robot in a separate kiosk. The booth and kiosk are being designed in an Art Nouveau (primarily Paris Metro) styling with the goal of these being attractive objects on their own.

There are two overall parts to this project; for now they are called the Système and the Boites.

The project is named after and sponsored by a fictional retro-futurist company called Ordi. This is a potentially clumsy play on the French word for computer, ordinateur, which few outside of francophone countries are aware exists. The computer does most of the "magic" here, taking the photo and translating it into a half-toned path which is then drawn by a robot on a piece of paper.

Système

The Système is the design, development, testing, and installation of the systems of hardware and software that will capture an image, process it, and plot it.

The expected process is:

  1. A Noisebridge booth operator removes the previous image from the plotting robot(s) and places a clean drawing board under it
    1. the drawing boards are pieces of paper affixed to a small board that can be reliably aligned under the plotting robot.
  2. A person steps into the booth (the booth is already lit properly and an intercom is running)
  3. They stand on a marked spot and face an indicated direction
  4. They adjust the height of the camera
  5. They make a face and press the "Press this to take a photo" button
    1. any questions the person has are answered via intercom
  6. A booth operator verifies the photo and requests the person step out, via intercom
  7. A booth operator, at the operator workstation, adjusts the image and sends it to the plotting robot kiosk
  8. A plotting robot plots until it has finished plotting
    1. Booth operators chat with people about Noisebridge and prepare drawing boards
    2. If additional robots are being used and one is available to plot, another person can start the photo process.
      1. How quickly photos can be queued up, and in what order, is subject to testing before the show.
  9. The finished drawing is given to the person
    1. The drawing is also quickly inspected to verify line quality
      1. The rate of pen replacement is pending testing
  10. Hopefully the person saw the "donations for photos accepted" sign and puts some money in the kiosk
  11. this process continues from roughly the beginning
    1. and booth operators of course are only to work as quickly as they feel like, take breaks often and take turns with other operators
      1. bonus points if we can somehow get the people using the booth to do this stuff themselves

Boites

The Boites are the booth and kiosk which will contain the système, provide the controlled environments in which they operate, and catch the eyes and interest of people. The main booth is planned as a 3' by 3' compartment for a person to stand in, connected to an equipment area containing the camera, lighting, and operator workstation. This is connected to a separate kiosk that houses the plotting robots (Evil Mad Scientist AxiDraws). This booth is well lit inside and easy to see into. There will be a donation box built into it.

Sketches

More precise working drawings will follow soon, but here are some of the initial ideas.

The arrangement and proportions of the Boites

APBinitial-2.jpg

And here is the concept for the styling of the photo booth facade

APBinitial-1.jpg

Software pipelines

Networking

The application is a network of computers. The computer that acquires the images is a Raspberry Pi. We should not assume that it will have internet connectivity. The computer that processes the images is a Mac mini (the minimac). It should not be assumed to have internet connectivity either, but it does have two network interfaces. The ethernet interface connects the pi to the mini, and the mini acts as a router for the rpi. If there is a wifi network, the rpi can reach external networks through the mini.

Both endpoints on the wired network have static IP addresses, configured according to Debian standards of the

/etc/network/interfaces

file. The minimac has iptables rules, saved to

/etc/network/iptables

to do SNAT, acting as a router. In order for a picture captured on the rpi to be available on the minimac for rendering, it must travel over a network.
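As a sketch, the static addressing and SNAT setup might look like the following. All addresses and interface names here are assumptions for illustration, not the actual values used:

```shell
# --- /etc/network/interfaces stanza on the minimac (wired side; assumed) ---
# auto eth0
# iface eth0 inet static
#     address 10.0.0.1
#     netmask 255.255.255.0
# The rpi gets the matching stanza with address 10.0.0.2 and gateway 10.0.0.1.

# --- router setup on the minimac (wlan0 assumed to be the wifi uplink) ---
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o wlan0 -j SNAT --to-source 192.168.1.50
iptables-save > /etc/network/iptables   # persist to the path used above
```

The SNAT target wants a fixed wifi-side address; if the wifi lease changes, MASQUERADE is the usual drop-in alternative.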

To solve the file transport problem with a minimum of custom software, the sshfs FUSE module is used. A directory on the minimac is "exported" to the rpi: a public/private key relationship was established between the two hosts, and the rpi mounts a directory from the minimac via sshfs.
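A minimal sketch of the mount on the rpi side, assuming key-based auth is already set up (hostname, user, and both paths are placeholders, not the actual values):

```shell
# Mount a minimac directory on the rpi over ssh; pictures saved into
# /home/pi/outgoing then appear on the minimac.
mkdir -p /home/pi/outgoing
sshfs ordi@minimac.local:/srv/ordi/incoming /home/pi/outgoing \
    -o reconnect,ServerAliveInterval=15,IdentityFile=/home/pi/.ssh/id_rsa
```

The reconnect and keepalive options matter here because the wired link may bounce when the booth is powered up and down.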

The minimac is notified of additions to this directory by a small program that watches the directory for changes and executes an arbitrary command when an event is triggered. In this case it will desaturate the image with graphicsmagick and open it in the stipplegen rendering program.
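One way to sketch that watcher is with inotifywait from inotify-tools; the watched directory and the stipplegen invocation are assumptions here:

```shell
# React to each fully written file in the watched directory (path assumed).
inotifywait -m -e close_write --format '%w%f' /srv/ordi/incoming |
while read -r f; do
    # desaturate with graphicsmagick
    gm convert "$f" -colorspace Gray "${f%.*}-gray.${f##*.}"
    # hand the desaturated copy to the renderer (invocation assumed)
    stipplegen "${f%.*}-gray.${f##*.}" &
done
```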

Optical pipeline

Cheap PowerShot A2200 with CHDK. Raspberry Pi with touchscreen running a modified version of chdkptp. This controls the camera and allows remote shooting from the gui and saving the picture over the USB cable.

Lee started writing some software for a custom touchscreen GUI (https://github.com/lazzarello/chdkptp/). It is the primary user interface for the booth, though there is a possible alternative touch-screen native framework called Kivy that would require a from-scratch UI project. This is out of scope for Maker Faire, but probably worth it in the long run, for many reasons.

There is a 3D printed power adapter for the camera and a hacked battery pack (http://www.instructables.com/id/USB-Powered-Camera-PowerShot-Battery-Hack/), but both are for different models of battery. The camera expects 3.7 V and draws an unknown amount of current. The battery's capacity is 700 mAh. The camera could be powered from the 5 V GPIO pin, dropped to 3.7 V with a small regulator (a plain voltage divider will not hold its output voltage under a varying load). The Pi should have enough current available (https://raspberrypi.stackexchange.com/questions/51615/raspberry-pi-power-limitations).

Rendering pipeline

There is a second computer which mounts the internal storage of the camera controller via SSHFS. The picture is opened in darktable for proofing prior to rendering. Darktable >= 2.2.4 is required for real-time updates as new pictures arrive from the camera pipeline.

When a picture is chosen, a darktable plugin is run that opens that picture in stipplegen, which renders a stipple pattern and generates a TSP path through it. Results of each iteration are shown in real time on a display on the outside of the booth. After enough iterations, a recognizable image will form. Use your judgement and wait until you begin to see an image you like. The generator will export a vector of the TSP path crossing all the dots; this is what the axidraw will print.

There is a Processing interface to the AxiDraw serial port. This will be merged into a unified stipple, trace, print application.

The GUI for the rendering section is the most important graphical stage since it greatly affects the output image. The stipplegen GUI is good for a demonstration but has poor performance. It should be optimized for interactivity.

Printing pipeline

There are two current drivers for the AxiDraw to begin printing:

  1. An interactive GUI built as a plugin for Inkscape. This requires exporting from stipplegen, importing the SVG into Inkscape, and clicking through buttons.
  2. A Processing utility which can control the AxiDraw. This is probably the way to go.
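For reference, the AxiDraw's EiBotBoard can also be driven directly over its USB serial port. This is only a sketch of that idea, not either of the drivers above; the device path is an assumption, and SP/SM are commands from the EiBotBoard command set:

```shell
# Talk to the EiBotBoard over its USB CDC serial device (path assumed).
PORT=/dev/ttyACM0
stty -F "$PORT" raw -echo
printf 'SP,1\r' > "$PORT"            # toggle pen servo (up/down polarity
                                     # depends on board configuration)
printf 'SM,1000,800,800\r' > "$PORT" # step both motors for 1000 ms
printf 'SP,0\r' > "$PORT"            # toggle pen servo back
```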

Automation

A few Ansible playbooks should be used to automate building both the camera computer and the rendering computer. This would make it easier to support different hardware configurations.
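A minimal sketch of what the camera-computer playbook could look like; the host group, package list, and install path are assumptions (only the repo URL comes from this page):

```yaml
# camera.yml - provision the rpi camera computer (sketch)
- hosts: camera          # assumed inventory group
  become: true
  tasks:
    - name: Install camera-side dependencies (package list assumed)
      apt:
        name: [sshfs]
        state: present
    - name: Check out the chdkptp fork
      git:
        repo: https://github.com/lazzarello/chdkptp.git
        dest: /home/pi/chdkptp   # assumed path
```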