Sunday 31 October 2010

Generative Spaces

 
Algorithmic Space from SAHAR FIKOUHI on Vimeo.

The aim of this study was to combine different algorithmic principles in creating spatial propositions. The film above was created in the Processing programming language and combines the Traer physics library and the Minim sound library with tracked data from a piece of moving image, re-creating the initial space according to the parameters guiding the base algorithm. The spatial networks are created using three levels of intelligence: first, they track the 3D positions of an existing space; second, they grow as a series of particle-springs influenced by real-world parameters including gravity and mass; and finally, they re-generate in response to external stimuli such as music.
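The sketch itself isn't posted here, but a minimal Processing sketch combining the two libraries in the same spirit might look like the one below. The randomised particle chain stands in for the tracked 3D positions, and song.mp3, the spring constants and the level scaling are all placeholder assumptions:

```processing
import traer.physics.*;
import ddf.minim.*;

ParticleSystem physics;
Minim minim;
AudioPlayer song;

void setup() {
  size(640, 480, P3D);
  physics = new ParticleSystem(1.0, 0.05);  // gravity and drag: the real-world parameters
  minim = new Minim(this);
  song = minim.loadFile("song.mp3");        // placeholder file name
  song.loop();

  // stand-in for the tracked 3D positions: a chain of particles joined by springs
  Particle prev = null;
  for (int i = 0; i < 50; i++) {
    Particle p = physics.makeParticle(1.0, random(width), random(height), random(-200, 0));
    if (prev != null) physics.makeSpring(prev, p, 0.2, 0.1, 40);
    prev = p;
  }
}

void draw() {
  background(0);
  // external stimulus: the music's level stretches the springs, re-generating the network
  float level = song.mix.level();
  for (int i = 0; i < physics.numberOfSprings(); i++) {
    physics.getSpring(i).setRestLength(40 + level * 400);
  }
  physics.tick(); // advance the particle-spring simulation one step

  stroke(255);
  for (int i = 0; i < physics.numberOfSprings(); i++) {
    Spring s = physics.getSpring(i);
    Particle a = s.getOneEnd();
    Particle b = s.getTheOtherEnd();
    line(a.position().x(), a.position().y(), a.position().z(),
         b.position().x(), b.position().y(), b.position().z());
  }
}
```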


Wednesday 27 October 2010

Tracking-Extended

In this study, 3D tracking data was used to convert an existing space, made up of a point cloud of particles, into a generative form which grows and regenerates at a random rate. The main algorithm was developed in Processing using the Traer physics library and runs as a real-time simulation demonstrating the capabilities of springs and particles in 3D space.
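A reduced sketch of this grow-and-regenerate loop, again on top of the Traer library, might look as follows; the growth probability, the 300-particle cap and the spring values are assumptions made for illustration, and the single fixed seed particle stands in for the tracked point cloud:

```processing
import traer.physics.*;

ParticleSystem physics;

void setup() {
  size(640, 480, P3D);
  physics = new ParticleSystem(0.5, 0.1);
  seed();
}

void seed() {
  // a single fixed particle stands in for the tracked point cloud
  physics.makeParticle(1.0, width/2, height/2, 0).makeFixed();
}

void draw() {
  background(0);
  // grow at a random rate: occasionally attach a new particle to a random existing one
  if (random(1) < 0.1) {
    Particle parent = physics.getParticle((int) random(physics.numberOfParticles()));
    Particle child = physics.makeParticle(1.0,
      parent.position().x() + random(-20, 20),
      parent.position().y() + random(-20, 20),
      parent.position().z() + random(-20, 20));
    physics.makeSpring(parent, child, 0.2, 0.1, 30);
  }
  // re-generate: once the network saturates, clear it and start again
  if (physics.numberOfParticles() > 300) {
    physics.clear();
    seed();
  }
  physics.tick();

  stroke(255);
  for (int i = 0; i < physics.numberOfSprings(); i++) {
    Spring s = physics.getSpring(i);
    line(s.getOneEnd().position().x(), s.getOneEnd().position().y(), s.getOneEnd().position().z(),
         s.getTheOtherEnd().position().x(), s.getTheOtherEnd().position().y(), s.getTheOtherEnd().position().z());
  }
}
```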

  
Generative spaces from SAHAR FIKOUHI on Vimeo.

The main aim of this study was to use algorithmic principles in the development of 3D form, so that greater control and analysis can be applied to each individual component. For now it merely demonstrates the significance of these principles at an architectural scale: the evolving form generation is an abstract representation of how we might reinterpret and further develop tracked data into more contained architectural proposals.





Tracking-continued

This study is a continuation of the previous experiments in detecting spaces from a computational perspective; however, this series of tests was conducted using Boujou and After Effects to translate 3D tracking data from a piece of moving image into 3D space.
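Once the camera has been solved and the tracked points exported as plain text, bringing them into a sketch is straightforward. The following assumes a hypothetical tracks.txt with one "x y z" triple per line and an arbitrary display scale; it simply redraws the exported points as a rotating point cloud in Processing:

```processing
PVector[] cloud;

void setup() {
  size(640, 480, P3D);
  String[] rows = loadStrings("tracks.txt"); // placeholder export file, one "x y z" per line
  cloud = new PVector[rows.length];
  for (int i = 0; i < rows.length; i++) {
    float[] xyz = float(splitTokens(rows[i]));
    cloud[i] = new PVector(xyz[0], xyz[1], xyz[2]);
  }
}

void draw() {
  background(0);
  translate(width/2, height/2);
  rotateY(frameCount * 0.01); // orbit the reconstructed space
  stroke(255);
  for (PVector p : cloud) {
    point(p.x * 100, p.y * 100, p.z * 100); // scale factor is arbitrary
  }
}
```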

  
detection-continued from SAHAR FIKOUHI on Vimeo.

Friday 8 October 2010

Representing spaces

Robo-Vision

How is it possible to depict a building through moving image and allow the audience to really experience/understand 3D space?

The intention of this study is to break down the visual experience of the building through a series of logical statements which define the surrounding environment. The internal processing of the device capturing and analysing the movie determines the legibility of the space in question. Edge detection is a primary capability of this device, along with movement tracking and blob detection. The aim is to have minimal human input, so that we can experience the space from an alien/robotic perspective. Through this study it should be possible to answer questions regarding the basic understanding of an environment: not just a result of human perception, but a breakdown of the core principles/functionalities of space.
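As an indication of how the edge-detection pass can work, here is a minimal sketch over a live webcam feed in Processing; the gradient threshold of 40 is an arbitrary assumption, and cam.start() applies to Processing 2 and later:

```processing
import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start(); // needed in Processing 2+; earlier versions start automatically
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  cam.loadPixels();
  loadPixels();
  // crude edge detection: mark pixels where brightness changes sharply
  // in the horizontal or vertical direction
  for (int y = 1; y < height - 1; y++) {
    for (int x = 1; x < width - 1; x++) {
      int i = y * width + x;
      float dx = brightness(cam.pixels[i + 1]) - brightness(cam.pixels[i - 1]);
      float dy = brightness(cam.pixels[i + width]) - brightness(cam.pixels[i - width]);
      float edge = abs(dx) + abs(dy);
      pixels[i] = edge > 40 ? color(255) : color(0); // threshold is a tunable assumption
    }
  }
  updatePixels();
}
```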

 



The study was conducted at the Imperial War Museum in Manchester, and some initial apparatus was assembled to make the detection and analysis possible. This consists of a webcam connected to Processing and a remote-controlled robotic arm guiding the camera. Here is a shot of the set-up in Salford, and the final drawing below is a capture of the 2D detection in action.
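As a sketch of the movement-tracking side of the device (again with an assumed threshold), a simple frame-differencing pass flags whatever changed between consecutive webcam frames as the arm pans the camera:

```processing
import processing.video.*;

Capture cam;
PImage prev;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  prev = createImage(width, height, RGB); // starts black, so the first frame is all "movement"
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  cam.loadPixels();
  prev.loadPixels();
  loadPixels();
  // movement detection: difference the current frame against the previous one
  for (int i = 0; i < pixels.length; i++) {
    float diff = abs(brightness(cam.pixels[i]) - brightness(prev.pixels[i]));
    pixels[i] = diff > 30 ? color(255) : color(0); // threshold assumed
  }
  updatePixels();
  prev.copy(cam, 0, 0, width, height, 0, 0, width, height); // remember this frame
}
```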