In this project we will develop real-time systems and data processing workflows to reveal hidden brain and bodily processes elicited by the collaborative production of a musical piece or the collective experience of film. We will sonify and visualise a range of physiological measures collected during collaborative production and collective spectatorship [Han14] to provide insights into perception, action, and social engagement. As an open-ended project, we will have the opportunity to work on the following scenarios: 1) use machine learning to discover patterns in physiological data that predict the subjective experience of film [SES13]; 2) sonify eye movements by manipulating timbre, pitch, loudness, and spatial location as a function of the foveated image; 3) visualise electrical brain activity to reveal group (de-)synchronisation; 4) correlate quantitative and qualitative variables from the stimulus with the physiological measures. With the findings we aim to gain greater insight into the impact of collective experiences and to provide a toolkit that can be deployed in live events to augment audience participation and immersion in the cinematic experience.
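To make scenario 2 concrete, the kind of mapping involved can be sketched as follows. This is a minimal illustration, not the project's actual sonification design: the parameter ranges, the dwell-time threshold, and the assumption of normalised gaze coordinates in [0, 1] are all invented for the example.

```python
def sonify_gaze(x, y, dwell):
    """Map one gaze sample to illustrative sound parameters.

    x     -> stereo pan   (-1 = left, +1 = right)
    y     -> pitch in Hz  (y = 0 is the top of the screen, mapped to high pitch)
    dwell -> loudness     (longer fixations are louder, capped at 1.0)
    """
    pan = 2.0 * x - 1.0                  # [0, 1] -> [-1, 1]
    pitch = 220.0 + (1.0 - y) * 660.0    # 220-880 Hz range (arbitrary choice)
    loudness = min(dwell / 0.5, 1.0)     # a 500 ms dwell reaches full volume
    return pan, pitch, loudness

# Example: a half-second fixation in the centre of the screen.
print(sonify_gaze(0.5, 0.5, 0.5))        # (0.0, 550.0, 1.0)
```

In a live system these parameters would drive a synthesiser in real time; richer mappings could additionally derive timbre from image features at the foveated location.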
Some theories of cognition suggest that any highly networked system, such as the Internet, might constitute some kind of (emerging) consciousness (Koch, 2014; Stibel, 2014). Critics counter that connectedness is not sufficient for consciousness: a conscious agent, for instance, must be situated within, and in relationship to, some kind of environment. Furthermore, according to situated, enactive, and embodied accounts of cognition (Clark & Chalmers, 1998a; Wilson, 2002), intelligence is not possible without a physical body. The mind, after all, is not (just) situated in the brain, and the sensorimotor dimensions of the body are essential components of consciousness. It has been suggested, for instance, that the reason why we have a nervous system in the first place is to make navigation and interaction with the physical world safer for the organism (move away from danger, move towards things that reinforce survival). The networked consciousness hypothesis might be more plausible if we expand what we mean by “environment” (cf. the extended mind hypothesis of Clark & Chalmers, 1998b) and if we try to imagine what might constitute the “body” of the network.
The goal of this project would be to compose a compelling (and potentially fundable) research proposal including research questions, an initial literature review, a discussion of methodologies, and proofs of concept.
The idea we have of humanoid robots comes mostly from science fiction films. Even though we have a clear conception of what robots are, we rarely have an opportunity to interact with one that mimics human interaction.
NAO’s design is humanlike: it has the overall shape of a human body, with a trunk, two legs, two arms and a head; however, at 58 cm tall, it does not closely imitate human biological forms.
NAO is usually described as a social robot. What makes us perceive NAO the way we do? How can we exploit the emotions that NAO’s appearance elicits in us? What makes it a social robot? Is it a translation from being a social human? What makes us social humans?
The project will start with a transdisciplinary discussion taking into account the above questions (and more). From what comes out of this discussion, the students are encouraged to design, stage and programme a performance including one or more NAO robots. This could be a piece of comedy, magic tricks, dance, theatre… or something else that our creative students come up with. Roboticists, performers, artists, philosophers and programmers are welcome to apply: true to the spirit of CogNovo, we want this project to be a fulfilling experience for all its students, who will receive stimulating inputs from experts in the fields that this project spans.
At the end, participants with no previous experience in robot programming will have learned to programme NAO using the Choregraphe software, which is used to create behaviours for the robot and to monitor and control it. Participants with previous experience in robot programming will have the opportunity to add a piece of performing art to their portfolio.
As a social species, humans are experts in collaboration. However, our understanding of the ‘social glue’ that allows us to coordinate complex actions within dynamic environments is limited. This workshop explores psychological theories of shared experience in relation to physiological processes such as heart rate and respiration. Using playful movement and dance improvisations, we will study notions of social entrainment, synchronisation, empathic projection and shared flow experience. We will examine various hardware and software tools for capturing and interpreting biodata, and design collective improvisations with real-time visual, sonic, and haptic biofeedback. The workshop will culminate in a live improvisation performance in which we share the results of our practice research.
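One simple way to quantify the kind of physiological synchrony discussed above is to correlate two participants' signals. The sketch below uses Pearson correlation over aligned heart-rate samples; the data are invented, and a real analysis would typically use windowed or lagged measures rather than a single global coefficient.

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / sqrt(var_a * var_b)

# Two dancers' heart rates (bpm), one sample per second (invented data).
dancer_a = [72, 75, 79, 84, 88, 90, 89, 85, 80, 76]
dancer_b = [70, 72, 77, 81, 86, 89, 88, 84, 79, 74]

print(pearson(dancer_a, dancer_b))  # close to 1: the dancers are entrained
```

In a biofeedback setting this value, computed over a sliding window, could directly drive the visual, sonic, or haptic display.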
This project has been merged with Let’s improv it
Play is a common, yet elusive phenomenon in humans, animals and possibly other entities as well. Despite many attempts at definitions, explanations and justifications, sciences, humanities and the arts have yet to achieve a truly transdisciplinary perspective on the issue. Our summer school strives to contribute to the understanding of play by discussing cognitive, social and philosophical aspects with representatives from psychology, AI research, human-computer interaction, and game studies. We will not only theorize about play, but also practice it by playing and offering sessions in play design through workshops and exercises during the week.
This is an attempt at wide-ranging interdisciplinarity, engaging representatives from different faculties and backgrounds.
This project has been cancelled due to insufficient participants.
The original idea is to develop a virtual world that would be generated in real time and explored with the Oculus Rift. World generation will follow aesthetic rules (low-level rules: symmetry, curvature, line orientation, colours... or anything the students wish to use for aesthetic evaluation) instead of exploiting basic genetic algorithms with pre-defined elements such as Minecraft biomes.
For example, the landscape shape will be assigned a specific line-orientation distribution, or the branches of a tree will respect certain angles and curvatures. These characteristics will vary across the different areas around the observer. We could then imagine an experimental game in which the player has to move towards aesthetically more appealing areas. In other words, the player's direction will be used as a fitness function to estimate the player's preferences. The tricky part is that aesthetic measures have mostly been applied to 2D images rather than 3D scenes.
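The direction-as-fitness idea can be sketched in a few lines. This is a simplified 2D illustration with invented area names and positions; the actual project would work in 3D and attach real aesthetic measures to each area.

```python
from math import sqrt

def direction_fitness(player_pos, move_dir, areas):
    """Score each area by how well the player's heading points at it.

    areas maps a name to an (x, y) centre. Returns name -> alignment in
    [-1, 1], where 1 means the player is walking straight towards that area.
    """
    mx, my = move_dir
    mlen = sqrt(mx * mx + my * my)
    scores = {}
    for name, (ax, ay) in areas.items():
        dx, dy = ax - player_pos[0], ay - player_pos[1]
        dlen = sqrt(dx * dx + dy * dy)
        # Cosine of the angle between heading and the direction to the area.
        scores[name] = (mx * dx + my * dy) / (mlen * dlen)
    return scores

areas = {"forest": (10.0, 0.0), "canyon": (-10.0, 0.0)}
print(direction_fitness((0.0, 0.0), (1.0, 0.0), areas))
# The player heads towards the forest: it scores 1.0, the canyon -1.0.
```

Accumulated over time, these alignment scores would indicate which aesthetic parameter settings attract the player, and the generator could evolve new areas accordingly.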