Whitehead Lectures in Cognition, Computation and Culture

Goldsmiths' Departments of Computing and Psychology organise regular lectures by guest speakers throughout the academic year encompassing diverse aspects of cognition, computation and culture. All are welcome to attend.

All seminars are held at 4pm in the Ben Pimlott Lecture Theatre, unless otherwise stated. Check our map for directions to Goldsmiths. For enquiries related to the lectures, please contact Karina Linnell or Frederic Leymarie.


Autumn 2017

Towards a more human machine perception of realism in mixed reality

Speaker: Alan Dolhasz (Birmingham City University)
When: 4pm - 5.30pm Wednesday 4 October
Where: Lecture Theatre, Ben Pimlott Building

Our ability to create synthetic yet realistic representations of the real world, such as paintings or computer graphics, is remarkable. With the continual improvement of creative digital tools, we are able to blur the line between the real and the synthetic even further. At the same time, our ability to consciously detect the minute imperfections in this imagery that break the illusion of realism improves with experience. This problem of shifting realism thresholds remains paradoxical and largely underexplored.

As human expectations in this context grow, tools to assist with this problem are scarce, and computational models of perception are still far from human performance. While it is possible for computers to make binary decisions about the realism or plausibility of imperfections and image artifacts, making them use features and methods similar to those humans use is nontrivial.

Visual realism in the context of mixed reality and synthetic combinations of objects and scenes is a complex and deeply subjective problem. Human perception of realism is affected by a range of visual properties of the scene and objects within it, from attributes of individual textures, surfaces and objects, to illumination, semantics and style of visual coding, to name a few. On top of this, individual subjective traits and experience of observers further complicate this issue.

In this talk, Alan Dolhasz discusses his work attempting to understand, quantify and leverage human perception of combinations of objects and scenes in order to develop machine perception systems that could aid us in creating more realistic synthetic scenes, as well as detect and localise imperfections.

Alan Dolhasz is a researcher and part-time PhD student at the Digital Media Technology (DMT) Lab, Birmingham City University, with a background in film, sound and visual effects. His research interests include human perception, computer vision, machine learning, and mixed and augmented reality. Prior to his research position, he lectured in Sound for Visual Media and in Sound Synthesis and Sequencing, and ran a production company focused on filmmaking and visual effects compositing, which largely shaped his research area. Alan also works closely with industry, developing application cases for research coming out of the DMT Lab.

dmtlab.bcu.ac.uk


Media Art between audience and environment: Italian case study

Speakers: Isabella Indolfi for SEMINARIA (Biennial festival of Environmental Art) and Valentino Catricalà for Fondazione Mondo Digitale (Media Art Festival, Rome, Italy)
When: 4-5pm Wednesday 11 October 2017
Where: Ben Pimlott Lecture Theatre

This talk is an attempt to analyse the latest trends in media art within the contemporary art field. Media Art is nowadays a stable field, characterised by festivals, research centres, museums and more. Over the last 60 years, this field has created different ways to reread spaces, buildings and environments, interacting with the audience and actively involving them. In this way media art has modified the relationship between audience and space. The Media Art Festival in Rome is an example of this.

The talk is divided into two parts. The first focuses on the concept of media art and the differences between terms such as digital art and new media art, looking back at the history and archaeology of the field.

The second is an attempt to analyse the new relationship between technologies and environment: a trend in media art well represented by the biennial festival Seminaria, where artists, temporarily residing on location, are invited to collaborate and engage with the social and geographic variables of an entire village and community through spatial and relational practices. Life-sized installations, immersive, accessible and habitable, virtually or physically, allow viewers to become inhabitants and meaningful activators.

Media Art, on the edge of Land Art, Relational Art and many other cross-cultural developments, thus offers a new idea of public space, one that can be sensitive and permeable to the audience and to the environment.

Isabella Indolfi received a Master’s degree in Sociology and New Media from University La Sapienza of Rome. She is an independent curator and consultant for contemporary art and develops and produces projects in collaboration with artists, institutions, festivals, galleries and museums.

With sociological theories in mind, she has approached contemporary art from the perspective of its public, social and relational aspects. To this end, she founded the Biennial festival of Environmental Art SEMINARIA in 2011, which she still supervises, and has collaborated with institutions and other festivals on numerous public art installations. Media and communication studies have led her research to focus on digital art and the latest art languages. She recently collaborated with Fondazione Romaeuropa for Digital Life 2014 and 2015, and curated the trilogy of "Opera Celibe" exhibitions for Palazzo Collicola Arti Visive in Spoleto. Since 2016 she has been artistic consultant for the Cyland Media Art Lab in St. Petersburg, and she was a member of the jury of the Media Art Festival at MAXXI Roma 2017.

Valentino Catricalà (PhD) is a scholar and art curator specialising in the relationship of artists with new technologies and media. He received his PhD from the Department of Philosophy, Communication and Performing Arts at Roma Tre University, and has been a visiting doctoral researcher at the ZKM Center for Art and Media (Karlsruhe, Germany), the University of Dundee (Scotland) and Tate Modern (London). He has also been a part-time postdoctoral research fellow at Roma Tre University.

Valentino is currently the artistic director of the Rome Media Art Festival (MAXXI Museum) and Art Project coordinator at Fondazione Mondo Digitale. He is also the curator of the “artists in residence” project for the Goethe-Institut, and teaches at the Rome Fine Arts Academy. He has curated exhibitions in museums and private galleries, has written essays in international university journals (see academia.edu), and collaborates with leading contemporary art magazines such as Flash Art, Inside Art and Segno.


Painting with real paints - e-David, a robot for creating artworks using visual feedback

Speaker: Prof. Oliver Deussen, Visual Computing, Konstanz University
When: 4pm Wednesday 18 October 2017

In Computer Graphics, the term Non-Photorealistic Rendering is used for methods that create "artistic"-looking renditions. In recent years, deep neural networks have revolutionized this area, and today everybody can create artistic-looking images on their cellphone. Our e-David project targets another goal: we want to understand the traditional painting process, imitate it using a machine, and employ techniques from computational creativity on top of this to create artworks with their own texture and look.

The machine supervises itself during painting and computes new strokes based on the difference between the content on the canvas and the intended result. The framework enables artists to implement their own ideas in the form of constraints for the underlying optimization process. In the talk I will present e-David as well as recent projects, and outline our future plans.
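The visual-feedback loop described above can be sketched in a few lines of code. The following is an illustrative toy model only, not e-David's actual implementation: it repeatedly compares a simulated canvas against a target image and applies a corrective "stroke" where the deviation is largest. The brush size, update strength and square stroke model are all simplifying assumptions for illustration.

```python
import numpy as np

def paint_step(canvas, target, brush_size=5, strength=0.5):
    """One feedback iteration: find where the canvas deviates most from
    the target image and apply a corrective 'stroke' there."""
    error = target - canvas
    # Locate the pixel with the largest absolute deviation
    y, x = np.unravel_index(np.argmax(np.abs(error)), error.shape)
    # Apply a square stroke that nudges the canvas toward the target colour
    y0, y1 = max(0, y - brush_size), y + brush_size + 1
    x0, x1 = max(0, x - brush_size), x + brush_size + 1
    canvas[y0:y1, x0:x1] += strength * error[y0:y1, x0:x1]
    return canvas

# Toy example: "paint" a grayscale gradient starting from a blank canvas
target = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
canvas = np.zeros_like(target)
for _ in range(500):
    canvas = paint_step(canvas, target)
print(np.mean(np.abs(target - canvas)))  # error shrinks as strokes accumulate
```

In the real system the "strokes" are physical brush movements and the canvas state comes from a camera, but the principle is the same: each iteration is driven by the measured difference between what is on the canvas and the intended result.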

Bio: Prof. Deussen graduated from the Karlsruhe Institute of Technology and is now Professor for Visual Computing at the University of Konstanz (Germany) and visiting professor at the Shenzhen Institute of Applied Technology (Chinese Academy of Sciences). In 2014 he received an award under the 1000 Talents Plan of the Chinese government. He is vice speaker of the SFB Transregio "Quantitative Methods for Visual Computing", a large research project conducted jointly with the University of Stuttgart. From 2012 to 2015 he served as Co-Editor-in-Chief of Computer Graphics Forum, and he is currently Vice-President of the Eurographics Association.

He serves as an editor of Informatik Spektrum, the journal of the German Informatics Association, and is the speaker of its interest group for computer graphics. His areas of interest are modelling and rendering of complex biological systems, non-photorealistic rendering, and information visualization. He has also contributed papers on geometry processing, sampling methods and image-based modelling.


The neuroscience of music performance: understanding exploration, decision-making and action monitoring to learn about virtuosity and creativity

Maria Herrojo Ruiz, Department of Psychology, Goldsmiths University of London
4pm Wednesday 25 October
Ben Pimlott Lecture Theatre, Goldsmiths

Expert music performance relies on the ability to remember, plan, execute, and monitor the performance in order to play expressively and accurately. My research focuses on examining the neural processes involved in mediating some of these cognitive functions in professional musicians, but also in non-musicians and in patients with movement disorders.

This talk will illustrate different aspects of our current work at Goldsmiths. First, I will present new data from our research on error-monitoring during music performance, which takes a novel perspective by examining the interaction between bodily (heart) and neural signals in this process.

In addition, I will present results from our studies in non-musicians investigating the mechanisms by which anxiety modulates learning of novel sensorimotor (piano) sequences. Using electrophysiology and a behavioural task with separate phases of learning – including an exploratory and a reward-based phase – our research could dissociate the influence of anxiety on these two components. I will finish my talk by highlighting what our data on exploration and performance monitoring can teach us about virtuosity and creativity.

BIO
Maria Herrojo Ruiz is a lecturer in the Psychology Department at Goldsmiths. She studied Theoretical Physics in Madrid, Spain, and later specialised as a postgraduate student in Physics of Complex Systems. She did her doctoral dissertation in Neuroscience as a Marie Curie Fellow in Hanover, Germany, focusing on the neural correlates of error-monitoring during music performance. As principal investigator in two successive research grants in Berlin, Germany, Maria has been conducting research on the role of the cortico-basal ganglia-thalamocortical circuits in mediating learning and monitoring of sensorimotor sequences, both in healthy human subjects and in patients with movement disorders. Her current research at Goldsmiths focuses on the neural correlates of exploration during piano performance and sensorimotor learning, their modulation by anxiety, and the brain-body interaction during music performance.


From dancing robots to Swan Lake: Probing the flexibility of social perception in the human brain

Emily S. Cross, Wales Institute for Cognitive Neuroscience, School of Psychology, Bangor University, Wales
4pm Wednesday 29 November
Ben Pimlott Lecture Theatre, Goldsmiths

As humans, we gather a wide range of information about other agents from watching them move. A network of brain regions has been implicated in understanding others' actions by means of an automatic matching process that links actions we see others perform with our own motor abilities.

Current views of this network assume a matching process biased towards familiar actions; specifically, those performed by conspecifics and present in the observer's motor repertoire. However, emerging work in social neuroscience raises some interesting challenges to this dominant theoretical perspective. Specifically, recent work has asked: if this system is built for and biased towards familiar human actions, what happens when we watch or interact with artificial agents, such as robots or avatars?

In addition, is it only the similarity between self and others that leads to engagement of brain regions that link action with perception, or do affective or aesthetic evaluations of another’s action also shape this process?

In this talk, I discuss several recent brain imaging and behavioural studies by my team that provide some first answers to these questions. Broadly speaking, our results challenge previous ideas about how we perceive social agents and suggest broader, more flexible processing of agents and actions we may encounter.

The implications of these findings are further considered in light of whether motor resonance with robotic agents may facilitate human-robot interaction in the future, and the extent to which motor resonance with performing artists shapes a spectator’s aesthetic experience of a dance or theatre piece.

BIO
Emily S. Cross is a professor of cognitive neuroscience at Bangor’s School of Psychology. She completed undergraduate studies in psychology and dance in California, followed by an MSc in cognitive psychology in New Zealand, and then a PhD in cognitive neuroscience at Dartmouth College in the USA. Following this, she completed postdoctoral fellowships at the University of Nottingham and the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany.

The primary aim of her research is to explore experience-dependent plasticity in the human brain and behaviour using neuroimaging, neurostimulation and behavioural techniques. As her research team is particularly interested in complex action learning and perception, they often call upon action experts and training paradigms from highly skilled motor domains, such as dance, music, gymnastics, contortion, and acrobatics.

In addition, she has a longstanding interest in aesthetic perception, and has performed a number of studies exploring the impact of affective experience on how we perceive others. More recently, as part of an ERC starting grant, she and her team are examining how social experience or expectations about artificial agents shape how we perceive and interact with robots and avatars.

Her research has been supported by a number of funding bodies in the USA and EU, including the National Institutes of Health, Volkswagen Foundation, Economic and Social Research Council, Ministry of Defence and European Research Council.


Tactile perception in and outside our body

Speaker: Professor Vincent Hayward
When: 4pm - 5pm Wednesday 6 December
Where: Ben Pimlott Lecture Theatre


The mechanics of contact and friction is to touch what sound waves are to audition, and what light waves are to vision. The complex physics of contact and its consequences inside our sensitive tissues, however, differ in fundamental ways from the physics of acoustics and optics. The astonishing variety of phenomena resulting from contact between fingers and objects is likely to have fashioned our somatosensory system at all levels of its organisation, from early mechanics to cognition. The talk will illustrate this idea through a variety of specific examples that show how surface physics shapes the messages sent to the brain, providing completely new opportunities for human-machine interface applications.

Speaker
Vincent Hayward is a professor (on leave) at the Université Pierre et Marie Curie (UPMC) in Paris. Previously, he was with the Department of Electrical and Computer Engineering at McGill University, Montréal, Canada, where he became a full professor in 2006 and was Director of the McGill Centre for Intelligent Machines from 2001 to 2004.

Hayward is interested in haptic device design, human perception and robotics, and is a Fellow of the IEEE. He was a European Research Council grantee from 2010 to 2016. Since January 2017, Hayward has been Professor of Tactile Perception and Technology at the School of Advanced Study of the University of London, supported by a Leverhulme Trust Fellowship.


Summer 2017

The Neural Aesthetic

Speaker: Gene Kogan
Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths
When: 4pm Wednesday 3 May 2017

Artist and programmer Gene Kogan discusses how artists and musicians are using deep learning for creative experimentation.

Over the last two years, deep learning has made inroads into domains of interest to artists, designers, musicians, and the like. Combined with the appearance of powerful open-source frameworks and the proliferation of public educational resources, this once esoteric subject has become accessible to far more people, facilitating numerous innovative hacks and artworks. The result has been a virtuous circle, wherein public artworks help motivate further scientific inquiry, in turn inspiring ever more creative experimentation.

This talk will review some of the works that have been produced, present educational materials for how to get started, and speculate on research trends and future prospects.

Biography
Gene Kogan is an artist and a programmer who is interested in generative systems, artificial intelligence, and software for creativity and self-expression. He is a collaborator within numerous open-source software projects, and leads workshops and demonstrations on topics at the intersection of code, art, and technology activism.

Gene initiated and contributes to ml4a, a free book about machine learning for artists, activists, and citizen scientists. He regularly publishes video lectures, writings, and tutorials to facilitate a greater public understanding of the topic.

www.genekogan.com / ml4a.github.io / @genekogan


Design for Human Experience & Expression at the HCT Laboratory

Dr. Sid Fels, Electrical & Computer Engineering Department, University of British Columbia (UBC)
4pm Wednesday 24 May 2017
Goldsmiths Cinema, Richard Hoggart Building

Research at the Human Communications Technology (HCT) laboratory (hct.ece.ubc.ca) has been targeting design for human experience and expression.

In this presentation, I’ll start with a discussion of gesture-to-speech and voice explorations, including Glove-TalkII and the Digital Ventriloquized Actors (DIVAs). I’ll connect these to other new interfaces for musical and visual expression that we have created. I will then discuss our work on modelling human anatomy (www.parametrichuman.org) and function, such as speaking, chewing, swallowing and breathing (www.magic.ubc.ca), with biomechanical models built using our toolkit ArtiSynth (www.artisynth.org).

This work is motivated by our quest to create a new vocal instrument that can be controlled by gesture. I’ll also discuss some of our activities on two new 3D displays: pCubee and Spheree. Finally, these investigations will be used to support a theory of designing for intimacy, and to discuss perspectives on human-computer interaction for new experiences and forms of expression.