In this section
Goldsmiths' Departments of Computing and Psychology organise regular lectures by guest speakers throughout the academic year encompassing diverse aspects of cognition, computation and culture. All are welcome to attend.
Can Computers Create Art?
Dr. Aaron Hertzmann (Adobe Research)
2 Oct 2019, 6:00pm - 7:30pm
LG02, Professor Stuart Hall Building. Campus map
Aaron will discuss whether computers, using Artificial Intelligence (AI), could create art. He will cover the history of automation in art, examining the hype and reality of AI tools for art together with predictions about how they will be used. Aaron will also discuss different scenarios for how an algorithm could be considered the author of an artwork, which, he argues, comes down to questions of why we create and appreciate artwork.
Aaron Hertzmann is a Principal Scientist at Adobe, Inc., and Affiliate Faculty at the University of Washington. He received a BA in computer science and art & art history from Rice University in 1996, and a PhD in computer science from New York University in 2001. He was a Professor at the University of Toronto for 10 years, and has also worked at Pixar Animation Studios and Microsoft Research. He has published over 90 papers in computer graphics, computer vision, machine learning, robotics, human-computer interaction, and art. He is an ACM Fellow.
Location details for the event: https://www.gold.ac.uk/find-us/places/professor-stuart-hall-building-psh/
A relevant open-access article by A. Hertzmann is available.
The Experiential Continuum: From Plant Sentience to Human Consciousness
Alfredo Pereira Jr – São Paulo State University (UNESP) and Visiting Researcher at Goldsmiths, University of London
Wednesday 9 October 2019, 4 pm in RHB 137a
Abstract: Empirical and theoretical developments in Neuroscience and Psychology support a concept of the Experiential Continuum, applied across the whole phylogenetic scale and comprising three layers (non-conscious, non-conceptual conscious and conceptual conscious experience) and six phases, according to the degree of self-awareness (Sentient, Interpretive, Automatized, Thought, Intuitive and Voluntary).
Recent discoveries about plant sensitivity support the attribution of Sentience and Interpretation (biosemiosis) to plants. Automatized processes, such as driving a car while focusing attention on a phone conversation, are considered to involve conscious monitoring activity. Thinking can be related to the operation of neuronal networks; on the evolutionary scale, we find evidence for rudimentary thinking in molluscs and insects. Intuition is related to unconscious homeostatic processes in the nervous system that abruptly emerge into consciousness. Voluntary action depends on the neuro-muscular structures found in animals, supporting agency.
On this conceptual basis, it is possible to provide clarification about current usages of the terms “affect”, “feeling” and “emotion” in contemporary theories of human consciousness. The affective drive is a set of genetically based psycho-physiological functions found in animals. “Feeling” refers to all types of conscious expression of affect, from sensations to social emotions. The non-conscious physiological and behavioural processing and expression of affect compose unconscious emotion.
Brief Bio: Alfredo Pereira Jr is Professor of Philosophy of Science at São Paulo State University (UNESP; 1988-present). Previously, he was a Post-Doctoral Fellow at MIT (1996-98) and a Visiting Researcher at the Universities of Zurich (2012), Copenhagen (2012) and Pavia (2015). He has published 200 papers and chapters, edited three books on consciousness (Springer, Routledge and Cambridge U.P.), and organized several discussion groups (including Nature Networks groups, 2007-2010).
Characterising brain network dynamics in rest and task
Wednesday 23 October 2019, 4 pm in RHB 137a
Diego Vidaurre - Oxford Centre for Human Brain Activity (OHBA), University of Oxford
Abstract: The brain needs to activate multiple networks in a temporally coordinated manner in order to perform cognitive tasks, and it needs to do so at different temporal scales, from the slowest circadian cycles to fast sub-second rhythms. I propose a probabilistic framework for investigating brain functional reorganisation, capable of reliably accessing the dynamics contained in the signal even at the fastest time scales.
Using this approach, we have investigated several aspects of the intrinsic dynamics of the human brain. First, we found that the brain spontaneously transitions between different networks in a predictable manner, following a hierarchical organisation that is remarkably simple, is heritable and relates significantly to behaviour. Second, we investigated the spectral properties of the default mode network using MEG, finding it to be composed of two components, anterior and posterior, with very distinct spatial, temporal and spectral properties: both strongly implicate the posterior cingulate cortex, yet in very different frequency regimes.
Finally, I will show an extension of this model to task data, in which we incorporate stimulus information into the model so that we can reliably find between-trial temporal differences in stimulus processing. These differences, we argue, are crucially related to learning and plasticity, and this approach can avoid the interpretation caveats of traditional decoding techniques.
Bio: Diego obtained his PhD in Statistics from the Universidad Politécnica de Madrid, and has worked at the University of Oxford as a postdoc in Computational Neuroscience since 2013. In 2018, he was appointed Assistant Professor at Osaka University, Japan. He has just been appointed Associate Professor at Aarhus University, Denmark.
Music, speech and conversational grooves
Wednesday 30 October 2019, 4 pm in RHB 137a
Ian Cross - Centre for Music and Science, University of Cambridge
Abstract: I will suggest that time in music and in speech is underpinned by the same perceptual, cognitive and neural processes, and that regular temporal structure (periodicity) serves largely the same functions in speech and in music. I will start by exploring evidence for temporal regularity in speech and suggest that this regularity serves to enhance communicative predictability and mutual affiliativeness between interlocutors. Results from studies that explore conversational and musical interaction will be discussed, and new results concerning the effects of musical interaction on subsequent conversational interaction will be presented. I will conclude by noting the need to develop integrated approaches to the study of music and speech as cognate components of the human communicative toolkit.
Brief Bio: Ian Cross is Professor and Director of the Centre for Music and Science at the University of Cambridge. His early work helped set the agenda for the study of music cognition; he has published widely in the field of music and science, from the psychoacoustics of violins to the evolutionary roots of musicality. His current research explores whether music and speech are underpinned by common interactive mechanisms. He is Editor-in-Chief of SAGE's new Open Access journal Music & Science, is a Fellow of Wolfson College, Cambridge and is also a classical guitarist.
The perceptual prediction paradox: Seeing what we expect (and what we don’t)
Wednesday 13 November 2019, 4 pm in RHB 137a
Daniel Yon – Goldsmiths, University of London
Abstract: From the noisy information bombarding our senses our brains must construct percepts that are veridical – reflecting the true state of the world – and informative – conveying the most important information for adjusting our beliefs and behaviour.
Theories in cognitive science suggest both of these challenges are met by mechanisms that use our beliefs and prior knowledge to shape what we perceive. However, current models are mutually incompatible.
In this talk, I will contend that ideas from research on learning and inference may resolve this paradox – explaining why our experience of the world around us is sometimes dominated by our existing beliefs, and sometimes focuses on information that surprises us the most.
Brief Bio: Daniel Yon is a new lecturer in the Department of Psychology at Goldsmiths. He studied psychology as an undergraduate at the University of Oxford (2010-2013) before completing an MSc and PhD at Birkbeck, University of London (2013-2017). Daniel was a postdoc at Birkbeck for two more years before joining Goldsmiths in September 2019. His research uses a mixture of behavioural, neuroimaging and computational methods to investigate how we perceive and interact with the world around us, with a particular focus on how our expectations shape our perceptions and decisions.