Whitehead Lectures in Cognition, Computation and Culture

Goldsmiths' Departments of Computing and Psychology organise regular lectures by guest speakers throughout the academic year encompassing diverse aspects of cognition, computation and culture. All are welcome to attend.

All seminars are held at 4pm in the Ben Pimlott Lecture Theatre, unless otherwise stated. Check our map for directions to Goldsmiths. For enquiries related to the lectures, please contact Karina Linnell or Frederic Leymarie.

Summer lectures 2016

Anne Verroust-Blondet on "Sketch-based 3D model retrieval"

2:00pm - 3:00pm, 24 May 2016
RHB Cinema, ground floor, Richard Hoggart Building

"Sketch-based 3D model retrieval using visual part shape description and view selection" by Dr. Anne Verroust-Blondet from INRIA, Paris, France (work done in collaboration with Zahraa Yasseen and Ahmad Nasri)

Abstract: Hand drawings are the imprints of shapes in the human mind. How a human expresses a shape is a consequence of how he or she visualizes it. A query-by-sketch 3D object retrieval application is closely tied to this concept in two respects. First, describing sketches must involve the elements of a figure that matter most to a human. Second, the representative 2D projections of the target 3D objects must be limited to the "canonical views" from a human cognition perspective. We advocate these two rules by presenting a new approach to sketch-based 3D object retrieval that describes a 2D shape by the visually protruding parts of its silhouette. Furthermore, we present a list of candidate 2D projections that represent the canonical views of a 3D object.

The general rule is that humans visually avoid part occlusion and symmetry. We quantify the extent of part occlusion in the projected silhouettes of 3D objects by skeletal length computations. Sorting the projected views in decreasing order of skeletal length gives access to a subset of the best representative views. We show experimentally how views that cause misinterpretation and mismatching can be detected according to the part occlusion criterion. We also propose criteria for identifying side, off-axis, or asymmetric views.

Short Bio: Anne Verroust-Blondet is a senior research scientist in the RITS research group of Inria Paris, France. She obtained her "Thèse de 3e cycle" and her "Thèse d'Etat" in Computer Science (respectively in database theory and in computer graphics) from the University of Paris-Sud. Her current research interests include 2D and 3D visual information retrieval, object recognition, 2D and 3D geometric modeling and perception problems in the context of intelligent transportation systems.

References: https://who.rocq.inria.fr/Anne.Verroust/


Sylvain Calinon on "Human-robot interaction"

2:00pm - 3:00pm, 14 June 2016

RHB Cinema, ground floor, Richard Hoggart Building

Dr Sylvain Calinon from the Idiap Research Institute, Switzerland, on "Robot skills acquisition by Human-robot interaction"

Abstract: In this presentation, I will discuss the design of user-friendly interfaces to transfer natural movements and skills to robots. I will show that human-centric robot applications require a tight integration of learning and control, and that this connection can be facilitated by the use of probabilistic representations of the skills.

In human-robot collaboration, such representation can take various forms. In particular, movements must be enriched with perception, force and impedance information to anticipate the users' behaviours and generate safe and natural gestures. The developed models serve several purposes (recognition, prediction, online synthesis), and are shared by different learning strategies (imitation, emulation, incremental refinement or exploration).

The aim is to facilitate the transfer of skills from end-users to robots, or in-between robots, by exploiting multiple sources of sensory information and by developing intuitive teaching interfaces.

The proposed approach will be illustrated through a wide range of robotic applications, with robots that are either close to us (robots for collaborative artistic creation, robots for dressing assistance), parts of us (prosthetic hands), or far away from us (robots with bimanual skills in deep water).

Bio: Dr Sylvain Calinon is a permanent researcher at the Idiap Research Institute (http://idiap.ch), heading the Robot Learning & Interaction Group. He is also a Lecturer at the Ecole Polytechnique Federale de Lausanne (http://epfl.ch) and an External Collaborator at the Department of Advanced Robotics, Italian Institute of Technology (IIT).

From 2009 to 2014, he was a Team Leader at IIT. From 2007 to 2009, he was a postdoc at EPFL. He holds a PhD from EPFL (2007), recognised with the Robotdalen, ABB and EPFL-Press awards. He is the author of about 80 publications and a book in the field of robot learning by imitation and human-robot interaction, with recognition including the Best Paper Award at Ro-Man 2007 and Best Paper Award Finalist at ICIRA 2015, IROS 2013 and Humanoids 2009.

He currently serves on the Organizing Committee of IROS 2016 and as an Associate Editor for IEEE Robotics and Automation Letters, Springer Intelligent Service Robotics, Frontiers in Robotics and AI, and the International Journal of Advanced Robotic Systems.

Personal webpage: http://calinon.ch



Spring lectures 2016

Predicting the unpredictable? Anticipation in spontaneous social interactions

4pm Wednesday 13 January 2016

Speaker: Dr Lilla Magyari, Department of General Psychology, Pázmány Péter Catholic University, Budapest, Hungary

Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths, University of London

Abstract: This talk will focus on some of the cognitive processes underlying our ability to coordinate our actions in spontaneous social interactions. In particular, I will present EEG and behavioural studies about spontaneous verbal interactions, such as everyday natural conversations.

I will also present some preliminary experimental data about non-verbal interactions, such as movement improvisation (i.e. dance improvisation). The key focus of my talk will be whether or not and how participants in spontaneous social interactions anticipate others' actions and the timing of these actions.

Bio: Lilla Magyari studied psychology and Hungarian grammar and literature at ELTE University in Budapest, Hungary, and cognitive neuroscience in Nijmegen, the Netherlands.

She obtained her Ph.D. in the Language & Cognition Department of the Max Planck Institute for Psycholinguistics. Her dissertation explored the cognitive mechanisms involved in the timing of turn-taking in everyday conversations. Complementing her Ph.D. studies, she also worked at the Neuroimaging Center of the Donders Institute for Brain, Cognition and Behaviour, focusing on the implementation of methods for EEG/MEG data analysis within the FieldTrip software package.

She also studied theatre-directing at the Amsterdam School of the Arts for a year. Currently, she lives in Budapest where she works as an assistant professor at the Department of General Psychology of Pázmány Péter Catholic University. Her research investigates linguistic and cultural differences in turn-taking of natural conversation, empirical aesthetics and coordination in movement improvisation.

Considering Movement in the Design of Digital Musical Instruments

4pm Wednesday 20 January 2016

Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths, University of London

Nicholas Ward (Digital Media & Arts Research Centre, University of Limerick) explores how designers consider human movement in the creation of new digital musical instruments.

This talk will explore how we consider human movement in the design of Digital Musical Instruments (DMIs). Starting from a consideration of effort in performance, I will discuss several existing approaches to the description of movement in the field of DMI design. I will then consider how approaches from the fields of Tangible Interaction and Product Design, which attempt to privilege human movement, might inform DMI design. Finally, I will present two examples of work in which a consideration of movement drove the design process.

Biography

Nicholas Ward is a lecturer at DMARC (the Digital Media and Arts Research Centre, University of Limerick). He holds a PhD from the Sonic Arts Research Centre at Queen’s University, Belfast. His research explores physicality and effort in the context of digital musical instrument performance and game design. Specifically he is interested in movement quality, systems for movement description, and their utility within a design context.

Dynamic Facial Processing and Capture in Academia and Industry

Wednesday 27 January 2016

Speaker: Dr. Darren Cosker

Director of CAMERA, Associate Professor (Reader), Department of Computer Science, University of Bath; Royal Society Industry Fellow, Double Negative Visual Effects

The visual effects and entertainment industries are now a fundamental part of the computer graphics and vision landscape, as well as having an impact across society in general. One of the key issues in this area is the creation of realistic characters - including facial animation, creating assets for production, and improving workflow. Advances in computer graphics, vision and rendering have underpinned much of the success of these industries, built on top of academic advances. However, there are still many unsolved problems - some obvious and some less so.

In this talk I will outline some of the challenges in transferring academic research into the visual effects industry. In particular, I will attempt to distinguish between academic challenges and industrial demands, and show how this distinction can affect projects. This draws on experience in several projects involving leading visual effects companies, many through our Centre for Digital Entertainment (CDE) and our new Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA).

This includes work on creating facial performances and using facial models in visual effects contexts, as well as work on non-rigid tracking, shadow removal and object deformation. I will describe how attempting to apply academic solutions in industrial settings - including attempting to use computer vision solutions on set - led us to step back and redirect our focus once more towards fundamental computer vision research problems.

Biography

Dr. Darren Cosker is a Royal Society Industrial Research Fellow at Double Negative Visual Effects, London, and a Reader (Associate Professor) at the University of Bath. He is the Director of the Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA), a £10 million initiative co-funded by the EPSRC/AHRC and industry. Previously, Dr. Cosker held a Royal Academy of Engineering Research Fellowship (2007-2012) at the University of Bath. He is interested in the convergence of computer vision, graphics and psychology, with applications in the creative industries, sport and health.

My Path to Spatial Interaction & Display: Navigation Beyond the Screen

4pm Wednesday 10 February 2016

Speaker: Dale Herigstad, Advanced Interaction Consultant. Co-founder of SeeSpace.

Venue: Curzon Goldsmiths, Richard Hoggart Building

In my world, motion graphics and computer graphics have always been about objects in a 3D space. And now, as the world moves "beyond the screen" with VR and AR, interaction is more complex and requires greater levels of simplicity. I will show past and current experiments that explore evolving approaches to interaction in spatial contexts, and seek to demonstrate logical progressions over time.

Now living in London, Dale Herigstad spent over 30 years in Hollywood as a Creative Director for motion graphics in TV and film. His mission has been to apply the principles of rich media design to interactive experiences. He began designing interfaces for television more than 20 years ago, and was a founder of Schematic, a pioneering interactive design firm.

Biography

Dale has developed a unique spatial approach to designing navigation systems for new screen contexts. He was a part of the research team that conceptualised digital experiences in the film “Minority Report,” and has led the development of gestural navigation for screens at a distance. And as screens begin to disappear, Dale is focusing on navigation and display of information and graphics that are “off screen”. Virtual space and place are new frontiers of design. 

He has an MFA from California Institute of the Arts, where in 1981 he taught the first course in Motion Graphics offered to designers in the United States. He served on the founding advisory board of the digital content direction at the American Film Institute in Los Angeles, and for many years was an active participant in the development of advanced prototypes for Enhanced TV at the American Film Institute. Dale is a member of the Academy of TV Arts & Sciences and has four Emmy Awards.

More recently, Dale co-founded SeeSpace, whose first product, InAiR, places dynamic IP content in the space in front of the television - perhaps the first augmented television experience. He is now researching and developing a design methodology for navigating virtual information in AR and VR.

The critical self and awareness in people with dementia

4pm Wednesday 2 March 2016

Speaker: Robin Morris, Professor of Neuropsychology, Institute of Psychiatry, Psychology and Neuroscience, London
Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths, University of London

Abstract

A prominent aspect of having dementia is the loss of awareness of function. The lecture relates this loss to disturbances of self-knowledge and the neurocognitive systems that support awareness. It explores the notion of the formation of the critical self and how this provides a preserved sense of self in people with dementia but at the cost of loss of awareness. It also considers how awareness may continue to operate paradoxically at a pre-conscious level and how this influences the experience of people with dementia. 

Short bio

Robin Morris is Professor of Neuropsychology at the Institute of Psychiatry, Psychology and Neuroscience as well as Head of the Clinical Neuropsychology Department at King’s College Hospital in London. He has worked at the Institute of Psychiatry for 27 years and combined research into patients with acquired brain disorder with working as a clinician. His research interests include the neuropsychology of awareness, executive functioning and memory. He is recipient of the British Psychological Society, Division of Neuropsychology, award for outstanding contribution to neuropsychology internationally.

Sketched Visual Narratives for Image and Video Search

Wednesday 23 March 2016

Speaker: Dr John Collomosse, Senior Lecturer in the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey.

Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths, University of London

Abstract

The internet is transforming into a visual medium: over 80% of internet traffic is forecast to be visual content by 2018, and most of this content will be consumed on mobile devices featuring a touch-screen as their primary interface. Gestural interaction, such as sketching, presents an intuitive way to interact with these devices. Imagine a Google image search in which you specify your query by sketching the desired image with your finger, rather than (or in addition to) describing it with text. Sketch offers an orthogonal perspective on visual search, enabling concise specification of appearance (via sketch) in addition to semantics (via text).

In this talk I will present a summary of my group's work on the use of free-hand sketches for the visual search and manipulation of images and video. I will begin by describing a scalable system for sketch-based search of multi-million image databases, based upon our state-of-the-art Gradient Field HOG (GF-HOG) algorithm. Imagine a product catalogue in which you sketch, say, an engineering part, rather than using text or serial numbers to find it. I will then describe how scalable search of video can be similarly achieved, through sketched visual narratives that depict not only objects but also their motion (dynamics) as a constraint to find relevant video clips. I will show that such visual narratives are not only useful for search, but can also be used to manipulate video through the specification of a sketched storyboard that drives video generation - for example, the design of novel choreography through a series of sketched poses.

The work presented in this talk was supported by the EPSRC and AHRC between 2012 and 2015.

Bio

Dr John Collomosse is a Senior Lecturer in the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey. John joined CVSSP in 2009, following four years lecturing at the University of Bath, where he also completed his PhD in Computer Vision and Graphics (2004). John has spent periods of time at IBM UK Labs, Vodafone R&D Munich, and HP Labs Bristol.

John's research is cross-disciplinary, spanning Computer Vision, Computer Graphics and Artificial Intelligence, focusing on ways to add value to and make sense of large, unstructured media collections - to visually search media collections and present them in aesthetic and comprehensible ways. Recent projects spanning vision and graphics include: sketch-based search of images and video; plagiarism detection in the arts; visual search of dance; structuring and presenting large visual media collections using artistic rendering; and developing character animation from 3D multi-view capture data. John has around 70 refereed publications, including oral presentations at ICCV and BMVC, and journal papers in IJCV, IEEE TVCG and TMM. He was general chair for NPAR 2010-11 (at SIGGRAPH), BMVC 2012 and CVMP 2014-15, and is an Associate Editor for Computers & Graphics and Eurographics Computer Graphics Forum.

Autumn lectures 2016

Cultural Computing: Looking for Japan

Wednesday 16 November 2016

Speaker: Naoko Tosa

Venue: Lecture Theatre, Ben Pimlott Building, Goldsmiths

Naoko Tosa is a pioneer of media art and an internationally renowned Japanese media artist. Her artworks became well known worldwide in the late 1980s after one of her early works was selected for the "New Video, Japan" exhibition at MoMA, New York.

In this talk - part of the Whitehead Lecture Series - she demonstrates the role of information technology in enabling new understandings of our multicultural world, and discusses cross-cultural issues from the viewpoint of an artist who is herself deeply immersed in both eastern and western cultures. She then proposes a new vision founded upon the relationships between diverse cultures.

Biography 

Naoko Tosa's artworks have been exhibited at the Museum of Modern Art (New York), Metropolitan Art Museum (New York) and at many other locations worldwide. She held a solo exhibition at Japan Creative Center (Singapore) in 2011. Her artworks have recently focused on visualising the unconsciousness.

She has been appointed Japan Cultural Envoy 2016 by the Commissioner of the Agency for Cultural Affairs, and is expected to promote Japanese culture to the world by exhibiting her artworks and through her networking activities with people of culture overseas.

She has won numerous awards, including awards from ARS Electronica, UNESCO's Nabi Digital Storytelling Competition of Intangible Heritage, Yeosu Marine Expo (Korea) and Good Design Award Japan.

In 2012, she exhibited a digital artwork called 'Four God Flag', which symbolises the four traditional Asian gods connecting Asia. In 2014 she received the Good Design Award Japan for her projection mapping using only actual images. In 2015 she carried out a projection mapping celebrating the 400th anniversary of RIMPA, which attracted more than 16,000 attendees.

She is currently a professor at Kyoto University's Center for the Promotion of Excellence in Higher Education. After receiving a PhD in Art & Technology research from the University of Tokyo, she became a fellow at the Center for Advanced Visual Studies at MIT.

Her new book 'Cross-Cultural Computing: An Artist's Journey' is available from Springer UK.