Neuroadaptive modelling for generating images matching perceptual categories
Research output: Contribution to journal › Journal article › Research › peer-review
Standard
Neuroadaptive modelling for generating images matching perceptual categories. / Kangassalo, Lauri; Spapé, Michiel; Ruotsalo, Tuukka.
In: Scientific Reports, Vol. 10, No. 1, 14719, 2020.
RIS
TY - JOUR
T1 - Neuroadaptive modelling for generating images matching perceptual categories
AU - Kangassalo, Lauri
AU - Spapé, Michiel
AU - Ruotsalo, Tuukka
PY - 2020
Y1 - 2020
N2 - Brain–computer interfaces enable active communication and execution of a pre-defined set of commands, such as typing a letter or moving a cursor. However, they have thus far not been able to infer more complex intentions or adapt more complex output based on brain signals. Here, we present neuroadaptive generative modelling, which uses a participant’s brain signals as feedback to adapt a boundless generative model and generate new information matching the participant’s intentions. We report an experiment validating the paradigm in generating images of human faces. In the experiment, participants were asked to specifically focus on perceptual categories, such as old or young people, while being presented with computer-generated, photorealistic faces with varying visual features. Their EEG signals associated with the images were then used as a feedback signal to update a model of the user’s intentions, from which new images were generated using a generative adversarial network. A double-blind follow-up with the participant evaluating the output shows that neuroadaptive modelling can be utilised to produce images matching the perceptual category features. The approach demonstrates brain-based creative augmentation between computers and humans for producing new information matching the human operator’s perceptual categories.
AB - Brain–computer interfaces enable active communication and execution of a pre-defined set of commands, such as typing a letter or moving a cursor. However, they have thus far not been able to infer more complex intentions or adapt more complex output based on brain signals. Here, we present neuroadaptive generative modelling, which uses a participant’s brain signals as feedback to adapt a boundless generative model and generate new information matching the participant’s intentions. We report an experiment validating the paradigm in generating images of human faces. In the experiment, participants were asked to specifically focus on perceptual categories, such as old or young people, while being presented with computer-generated, photorealistic faces with varying visual features. Their EEG signals associated with the images were then used as a feedback signal to update a model of the user’s intentions, from which new images were generated using a generative adversarial network. A double-blind follow-up with the participant evaluating the output shows that neuroadaptive modelling can be utilised to produce images matching the perceptual category features. The approach demonstrates brain-based creative augmentation between computers and humans for producing new information matching the human operator’s perceptual categories.
UR - http://www.scopus.com/inward/record.url?scp=85090334567&partnerID=8YFLogxK
U2 - 10.1038/s41598-020-71287-1
DO - 10.1038/s41598-020-71287-1
M3 - Journal article
C2 - 32895430
AN - SCOPUS:85090334567
VL - 10
JO - Scientific Reports
JF - Scientific Reports
SN - 2045-2322
IS - 1
M1 - 14719
ER -