Global Position Prediction for Interactive Motion Capture
Research output: Contribution to journal › Journal article › Research › peer-review
Standard
Global Position Prediction for Interactive Motion Capture. / Schreiner, Paul; Perepichka, Maksym; Lewis, Hayden; Darkner, Sune; Kry, Paul G.; Erleben, Kenny; Zordan, Victor B.
In: Proceedings of the ACM on Computer Graphics and Interactive Techniques, Vol. 4, No. 3, 3479985, 2021, p. 1-16.
RIS
TY - JOUR
T1 - Global Position Prediction for Interactive Motion Capture
AU - Schreiner, Paul
AU - Perepichka, Maksym
AU - Lewis, Hayden
AU - Darkner, Sune
AU - Kry, Paul G.
AU - Erleben, Kenny
AU - Zordan, Victor B.
PY - 2021
Y1 - 2021
N2 - We present a method for reconstructing the global position of motion capture data where position sensing is poor or unavailable. Capture systems such as IMU suits can provide excellent pose and orientation data for a capture subject, but otherwise need post-processing to estimate global position. We propose a solution that trains a neural network to predict, in real time, the height and body displacement given a short window of pose and orientation data. Our training dataset contains pre-recorded data with global positions from many different capture subjects performing a wide variety of activities, in order to broadly train a network to estimate on both similar and unseen activities. We compare training on two network architectures, a universal network (u-net) and a traditional convolutional neural network (CNN), observing better error properties for the u-net in our results. We also evaluate our method for different classes of motion. We observe high-quality results for motion examples with good representation in specialized datasets, while general performance appears better with a more broadly sampled dataset when input motions are far from the training examples.
KW - IMU
KW - motion capture
KW - neural networks
UR - http://www.scopus.com/inward/record.url?scp=85116454715&partnerID=8YFLogxK
U2 - 10.1145/3479985
DO - 10.1145/3479985
M3 - Journal article
AN - SCOPUS:85116454715
VL - 4
SP - 1
EP - 16
JO - Proceedings of the ACM on Computer Graphics and Interactive Techniques
JF - Proceedings of the ACM on Computer Graphics and Interactive Techniques
SN - 2577-6193
IS - 3
M1 - 3479985
ER -
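The abstract describes mapping a short sliding window of per-frame pose and orientation features to estimates of root height and horizontal body displacement. The following is a minimal NumPy sketch of that windowed input/output formulation only; all dimensions are illustrative, and the linear stand-in predictor replaces the u-net/CNN architectures that the paper actually trains.

```python
import numpy as np

def make_windows(features, window):
    """Stack overlapping windows of shape (window, n_features)."""
    n_frames, _ = features.shape
    return np.stack([features[i:i + window]
                     for i in range(n_frames - window + 1)])

rng = np.random.default_rng(0)
n_frames, n_features, window = 100, 24, 9   # made-up sizes, not from the paper

# Per-frame pose/orientation features, e.g. from an IMU suit.
pose_stream = rng.standard_normal((n_frames, n_features))
windows = make_windows(pose_stream, window)          # (92, 9, 24)

# Stand-in predictor: one linear map per window, producing a height and a
# 2D horizontal displacement; the paper trains a u-net or CNN in its place.
w = rng.standard_normal((window * n_features, 3)) * 0.01
pred = windows.reshape(len(windows), -1) @ w         # (92, 3)
height, dx, dz = pred[:, 0], pred[:, 1], pred[:, 2]
print(windows.shape, pred.shape)
```

Integrating the predicted per-window displacements over time would then recover a global trajectory, which is the post-processing step the paper aims to replace with a real-time prediction.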