
Adapting RGB pose estimation to new domains

dc.contributor.authorMulay, Gururaj, author
dc.contributor.authorDraper, Bruce, advisor
dc.contributor.authorBeveridge, J. Ross, advisor
dc.contributor.authorMaciejewski, Anthony, committee member
dc.date.accessioned2019-06-14T17:06:55Z
dc.date.available2019-06-14T17:06:55Z
dc.date.issued2019
dc.descriptionZip file contains supplementary videos.
dc.description.abstractMany multi-modal human-computer interaction (HCI) systems interact with users in real time by estimating the user's pose. Generally, they estimate human poses using depth sensors such as the Microsoft Kinect. For multi-modal HCI interfaces to gain traction in the real world, however, it would be better for pose estimation to be based on data from RGB cameras, which are more common and less expensive than depth sensors. This has motivated research into pose estimation from RGB images. Convolutional Neural Networks (CNNs) represent the state of the art in this literature, for example [1–5] and [6]. These systems estimate 2D human poses from RGB images. A problem with current CNN-based pose estimators is that they require large amounts of labeled data for training. If the goal is to train an RGB pose estimator for a new domain, the cost of collecting and, more importantly, labeling data can be prohibitive. A common solution is to train on publicly available pose data sets, but then the trained system is not tailored to the domain. We propose using RGB+D sensors to collect domain-specific data in the lab, and then training the RGB pose estimator using skeletons automatically extracted from the RGB+D data. This paper presents a case study of adapting the RMPE pose estimation network [4] to the domain of the DARPA Communicating with Computers (CWC) program [7], as represented by the EGGNOG data set [8]. We chose RMPE because it predicts both joint locations and Part Affinity Fields (PAFs) in real time. Our adaptation of RMPE trained on automatically-labeled data outperforms the original RMPE on the EGGNOG data set.
dc.format.mediumborn digital
dc.format.mediummasters theses
dc.format.mediumZIP
dc.format.mediumMPEG
dc.identifierMulay_colostate_0053N_15457.pdf
dc.identifier.urihttps://hdl.handle.net/10217/195409
dc.languageEnglish
dc.language.isoeng
dc.publisherColorado State University. Libraries
dc.relation.ispartof2000-2019
dc.rightsCopyright and other restrictions may apply. User is responsible for compliance with all applicable laws. For information about copyright law, please see https://libguides.colostate.edu/copyright.
dc.subjectCWC
dc.subjecthuman pose estimation
dc.subjectRMPE
dc.subjectHCI
dc.subjectconvolutional neural networks
dc.subjectMicrosoft Kinect
dc.titleAdapting RGB pose estimation to new domains
dc.typeText
dcterms.rights.dplaThis Item is protected by copyright and/or related rights (https://rightsstatements.org/vocab/InC/1.0/). You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
thesis.degree.disciplineComputer Science
thesis.degree.grantorColorado State University
thesis.degree.levelMasters
thesis.degree.nameMaster of Science (M.S.)
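
The abstract describes training the RGB pose estimator on 2D skeletons extracted automatically from RGB+D recordings. The thesis's own labeling pipeline is not reproduced here; the sketch below is an illustration only, assuming hypothetical camera intrinsics and joint positions, of the kind of pinhole projection that turns camera-space 3D joints from a depth sensor into 2D pixel labels for the corresponding RGB frame.

# Minimal sketch (not the thesis's pipeline): project depth-sensor 3D joints
# into the RGB image plane to produce 2D training labels automatically.
# The intrinsics (fx, fy, cx, cy) and joint coordinates are illustrative.
import numpy as np

def project_joints(joints_3d, fx=1050.0, fy=1050.0, cx=960.0, cy=540.0):
    """Project (N, 3) camera-space joints in meters to (N, 2) pixel coords."""
    joints_3d = np.asarray(joints_3d, dtype=float)
    x, y, z = joints_3d[:, 0], joints_3d[:, 1], joints_3d[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=1)

# Example: three camera-space joints (head, left hand, right hand), in meters.
skeleton = [[0.00, -0.60, 2.1],
            [-0.35, 0.10, 2.0],
            [0.35, 0.12, 2.0]]
print(project_joints(skeleton))  # 2D labels for the corresponding RGB frame

In a setup like this, the projected 2D joints would serve as ground-truth annotations for each RGB frame, replacing manual labeling when adapting the estimator to a new domain.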

Files

Original bundle

Name: Mulay_colostate_0053N_15457.pdf
Size: 1.65 MB
Format: Adobe Portable Document Format

Name: supplemental.zip
Size: 10.98 MB
Format: Zip File