@inproceedings{SBRH09a,
abstract = {A number of researchers have reported that a fully-articulated
visual representation of oneself in an immersive virtual environment (IVE)
has considerable impact on social interaction and on the subjective sense
of presence in the virtual world. Therefore, many approaches address this
challenge and incorporate a virtual model of the user's body in the VE.
Usually, a fully-articulated visual identity, or so-called ``virtual
body'', is manipulated according to user motions, which are defined by
feature points detected by a tracking system. To this end, markers have to
be attached to certain feature points, as is done, for instance, with
full-body motion coats which have to be worn by the user. Such
instrumentation is unsuitable in scenarios which involve multiple persons
simultaneously or in which participants change frequently. Furthermore,
individual characteristics such as skin pigmentation, hair, or clothing
are not considered by this procedure, in which the tracked data is always
mapped to the same invariant 3D model.
In this paper we present a software-based approach that allows us to
incorporate a realistic visual identity of oneself in the VE and that can
be integrated easily into existing hardware setups. In our setup we focus
on the visual representation of a user's arms and hands. The idea is to
make use of images captured by cameras attached to video-see-through
head-mounted displays. These egocentric frames can be segmented into a
foreground showing parts of the human body, i.e., the user's hands, and a
background. The extremities can then be overlaid onto the user's current
view of the virtual world, and thus a high-fidelity virtual body can be
visualized.},
added-at = {2011-07-05T13:25:17.000+0200},
author = {Steinicke, Frank and Bruder, Gerd and Rothaus, Kai and Hinrichs, Klaus H.},
biburl = {https://www.bibsonomy.org/bibtex/24ce4fb24bf43706251f5e44112986a27/mcm},
  booktitle = {Proceedings of the Virtual Reality International Conference},
interhash = {8a9f506a8099a1fb98cc4ae92cb79625},
intrahash = {4ce4fb24bf43706251f5e44112986a27},
keywords = {camera egocentric identity images myown visual},
pages = {289--290},
publisher = {IEEE Press},
timestamp = {2012-04-05T10:42:06.000+0200},
title = {Visual Identity from Egocentric Camera Images for Head-Mounted Display Environments},
url = {http://www.bibsonomy.org/documents/4ce4fb24bf43706251f5e44112986a27/mcm/av_vric.pdf?qrcode=false},
  year = {2009}
}