2.5D (visual perception)
2.5D describes effects in visual perception, especially stereoscopic vision, in which the observer's 3D environment is projected onto the 2D planes of the retinas. Although the retinal images are effectively 2D, they support depth perception. One characteristic of stereoscopic depth perception is that judging the relative depth of two items in the field of view, by evaluating the disparity between them, is easier than judging the exact depth of a single, isolated item.[1] In computational settings, 2.5D data can be obtained by combining the output of a range imaging system with 2D images taken by a CCD camera, which allows computer graphics to render and manipulate lifelike human faces.[2]
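The relative ease of disparity judgements can be illustrated with the standard pinhole-stereo relation Z = f·B/d. The following minimal Python sketch (with hypothetical focal length, baseline, and disparity values, not drawn from the cited study) converts disparities to depths and compares two points:

```python
# Minimal sketch: depth from binocular disparity under a simple
# rectified pinhole stereo model (all numbers are hypothetical).
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Return the distance Z (metres) of a point whose image shifts
    by `disparity_px` pixels between the left and right views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Relative judgements (which of two points is nearer) need only the
# sign of the disparity difference, not an exact depth estimate.
z_a = depth_from_disparity(focal_px=800, baseline_m=0.065, disparity_px=20)
z_b = depth_from_disparity(focal_px=800, baseline_m=0.065, disparity_px=25)
print(f"point A at {z_a:.2f} m, point B at {z_b:.2f} m, B nearer: {z_b < z_a}")
```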
2.5D refers to the construction of a three-dimensional representation of the environment from 2D retinal projections.[3][4][5] It underlies the ability to perceive the physical environment and to understand the spatial relationships between objects and ourselves within it.[4] Perception of the physical environment is limited by a visual problem and a cognitive problem: the visual problem is that different objects in three-dimensional space can be imaged with the same projection, and the cognitive problem is that the same object can be interpreted differently depending on the perceiver.[4] David Marr's work on the 2.5D sketch identified constraints on visual projection, which exist because "parts of images are always (deformed) discontinuities in luminance";[4] in other words, we do not see all of our surroundings directly but construct a viewer-centred three-dimensional view of the environment.
A key aspect of the human visual system is blur perception, which plays a vital role in ocular focusing and in attaining a clear central retinal image.[6] Blur perception guides focusing on near or far objects, and retinal focus patterns are critical to it. These patterns are composed of distal and proximal retinal defocus; depending on the distance and motion of the object relative to the viewer, they contain varying balances of focus in the two directions.[7]
Human blur perception encompasses both blur detection and blur discrimination, across the central and peripheral retina. One conceptual model describes blur perception in dioptric space under near-viewing conditions, with implications for depth perception and accommodative control.[8]
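The dioptric framing of blur can be made concrete with a short calculation. The sketch below expresses retinal defocus as the difference between an object's vergence and the eye's accommodative state, both in dioptres; the sign convention and numbers are illustrative and are not parameters of the cited model.

```python
# Minimal sketch: defocus blur in dioptric space (illustrative values,
# not the parameters of the cited blur-perception model).
def vergence_diopters(distance_m):
    """Vergence of light from an object at `distance_m` (D = 1/m)."""
    return 1.0 / distance_m

def retinal_defocus(accommodation_d, object_distance_m):
    """Signed defocus in dioptres: positive = proximal (object nearer
    than the plane of focus), negative = distal (object beyond it)."""
    return vergence_diopters(object_distance_m) - accommodation_d

# Eye accommodated to 0.5 m (2 D): a target at 0.33 m is proximally
# defocused, a target at 1 m is distally defocused.
print(retinal_defocus(accommodation_d=2.0, object_distance_m=0.33))  # ~ +1.0 D
print(retinal_defocus(accommodation_d=2.0, object_distance_m=1.0))   # -1.0 D
```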
The 2.5D range data are obtained by a range imaging system, and the 2D colour image is taken by a CCD camera. The two data sets are processed individually and then combined. The resulting human face model is lifelike and can be manipulated with computer graphics tools; in automatic identification of human faces, it can provide complete details of the face.[9] There are three different approaches to colour edge detection: (a) detect edges in each colour channel independently and then combine them; (b) detect edges in the luminance channel and use the chrominance channels to support further decisions; and (c) treat the colour image as a vector field and use the derivatives of the vector field as the colour gradient for edge detection.[10]
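As a concrete, deliberately simplified illustration of approach (a), the following Python sketch computes gradients in each colour channel with NumPy and combines the per-channel magnitudes into one edge-strength map; the toy image and the max-combination rule are illustrative choices, not the method of the cited work.

```python
import numpy as np

def per_channel_edge_strength(rgb):
    """Approach (a), sketched: detect edges in each colour channel
    independently, then combine the channel gradient magnitudes by
    taking their maximum at every pixel.

    `rgb` is an (H, W, 3) float array with values in [0, 1]."""
    strengths = []
    for c in range(rgb.shape[2]):
        gy, gx = np.gradient(rgb[:, :, c])     # simple finite differences
        strengths.append(np.hypot(gx, gy))     # per-channel gradient magnitude
    return np.max(np.stack(strengths, axis=0), axis=0)

# Toy image: a green square on a dark background responds only in the
# green channel, and the combination step preserves that response.
img = np.zeros((32, 32, 3))
img[8:24, 8:24, 1] = 1.0
edges = per_channel_edge_strength(img)
print(edges.max(), edges[8, 16])   # nonzero response along the square's border
```

Approach (c) would instead derive a single colour gradient from the image treated as a vector field (for example via its structure tensor); the per-channel version is kept here for brevity.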
Uses
2.5D data have been used in an automatic approach to building human face models. The system takes as input a range data set and a colour image of a human face, and processes the two sources separately to derive the information needed to synthesize a lifelike facial model. The range data yield the anatomical sites of facial features and the geometry of the face, while the colour image yields the boundaries of facial features and the attributes of facial textures. Integrating the two sources produces a volumetric facial model.[11] Feature localization can draw on two methods: deformable templates and chromatic edge detection.[12] Human face models have many uses, including medicine, identification, computer animation, and intelligent coding.[13]
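The two-branch flow described above can be summarised in a short skeleton. Everything in the sketch below (the function names, dictionary fields, and landmark heuristic) is a hypothetical placeholder, not the interface or method of the cited system.

```python
import numpy as np

# Hypothetical skeleton of the two-branch flow described above.
def analyse_range_data(range_data):
    """Geometry branch: pick a stand-in landmark (the nearest surface
    point, roughly the nose tip in a frontal scan) and keep the grid."""
    nose_tip = np.unravel_index(np.argmin(range_data), range_data.shape)
    return {"feature_sites": {"nose_tip": nose_tip}, "geometry": range_data}

def analyse_colour_image(colour_image):
    """Appearance branch: a crude edge map plus the texture itself."""
    gy, gx = np.gradient(colour_image.mean(axis=2))
    return {"feature_boundaries": np.hypot(gx, gy), "texture": colour_image}

def build_face_model(range_data, colour_image):
    """Process the two sources separately, then merge the results
    into a single facial-model record."""
    return {**analyse_range_data(range_data), **analyse_colour_image(colour_image)}

# Toy inputs standing in for a real range scan and a CCD colour image.
model = build_face_model(np.random.rand(64, 64), np.random.rand(64, 64, 3))
print(sorted(model))  # ['feature_boundaries', 'feature_sites', 'geometry', 'texture']
```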
2.5D datasets can be conveniently represented on a framework of boxels, axis-aligned non-intersecting boxes that can directly represent objects in the scene or serve as bounding volumes. Leonidas J. Guibas and Yuan Yao showed that axis-aligned disjoint rectangles in the plane can be arranged into four total orders such that any ray meets them in one of the four orders. This result also applies to boxels in this context: there exist four partitionings of the boxels into ordered sequences of disjoint sets, called antichains, such that boxels in one antichain can act as occluders of the boxels in subsequent antichains. The expected runtime of the antichain partitioning is O(n log n), where n is the number of boxels. The partitioning can be used for efficient implementation of virtual drivethroughs and ray tracing.[14]
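The occlusion ordering being exploited can be illustrated by intersecting a ray with a few disjoint axis-aligned boxes and sorting the hits front to back. The Python sketch below uses the standard slab test; it demonstrates the ordering, not Guibas and Yao's antichain construction, and the box coordinates are made up for the example.

```python
# Minimal sketch: front-to-back ordering of disjoint axis-aligned boxes
# ("boxels") along a ray, via the standard slab intersection test.
def ray_box_entry(origin, direction, box_min, box_max):
    """Return the parameter t at which the ray enters the box, or None."""
    t_near, t_far = -float("inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:        # parallel to this slab and outside it
                return None
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        t_near, t_far = max(t_near, min(t0, t1)), min(t_far, max(t0, t1))
    if t_near > t_far or t_far < 0:
        return None
    return max(t_near, 0.0)

# Three disjoint boxels along the x axis; a ray travelling in +x meets
# them in increasing-x order, so nearer boxels can occlude later ones.
boxes = [((4, 0, 0), (5, 1, 1)), ((1, 0, 0), (2, 1, 1)), ((7, 0, 0), (8, 1, 1))]
origin, direction = (0.0, 0.5, 0.5), (1.0, 0.0, 0.0)
hits = [(t, b) for b in boxes if (t := ray_box_entry(origin, direction, *b)) is not None]
print([b for _, b in sorted(hits)])   # front-to-back: x = 1..2, 4..5, 7..8
```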
The range imaging system used in the automatic face-modelling approach above has several practical advantages: because measurement is non-contact, the problems associated with contact measurement are avoided, and the equipment is easier to maintain and safer to use; no recalibration is needed when measuring similar objects; and the system is well suited to measuring facial range data.[15]
Constructing a visual representation of an object is thought to proceed in three successive stages. First, the 2D representation component supports an approximate description of the perceived object. Second, the 2.5D representation component adds detailed visuospatial properties of the object's surface. Third, the 3D representation component adds depth and volume to the representation.[16]
References
- ↑ Read JCA, Phillipson GP, Serrano-Pedraza I, Milner AD, Parker AJ (2010). Stereoscopic Vision in the Absence of the Lateral Occipital Cortex. PLoS ONE 5(9): e12608. doi:10.1371/journal.pone.0012608
- ↑ Kang, C., Chen, Y., & Hsu, W. (1994, January). Automatic approach to mapping a lifelike 2.5D human face. http://journals1.scholarsportal.info.myaccess.library.utoronto.ca/tmp/8086857180618693162.pdf
- ↑ MacEachren, Alan. "GVIS Facilitating Visual Thinking." In How Maps Work: Representation, Visualization, and Design, 355–458. New York: The Guilford Press, 1995.
- ↑ Watt, R.J. and B.J. Rogers. "Human Vision and Cognitive Science." In Cognitive Psychology Research Directions in Cognitive Science: European Perspectives Vol. 1, edited by Alan Baddeley and Niels Ole Bernsen, 10–12. East Sussex: Lawrence Erlbaum Associates, 1989.
- ↑ Wood, Jo, Sabine Kirschenbauer, Jurgen Dollner, Adriano Lopes, and Lars Bodum. "Using 3D in Visualization." In Exploring Geovisualization, edited by Jason Dykes, Alan M. MacEachren, and Menno-Jan Kraak, 295–312. Oxford: Elsevier Ltd, 2005.
- ↑ "Conceptual model of human blur perception 10.1016/j.visres.2006.12.001". Sciencedirect.com. Retrieved 2012-03-14.
- ↑ CiuVred, K. J. (2006). Conceptual model of human blur perception. Vision Research , 47, 1245–1252.
- ↑ Vasudevan, Balamurali, Ciuffreda, Kenneth J, Wang, Bin. VISION RESEARCH, V. 47 (9), 04/2007, pp. 1245-1252
- ↑ Kang Chii-Yuan, Chen Yung-Sheng and Hsu Wen-Hsing. Jan/Feb 1994. Image and Vision Computing Volume 12 Number 1. Butterworth-Heinemann Ltd. web source retrieved Aug 2012. http://resolver.scholarsportal.info.myaccess.library.utoronto.ca/resolve/02628856/v12i0001/5_aatmal2hf
- ↑ http://journals1.scholarsportal.info.myaccess.library.utoronto.ca/tmp/11041743830486644056.pdf
- ↑ "University of Toronto Libraries Portal". Bf4dv7zn3u.search.serialssolutions.com.myaccess.library.utoronto.ca. 1994-01-01. Retrieved 2012-03-14.
- ↑ Abe, T. et al. (1991). Automatic identification of human faces by 3-D shape of surfaces – using vertices of B-spline surface. Systems and Computers in Japan, 22(7), p. 96.
- ↑ Kang, C. Y., Chen, Y. S., & Hsu, W. H. (1994). Automatic approach to mapping a lifelike 2.5D human face. Image and Vision Computing, 12(1), 5–14.
- ↑ "The BOXEL framework for 2.5D data with applications to virtual drivethroughs and ray tracing" (PDF). 2007-11-07. Retrieved 2012-04-03.
- ↑ Kang Chii-Yuan; Chen Yung-Sheng; Hsu Wen-Hsing. Automatic approach to mapping a lifelike 2.5D human face. Image and Vision Computing (January 1994), 12 (1), pp. 6–7.
- ↑ Serge Bouaziz, Annie Magnan, "Contribution of the visual perception and graphic production systems to the copying of complex geometrical drawings: A developmental Study", University Lumiere Lyon 2, 69676 Bron Cedex, France.