{Reference Type}: Journal Article
{Title}: Complexity of mental geometry for 3D pose perception.
{Author}: Guo C; Maruya A; Zaidi Q
{Journal}: Vision Res
{Volume}: 222
{Issue}: 0
{Year}: 2024 Jun 7
{Factor}: 1.984
{DOI}: 10.1016/j.visres.2024.108438
{Abstract}: Biological visual systems rely on pose estimation of 3D objects to navigate and interact with their environment, but the neural mechanisms and computations for inferring 3D poses from 2D retinal images are only partially understood, especially when stereo information is missing. We previously presented evidence that humans infer the poses of 3D objects lying centered on the ground by using the geometrical back-transform from retinal images to viewer-centered world coordinates. This model explained the almost veridical estimation of poses in real scenes and the illusory rotation of poses in obliquely viewed pictures, including the "pointing out of the picture" phenomenon. Here we test this model on more varied configurations and find that it needs to be augmented. Five observers estimated the poses of sloped, elevated, or off-center 3D sticks, each displayed on a monitor in 16 different poses, in frontal and oblique views. Pose estimates in scenes and pictures showed remarkable accuracy and agreement between observers, but with a systematic fronto-parallel bias for oblique poses similar to that in the ground condition. The retinal projection of the pose of an object sloped with respect to the ground depends on the slope. We show that observers' estimates can be explained by the back-transform derived for a slope close to the correct one. The back-transform explanation also applies to obliquely viewed pictures and to off-center and elevated objects, making it more likely that observers use internalized perspective geometry to make 3D pose inferences while actively incorporating inferences about other aspects of object placement.
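The sketch below is not the authors' code; it is a minimal illustration of the kind of geometrical back-transform the abstract describes, restricted to the simplest case treated in the earlier work: a stick lying flat and centered on the ground plane, under a weak-perspective approximation. All names (`project_pose`, `back_transform`, the elevation angle `phi`) are hypothetical. Under these assumptions, the depth axis of the ground plane is foreshortened by sin(phi), giving tan(omega_image) = tan(omega_world) / sin(phi), and the back-transform inverts that relation. The paper's sloped, elevated, and off-center conditions would require further terms in the transform.

```python
import numpy as np

# Illustrative sketch only (not the authors' implementation).
# Assumptions: `phi` is the camera's elevation angle above the ground
# plane; poses are azimuths in radians, measured from the depth
# (line-of-sight) axis; weak perspective compresses depth by sin(phi).

def project_pose(omega_world: float, phi: float) -> float:
    """Project a ground-plane pose to a retinal orientation.

    The depth component shrinks by sin(phi), so
    tan(omega_image) = tan(omega_world) / sin(phi).
    """
    return np.arctan2(np.sin(omega_world),
                      np.cos(omega_world) * np.sin(phi))

def back_transform(omega_image: float, phi: float) -> float:
    """Invert the projection: recover the 3D pose from the retinal
    orientation via tan(omega_world) = tan(omega_image) * sin(phi)."""
    return np.arctan2(np.sin(omega_image) * np.sin(phi),
                      np.cos(omega_image))

if __name__ == "__main__":
    phi = np.deg2rad(30.0)  # hypothetical camera elevation of 30 deg
    for pose_deg in (15.0, 45.0, 75.0):
        omega_world = np.deg2rad(pose_deg)
        omega_image = project_pose(omega_world, phi)
        recovered = back_transform(omega_image, phi)
        print(f"pose {pose_deg:5.1f} deg -> image "
              f"{np.rad2deg(omega_image):5.1f} deg -> recovered "
              f"{np.rad2deg(recovered):5.1f} deg")
```

Applying `back_transform` with a mismatched `phi` (as when a picture is viewed obliquely) rotates the recovered poses, which is one way to read the abstract's account of the illusory rotation in obliquely viewed pictures.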