[Paper Review #12 16/1/17] NormalTouch and TextureTouch: High-fidelity 3D Haptic Shape Rendering on Handheld Virtual Reality Controllers

Hrvoje Benko, Christian Holz, Mike Sinclair, Eyal Ofek
UIST '16, 11 pages excluding references


The paper introduces NormalTouch and TextureTouch, two 3D haptic shape rendering devices. Both are handheld and tracked with 6 DOF. Each has a placeholder for the index finger that renders the surface height and orientation of virtual models: NormalTouch uses a tiltable, height-adjustable platform, while TextureTouch uses a 4x4 pin array to render the surface.
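As I understand the rendering idea, the tracked contact point is mapped into the virtual model and the actuators are driven from the local surface geometry. Below is a minimal sketch, not the authors' code, of how a 4x4 pin array like TextureTouch's could be driven from a height field; the pin pitch, travel range, and function names are my own assumptions.

```python
import numpy as np

# Hypothetical sketch: drive a 4x4 pin array from a virtual height field.
# The real device uses 6-DOF tracking and servo control; this only
# illustrates the sampling step, with made-up parameters.

PIN_GRID = 4          # TextureTouch uses a 4x4 pin array
PIN_PITCH_MM = 2.0    # assumed spacing between pins (not from the paper)
MAX_TRAVEL_MM = 5.0   # assumed maximum pin extension (not from the paper)

def sample_pin_heights(height_field, contact_xy_mm):
    """Sample the virtual surface under the fingertip pad.

    height_field: callable (x_mm, y_mm) -> surface height in mm.
    contact_xy_mm: (x, y) of the tracked contact point in model space.
    Returns a 4x4 array of pin extensions, clamped to the travel range.
    """
    cx, cy = contact_xy_mm
    # Pin offsets centered on the contact point.
    offsets = (np.arange(PIN_GRID) - (PIN_GRID - 1) / 2) * PIN_PITCH_MM
    heights = np.array([[height_field(cx + dx, cy + dy) for dx in offsets]
                        for dy in offsets])
    return np.clip(heights, 0.0, MAX_TRAVEL_MM)

# Example: a sloped virtual surface under the finger.
slope = lambda x, y: 0.5 * x + 2.0
pins = sample_pin_heights(slope, (0.0, 0.0))
```

For NormalTouch, the analogous step would reduce the same local patch to a single height and surface normal for the tilt platform instead of 16 pin extensions.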

Study hypothesis and results

  • H1. Haptic feedback leads to more accurate targeting and tracing compared to VisualOnly feedback. -> True
  • H2. NormalTouch and TextureTouch allow targeting with higher accuracy than VibroTactile, because they render 3D shapes with higher fidelity, facilitating precise touch. -> True
  • H3. TextureTouch produces the lowest error overall, because it renders structure on the participant’s finger as opposed to just the surface normal. -> False
  • H4. Participants complete trials fastest in the VisualOnly condition, because no cues other than visual need cognitive attention and time to process. -> True

What I like in this paper

  • I like the prototypes and appreciate that the authors describe how to build them in detail. It is nice to see the iterations toward the final prototypes in Figure 2.
  • The paper has many figures (21), which help readers understand the implementation of the devices, the surface penetration policy, the evaluation tasks, and the results.
  • The writing is factual, compact, and free of sugar coating, which makes the paper more informative.

How I would continue the study if I were the authors

  • I would like to see a comparison between NormalTouch and Soft Finger Tactile Rendering for Wearable Haptics (http://www.gmrv.es/~gcirio/pdf/SoftFingerTactileRendering_WHC2015.pdf). Judging only from the figures, the latter probably provides less precise position feedback, because I do not see how that system can track the finger position. Then again, it may be good enough, since small distance differences are hard to perceive in virtual space.
  • To improve TextureTouch, I would add a marker on the index finger, allowing the user to swipe across the surface with the finger. I think this could be an advantage over glove-based or exoskeleton devices. It may already be mentioned somewhere in the paper; I have not read it fully.