Monday, March 6, 2017

Mutual Information Representation between Physical and Digital World

“Imagine a spherical being living outside of any gravitational field, with no knowledge or imagination of any other kind of experience. What could UP possibly mean to such a being?” [1] Humans’ understanding of the world and their use of imagination are closely tied to embodiment. We have developed our cognition through our bodies and perceptions. Nowadays, however, it seems that we have confined our bodies to the keyboard, mouse, and touch screen when we learn concepts and develop ideas.

Computer science has transformed the way humans accumulate and process information. However, the channels through which humans can manipulate and comprehend information are still limited. For instance, when biologists synthesize a medicine, they need to sit in front of a computer and manipulate virtual molecules. Why can’t they play with Lego-like physical objects, so that they can see which block fits where and “feel” when one block connects to another? Why don’t we have a tool for that?

Shape-changing interfaces are a promising way to bridge the gap between the digital and physical worlds. They can physicalize digital data for display. They can also leverage the full human body for creative ideation and natural human-to-human communication. The barriers to realizing shape-changing interfaces keep getting lower, as HCI researchers adopt technologies from robotics, material science, and chemistry.

As a PhD student, I wish to contribute to this broad stream of shape-changing interface research, and to see its fundamental, positive impact on how humans develop knowledge.

[1] Shapiro, Larry (2007). "The Embodied Cognition Programme". Philosophy Compass 2(2). doi:10.1111/j.1747-9991.2007.00064.x

Tuesday, February 7, 2017

How to shorten your rebuttal?

Once you finish the first draft of your rebuttal, you will likely find that it exceeds the character limit. You want to answer all of the reviewers' questions, but space is limited (for CHI, only 5,000 characters).

There are a number of tips from great HCI researchers on writing good rebuttals. You can refer to them when preparing your rebuttal. If you don't have much rebuttal experience, it's always good to read them before the rebuttal period to save time.

As a PhD student, I don't have such great tips, but I can share some tips for shortening your rebuttal, which actually helped me while writing a CHI rebuttal last year. They are somewhat dirty, but useful.
For the record, I learned most of them from my colleague, Huy Viet Le.

  1. Passive voice --> active voice
  2. #AC --> #R
  3. And, but --> ;
  4. did not --> didn't or didnt. Use it at most once, only when you are desperate.
  5. Previous work --> prior work
  6. Cut a long sentence into two sentences. It can save space for connecting terms such as which, that, etc.
  7. Introduce abbreviations, e.g., KnobSlider -> KS
  8. Remove spaces where their absence is hardly noticeable. E.g., We will add the suggested statistical analysis. (R1, R4) -> We will add the suggested statistical analysis.(R1,R4)

Keep in mind that your writing must remain academically sound. I put additional links that give you more tips.
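The purely mechanical substitutions above (tips 2, 4, 5, 7, and 8) could even be scripted. Here is a minimal Python sketch; the `shorten_rebuttal` helper and its rule set are my own illustration, not an existing tool, and tips 1 and 6 (voice and sentence splitting) still need a human editor:

```python
import re

def shorten_rebuttal(text, abbreviations=None):
    """Apply mechanical character-saving rules and return the shortened text."""
    rules = [
        (r"\bprevious work\b", "prior work"),  # tip 5
        (r"\bdid not\b", "didn't"),            # tip 4 (use sparingly!)
        (r"#AC\b", "#R"),                      # tip 2
        (r"\. \(R", ".(R"),                    # tip 8: drop space before (R...)
        (r"(R\d), ", r"\1,"),                  # tip 8: drop space between reviewer IDs
    ]
    for pattern, repl in rules:
        text = re.sub(pattern, repl, text)
    # tip 7: replace system names with abbreviations, e.g. {"KnobSlider": "KS"}
    for name, abbr in (abbreviations or {}).items():
        text = text.replace(name, abbr)
    return text

before = "As previous work on KnobSlider did not report, we will add analysis. (R1, R4)"
after = shorten_rebuttal(before, {"KnobSlider": "KS"})
print(after)
print(len(before) - len(after), "characters saved")
```

Counting saved characters per rule would also show which of your habits cost the most against the 5,000-character limit.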

Wednesday, January 25, 2017

FabLab around Stuttgart
They have a broad range of courses, including electronics, wood, metal, etc. Personally, I'm interested in making wooden furniture, so I will probably participate in one of the courses.
Siemensstraße 140, 70469 Stuttgart
They hold courses or events two to three times per month.
It runs on a reservation basis, but free visits are possible after 18:30.
Walter-Simon-Str. 14, 72072 Tübingen

Tuesday, January 17, 2017

[Paper Review #12 16/1/17] NormalTouch and TextureTouch: High-fidelity 3D Haptic Shape Rendering on Handheld Virtual Reality Controllers

NormalTouch and TextureTouch: High-fidelity 3D Haptic Shape Rendering on Handheld Virtual Reality Controllers

Hrvoje Benko, Christian Holz, Mike Sinclair, Eyal Ofek
UIST '16, 11 pages excluding references


The paper introduces NormalTouch and TextureTouch, two 3D haptic shape rendering devices. Both devices are handheld and tracked with 6 DOF. Each has a placeholder for the index finger that renders the surface height and orientation of virtual models. NormalTouch uses a tiltable, height-adjustable platform, and TextureTouch uses a 4x4 pin array to render the surface.

Study hypothesis and results

  • H1. Haptic feedback leads to more accurate targeting and tracing compared to VisualOnly feedback. -> True
  • H2. NormalTouch and TextureTouch allow targeting with higher accuracy than VibroTactile, because they render 3D shapes with higher fidelity, facilitating precise touch. -> True
  • H3. TextureTouch produces the lowest error overall, because it renders structure on the participant’s finger as opposed to just the surface normal. -> False
  • H4. Participants complete trials fastest in the VisualOnly condition, because no cues other than visual need cognitive attention and time to process. -> True

What I like in this paper

  • I like the prototypes, and I appreciate that the authors describe how to build them in detail. It's nice to see their progress toward a good prototype in Figure 2.
  • The paper has many figures (21) that help readers understand the implementation of the devices, the surface penetration policy, the evaluation tasks, and the results.
  • The writing is factual and compact, with no sugar coating. This makes the paper more informative.

How I would continue the study if I were the authors

  • I want to see the difference between NormalTouch and "Soft finger tactile rendering for wearable haptics". Judging only from the figures, the latter probably provides less precise position feedback, because I don't see how that system can track the finger position. Maybe it is good enough, since it's hard to recognize small distance differences in virtual space.
  • To improve TextureTouch, I would add a marker on the index finger, allowing the user to swipe the surface with the finger. I think this could be an advantage compared to glove-based or exoskeleton devices. Maybe it's mentioned somewhere in the paper; I haven't read it fully.

Friday, January 13, 2017

[Paper Review #11 13/1/17] TRing: Instant and Customizable Interactions with Objects Using an Embedded Magnet and a Finger-Worn Device

TRing: Instant and Customizable Interactions with Objects Using an Embedded Magnet and a Finger-Worn Device

Sang Ho Yoon, Yunbo Zhang, Ke Huo, Karthik Ramani
UIST '16, 11 pages excluding references


TRing offers a novel method for making plain objects interactive using an embedded magnet and a finger-worn device. The device provides the relative position of the embedded magnet using a particle-filter-based magnetic sensing technique. The authors also offer a magnet placement algorithm that guides where to install the magnet based on the user’s interface customization. The system shows an average tracking error of 8.6 mm in 3D space and 4.96 mm in 2D space. The paper reports a detailed performance evaluation and new interaction techniques.

What I like

  • Good introduction. It starts with DIY trends and technologies and refers to related work well. The first paragraph was informative for me.
  • The related work section is also very informative; I learned about a lot of related work that I didn’t know before.
  • The writing does not criticize previous work. For instance, it says “our approach also introduces a simple way…” Instead of criticizing, it says the previous work provided a simple way. Here is another example: “Although these works suggest a new way of …, they do not focus on …” It acknowledges that the previous work has contributions but simply covers different research questions.

Further to read (Magnetic sensing)

  • Liang, R.-H., Kuo, H.-C., Chan, L., Yang, D.-N., and Chen, B.-Y. GaussStones: Shielded magnetic tangibles for multi-token interactions on portable displays. In Proc. UIST ’14, ACM (2014), 365–372. 
  • Huang, J., Mori, T., Takashima, K., Hashi, S., and Kitamura, Y. IM6D: Magnetic tracking system with 6-DOF passive markers for dexterous 3D interaction and motion. ACM Transactions on Graphics 34, 6 (2015), 217.
  • Cheng, K.-Y., Liang, R.-H., Chen, B.-Y., Laing, R.-H., and Kuo, S.-Y. iCon: Utilizing everyday objects as additional, auxiliary and instant tabletop controllers. In Proc. CHI ’10, ACM (2010), 1155–1164. (Passive magnetic source)
  • Chan, L., Liang, R.-H., Tsai, M.-C., Cheng, K.-Y., Su, C.-H., Chen, M. Y., Cheng, W.-H., and Chen, B.-Y. FingerPad: Private and subtle interaction using fingertips. In Proc. UIST ’13, ACM (2013), 255–260. (hall sensor array)
  • Chen, K.-Y., Lyons, K., White, S., and Patel, S. uTrack: 3D input using two magnetic sensors. In Proc. UIST ’13, ACM (2013), 237–244. (using a magnetometer)