
2014 Location Sensitive

posted Apr 8, 2015, 3:04 AM by Seokhwan Kim   [ updated Apr 8, 2015, 3:06 AM ]








We have created a visual interface using the human palm that is location sensitive and always available. To accomplish this, we constructed an augmented space in an actual workspace by installing several depth cameras. To manage and connect the multiple depth cameras, we constructed a distributed system based on a scalable client-server architecture. By merging depth images from the different cameras, the distributed system can track the locations of users within its area of coverage. The system also has a convenient feature that allows users to collect the locations of objects while visualizing the objects via images from the depth cameras. Consequently, the locations of both users and objects are available to the system, thus providing a location-based context for determining which user is close to which object. As a result, the visual interface on the palm becomes location sensitive, which could lead to various applications in daily life. In this paper, we describe the implementation of the aforementioned system and demonstrate its potential applicability.
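The location-based context step described above (deciding which tracked user is closest to which registered object) can be sketched as a nearest-neighbor lookup over the merged tracking data. The object names and coordinates below are hypothetical, not from the actual system:

```python
import math

def nearest_object(user_pos, objects):
    """Return (name, distance) of the object closest to a tracked user.

    user_pos: (x, y, z) position from the merged depth-camera tracking.
    objects:  dict mapping object name -> (x, y, z) collected position.
    """
    best_name, best_dist = None, float("inf")
    for name, pos in objects.items():
        d = math.dist(user_pos, pos)  # Euclidean distance in 3D
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name, best_dist

# Example: one tracked user, two registered objects (hypothetical layout).
objects = {"printer": (2.0, 0.0, 1.0), "lamp": (5.0, 0.0, 3.0)}
print(nearest_object((2.5, 0.0, 1.0), objects))  # ('printer', 0.5)
```

A real deployment would run this per frame over the fused depth images; the sketch only shows the proximity decision itself.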


Demo Video




Publication
  • Seokhwan Kim, Shin Takahashi, Jiro Tanaka, “A Location-Sensitive Visual Interface on the Palm: Interacting with Common Objects in an Augmented Space”, Personal and Ubiquitous Computing, Vol 19-1, pp. 175-187, Springer, 2015. 




2012 See and Select

posted Sep 10, 2012, 10:10 PM by Seokhwan Kim   [ updated Apr 8, 2015, 3:05 AM ]

We prototyped two selection techniques, Point-Tap and Tap-Tap, and conducted experiments to assess their characteristics, in particular how familiarity with a space affects their usability. 

Both techniques were developed to enhance the capability of the general “pointing gesture” and “map with live video” techniques. The goal of both techniques is to acquire a target object in a smart space, and they share the concept of “see-and-select,” which allows users to select an object while seeing it with their own eyes. Consequently, users must rely on the spatial locations of objects when using the techniques.

According to spatial cognition science, humans recognize object locations in two ways, egocentrically and allocentrically, and some work has pointed out that users rely more on allocentric representations once they have become familiar with a space. Indeed, in our experiments, users who were familiar with the space could use the “map with live video” technique more effectively. The two main contributions of this paper are the presentation of the new techniques themselves, and the identification of a major factor for applying the techniques, namely, the users' expected familiarity with a space.
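The papers' exact selection math is not reproduced here, but a common way to implement see-and-select by pointing is an angular-threshold ray test: pick the object whose direction from the hand makes the smallest angle with the pointing vector. This sketch, with hypothetical object names and a 10° threshold, is an assumed illustration rather than the authors' implementation:

```python
import math

def point_tap_target(origin, direction, objects, max_angle_deg=10.0):
    """Select the object best aligned with a pointing ray (illustrative).

    origin/direction: the user's hand position and pointing vector (3D).
    objects: dict name -> (x, y, z). Returns the name of the object whose
    direction from the hand is within max_angle_deg of the ray, or None.
    """
    def angle(v, w):
        dot = sum(a * b for a, b in zip(v, w))
        nv = math.sqrt(sum(a * a for a in v))
        nw = math.sqrt(sum(a * a for a in w))
        # Clamp to guard against floating-point drift outside [-1, 1].
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nv * nw)))))

    best, best_angle = None, max_angle_deg
    for name, pos in objects.items():
        to_obj = tuple(p - o for p, o in zip(pos, origin))
        a = angle(direction, to_obj)
        if a <= best_angle:
            best, best_angle = name, a
    return best

# Pointing straight ahead (+z): the screen is selected, the door is not.
objects = {"screen": (0.0, 0.0, 5.0), "door": (5.0, 0.0, 0.0)}
print(point_tap_target((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), objects))  # screen
```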







Demo video


Publication
  • Seokhwan Kim, Shin Takahashi, Jiro Tanaka, “Point-Tap, Tap-Tap, and The Effect of Familiarity: Enhancing the Usability of See-and-Select in Smart Space”, Transactions of Human Interface Society, Vol. 14-4, pp. 445-455, Human Interface Society, November 2012.

2011 Dual Views

posted Mar 28, 2012, 6:45 PM by Seokhwan Kim   [ updated May 17, 2012, 4:35 PM ]

This is a pure software solution that enables dual views on common TN LCDs.
We can show two different arbitrary images at two customized viewing angles.



As you know, TN LCDs show unintended colors when you look at them from the side.
This is because the liquid crystals (LCs) in the display are rotated horizontally, as the image below illustrates. Thus, viewers perceive different brightness at different viewing angles.



This graph illustrates the brightness variation over different viewing angles; there are several intersection points.
Those intersections make it possible to hide or show specific ranges of colors.
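A minimal sketch of how such intersection points could be located, given two sampled brightness-response curves (one per viewing angle). The toy values below are illustrative, not measured LCD data:

```python
def intersections(curve_a, curve_b):
    """Find pixel values where two brightness-response curves cross.

    curve_a, curve_b: lists of perceived brightness indexed by pixel value,
    e.g. sampled at two different viewing angles of a TN LCD. Returns the
    indices where the sign of (curve_a - curve_b) changes.
    """
    crossings = []
    for i in range(1, len(curve_a)):
        d_prev = curve_a[i - 1] - curve_b[i - 1]
        d_curr = curve_a[i] - curve_b[i]
        if d_prev == 0 or d_prev * d_curr < 0:
            crossings.append(i - 1 if d_prev == 0 else i)
    return crossings

# Toy curves: a monotone front-view response vs. a flatter side-view one.
front = [0, 20, 40, 60, 80, 100]
side  = [30, 35, 38, 55, 70, 90]
print(intersections(front, side))  # [2]
```

Pixel values at (or near) such crossings look identical from one angle but differ from the other, which is what the paper exploits to encode two images.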



Demo Video



Publications
  • Seokhwan Kim, Xiang Cao, Haimo Zhang, Desney S. Tan, “Enabling Concurrent Dual Views on Common LCD Screens”, ACM CHI 2012, pp. 2175-2184, 2012.
  • Seokhwan Kim, Xiang Cao, Haimo Zhang, Desney S. Tan, “Enabling Concurrent Dual Views on Common LCD Screens”, Interactivity of ACM CHI 2012, 2012.

# This work was done as part of an internship at Microsoft Research Asia, summer 2011.

2010 Palm Display

posted Dec 20, 2011, 9:34 PM by Seokhwan Kim   [ updated Oct 2, 2012, 6:23 PM ]

In the future, we expect that more sensing devices, such as cameras and high-resolution display systems, will be present in smart spaces. In such a space, we expect that users can use their own body parts to interact with other systems in the space. As a means toward that future, we designed the palm display system.



We developed a prototype system using a common beam projector and a networked PTZ camera. The PTZ camera enables us to capture a higher-resolution image by zooming in on the user's hand.



Users can enjoy it anywhere within the coverage of the ceiling-mounted projector and camera.
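The prototype's vision processing is marker-free; as a rough stand-in (not the actual pipeline), a classic RGB skin-color rule can segment a palm-like region in a frame:

```python
def skin_mask(image):
    """Rough skin segmentation by thresholding each (r, g, b) pixel.

    A classic rule of thumb (many variants exist): skin pixels tend to have
    r > 95, g > 40, b > 20, with r dominating g and b. 'image' is a list of
    rows of (r, g, b) tuples; returns a same-shaped mask of 0/1 values.
    """
    def is_skin(r, g, b):
        return r > 95 and g > 40 and b > 20 and r > g and r > b

    return [[1 if is_skin(*px) else 0 for px in row] for row in image]

# 2x2 toy frame: one skin-toned pixel, three background pixels.
frame = [[(200, 140, 120), (30, 30, 30)],
         [(10, 200, 10), (0, 0, 255)]]
print(skin_mask(frame))  # [[1, 0], [0, 0]]
```

The real system would then fit the palm region in the mask and warp the projected image onto it; that part is beyond this sketch.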

Demo Video


Publication
  • Seokhwan Kim, Shin Takahashi, Jiro Tanaka, “Interactive projected display on the palm without marker using computer vision processing”, 72nd National Convention of Information Processing Society of Japan, 2010.
  • Seokhwan Kim, Shin Takahashi, Jiro Tanaka, “New interface using palm and fingertip without marker for ubiquitous environment”, 9th IEEE/ACIS International Conference on Computer and Information Science, pp. 819-824, 2010.




2009 iTile

posted Dec 20, 2011, 9:27 PM by Seokhwan Kim   [ updated Oct 2, 2012, 6:23 PM ]

A tiled display constructs an ultra-large, high-resolution display by combining multiple displays. However, developing software for such a display cluster is more complex than developing an application for a single machine. iTile was developed to help with such development. End-developers can easily port OSG (OpenSceneGraph)-based applications to a tiled display. It is also possible to show an existing application in various configurations (i.e., the number of displays per column and row) not by modifying source code but by editing a given script.
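The effect of such a configuration script can be sketched as computing per-display viewports from a columns × rows setting; this is an illustrative model, not iTile's actual script format:

```python
def tile_viewports(cols, rows, total_w, total_h):
    """Compute each display's pixel viewport in a cols x rows tiled wall.

    Changing cols/rows here plays the role of the configuration script:
    the application is re-laid-out without touching its source code.
    Returns {(col, row): (x, y, width, height)}.
    """
    w, h = total_w // cols, total_h // rows
    return {(c, r): (c * w, r * h, w, h)
            for r in range(rows) for c in range(cols)}

# A 2x2 wall driving a 3840x2160 virtual canvas: the top-right display
# renders the quadrant starting at x=1920, y=0.
print(tile_viewports(2, 2, 3840, 2160)[(1, 0)])  # (1920, 0, 1920, 1080)
```

In an OSG-based renderer, each tile's viewport would become an asymmetric view frustum offset for that node's camera.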





Demo Video



Publication
  • Yongjoo Cho, Seokhwan Kim, “Design and implementation of a framework for constructing interactive tiled applications”, Journal of Maritime Information and Communication Sciences, Vol. 13(1), pp.37-44, The Korea Institute of Maritime Information and Communication Sciences, 2009. (Korean)
  • Seokhwan Kim, Minyoung Kim, Suhaw Kim, Jihyoun Kim, Chul-Kee Min, Yongjoo Cho, Kyoung Shin Park, "The Development of Interactive Tiled Display Applications Using the iTILE Framework", HCI Korea 2009, pp. 487-492, HCI Korea, 2009. (Korean)
  • Seokhwan Kim, Minyoung Kim, Yongjoo Cho, Kyoung Shin Park, "iTILE framework for Constructing interactive tiled display Applications", 2009 International conference on computer graphics theory and applications (GRAPP 2009), 2009.


2008 SASILE

posted Dec 20, 2011, 9:21 PM by Seokhwan Kim   [ updated Dec 20, 2011, 9:28 PM ]

SASILE (System for Augmenting Scientific Inquiry Learning Environments) is specialized for helping build applications for scientific inquiry in virtual environments (VE). End-developers can easily produce such materials for science education in VE. HIVE is bundled as a part of the system.





Publication
  • Jaewon Lim, Seokhwan Kim, Yongjoo Cho, Kyoung Shin Park, “The development of the virtual reality system for augmenting scientific inquiry learning environments”, The KIPS transactions, Vol. 15B(2), pp.95-102, Korea Information Processing Society, 2008. (Korean)


2007 HIVE

posted Dec 20, 2011, 9:12 PM by Seokhwan Kim   [ updated Dec 26, 2011, 7:44 PM ]

In virtual reality, a frequently occurring problem is the difficulty of 3D interfaces, because ordinary users are not familiar with them. In this environment, a 2D GUI on a common mobile device can help users navigate the space and collect data. However, developing such GUI applications is time-consuming and has a fairly steep learning curve. This framework (HIVE: Handheld-based Interface development framework for Virtual Environments) supports such development.

The pictures below illustrate the overall layout of the system and the applications developed using HIVE.
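HIVE's actual wire protocol between the handheld GUI and the VE host is not documented here; as an assumed illustration, a JSON message carrying an action name and parameters could relay a navigation command from a map tap to the VE:

```python
import json

def encode_command(action, **params):
    """Serialize a handheld-UI command for the VE host (hypothetical format)."""
    return json.dumps({"action": action, "params": params})

def decode_command(message):
    """Parse a command on the VE side and return (action, params)."""
    data = json.loads(message)
    return data["action"], data["params"]

# A 2D map tap on the handheld becomes a navigation command in the VE.
msg = encode_command("navigate", x=12.5, y=3.0)
print(decode_command(msg))  # ('navigate', {'x': 12.5, 'y': 3.0})
```

The framework's value is in hiding this plumbing so the end-developer only writes the GUI and the VE-side handlers.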



Publication
  • Seokhwan Kim, Yongjoo Cho, “A development of mobile-based user interface framework for virtual environments”, The KIPS Transactions, Vol. 14B(5), pp.343-350, Korea Information Processing Society, 2007. (Korean)
  • Seokhwan Kim, Yongjoo Cho, Kyoung Shin Park, Joa-Sang Lim, "Development of a Handheld user Interface Framework for Virtual Environments", International Conference on Human-Computer Interaction (HCII 2007), LNCS 4563, pp. 253-261, Springer, 2007.


