Updated July 21, 2017

2017

  • F. von Drigalski*, L. El Hafi*, P. M. Uriguen Eljuri*, G. A. Garcia Ricardez*, J. Takamatsu, and T. Ogasawara, “Vibration-Reducing End Effector for Automation of Drilling Tasks in Aircraft Manufacturing”, in IEEE Robotics and Automation Letters (RA-L), vol. 2, no. 4, pp. 1-6, October 2017.
    [Also in Proceedings of 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 2017.]
    [*Authors contributed equally.]
    [Accepted for publication.]

    Abstract: In this paper, we present an end effector that can drill holes compliant to aeronautic standards while mounted on a lightweight robot arm. There is an unmet demand for a robotic solution capable of drilling inside an aircraft fuselage, as size, weight and space constraints disqualify current commercial solutions for this task. Our main contribution is the mechanical design of the end effector with vibration-reducing feet that are pressed against the workpiece during the drilling process to increase stability, and a separate linear actuator to advance the drill. The stabilizing properties of the end effector are confirmed experimentally. The solution took 1st place at the Airbus Shopfloor Challenge, an international robotics competition held at ICRA 2016 that modeled the in-fuselage drilling task.

  • L. El Hafi, M. Ding, J. Takamatsu, and T. Ogasawara, “STARE: Real-time, Wearable, Simultaneous Gaze Tracking and Object Recognition from Eye Images”, in SMPTE Motion Imaging Journal, vol. 126, no. 6, August 2017.
    [Accepted for publication.]

    Abstract: We propose STARE, a wearable system to perform real-time, simultaneous eye tracking and focused object recognition for daily-life applications in varied illumination environments. Our proposed method uses a single camera sensor to evaluate the gaze direction and requires neither a front-facing camera nor infrared sensors. To achieve this, we describe 1) a model-based approach to estimate the gaze direction using RGB eye images, 2) a method to recognize objects in the scene reflected on the cornea in real time, and 3) a 3D-printable prototype of a wearable gaze-tracking device. We verify the validity of our approach experimentally with different types of cameras in different illumination settings, and with a proof-of-concept implementation of a state-of-the-art neural network. The proposed system can be used as a framework for RGB-based eye tracking and human behavior analysis.

  • G. A. Garcia Ricardez*, L. El Hafi*, F. von Drigalski*, R. Elizalde Zapata, C. Shiogama, K. Toyoshima, P. M. Uriguen Eljuri, M. Gall, A. Yuguchi, A. Delmotte, V. G. Hoerig, W. Yamazaki, S. Okada, Y. Kato, R. Futakuchi, K. Inoue, K. Asai, Y. Okazaki, M. Yamamoto, M. Ding, J. Takamatsu, and T. Ogasawara, “Climbing on Giant’s Shoulders: Newcomer’s Road into the Amazon Robotics Challenge 2017”, in Proceedings of 2017 IEEE/RAS Warehouse Picking Automation Workshop (WPAW 2017), Singapore, Singapore, May 2017.
    [*Authors contributed equally.]

    Abstract: The Amazon Robotics Challenge has become one of the biggest robotic challenges in the field of warehouse automation and manipulation. In this paper, we present an overview of materials available for newcomers to the challenge and what we learned from the previous editions, and we discuss the new challenges within the Amazon Robotics Challenge 2017. We also outline how we developed our solution, the results of an investigation on suction cup size, and some notable difficulties we encountered along the way. Our aim is to speed up development for those who come after us and who, as first-time contenders like us, have to develop a solution from scratch.

  • L. El Hafi, M. Ding, J. Takamatsu, and T. Ogasawara, “眼球画像を用いた視線追跡と物体認識 日常生活のための装着型デバイス”, in Proceedings of 2017 JSME Robotics and Mechatronics Conference (ROBOMECH 2017), Fukushima, Japan, May 2017. [1]

    Abstract (translated from Japanese): In this paper, we improve our previously proposed method for estimating the gaze direction from eye images and propose a method for recognizing the gazed object using a new wearable device. First, to estimate the gaze direction and the point of gaze, we use a model-based approach built on our earlier work. Next, using the eye images and the estimated gaze direction, we extract the region corresponding to the gaze target from the image reflected on the cornea and recognize the object with deep learning. Because the gaze region can be extracted automatically, the effort required to build training sets for deep learning is reduced.

  • L. El Hafi, M. Ding, J. Takamatsu, and T. Ogasawara, “Gaze Tracking and Object Recognition from Eye Images”, in Proceedings of 2017 IEEE International Conference on Robotic Computing (IRC 2017), pp. 310-315, Taichung, Taiwan, April 2017.

    Abstract: This paper introduces a method to identify the focused object in eye images captured from a single camera in order to enable intuitive eye-based interactions using wearable devices. Indeed, eye images provide not only natural user responses from eye movements but also the scene reflected on the cornea, without the need for additional sensors such as a frontal camera, which makes the device more socially acceptable. The proposed method relies on a 3D eye model reconstruction to evaluate the gaze direction from the eye images. The gaze direction is then used in combination with deep learning algorithms to classify the focused object reflected on the cornea. Finally, the experimental results using a wearable prototype demonstrate the potential of the proposed method solely based on eye images captured from a single camera.

2016

  • L. El Hafi, M. Ding, J. Takamatsu, and T. Ogasawara, “Gaze Tracking Using Corneal Images Captured by a Single High-Sensitivity Camera”, in The Best of IET and IBC 2016-2017, vol. 8, pp. 19-24, September 2016.
    [Also in Proceedings of 2016 International Broadcasting Convention (IBC 2016), pp. 33-43, Amsterdam, Netherlands, September 2016.]

    Abstract: This paper introduces a method to estimate the gaze direction using images of the eye captured by a single high-sensitivity camera. The purpose is to develop wearable devices that enable intuitive eye-based interactions and applications. Indeed, camera-based solutions, as opposed to commercially available infrared-based ones, allow wearable devices to obtain not only natural user responses from eye movements but also scene images reflected on the cornea, without the need for additional sensors. The proposed method relies on a model approach to evaluate the gaze direction and does not require a frontal camera to capture scene information, making it more socially acceptable if embedded in a glasses-shaped device. Moreover, recent developments in high-sensitivity camera sensors make the proposed method viable even in low-light conditions. Finally, experimental results using a prototype wearable device demonstrate the potential of the proposed method solely based on cornea images captured from a single camera.

  • F. von Drigalski*, L. El Hafi*, P. M. Uriguen Eljuri*, G. A. Garcia Ricardez*, J. Takamatsu, and T. Ogasawara, “NAIST Drillbot: Drilling Robot at the Airbus Shopfloor Challenge”, in Proceedings of 2016 Annual Conference of the Robotics Society of Japan (RSJ 2016), Yamagata, Japan, September 2016.
    [*Authors contributed equally.]

    Abstract: We propose a complete, modular robotic solution for industrial drilling tasks in an aircraft fuselage. The main contribution is a custom-made end effector with vibration-reducing feet that rest on the workpiece during the drilling process to increase stability. The solution took 1st place at the Airbus Shopfloor Challenge, an international robotics competition held at ICRA 2016.

  • L. El Hafi, P. M. Uriguen Eljuri, M. Ding, J. Takamatsu, and T. Ogasawara, “Wearable Device for Camera-Based Eye Tracking: Model Approach Using Cornea Images”, in Proceedings of 2016 JSME Robotics and Mechatronics Conference (ROBOMECH 2016), Yokohama, Japan, June 2016.

    Abstract: The industry’s recent growing interest in virtual reality, augmented reality and smart wearable devices has created a new momentum for eye tracking. Eye movements in particular are viewed as a way to obtain natural user responses from wearable devices alongside gaze information used to analyze interests and behaviors. This paper extends our previous work by introducing a wearable eye-tracking device that enables the reconstruction of a 3D model of each eye from two RGB cameras. The proposed device is built using high-resolution cameras and a 3D-printed frame attached to a pair of JINS MEME glasses. The 3D eye models reconstructed from the proposed device can be used with any model-based eye-tracking approach. The proposed device is also capable of extracting scene information from the cornea reflections captured by the cameras, detecting blinks with an electrooculography sensor, and tracking head movements with an accelerometer combined with a gyroscope.

2015

  • L. El Hafi, K. Takemura, J. Takamatsu, and T. Ogasawara, “Model-Based Approach for Gaze Estimation from Corneal Imaging Using a Single Camera”, in Proceedings of 2015 IEEE/SICE International Symposium on System Integration (SII 2015), pp. 88-93, Nagoya, Japan, December 2015.

    Abstract: This paper describes a method to estimate the gaze direction using cornea images captured by a single camera. The purpose is to develop wearable devices capable of obtaining natural user responses, such as interests and behaviors, from eye movements and scene images reflected on the cornea. From an image of the eye, an ellipse is fitted to the colored iris area. A 3D eye model is reconstructed from the ellipse and rotated to simulate projections of the iris area for different eye poses. The gaze direction is then evaluated by matching the iris area of the current image with the corresponding projection obtained from the model. Finally, we conducted an experiment using a head-mounted prototype to demonstrate the potential of such an eye-tracking method solely based on cornea images captured from a single camera.
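
    As a minimal illustration of the geometry behind this model-based matching (not the paper's implementation), the sketch below recovers an approximate gaze direction from a hypothetical binary iris mask: under a weak-perspective assumption, the circular limbus projects to an ellipse whose axis ratio encodes the eye's rotation away from the camera axis. The mask input, the weak-perspective model and the handling of the two mirror-symmetric solutions are all simplifying assumptions.

      import numpy as np

      def gaze_from_iris_mask(iris_mask):
          """Approximate gaze direction (unit vector, camera frame) from a
          binary iris mask, assuming weak perspective and a circular limbus."""
          ys, xs = np.nonzero(iris_mask)
          pts = np.stack([xs, ys], axis=1).astype(float)
          cov = np.cov(pts, rowvar=False)              # 2x2 covariance of iris pixels
          evals, evecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
          b, a = np.sqrt(evals[0]), np.sqrt(evals[1])  # semi-minor, semi-major (up to scale)
          tilt = np.arccos(np.clip(b / a, 0.0, 1.0))   # tilt of the limbus plane
          minor_dir = evecs[:, 0]                      # image direction of the projected eye axis
          # Optical axis tilted by `tilt` towards the minor-axis direction;
          # the mirror-symmetric second solution is ignored in this sketch.
          g = np.array([np.sin(tilt) * minor_dir[0],
                        np.sin(tilt) * minor_dir[1],
                        -np.cos(tilt)])
          return g / np.linalg.norm(g)

      # Toy usage with a synthetic elliptical iris mask.
      yy, xx = np.mgrid[0:200, 0:200]
      mask = (((xx - 100) / 40.0) ** 2 + ((yy - 100) / 25.0) ** 2) <= 1.0
      print(gaze_from_iris_mask(mask))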

  • A. Yuguchi, R. Matsura, R. Baba, Y. Hakamata, W. Yamazaki, F. von Drigalski, L. El Hafi, S. Tsuichihara, M. Ding, J. Takamatsu, and T. Ogasawara, “Development of Robot Control Components for Ball-catching Task Using Motion Capture Device”, in Proceedings of 2015 SICE Symposium on System Integration (SI 2015), Nagoya, Japan, December 2015. [1]

    Abstract: This paper describes the design and implementation of RT-middleware components for a ball-catching task by humanoid robots. We create a component that obtains the position of a thrown reflective ball from a motion capture device, and another component that estimates the ball’s trajectory and the point where it will fall. The estimate is then used, together with the control component, to catch the ball with an HRP-4 humanoid robot.
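
    The trajectory-estimation step can be pictured as fitting a ballistic model to the motion-capture samples and solving for the point where the ball reaches a catch height. The sketch below is only an illustration under that assumption (air drag neglected; the function name and catch height are hypothetical), not the RT-middleware component itself.

      import numpy as np

      G = 9.81  # gravitational acceleration [m/s^2]

      def predict_landing(times, positions, catch_height=0.8):
          """times: (N,) seconds; positions: (N, 3) metres; returns (x, y, t_catch)."""
          t = np.asarray(times, dtype=float)
          p = np.asarray(positions, dtype=float)
          A = np.stack([np.ones_like(t), t], axis=1)          # [1, t] design matrix
          (x0, vx), _, _, _ = np.linalg.lstsq(A, p[:, 0], rcond=None)
          (y0, vy), _, _, _ = np.linalg.lstsq(A, p[:, 1], rcond=None)
          # Vertical axis: remove the known gravity term, then fit z0 + vz * t.
          (z0, vz), _, _, _ = np.linalg.lstsq(A, p[:, 2] + 0.5 * G * t ** 2, rcond=None)
          # Solve z0 + vz*t - 0.5*G*t^2 = catch_height for the later root
          # (assumes the ball actually reaches catch_height on the way down).
          a, b, c = -0.5 * G, vz, z0 - catch_height
          t_catch = (-b - np.sqrt(b ** 2 - 4 * a * c)) / (2 * a)
          return x0 + vx * t_catch, y0 + vy * t_catch, t_catch

      # Toy usage: samples from a synthetic throw.
      ts = np.linspace(0.0, 0.3, 10)
      ps = np.stack([1.0 + 2.0 * ts, 0.5 * ts, 1.2 + 4.0 * ts - 0.5 * G * ts ** 2], axis=1)
      print(predict_landing(ts, ps))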

2014

  • L. El Hafi, J.-B. Lorent, and G. Rouvroy, “Mapping SDI with a Light-Weight Compression for High Frame Rates and Ultra-HD 4K Transport over SMPTE 2022-5/6”, in Proceedings of VSF VidTrans14 Content in Motion, Annual Technical Conference and Exposition, Arlington, United States, February 2014.

    Abstract: Considering the bandwidth required for the next generation of television, with higher video resolutions and higher frame rates, live uncompressed transport across a 10 Gigabit Ethernet network is not always possible. Indeed, uncompressed 4K video at 60 fps requires 12 Gbps or more. Lightweight compression can address this challenge. A purely lossless codec would be ideal; however, the compression ratio achievable by a lossless codec is in general difficult to predict. Therefore, a lightweight, visually lossless compression that guarantees a very low compression ratio with no impact on latency appears optimal for mapping SDI links over SMPTE 2022-5/6.
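
    A back-of-the-envelope check of these figures (not taken from the paper), assuming 3840x2160 at 60 fps with 10-bit sampling and counting active pixels only; real SDI links are higher still once blanking is included:

      # Approximate uncompressed bit rates for UHD 4K at 60 fps.
      width, height, fps = 3840, 2160, 60
      bpp_422_10bit = 10 + 5 + 5      # Y at full rate, Cb/Cr at half horizontal rate
      bpp_444_10bit = 3 * 10

      def gbps(bits_per_pixel):
          return width * height * fps * bits_per_pixel / 1e9

      print(gbps(bpp_422_10bit))      # ~10 Gbps of active video: already fills a 10 GbE link
      print(gbps(bpp_444_10bit))      # ~15 Gbps for 4:4:4 sampling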

2013

  • L. El Hafi* and T. Denison*, “TICO: Study of a Low-Complexity Video Compression Scheme for FPGA”, Université catholique de Louvain (UCL) & intoPIX, Louvain-la-Neuve, Belgium, June 2013. [2]
    [*Authors contributed equally.]

    Abstract (translated from French): The evolution of display technology, in terms of screen resolution, frame rate and color depth, calls for new compression systems, in particular to reduce the power consumed at video interfaces. To address this problem, the Video Electronics Standards Association (VESA) consortium issued a call for proposals in January 2013 for the creation of a new compression standard, Display Stream Compression (DSC). This work, carried out in collaboration with intoPIX, responds to VESA’s call and proposes Tiny Codec (TICO), a video compression scheme of low hardware complexity. It describes, on the one hand, the algorithmic study of an entropy coder inspired by Universal Variable Length Coding (UVLC) that achieves an efficiency of 85% on filmed content and, on the other hand, the FPGA implementation of a horizontal 5:3 discrete wavelet transform that processes 4K video streams at up to 120 frames per second. The resulting implementation consumes 340 slices per color component on Xilinx’s low-power Artix-7 platforms.
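
    For illustration only (the FPGA design itself is not reproduced here): a 5:3 wavelet of this kind is typically the reversible LeGall 5/3 lifting scheme, as used in JPEG 2000; the sketch below applies it to a single image row with symmetric boundary extension.

      # Reversible LeGall 5/3 lifting along one image row
      # (integer samples, even-length rows).
      def dwt53_row(row):
          """Return (lowpass, highpass) coefficient lists for one row."""
          n = len(row)
          assert n >= 2 and n % 2 == 0

          def x(i):                       # whole-sample symmetric extension
              if i < 0:
                  i = -i
              if i >= n:
                  i = 2 * (n - 1) - i
              return row[i]

          # Predict step: high-pass coefficients from the odd samples.
          d = [x(2 * i + 1) - ((x(2 * i) + x(2 * i + 2)) >> 1) for i in range(n // 2)]

          def dd(i):                      # left boundary: d[-1] == d[0]
              return d[0] if i < 0 else d[i]

          # Update step: low-pass coefficients from the even samples.
          s = [x(2 * i) + ((dd(i - 1) + dd(i) + 2) >> 2) for i in range(n // 2)]
          return s, d

      # Toy usage on an 8-pixel row.
      low, high = dwt53_row([10, 12, 15, 14, 13, 11, 9, 8])
      print(low, high)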

  [1] Published in Japanese.

  [2] Published in French.