Updated September 21, 2019

Under Review

  • L. El Hafi, S. Isobe, Y. Tabuchi, Y. Katsumata, H. Nakamura, T. Fukui, T. Matsuo, G. A. Garcia Ricardez, M. Yamamoto, A. Taniguchi, Y. Hagiwara, and T. Taniguchi, “System for Augmented Human-Robot Interaction Through Mixed Reality and Robot Training by Non-Experts in Customer Service Environments”, in RSJ Advanced Robotics, vol. 34, no. 3, February 2020.

    Abstract: Human-robot interaction during general service tasks in home or retail environments has proven challenging, partly because 1) robots lack high-level context-based cognition, and 2) humans cannot intuit the perception state of robots as they can for other humans. To solve these two problems, we present a complete robot system, awarded the highest evaluation score at the Customer Interaction Task of the Future Convenience Store Challenge at the World Robot Summit 2018, which implements several key technologies: 1) hierarchical spatial concept formation for general robot task planning, and 2) a mixed reality interface that enables users to intuitively visualize the current state of the robot's perception and naturally interact with it. The results obtained during the competition indicate that the proposed system allows both non-expert operators and end users to achieve human-robot interactions in customer service environments. Further, we describe a detailed scenario, including employee operation and customer interaction, which serves as a set of requirements for service robots and a road map for development. The system integration and task scenario described in this paper should be helpful for groups facing customer interaction challenges and looking for a successfully deployed base to build on.

  • G. A. Garcia Ricardez, S. Okada, N. Koganti, A. Yasuda, P. M. Uriguen Eljuri, T. Sano, P.-C. Yang, L. El Hafi, M. Yamamoto, J. Takamatsu, and T. Ogasawara, “Restock and Straightening System for Retail Automation Using Compliant and Mobile Manipulation”, in RSJ Advanced Robotics, vol. 34, no. 3, February 2020.

    Abstract: As the retail industry keeps expanding and the shortage of workers keeps increasing, there is a need for autonomous manipulation of products to support retail operations. The increasing number of products and customers in establishments such as convenience stores requires the automation of restocking, disposal, and straightening of products. The manipulation needs to be time-efficient, avoid damaging products, and beautify the product display. In this paper, we propose a robotic system to restock shelves, dispose of expired products, and straighten products in retail environments. The proposed mobile manipulator features a custom-made end effector with a compact and compliant design to safely and effectively manipulate products. Through experiments in a convenience store scenario, we verify the effectiveness of our system to restock, dispose of, and rearrange items.

  • G. A. Garcia Ricardez, N. Koganti, P.-C. Yang, S. Okada, P. M. Uriguen Eljuri, A. Yasuda, L. El Hafi, M. Yamamoto, J. Takamatsu, and T. Ogasawara, “Adaptive Motion Generation Using Imitation Learning and Highly-Compliant End Effector for Autonomous Cleaning”, in RSJ Advanced Robotics, vol. 34, no. 3, February 2020.

    Abstract: Recent demographic trends in super-aging societies such as Japan are leading to a severe worker shortage. Service robots can play a promising role in augmenting human workers for various household and assistive tasks. Toilet cleanup is one such challenging task that involves performing compliant motion planning in a constrained toilet setting. In this study, we propose an end-to-end robotic framework to perform various tasks related to toilet cleanup. Our key contributions include the design of a compliant and multipurpose end effector, a deep-learning-based vision framework to detect garbage on the floor, and adaptive motion generation for cleaning the toilet bowl. We evaluate the performance of our framework in the competition setting used for toilet cleanup at the Future Convenience Store Challenge of the World Robot Summit 2018. We demonstrate that our proposed framework is capable of successfully completing all the tasks of the competition within the time limit.

In Print

  • G. A. Garcia Ricardez*, L. El Hafi*, and F. von Drigalski*, “Standing on Giant’s Shoulders: Newcomer’s Experience from the Amazon Robotics Challenge 2017”, in Robotic Item Picking: Applications in Warehouse & E-Commerce, 2019.
    [*Authors contributed equally.]

    Abstract: International competitions have fostered innovation in fields such as artificial intelligence, robotic manipulation, and computer vision, and incited teams to push the state of the art. In this chapter, we present the approach, design philosophy and development strategy that we followed during our participation in the Amazon Robotics Challenge 2017, a competition focused on warehouse automation. After introducing our solution, we detail the development of two of its key features: the suction tool and storage system. A systematic analysis of the suction force and details of the end effector features, such as suction force control, grasping and collision detection, is also presented. Finally, this chapter reflects on the lessons we learned from our participation in the competition, which we believe are valuable to future robot challenge participants, as well as warehouse automation system designers.

2019

  • Y. Katsumata, L. El Hafi, A. Taniguchi, Y. Hagiwara, and T. Taniguchi, “Integrating Simultaneous Localization and Mapping with Map Completion Using Generative Adversarial Networks”, in Proceedings of 2019 IEEE/RSJ Workshop on Deep Probabilistic Generative Models for Cognitive Architecture in Robotics (DPGM-CAR 2019), Macau, China, November 2019.

    Abstract: When autonomous robots perform tasks that involve moving in everyday human environments, they need to generate environment maps. In this research, we propose a simultaneous localization and mapping method that integrates a prior probability distribution over map completions learned by a generative model architecture. The contribution of this research is that the method can estimate the environment map efficiently by exploiting pre-training in other environments. We show in a simulator experiment that the proposed method estimates environment maps from observations taken without moving better than classic methods.
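
    For illustration only, a minimal sketch of this kind of fusion is given below, assuming an occupancy grid in probability form and a pretrained completion model exposed as a callable; `completion_model`, the weighting, and the function names are assumptions, not the formulation evaluated in the paper.

    ```python
    # Illustrative sketch only: fuse an observed occupancy grid with a map
    # completed by a pretrained generative model, working in log-odds space.
    # `completion_model` is a hypothetical callable, not the paper's network.
    import numpy as np

    def log_odds(p, eps=1e-6):
        p = np.clip(p, eps, 1.0 - eps)
        return np.log(p / (1.0 - p))

    def fuse_with_completion_prior(observed, completion_model, prior_weight=0.5):
        """observed: HxW occupancy probabilities (0.5 = unobserved)."""
        completed = completion_model(observed)       # prior over the full map
        fused = log_odds(observed) + prior_weight * log_odds(completed)
        return 1.0 / (1.0 + np.exp(-fused))          # back to probabilities
    ```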

  • L. El Hafi, S. Matsuzaki, S. Itadera, and T. Yamamoto, “Deployment of a Containerized Software Development Environment for Human Support Robots”, in Proceedings of 2019 Annual Conference of the Robotics Society of Japan (RSJ 2019), Tokyo, Japan, September 2019.

    Abstract: This paper introduces a containerized Software Development Environment (SDE) for the Toyota Human Support Robot (HSR) to collaborate on large robotics projects. The objective is twofold: 1) enable interdisciplinary teams to quickly start research and development with the HSR by sharing a containerized SDE, and 2) accelerate research implementation and integration within the Toyota HSR Community by deploying a common SDE across its members. The SDE described in this paper is developed and maintained by the HSR Software Development Environment Working Group (SDE-WG) following a solution originally proposed by Ritsumeikan University and endorsed by Toyota Motor Corporation (TMC). The source code and documentation required to deploy the SDE are available to all HSR Community members upon request at: https://gitlab.com/hsr-sde-wg/HSR.

  • H. Nakamura, L. El Hafi, Y. Hagiwara, and T. Taniguchi, “Calibration System Using Semantic-ICP for Visualization of Robot Spatial Perception Through Mixed Reality”, in Proceedings of 2019 Annual Conference of the Japanese Society for Artificial Intelligence (JSAI 2019), Niigata, Japan, June 2019.1

    Abstract: To achieve symbiosis between humans and robots, it is important to know what robots recognize in their environment. Such information can be displayed using a Mixed Reality (MR) head-mounted device to provide an intuitive understanding of the robot's perception. However, a robust calibration system is required because the robot and the head-mounted MR device have different coordinate systems. In this paper, we develop a semantic-based calibration system for human-robot interaction in MR using Semantic-ICP. We show that the calibration system using Semantic-ICP performs better than one using GICP SE(3) when the accuracy of the semantic labels is high.
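
    As a rough illustration of the idea (not the paper's Semantic-ICP implementation), the sketch below performs one point-to-point ICP step in which correspondences with mismatched semantic labels are down-weighted; the weighting scheme and names are assumptions.

    ```python
    # Illustrative sketch: one ICP iteration with semantic down-weighting.
    # Not the paper's Semantic-ICP; weights and parameters are assumptions.
    import numpy as np
    from scipy.spatial import cKDTree

    def semantic_icp_step(src_pts, src_labels, dst_pts, dst_labels, w_mismatch=0.1):
        _, idx = cKDTree(dst_pts).query(src_pts)           # nearest neighbours
        w = np.where(src_labels == dst_labels[idx], 1.0, w_mismatch)
        src_c = np.average(src_pts, axis=0, weights=w)
        dst_c = np.average(dst_pts[idx], axis=0, weights=w)
        # Weighted Kabsch alignment of the source onto its matches.
        H = ((src_pts - src_c) * w[:, None]).T @ (dst_pts[idx] - dst_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = dst_c - R @ src_c
        return R, t    # apply as src_pts @ R.T + t, then iterate to convergence
    ```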

2018

  • L. El Hafi, Y. Hagiwara, and T. Taniguchi, “Abstraction-Rich Workflow for Agile Collaborative Development and Deployment of Robotic Solutions”, in Proceedings of 2018 Annual Conference of the Robotics Society of Japan (RSJ 2018), Kasugai, Japan, September 2018.

    Abstract: This paper introduces a collaborative workflow for the development and deployment of robotic solutions. The main contribution lies in the introduction of multiple layers of abstraction between the different components and processes. These layers enable collaborators to focus on their individual expertise and to rely on automated tests and simulations provided by the system. The ultimate goal is to help interdisciplinary teams work together efficiently on robotics projects.

  • J. Takamatsu, L. El Hafi, K. Takemura, and T. Ogasawara, “角膜反射画像を用いた視線追跡と物体認識”, in Proceedings of 149th MOC/JSAP Microoptics Meeting on Recognition and Authentication, Tokyo, Japan, September 2018.1

    Abstract: We introduce a method that simultaneously estimates the gaze direction and tracks the gazed object using corneal reflection images obtained from an eye-facing camera. This method helps simplify gaze-measurement devices. Because the gazed object appears distorted in corneal reflection images, we present two approaches: one that corrects the image distortion and then applies conventional object detection, and one that applies deep learning directly to the corneal reflection images. For the latter approach in particular, we describe a way to easily collect large amounts of training data using the gaze-measurement results.

2017

  • G. A. Garcia Ricardez, F. von Drigalski, L. El Hafi, S. Okada, P.-C. Yang, W. Yamazaki, V. G. Hoerig, A. Delmotte, A. Yuguchi, M. Gall, C. Shiogama, K. Toyoshima, P. M. Uriguen Eljuri, R. Elizalde Zapata, M. Ding, J. Takamatsu, and T. Ogasawara, “Warehouse Picking Automation System with Learning- and Feature-based Object Recognition and Grasping Point Estimation”, in Proceedings of 2017 SICE System Integration Division Annual Conference (SI 2017), pp. 2249-2253, Sendai, Japan, December 2017.

    Abstract: The Amazon Robotics Challenge (ARC) has become one of the biggest robotic competitions in the field of warehouse automation and manipulation. In this paper, we present our solution to the ARC 2017, which uses both learning-based and feature-based techniques for object recognition and grasp point estimation in unstructured collections of objects and a partially controlled space. Our solution proved effective both for previously unknown items, even with little data acquisition, and for items from the training set, obtaining 6th place out of 16 contestants.

  • G. A. Garcia Ricardez, F. von Drigalski, L. El Hafi, M. Ding, J. Takamatsu, and T. Ogasawara, “Lessons from the Airbus Shopfloor Challenge 2016 and the Amazon Robotics Challenge 2017”, in Proceedings of 2017 SICE System Integration Division Annual Conference (SI 2017), pp. 572-575, Sendai, Japan, December 2017.

    Abstract: International robotics competitions bring together the research community to solve real-world, current problems such as drilling in aircraft manufacturing (Airbus Shopfloor Challenge) and warehouse automation (Amazon Robotics Challenge). In this paper, we discuss our approaches to these competitions and describe the technical difficulties, design philosophy, development, lessons learned and remaining challenges.

  • L. El Hafi, “STARE: Real-Time, Wearable, Simultaneous Gaze Tracking and Object Recognition from Eye Images”, Doctoral thesis, Nara Institute of Science and Technology (NAIST), Nara, Japan, September 2017.

    Abstract: This thesis proposes STARE, a wearable system to perform real-time, simultaneous eye tracking and focused object recognition for daily-life applications in varied illumination environments. The proposed system extracts both the gaze direction and scene information using eye images captured by a single RGB camera facing the user’s eye. In particular, the method requires neither infrared sensors nor a front-facing camera to capture the scene, making it more socially acceptable when embedded in a wearable device. This approach is made possible by recent technological advances in increased resolution and reduced size of camera sensors, as well as significantly more powerful image processing techniques based on deep learning. First, a model-based approach is used to estimate the gaze direction using RGB eye images. A 3D eye model is constructed from an image of the eye by fitting an ellipse onto the iris. The gaze direction is then continuously tracked by rotating the model to simulate projections of the iris area for different eye poses and matching the iris area of the subsequent images with the corresponding projections obtained from the model. By using an additional one-time calibration, the point of regard (POR) is computed, which makes it possible to identify where the user is looking in the scene image reflected on the cornea. Next, objects in the scene reflected on the cornea are recognized in real time using the gaze direction information. Deep learning algorithms are applied to classify and then recognize the focused object in the area surrounding the reflected POR on the eye image. Additional processing using High Dynamic Range (HDR) imaging demonstrates that the proposed method can perform in varied illumination conditions. Finally, the validity of the approach is verified experimentally with a 3D-printable prototype of a wearable device equipped with dual cameras, and with a high-sensitivity camera in extreme illumination conditions. Further, a proof-of-concept implementation of a state-of-the-art neural network shows that the focused object recognition can be performed in real time. To summarize, the proposed method and prototype contribute a novel, complete framework to 1) simultaneously perform eye tracking and focused object analysis in real time, 2) automatically generate datasets of focused objects by using the reflected POR, 3) reduce the number of sensors in current gaze trackers to a single RGB camera, and 4) enable daily-life applications in all kinds of illumination. The combination of these features makes it an attractive choice for eye-based human behavior analysis, as well as for creating large datasets of objects focused on by the user during daily tasks.

  • F. von Drigalski*, L. El Hafi*, P. M. Uriguen Eljuri*, G. A. Garcia Ricardez*, J. Takamatsu, and T. Ogasawara, “Vibration-Reducing End Effector for Automation of Drilling Tasks in Aircraft Manufacturing”, in IEEE Robotics and Automation Letters (RA-L), vol. 2, no. 4, pp. 2316-2321, October 2017.
    [Also in Proceedings of 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 2017.]
    [*Authors contributed equally.]

    Abstract: In this letter, we present an end effector that can drill holes compliant to aeronautic standards while mounted on a lightweight robot arm. There is an unmet demand for a robotic solution capable of drilling inside an aircraft fuselage, as size, weight, and space constraints disqualify current commercial solutions for this task. Our main contribution is the mechanical design of the end effector with high-friction, vibration-reducing feet that are pressed against the workpiece during the drilling process to increase stability, and a separate linear actuator to advance the drill. This relieves the robot arm of the task of advancing and stabilizing the drill, and leaves it with the task of positioning and holding the end effector. The stabilizing properties of the end effector are confirmed experimentally. The solution took first place at the Airbus Shopfloor Challenge, an international robotics competition held at ICRA 2016 that modeled the in-fuselage drilling task.

  • L. El Hafi, M. Ding, J. Takamatsu, and T. Ogasawara, “STARE: Realtime, Wearable, Simultaneous Gaze Tracking and Object Recognition from Eye Images”, in SMPTE Motion Imaging Journal, vol. 126, no. 6, pp. 37-46, August 2017.

    Abstract: We propose STARE, a wearable system to perform realtime, simultaneous eye tracking and focused object recognition for daily-life applications in varied illumination environments. Our proposed method uses a single camera sensor to evaluate the gaze direction and requires neither a front-facing camera nor infrared sensors. To achieve this, we describe: 1) a model-based approach to estimate the gaze direction using red-green-blue (RGB) eye images; 2) a method to recognize objects in the scene reflected on the cornea in real time; and 3) a 3D-printable prototype of a wearable gaze-tracking device. We verify the validity of our approach experimentally with different types of cameras in different illumination settings, and with a proof-of-concept implementation of a state-of-the-art neural network. The proposed system can be used as a framework for RGB-based eye tracking and human behavior analysis.

  • G. A. Garcia Ricardez*, L. El Hafi*, F. von Drigalski*, R. Elizalde Zapata, C. Shiogama, K. Toyoshima, P. M. Uriguen Eljuri, M. Gall, A. Yuguchi, A. Delmotte, V. G. Hoerig, W. Yamazaki, S. Okada, Y. Kato, R. Futakuchi, K. Inoue, K. Asai, Y. Okazaki, M. Yamamoto, M. Ding, J. Takamatsu, and T. Ogasawara, “Climbing on Giant’s Shoulders: Newcomer’s Road into the Amazon Robotics Challenge 2017”, in Proceedings of 2017 IEEE/RAS Warehouse Picking Automation Workshop (WPAW 2017), Singapore, Singapore, May 2017.
    [*Authors contributed equally.]

    Abstract: The Amazon Robotics Challenge has become one of the biggest robotic challenges in the field of warehouse automation and manipulation. In this paper, we present an overview of the materials available to newcomers to the challenge, describe what we learned from previous editions, and discuss the new challenges within the Amazon Robotics Challenge 2017. We also outline how we developed our solution, the results of an investigation on suction cup size, and some notable difficulties we encountered along the way. Our aim is to speed up development for those who come after us and, like us, have to develop a solution from scratch as first-time contenders.

  • L. El Hafi, M. Ding, J. Takamatsu, and T. Ogasawara, “眼球画像を用いた視線追跡と物体認識 日常生活のための装着型デバイス”, in Proceedings of 2017 JSME Robotics and Mechatronics Conference (ROBOMECH 2017), Fukushima, Japan, May 2017.1

    Abstract: In this paper, we improve our previous method for estimating the gaze direction from eye images and propose a method for recognizing gazed objects using a new wearable device. First, a model-based approach building on our earlier work is used to estimate the gaze direction and the point of regard. Next, using the eye image and the gaze direction, the region corresponding to the gazed area is extracted from the image reflected on the cornea, and the object is recognized using deep learning. Because the gazed region can be extracted automatically, the effort of building training sets for deep learning is reduced.

  • L. El Hafi, M. Ding, J. Takamatsu, and T. Ogasawara, “Gaze Tracking and Object Recognition from Eye Images”, in Proceedings of 2017 IEEE International Conference on Robotic Computing (IRC 2017), pp. 310-315, Taichung, Taiwan, April 2017.

    Abstract: This paper introduces a method to identify the focused object in eye images captured from a single camera in order to enable intuitive eye-based interactions using wearable devices. Indeed, eye images provide not only natural user responses from eye movements, but also the scene reflected on the cornea, without the need for additional sensors such as a frontal camera, thus making the device more socially acceptable. The proposed method relies on a 3D eye model reconstruction to evaluate the gaze direction from the eye images. The gaze direction is then used in combination with deep learning algorithms to classify the focused object reflected on the cornea. Finally, the experimental results using a wearable prototype demonstrate the potential of the proposed method solely based on eye images captured from a single camera.
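
    To make the recognition step concrete, a minimal sketch is given below: it crops the corneal reflection around the reflected point of regard (POR) and classifies the patch with an off-the-shelf ImageNet classifier. The paper trains its own recognizer on corneal images; the pretrained ResNet-18, the crop size, and the function names here are stand-in assumptions.

    ```python
    # Illustrative sketch: classify the object around the reflected point of
    # regard (POR) in an eye image. The pretrained ImageNet ResNet-18 is a
    # stand-in for the classifier actually trained in the paper.
    import torch
    from torchvision import models, transforms

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    preprocess = transforms.Compose([
        transforms.ToPILImage(),
        transforms.Resize(224),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def classify_focused_object(eye_image, por_xy, half_size=96):
        """eye_image: HxWx3 uint8 RGB array; por_xy: reflected POR in pixels."""
        x, y = int(por_xy[0]), int(por_xy[1])
        patch = eye_image[max(0, y - half_size):y + half_size,
                          max(0, x - half_size):x + half_size]
        with torch.no_grad():
            logits = model(preprocess(patch).unsqueeze(0))
        return int(logits.argmax())   # class index of the focused object
    ```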

2016

  • L. El Hafi, M. Ding, J. Takamatsu, and T. Ogasawara, “Gaze Tracking Using Corneal Images Captured by a Single High-Sensitivity Camera”, in The Best of IET and IBC 2016-2017, vol. 8, pp. 19-24, September 2016.
    [Also in Proceedings of 2016 International Broadcasting Convention (IBC 2016), pp. 33-43, Amsterdam, Netherlands, September 2016.]

    Abstract: This paper introduces a method to estimate the gaze direction using images of the eye captured by a single high-sensitivity camera. The purpose is to develop wearable devices that enable intuitive eye-based interactions and applications. Indeed, camera-based solutions, as opposed to commercially available infrared-based ones, allow wearable devices to obtain not only natural user responses from eye movements, but also scene images reflected on the cornea, without the need for additional sensors. The proposed method relies on a model approach to evaluate the gaze direction and does not require a frontal camera to capture scene information, making it more socially acceptable if embedded in a glasses-shaped device. Moreover, recent developments in high-sensitivity camera sensors make the proposed method viable even in low-light conditions. Finally, experimental results using a prototype wearable device demonstrate the potential of the proposed method solely based on cornea images captured from a single camera.

  • F. von Drigalski*, L. El Hafi*, P. M. Uriguen Eljuri*, G. A. Garcia Ricardez*, J. Takamatsu, and T. Ogasawara, “NAIST Drillbot: Drilling Robot at the Airbus Shopfloor Challenge”, in Proceedings of 2016 Annual Conference of the Robotics Society of Japan (RSJ 2016), Yamagata, Japan, September 2016.
    [*Authors contributed equally.]

    Abstract: We propose a complete, modular robotic solution for industrial drilling tasks in an aircraft fuselage. The main contribution is a custom-made end effector with vibration-reducing feet that rest on the workpiece during the drilling process to increase stability. The solution took 1st place at the Airbus Shopfloor Challenge, an international robotics competition held at ICRA 2016.

  • L. El Hafi, P. M. Uriguen Eljuri, M. Ding, J. Takamatsu, and T. Ogasawara, “Wearable Device for Camera-Based Eye Tracking: Model Approach Using Cornea Images”, in Proceedings of 2016 JSME Robotics and Mechatronics Conference (ROBOMECH 2016), Yokohama, Japan, June 2016.

    Abstract: The industry’s recent growing interest in virtual reality, augmented reality and smart wearable devices has created a new momentum for eye tracking. Eye movements in particular are viewed as a way to obtain natural user responses from wearable devices alongside gaze information used to analyze interests and behaviors. This paper extends our previous work by introducing a wearable eye-tracking device that enables the reconstruction of 3D eye models of each eye from two RGB cameras. The proposed device is built using high-resolution cameras and a 3D-printed frame attached to a pair of JINS MEME glasses. The 3D eye models reconstructed from the proposed device can be used with any model-based eye-tracking approach. The proposed device is also capable of extracting scene information from the cornea reflections captured by the cameras, detecting blinks from an electrooculography sensor as well as tracking head movements from an accelerometer combined with a gyroscope.

2015

  • L. El Hafi, K. Takemura, J. Takamatsu, and T. Ogasawara, “Model-Based Approach for Gaze Estimation from Corneal Imaging Using a Single Camera”, in Proceedings of 2015 IEEE/SICE International Symposium on System Integration (SII 2015), pp. 88-93, Nagoya, Japan, December 2015.

    Abstract: This paper describes a method to estimate the gaze direction using cornea images captured by a single camera. The purpose is to develop wearable devices capable of obtaining natural user responses, such as interests and behaviors, from eye movements and scene images reflected on the cornea. From an image of the eye, an ellipse is fitted on the colored iris area. A 3D eye model is reconstructed from the ellipse and rotated to simulate projections of the iris area for different eye poses. The gaze direction is then evaluated by matching the iris area of the current image with the corresponding projection obtained from the model. We finally conducted an experiment using a head-mounted prototype to demonstrate the potential of such an eye-tracking method solely based on cornea images captured from a single camera.
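
    A minimal sketch of the underlying geometry is shown below, assuming a pre-segmented iris contour and a weak-perspective unprojection of the fitted ellipse; the constants and names are assumptions, and the paper's matching-based estimation is more involved than this.

    ```python
    # Illustrative sketch: approximate the gaze (optical-axis) direction from an
    # iris contour by fitting an ellipse and unprojecting it to a tilted disc.
    # Assumes a pre-segmented contour, image coordinates relative to the
    # principal point, and a weak-perspective camera model.
    import cv2
    import numpy as np

    IRIS_RADIUS_MM = 6.0  # assumed physical iris radius

    def gaze_from_iris_contour(contour, focal_length_px):
        (cx, cy), (w, h), angle_deg = cv2.fitEllipse(contour)
        major, minor = max(w, h), min(w, h)
        # Direction of the ellipse's minor axis in the image plane.
        theta = np.deg2rad(angle_deg if w <= h else angle_deg + 90.0)
        minor_dir = np.array([np.cos(theta), np.sin(theta)])
        # A circle viewed at tilt angle `tilt` foreshortens along its minor axis.
        tilt = np.arccos(np.clip(minor / major, 0.0, 1.0))
        gaze = np.array([np.sin(tilt) * minor_dir[0],
                         np.sin(tilt) * minor_dir[1],
                         -np.cos(tilt)])              # two-fold ambiguity ignored
        depth = focal_length_px * IRIS_RADIUS_MM / (major / 2.0)
        center = depth * np.array([cx / focal_length_px, cy / focal_length_px, 1.0])
        return center, gaze / np.linalg.norm(gaze)
    ```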

  • A. Yuguchi, R. Matsura, R. Baba, Y. Hakamata, W. Yamazaki, F. von Drigalski, L. El Hafi, S. Tsuichihara, M. Ding, J. Takamatsu, and T. Ogasawara, “Development of Robot Control Components for Ball-catching Task Using Motion Capture Device”, in Proceedings of 2015 SICE System Integration Division Annual Conference (SI 2015), Nagoya, Japan, December 2015.1

    Abstract: This paper describes the design and implementation of RT-Middleware components for a ball-catching task by humanoid robots. We create a component to obtain the position of a thrown reflective ball from a motion capture device. We also create a component to estimate the ball's trajectory and the point where it will fall. The estimate is then used by the control component to catch the ball with an HRP-4 humanoid robot.
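
    For reference, a minimal sketch of the kind of estimation involved is given below: fitting a ballistic trajectory to motion-capture samples and solving for the landing point at a chosen catch height. It is an illustration under assumed units and conventions, not the published RT-Middleware component.

    ```python
    # Illustrative sketch: predict a thrown ball's landing point from motion-
    # capture samples by fitting a ballistic trajectory (not the published
    # RT-Middleware component; units and conventions are assumptions).
    import numpy as np

    def predict_landing(times, positions, catch_height=0.8, g=9.81):
        """times: (N,) seconds; positions: (N, 3) metres, z pointing up.
        Returns the predicted (x, y) when z falls to catch_height."""
        t = np.asarray(times, dtype=float)
        p = np.asarray(positions, dtype=float)
        vx, x0 = np.polyfit(t, p[:, 0], 1)            # x(t) and y(t) are linear
        vy, y0 = np.polyfit(t, p[:, 1], 1)
        vz, z0 = np.polyfit(t, p[:, 2] + 0.5 * g * t**2, 1)   # remove gravity term
        # Solve z0 + vz*t - 0.5*g*t^2 = catch_height, taking the later root.
        a, b, c = -0.5 * g, vz, z0 - catch_height
        t_land = (-b - np.sqrt(b**2 - 4*a*c)) / (2*a)
        return x0 + vx * t_land, y0 + vy * t_land
    ```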

2014

  • L. El Hafi, J.-B. Lorent, and G. Rouvroy, “Mapping SDI with a Light-Weight Compression for High Frame Rates and Ultra-HD 4K Transport over SMPTE 2022-5/6”, in Proceedings of VSF VidTrans14 Content in Motion, Annual Technical Conference and Exposition, Arlington, United States, February 2014.

    Abstract: Considering the bandwidth necessary for the next generation of television, with higher resolution video and higher frame rates, live uncompressed transport across a 10 Gigabit Ethernet network is not always possible. Indeed, uncompressed 4K video at 60 fps requires 12 Gbps or more. A light-weight compression can be optimal to address this challenge. A purely lossless codec would be ideal; however, it is generally difficult to predict the compression ratio achievable by a lossless codec. Therefore, a light-weight, visually lossless compression with a guaranteed, very low compression ratio and no impact on latency seems optimal for mapping SDI links over SMPTE 2022-5/6.
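
    A back-of-the-envelope check of the quoted figure, assuming UHD 3840x2160 at 60 fps in 10-bit 4:2:2 (a format the abstract does not specify):

    ```python
    # Active video rate for UHD 3840x2160 at 60 fps, 10-bit 4:2:2 (20 bits/pixel).
    bits_per_pixel = 20
    rate_gbps = 3840 * 2160 * 60 * bits_per_pixel / 1e9
    print(round(rate_gbps, 2))  # ~9.95 Gbps of active video; with SDI blanking and
                                # overhead the link runs near 12 Gbps (12G-SDI),
                                # beyond a single 10 Gigabit Ethernet port.
    ```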

2013

  • L. El Hafi* and T. Denison*, “TICO: Study of a Low-Complexity Video Compression Scheme for FPGA”, Master's thesis, Université catholique de Louvain (UCLouvain) & intoPIX, Louvain-la-Neuve, Belgium, June 2013.2
    [*Authors contributed equally.]

    Abstract: The evolution of display technologies, in terms of screen resolution, frame rate, and color depth, requires new compression systems, notably to reduce the power consumed at video interfaces. To address this problem, the Video Electronics Standards Association (VESA) consortium issued a call for proposals in January 2013 for the creation of a new compression standard, Display Stream Compression (DSC). This document, produced in collaboration with intoPIX, responds to the VESA call and proposes Tiny Codec (TICO), a video compression scheme of low hardware complexity. It describes, on the one hand, the algorithmic study of an entropy coder inspired by Universal Variable Length Coding (UVLC), achieving an 85% efficiency on filmed content, and, on the other hand, the FPGA implementation of a horizontal 5:3 discrete wavelet transform processing 4K video streams at up to 120 frames per second. The implementation consumes 340 slices per color component on Xilinx's low-power Artix-7 platforms.
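
    For illustration, the sketch below computes one level of the reversible LeGall 5/3 horizontal wavelet transform in lifting form, the transform family implemented on FPGA in the thesis; the edge handling and function name are assumptions, and the sketch is software-only.

    ```python
    # Illustrative sketch: one level of the reversible LeGall 5/3 horizontal
    # discrete wavelet transform in lifting form (software only, not the RTL).
    import numpy as np

    def dwt53_horizontal(row):
        """row: 1-D integer array of even length. Returns (lowpass, highpass)."""
        x = np.asarray(row, dtype=np.int64)
        even, odd = x[0::2], x[1::2]
        # Predict: high-pass = odd sample minus the mean of its even neighbours.
        right = np.roll(even, -1)
        right[-1] = even[-1]                      # symmetric edge extension
        high = odd - ((even + right) >> 1)
        # Update: low-pass = even sample plus a rounded average of nearby highs.
        left = np.roll(high, 1)
        left[0] = high[0]
        low = even + ((left + high + 2) >> 2)
        return low, high
    ```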

  1. Published in Japanese.

  2. Published in French.