Summary

  • Assistant Professor at Ritsumeikan

  • Appointed Researcher for Toyota

  • Research Advisor for Panasonic

Lotfi El Hafi received his MScEng in Mechatronics from the Université catholique de Louvain (UCLouvain), Belgium, in 2013. His Master's thesis explored the implementation of TICO compression, now JPEG XS, in collaboration with intoPIX for Ultra HD 4K/8K video distribution. In 2014, he joined the Robotics Laboratory of the Nara Institute of Science and Technology (NAIST), Japan, under the MEXT Scholarship Program, where his doctoral thesis focused on STARE, a novel eye-tracking approach that leveraged deep learning to extract behavioral information from scenes reflected in the eyes. Today, Lotfi El Hafi works as a Research Assistant Professor at the Ritsumeikan Global Innovation Research Organization (R-GIRO), a Specially Appointed Researcher for the Toyota HSR Community, and a Research Advisor for Robotics Competitions at the Panasonic Robotics Hub. His research interests include service robotics and artificial intelligence. Lotfi El Hafi is also the recipient of multiple research awards and competition prizes, and an active member of the RSJ, JSME, and IEEE.

Skills

Service Robotics

Artificial Intelligence

Information Science

Project Management

Academic Research

Experiences

  1. Research Assistant Professor

    Oct. 2019 - Present

    at Ritsumeikan University, Ritsumeikan Global Innovation Research Organization (R-GIRO)
    in Kusatsu (Kyoto Area), Japan

    Details
    • Pursuing research on developing novel model architectures for multimodal unsupervised learning in service robotics contexts.
    • Daily supervision of graduate students to help them produce higher-quality research in the fields of service robotics and artificial intelligence.
  2. Research Advisor for Robotics Competitions

    Apr. 2019 - Present

    at Panasonic Corporation, Panasonic Robotics Hub
    in Osaka, Japan

    Details
    • Working as a research advisor for successful participation in international robotics competitions.
    • Developing robotics solutions for novel human-robot interactions in service contexts.
    • Supervising Ritsumeikan University's research and development contributions to Team NAIST-RITS-Panasonic.
  3. Specially Appointed Researcher of HSR Community

    Apr. 2019 - Present

    at Toyota Motor Corporation, Toyota HSR Community
    in Nagoya, Japan

    Details
    • Developing a containerized Software Development Environment (SDE) for the Toyota Human Support Robot (HSR).
    • Accelerating research integration within the Toyota HSR Community by deploying a common SDE across its 100+ members.
    • Formally invited by Toyota to lead the HSR Software Development Environment Working Group (SDE-WG).
  4. Senior Researcher

    Oct. 2017 - Sep. 2019

    at Ritsumeikan University, Ritsumeikan Global Innovation Research Organization (R-GIRO)
    in Kusatsu (Kyoto Area), Japan

    Details
    • Joined the International and Interdisciplinary Research Center for the Next-generation Artificial Intelligence and Semiotics (AI+Semiotics) project.
    • Coordinated cross-laboratory efforts to participate in World Robot Summit (WRS) and RoboCup@Home international robotics competitions.
    • Developed unsupervised learning methods that enable robots to learn a variety of knowledge through daily human-robot interactions.
  5. Sales Engineer

    Sep. 2013 – Mar. 2014

    at intoPIX SA, Marketing & Sales Department (M&S)
    in Mont-Saint-Guibert (Brussels Area), Belgium

    Details
    • Met with business partners to introduce new product lines: TICO, JPEG 2000 GPU SDK.
    • Demonstrated intoPIX's products at multiple exhibitions across Europe and the United States: IBC, ISE, VSF VidTrans.
    • Introduced intoPIX's newest technologies to industrial standardization committees such as SMPTE.
    • Contributed to commercial and technical documentation.
  6. Research Intern

    Sep. 2012 – Jun. 2013

    at intoPIX SA, Research & Development Department (R&D)
    in Louvain-la-Neuve (Brussels Area), Belgium

    Details
    • Carried out Master's thesis research within a professional environment as an intern in R&D.
    • Explored the FPGA implementation of TICO compression, now JPEG XS, for Ultra HD 4K/8K video distribution.
    • Introduced to Scrum, an iterative and incremental agile framework for collaborative software development.

Education

  1. Doctor of Engineering (DrEng), Robotics

    Oct. 2014 - Sep. 2017

    at Nara Institute of Science and Technology (NAIST), Graduate School of Information Science (IS)
    in Ikoma (Osaka Area), Japan

    Details
    • Doctoral thesis explored Simultaneous Tracking and Attention Recognition from Eyes (STARE), a novel eye-tracking approach that leveraged deep learning to extract behavioral information from scenes reflected in the eyes.
    • Published results in international journals and conferences, and was awarded the Best of IET and IBC 2016-2017.
    • Research sponsored by the Japan Ministry of Education, Culture, Sports, Science and Technology (MEXT) Scholarship Program.
    • Co-founded Team NAIST, now Team NAIST-RITS-Panasonic, and successfully participated in the Airbus Shopfloor Challenge (ASC) and the Amazon Robotics Challenge (ARC).
  2. Research Internship

    Apr. 2014 - Sep. 2014

    at Nara Institute of Science and Technology (NAIST), Graduate School of Information Science (IS)
    in Ikoma (Osaka Area), Japan

    Details
    • Selected to conduct research on robotics in Japan through recommendation by the Japanese diplomatic mission in Belgium.
  3. Training in International Business & Management EXPLORT

    Sep. 2013 - Mar. 2014

    at Wallonia Foreign Trade and Investment Agency (AWEX), Centre de Compétence Management & Commerce
    in Charleroi (Brussels Area), Belgium

    Details
    • Intensive training in international business and management as a part-time trainee in a company.
    • Strict candidate selection process supervised by the AWEX regional public authority for foreign trade and investment.
    • Training put into practice through a mandatory mission abroad to represent the commercial interests of a Belgian company.
    • Stayed in the United States to help expand intoPIX's core market and product line outside Belgium.
  4. Master of Science in Electromechanical Engineering (MScEng), Professional Focus in Mechatronics: Cum Laude

    Sep. 2011 - Jun. 2013

    at Université catholique de Louvain (UCLouvain), École polytechnique de Louvain (EPL)
    in Louvain-la-Neuve (Brussels Area), Belgium

    Details
    • Graduated from the Master's program with Distinction (Cum Laude), with High Distinction (Magna Cum Laude) in the final year.
    • Major in electromechanical science with a professional focus in mechatronics, robotics and hardware video processing.
    • Master's thesis explored the FPGA implementation of TICO compression, now JPEG XS, in collaboration with intoPIX for Ultra HD 4K/8K video distribution.
    • Participated in the international robotics contest Eurobot 2012 as a member of Team Kraken and reached the quarterfinals of the Belgian national qualifications.
  5. Bachelor of Science in Engineering (BScEng), Focus in Mechanics and Electricity

    Sep. 2007 - Sep. 2011

    at Université catholique de Louvain (UCLouvain), École polytechnique de Louvain (EPL)
    in Louvain-la-Neuve (Brussels Area), Belgium

    Details
    • Introduced to a wide range of engineering disciplines with a major in mechanics science and a minor in electricity science.
    • Passed the mandatory entrance examination in mathematics required of all aspiring engineers under Belgian law.
    • Followed a two-year introduction to Japanese language and culture as an extracurricular program.

Awards

  1. Best Award for Forecasting Research Proposal, CREST Research Area Meeting 2019

    Oct. 2019

    by Japan Science and Technology Agency (JST) with 3,000,000 JPY
    in Osaka, Japan

    Details

    Awarded 3,000,000 JPY by JST for representing the Ambient Assisted Living Services Group in forecasting novel "Intelligent information processing systems creating co-experience knowledge and wisdom with human-machine harmonious collaboration".

  2. Best Award for Research Proposal Breaking Hagita-CREST Shell, CREST Research Area Meeting 2019

    Oct. 2019

    by Japan Science and Technology Agency (JST) with 1,000,000 JPY
    in Osaka, Japan

    Details

    Awarded 1,000,000 JPY by JST for proposing and developing novel "Intelligent information processing systems creating co-experience knowledge and wisdom with human-machine harmonious collaboration".

  3. NEDO Chairman's Award for Excellence in World Robot Summit, World Robot Summit 2018

    Oct. 2018

    by New Energy and Industrial Technology Development Organization (NEDO)
    in Tokyo, Japan

    Details

    Awarded by the NEDO office for overall excellence in competition with Team NAIST-RITS-Panasonic during World Robot Summit 2018.

  4. SICE Award for Future Convenience Store Challenge, World Robot Summit 2018

    Oct. 2018

    by Society of Instrument and Control Engineers (SICE)
    in Tokyo, Japan

    Details

    Awarded by the SICE society for displaying advanced research integration with Team NAIST-RITS-Panasonic during World Robot Summit 2018.

  5. 1st Place, Customer Interaction Task, Future Convenience Store Challenge 2018, World Robot Summit 2018

    Oct. 2018

    by World Robot Summit (WRS) with 3,000,000 JPY
    in Tokyo, Japan

    Details

    Ranked 1st and awarded 3,000,000 JPY with Team NAIST-RITS-Panasonic in one of the three main tasks of the Future Convenience Store Challenge at the World Robot Summit 2018.

  6. Finalist Prize, Amazon Robotics Challenge 2017

    Jul. 2017

    by Amazon with 10,000 USD
    in Nagoya, Japan

    Details

    Ranked 6th and awarded 10,000 USD with Team NAIST-Panasonic in the finals of the 2017 Amazon Robotics Challenge (formerly Amazon Picking Challenge) among 16 top international teams and 27 entries worldwide.

  7. Best of IET and IBC 2016-2017

    Sep. 2016

    by Institution of Engineering and Technology (IET) & International Broadcasting Convention (IBC)
    in Amsterdam, Netherlands

    Details

    Selected among more than 360 submissions to feature among the top 8 papers at the 2016 International Broadcasting Convention (IBC 2016) for "Outstanding research in broadcasting and entertainment and very best professional excellence in media technology".

  8. 1st Place, Airbus Shopfloor Challenge 2016

    May 2016

    by Airbus Group with 20,000 EUR
    in Stockholm, Sweden

    Details

    Winner of the biggest robotics challenge held at the 2016 IEEE International Conference on Robotics and Automation (ICRA 2016), with a cash prize of 20,000 EUR.

  9. Japan Tent Ambassador, Japan Tent 2014

    Aug. 2014

    by Japan Tent Steering Committee
    in Kanazawa, Japan

    Details

    Selected among 365 international students representing 102 countries gathered in Ishikawa Prefecture to foster friendly relations between Japan and my home country, Belgium.

Certifications

  1. Highly Skilled Professional (i)(a): Advanced Academic Research Activities

    Oct. 2017

    by Japan Ministry of Justice (MOJ)
    in Tokyo, Japan

    Details

    Received the residence status of Highly Skilled Professional in advanced academic research activities for being among "The quality, unsubstitutable human resources who have a complementary relationship with domestic capital and labor, and who are expected to bring innovation to the Japanese industries, to promote development of specialized/technical labor markets through friendly competition with Japanese people and to increase efficiency of the Japanese labor markets".

  2. Doctor of Engineering (DrEng), Robotics

    Sep. 2017

    by Nara Institute of Science and Technology (NAIST), Graduate School of Information Science (IS)
    in Ikoma (Osaka Area), Japan

    Details

    Received the academic title of Doctor after 3 years of research and contribution to the state of the art at the Robotics Laboratory of the Nara Institute of Science and Technology (NAIST).

  3. Japanese-Language Proficiency Test (JLPT): N2

    Dec. 2016

    by Japan Educational Exchanges and Services (JEES)
    in Nara, Japan

    Details

    The JLPT is a standardized criterion-referenced test administered by the Japan Ministry of Education, Culture, Sports, Science and Technology (MEXT) to evaluate and certify Japanese language proficiency for non-native speakers. The N2 level certifies "the ability to understand Japanese used in everyday situations, and in a variety of circumstances to a certain degree".

  4. Japan Kanji Aptitude Test (Kanken): 9

    Oct. 2015

    by Japan Kanji Aptitude Testing Foundation
    in Nara, Japan

    Details

    The Kanken is a standard test aimed at native Japanese speakers that evaluates one's knowledge of kanji, and especially one's ability to write them without the aid of a computer. Level 9 covers the 240 kanji learned up to the second grade of elementary school.

  5. Test of English for International Communication, Institutional Program (TOEIC IP): 955/990

    Jun. 2014

    by Educational Testing Service (ETS)
    in Ikoma (Osaka Area), Japan

    Details

    The TOEIC IP is a standard test administered in Japan by the Institute for International Business Communication (IIBC) to measure the English reading and listening skills of people working in international environments. A score above 800 means an advanced command of the language.

  6. Training in International Business & Management EXPLORT

    Mar. 2014

    by Wallonia Foreign Trade and Investment Agency (AWEX)
    in Brussels, Belgium

    Details

    Certificate of successful completion of the EXPLORT Program, in which top candidates selected by the AWEX are intensively trained in international business and management and dispatched abroad to represent the commercial interests of a Belgian company.

  7. Test of English as a Foreign Language, Internet-Based Test (TOEFL iBT): 102/120

    Sep. 2013

    by Educational Testing Service (ETS)
    in Brussels, Belgium

    Details

    The TOEFL iBT test measures the ability to use and understand English at the university level by evaluating reading, listening, speaking, and writing skills in performing academic tasks. A score above 95 means an advanced command of the language.

  8. Ingénieur civil (Ir)

    Jun. 2013

    by Université catholique de Louvain (UCLouvain), École polytechnique de Louvain (EPL)
    in Louvain-la-Neuve (Brussels Area), Belgium

    Details

    Legally protected title under Belgian law, granted to graduates of the 5-year engineering curriculum of the country's top universities.

  9. Master of Science in Electromechanical Engineering (MScEng), Professional Focus in Mechatronics: Cum Laude

    Jun. 2013

    by Université catholique de Louvain (UCLouvain), École polytechnique de Louvain (EPL)
    in Louvain-la-Neuve (Brussels Area), Belgium

    Details

    Received the academic title of Master after 5 years of higher education in engineering at the Université catholique de Louvain (UCLouvain).

  10. Bachelor of Science in Engineering (BScEng), Focus in Mechanics and Electricity

    Sep. 2011

    by Université catholique de Louvain (UCLouvain), École polytechnique de Louvain (EPL)
    in Louvain-la-Neuve (Brussels Area), Belgium

    Details

    Received the academic title of Bachelor after 3 years of higher education in engineering at the Université catholique de Louvain (UCLouvain).

Publications

  1. L. El Hafi, S. Isobe, Y. Tabuchi, Y. Katsumata, H. Nakamura, T. Fukui, T. Matsuo, G. A. Garcia Ricardez, M. Yamamoto, A. Taniguchi, Y. Hagiwara, and T. Taniguchi, "System for Augmented Human-Robot Interaction Through Mixed Reality and Robot Training by Non-Experts in Customer Service Environments", in RSJ Advanced Robotics, vol. 34, no. 3, Feb. 2020. [In print.]

    Abstract

    Human-robot interaction during general service tasks in home or retail environments has proven challenging, partly because 1) robots lack high-level context-based cognition, and 2) humans cannot intuit the perception state of robots as they can for other humans. To solve these two problems, we present a complete robot system that was given the highest evaluation score at the Customer Interaction Task of the Future Convenience Store Challenge at the World Robot Summit 2018, and which implements several key technologies: 1) hierarchical spatial concept formation for general robot task planning, and 2) a mixed reality interface that enables users to intuitively visualize the current state of the robot's perception and naturally interact with it. The results obtained during the competition indicate that the proposed system allows both non-expert operators and end users to achieve human-robot interactions in customer service environments. Further, we describe a detailed scenario including employee operation and customer interaction which serves as a set of requirements for service robots and a road map for development. The system integration and task scenario described in this paper should be helpful for groups facing customer interaction challenges and looking for a successfully deployed base to build on.

  2. G. A. Garcia Ricardez, S. Okada, N. Koganti, A. Yasuda, P. M. Uriguen Eljuri, T. Sano, P.-C. Yang, L. El Hafi, M. Yamamoto, J. Takamatsu, and T. Ogasawara, "Restock and Straightening System for Retail Automation Using Compliant and Mobile Manipulation", in RSJ Advanced Robotics, vol. 34, no. 3, Feb. 2020. [In print.]

    Abstract

    As the retail industry keeps expanding and the shortage of workers keeps increasing, there is a need for autonomous manipulation of products to support retail operations. The increasing number of products and customers in establishments such as convenience stores requires the automation of restocking, disposing and straightening of products. The manipulation of products needs to be time-efficient, avoid damaging products and beautify the display of products. In this paper, we propose a robotic system to restock shelves, dispose of expired products, and straighten products in retail environments. The proposed mobile manipulator features a custom-made end effector with a compact and compliant design to safely and effectively manipulate products. Through experiments in a convenience store scenario, we verify the effectiveness of our system to restock, dispose and rearrange items.

  3. G. A. Garcia Ricardez, N. Koganti, P.-C. Yang, S. Okada, P. M. Uriguen Eljuri, A. Yasuda, L. El Hafi, M. Yamamoto, J. Takamatsu, and T. Ogasawara, "Adaptive Motion Generation Using Imitation Learning and Highly-Compliant End Effector for Autonomous Cleaning", in RSJ Advanced Robotics, vol. 34, no. 3, Feb. 2020. [In print.]

    Abstract

    Recent demographic trends in super-aging societies, such as Japan, are leading to severe worker shortages. Service robots can play a promising role in augmenting human workers for various household and assistive tasks. Toilet cleanup is one such challenging task that involves performing compliant motion planning in a constrained toilet setting. In this study, we propose an end-to-end robotic framework to perform various tasks related to toilet cleanup. Our key contributions include the design of a compliant and multipurpose end effector, a deep-learning-based vision framework to detect garbage on the floor, and adaptive motion generation for cleaning the toilet bowl. We evaluate the performance of our framework with the competition setting used for toilet cleanup in the Future Convenience Store Challenge at the World Robot Summit 2018. We demonstrate that our proposed framework is capable of successfully completing all the tasks of the competition within the time limit.

  4. G. A. Garcia Ricardez*, L. El Hafi*, and F. von Drigalski*, "Standing on Giant's Shoulders: Newcomer's Experience from the Amazon Robotics Challenge 2017", in Robotic Item Picking: Applications in Warehouse & E-Commerce, 2019. [*Authors contributed equally.][In print.]

    Abstract

    International competitions have fostered innovation in fields such as artificial intelligence, robotic manipulation, and computer vision, and incited teams to push the state of the art. In this chapter, we present the approach, design philosophy and development strategy that we followed during our participation in the Amazon Robotics Challenge 2017, a competition focused on warehouse automation. After introducing our solution, we detail the development of two of its key features: the suction tool and storage system. A systematic analysis of the suction force and details of the end effector features, such as suction force control, grasping and collision detection, is also presented. Finally, this chapter reflects on the lessons we learned from our participation in the competition, which we believe are valuable to future robot challenge participants, as well as warehouse automation system designers.

  5. Y. Katsumata, L. El Hafi, A. Taniguchi, Y. Hagiwara, and T. Taniguchi, "Integrating Simultaneous Localization and Mapping with Map Completion Using Generative Adversarial Networks", in Proceedings of 2019 IEEE/RSJ Workshop on Deep Probabilistic Generative Models for Cognitive Architecture in Robotics (DPGM-CAR 2019), Macau, China, Nov. 2019.

    Abstract

    When autonomous robots perform tasks that include moving in daily human environments, they need to generate environment maps. In this research, we propose a simultaneous localization and mapping method that integrates a prior probability distribution over map completions trained with a generative model architecture. The contribution of this research is that the method can estimate the environment map efficiently thanks to pre-training in other environments. We show in a simulator experiment that, from observations taken without moving, the proposed method estimates environment maps better than classic methods.
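
    For illustration only, and not the paper's implementation: a minimal Python sketch of one way a learned map-completion prior could be fused with the occupancy grid produced by SLAM, using log-odds fusion. The name completion_model is a hypothetical stand-in for the trained generative model.

      # Minimal sketch (assumption: the trained generator maps a partial occupancy
      # grid to a completed occupancy-probability grid of the same shape).
      import numpy as np

      def logit(p, eps=1e-6):
          p = np.clip(p, eps, 1.0 - eps)
          return np.log(p / (1.0 - p))

      def fuse_with_prior(observed_prob, completion_model, prior_weight=0.5):
          """Fuse observed occupancy probabilities with a learned completion prior.

          observed_prob: HxW occupancy probabilities from SLAM (0.5 = unknown).
          completion_model: callable returning an HxW array of predicted probabilities.
          prior_weight: how strongly the prior is trusted relative to observations.
          """
          predicted_prob = completion_model(observed_prob)
          # Independent-evidence fusion in log-odds space, down-weighting the prior.
          fused_logodds = logit(observed_prob) + prior_weight * logit(predicted_prob)
          return 1.0 / (1.0 + np.exp(-fused_logodds))

      # Dummy usage: a "model" that only perturbs the observed map.
      rng = np.random.default_rng(0)
      observed = np.full((64, 64), 0.5)          # mostly unexplored
      observed[20:30, 20:40] = 0.9               # observed wall
      dummy_model = lambda m: np.clip(m + 0.05 * rng.standard_normal(m.shape), 0.0, 1.0)
      fused = fuse_with_prior(observed, dummy_model)
      print(fused.shape, float(fused.min()), float(fused.max()))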

  6. L. El Hafi, S. Matsuzaki, S. Itadera, and T. Yamamoto, "Deployment of a Containerized Software Development Environment for Human Support Robots", in Proceedings of 2019 Annual Conference of the Robotics Society of Japan (RSJ 2019), Tokyo, Japan, Sep. 2019.

    Abstract

    This paper introduces a containerized Software Development Environment (SDE) for the Toyota Human Support Robot (HSR) to collaborate on large robotics projects. The objective is twofold: 1) enable interdisciplinary teams to quickly start research and development with the HSR by sharing a containerized SDE, and 2) accelerate research implementation and integration within the Toyota HSR Community by deploying a common SDE across its members. The SDE described in this paper is developed and maintained by the HSR Software Development Environment Working Group (SDE-WG) following a solution originally proposed by Ritsumeikan University and endorsed by Toyota Motor Corporation (TMC). The source code and documentation required to deploy the SDE are available to all HSR Community members upon request at: https://gitlab.com/hsr-sde-wg/HSR.
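
    As a purely hypothetical illustration of the containerized-SDE idea (the image name and mount points below are placeholders, not the actual HSR SDE artifacts), a shared development container could be launched from a thin Python wrapper so that every member runs the same environment:

      # Placeholder sketch: start a development container with the host workspace
      # and X11 socket mounted so GUI tools (e.g., RViz) can be used from inside.
      import os
      import subprocess

      def run_sde(image="example/robot-sde:latest", workspace="~/catkin_ws"):
          workspace = os.path.expanduser(workspace)
          cmd = [
              "docker", "run", "--rm", "-it",
              "--net", "host",                          # share the host network (ROS nodes)
              "-e", f"DISPLAY={os.environ.get('DISPLAY', ':0')}",
              "-v", "/tmp/.X11-unix:/tmp/.X11-unix",    # X11 socket for GUI applications
              "-v", f"{workspace}:/workspace",          # persistent source tree on the host
              image, "bash",
          ]
          subprocess.run(cmd, check=True)

      if __name__ == "__main__":
          run_sde()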

  7. H. Nakamura, L. El Hafi, Y. Hagiwara, and T. Taniguchi, "Calibration System Using Semantic-ICP for Visualization of Robot Spatial Perception Through Mixed Reality", in Proceedings of 2019 Annual Conference of the Japanese Society for Artificial Intelligence (JSAI 2019), Niigata, Japan, Jun. 2019. [Published in Japanese.]

    Abstract

    To achieve symbiosis between humans and robots, it is important to know what the robots recognize in their environment. Such information can be displayed using a Mixed Reality (MR) head-mounted device to provide an intuitive understanding of a robot perception. However, a robust calibration system is required because the robot and head-mounted MR device have different coordinate systems. In this paper, we develop a semantic-based calibration system for human-robot interactions in MR using Semantic-ICP. We show that the calibration system using Semantic-ICP is better than using GICP SE(3) when the accuracy of the semantic labels is high.
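
    For illustration only, a simplified label-aware ICP sketch in Python: point-to-point correspondences are restricted to points sharing a semantic label, and the rigid transform is solved with SVD. This is a didactic approximation, not the Semantic-ICP algorithm used in the paper.

      # Simplified sketch: semantic-label-constrained point-to-point ICP.
      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid_transform(src, dst):
          """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
          src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
          H = (src - src_c).T @ (dst - dst_c)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:                 # avoid reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          return R, dst_c - R @ src_c

      def semantic_icp(src, src_labels, dst, dst_labels, iters=20):
          """Align src (Nx3) to dst (Mx3) using only same-label correspondences."""
          labels = np.unique(dst_labels)
          trees = {l: cKDTree(dst[dst_labels == l]) for l in labels}
          points = {l: dst[dst_labels == l] for l in labels}
          R_total, t_total = np.eye(3), np.zeros(3)
          cur = src.copy()
          for _ in range(iters):
              matched_src, matched_dst = [], []
              for l in labels:
                  sel = cur[src_labels == l]
                  if len(sel) == 0:
                      continue
                  _, idx = trees[l].query(sel)     # nearest neighbour within the same label
                  matched_src.append(sel)
                  matched_dst.append(points[l][idx])
              R, t = best_rigid_transform(np.vstack(matched_src), np.vstack(matched_dst))
              cur = cur @ R.T + t
              R_total, t_total = R @ R_total, R @ t_total + t
          return R_total, t_total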

  8. L. El Hafi, Y. Hagiwara, and T. Taniguchi, "Abstraction-Rich Workflow for Agile Collaborative Development and Deployment of Robotic Solutions", in Proceedings of 2018 Annual Conference of the Robotics Society of Japan (RSJ 2018), Kasugai, Japan, Sep. 2018.

    Abstract

    This paper introduces a collaborative workflow for development and deployment of robotic solutions. The main contribution lies in the introduction of multiple layers of abstraction between the different components and processes. These layers enable the collaborators to focus on their individual expertise and rely on automated tests and simulations from the system. The ultimate goal is to help interdisciplinary teams to work together efficiently on robotics projects.

  9. J. Takamatsu, L. El Hafi, K. Takemura, and T. Ogasawara, "角膜反射画像を用いた視線追跡と物体認識", in Proceedings of 149th MOC/JSAP Microoptics Meeting on Recognition and Authentication, Tokyo, Japan, Sep. 2018. [Published in Japanese.]

    Abstract

    We introduce a method that simultaneously estimates the gaze direction and tracks the gazed object using corneal reflection images obtained from an eye-facing camera. This method helps simplify gaze-measurement devices. Because the gazed object appears distorted in corneal reflection images, we present two approaches: one that corrects the image distortion and then applies conventional object detection, and one that applies deep learning directly to the corneal reflection images. For the latter approach in particular, we describe how gaze-measurement results can be used to easily collect large amounts of training data.

  10. G. A. Garcia Ricardez, F. von Drigalski, L. El Hafi, S. Okada, P.-C. Yang, W. Yamazaki, V. G. Hoerig, A. Delmotte, A. Yuguchi, M. Gall, C. Shiogama, K. Toyoshima, P. M. Uriguen Eljuri, R. Elizalde Zapata, M. Ding, J. Takamatsu, and T. Ogasawara, "Warehouse Picking Automation System with Learning- and Feature-based Object Recognition and Grasping Point Estimation", in Proceedings of 2017 SICE System Integration Division Annual Conference (SI 2017), pp. 2249-2253, Sendai, Japan, Dec. 2017.

    Abstract

    The Amazon Robotics Challenge (ARC) has become one of the biggest robotic competitions in the field of warehouse automation and manipulation. In this paper, we present our solution to the ARC 2017, which uses both learning-based and feature-based techniques for object recognition and grasp point estimation in unstructured collections of objects and a partially controlled space. Our solution proved effective both for previously unknown items, even with little data acquisition, and for items from the training set, obtaining 6th place out of 16 contestants.

  11. G. A. Garcia Ricardez, F. von Drigalski, L. El Hafi, M. Ding, J. Takamatsu, and T. Ogasawara, "Lessons from the Airbus Shopfloor Challenge 2016 and the Amazon Robotics Challenge 2017", in Proceedings of 2017 SICE System Integration Division Annual Conference (SI 2017), pp. 572-575, Sendai, Japan, Dec. 2017.

    Abstract

    International robotics competitions bring together the research community to solve real-world, current problems such as drilling in aircraft manufacturing (Airbus Shopfloor Challenge) and warehouse automation (Amazon Robotics Challenge). In this paper, we discuss our approaches to these competitions and describe the technical difficulties, design philosophy, development, lessons learned and remaining challenges.

  12. L. El Hafi, "STARE: Real-Time, Wearable, Simultaneous Gaze Tracking and Object Recognition from Eye Images", in Doctor thesis, Nara Institute of Science and Technology (NAIST), Ikoma, Japan, Sep. 2017.

    Abstract

    This thesis proposes STARE, a wearable system to perform real-time, simultaneous eye tracking and focused object recognition for daily-life applications in varied illumination environments. The proposed system extracts both the gaze direction and scene information using eye images captured by a single RGB camera facing the user's eye. In particular, the method requires neither infrared sensors nor a front-facing camera to capture the scene, making it more socially acceptable when embedded in a wearable device. This approach is made possible by recent technological advances in increased resolution and reduced size of camera sensors, as well as significantly more powerful image treatment techniques based on deep learning. First, a model-based approach is used to estimate the gaze direction using RGB eye images. A 3D eye model is constructed from an image of the eye by fitting an ellipse onto the iris. The gaze direction is then continuously tracked by rotating the model to simulate projections of the iris area for different eye poses and matching the iris area of the subsequent images with the corresponding projections obtained from the model. By using an additional one-time calibration, the point of regard (POR) is computed, which makes it possible to identify where a user is looking in the scene image reflected on the cornea. Next, objects in the scene reflected on the cornea are recognized in real time using the gaze direction information. Deep learning algorithms are applied to classify and then recognize the focused object in the area surrounding the reflected POR on the eye image. Additional processing using High Dynamic Range (HDR) imaging demonstrates that the proposed method can perform in varied illumination conditions. Finally, the validity of the approach is verified experimentally with a 3D-printable prototype of a wearable device equipped with dual cameras, and a high-sensitivity camera in extreme illumination conditions. Further, a proof-of-concept implementation of a state-of-the-art neural network shows that the focused object recognition can be performed in real time. To summarize, the proposed method and prototype contribute a novel, complete framework to 1) simultaneously perform eye tracking and focused object analysis in real time, 2) automatically generate datasets of focused objects by using the reflected POR, 3) reduce the number of sensors in current gaze trackers to a single RGB camera, and 4) enable daily-life applications in all kinds of illumination. The combination of these features makes it an attractive choice for eye-based human behavior analysis, as well as for creating large datasets of objects focused on by the user during daily tasks.
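
    A toy Python sketch of the ellipse-fitting idea summarized above (not the thesis implementation): a circular iris viewed obliquely projects to an ellipse, so a coarse tilt angle of the eye can be recovered from the minor-to-major axis ratio of an ellipse fitted to the iris contour.

      # Toy example: eye tilt from an iris ellipse fit (assumes the iris boundary
      # pixels have already been segmented).
      import numpy as np
      import cv2

      def iris_tilt_from_contour(iris_contour_points):
          """iris_contour_points: Nx2 float32 array of iris boundary pixels (N >= 5)."""
          (cx, cy), (d1, d2), orientation_deg = cv2.fitEllipse(iris_contour_points)
          major, minor = max(d1, d2), min(d1, d2)
          tilt_deg = np.degrees(np.arccos(np.clip(minor / major, 0.0, 1.0)))
          return (cx, cy), tilt_deg, orientation_deg

      # Synthetic check: a circle of radius 50 px tilted by 30 degrees around the vertical axis.
      theta = np.linspace(0.0, 2.0 * np.pi, 100)
      pts = np.stack([50.0 * np.cos(theta) * np.cos(np.radians(30.0)) + 200.0,
                      50.0 * np.sin(theta) + 150.0], axis=1).astype(np.float32)
      center, tilt, orientation = iris_tilt_from_contour(pts)
      print(center, tilt)   # tilt should be close to 30 degrees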

  13. F. von Drigalski*, L. El Hafi*, P. M. Uriguen Eljuri*, G. A. Garcia Ricardez*, J. Takamatsu, and T. Ogasawara, "Vibration-Reducing End Effector for Automation of Drilling Tasks in Aircraft Manufacturing", in IEEE Robotics and Automation Letters (RA-L), vol. 2, no. 4, pp. 2316-2321, Oct. 2017. [Also in Proceedings of 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, Sept 2017.][*Authors contributed equally.]

    Abstract

    In this letter, we present an end effector that can drill holes compliant to aeronautic standards while mounted on a lightweight robot arm. There is an unmet demand for a robotic solution capable of drilling inside an aircraft fuselage, as size, weight, and space constraints disqualify current commercial solutions for this task. Our main contribution is the mechanical design of the end effector with high-friction, vibration-reducing feet that are pressed against the workpiece during the drilling process to increase stability, and a separate linear actuator to advance the drill. This relieves the robot arm of the task of advancing and stabilizing the drill, and leaves it with the task of positioning and holding the end effector. The stabilizing properties of the end effector are confirmed experimentally. The solution took first place at the Airbus Shopfloor Challenge, an international robotics competition held at ICRA 2016 that modeled the in-fuselage drilling task.

  14. L. El Hafi, M. Ding, J. Takamatsu, and T. Ogasawara, "STARE: Realtime, Wearable, Simultaneous Gaze Tracking and Object Recognition from Eye Images", in SMPTE Motion Imaging Journal, vol. 126, no. 6, pp. 37-46, Aug. 2017.

    Abstract

    We propose STARE, a wearable system to perform realtime, simultaneous eye tracking and focused object recognition for daily-life applications in varied illumination environments. Our proposed method uses a single camera sensor to evaluate the gaze direction and requires neither a front-facing camera nor infrared sensors. To achieve this, we describe: 1) a model-based approach to estimate the gaze direction using red-green-blue (RGB) eye images; 2) a method to recognize objects in the scene reflected on the cornea in real time; and 3) a 3D-printable prototype of a wearable gaze-tracking device. We verify the validity of our approach experimentally with different types of cameras in different illumination settings, and with a proof-of-concept implementation of a state-of-the-art neural network. The proposed system can be used as a framework for RGB-based eye tracking and human behavior analysis.

  15. G. A. Garcia Ricardez*, L. El Hafi*, F. von Drigalski*, R. Elizalde Zapata, C. Shiogama, K. Toyoshima, P. M. Uriguen Eljuri, M. Gall, A. Yuguchi, A. Delmotte, V. G. Hoerig, W. Yamazaki, S. Okada, Y. Kato, R. Futakuchi, K. Inoue, K. Asai, Y. Okazaki, M. Yamamoto, M. Ding, J. Takamatsu, and T. Ogasawara, "Climbing on Giant's Shoulders: Newcomer's Road into the Amazon Robotics Challenge 2017", in Proceedings of 2017 IEEE/RAS Warehouse Picking Automation Workshop (WPAW 2017), Singapore, Singapore, May 2017. [*Authors contributed equally.]

    Abstract

    The Amazon Robotics Challenge has become one of the biggest robotic challenges in the field of warehouse automation and manipulation. In this paper, we present an overview of materials available for newcomers to the challenge, what we learned from the previous editions and discuss the new challenges within the Amazon Robotics Challenge 2017. We also outline how we developed our solution, the results of an investigation on suction cup size and some notable difficulties we encountered along the way. Our aim is to speed up development for those who come after and, as first-time contenders like us, have to develop a solution from zero.

  16. L. El Hafi, M. Ding, J. Takamatsu, and T. Ogasawara, "眼球画像を用いた視線追跡と物体認識 日常生活のための装着型デバイス", in Proceedings of 2017 JSME Robotics and Mechatronics Conference (ROBOMECH 2017), Fukushima, Japan, May 2017. [Published in Japanese.]

    Abstract

    In this paper, we propose a method for recognizing gazed objects with a new wearable device by improving our previous method for estimating the gaze direction from eye images. First, to estimate the gaze direction and the point of regard, we use a model-based approach from our earlier work. Next, using the eye images and the gaze direction information, we extract the region corresponding to the gazed area from the scene reflected on the cornea and recognize the object using deep learning. Since the gazed region can be extracted automatically, the effort required to build training datasets for deep learning is reduced.

  17. L. El Hafi, M. Ding, J. Takamatsu, and T. Ogasawara, "Gaze Tracking and Object Recognition from Eye Images", in Proceedings of 2017 IEEE International Conference on Robotic Computing (IRC 2017), pp. 310-315, Taichung, Taiwan, Apr. 2017.

    Abstract

    This paper introduces a method to identify the focused object in eye images captured from a single camera in order to enable intuitive eye-based interactions using wearable devices. Indeed, eye images make it possible not only to obtain natural user responses from eye movements, but also to capture the scene reflected on the cornea without the need for additional sensors such as a frontal camera, thus making the device more socially acceptable. The proposed method relies on a 3D eye model reconstruction to evaluate the gaze direction from the eye images. The gaze direction is then used in combination with deep learning algorithms to classify the focused object reflected on the cornea. Finally, the experimental results using a wearable prototype demonstrate the potential of the proposed method solely based on eye images captured from a single camera.

  18. L. El Hafi, M. Ding, J. Takamatsu, and T. Ogasawara, "Gaze Tracking Using Corneal Images Captured by a Single High-Sensitivity Camera", in The Best of IET and IBC 2016-2017, vol. 8, pp. 19-24, Sep. 2016. [Also in Proceedings of 2016 International Broadcasting Convention (IBC 2016), pp. 33-43, Amsterdam, Netherlands, Sep. 2016.]

    Abstract

    This paper introduces a method to estimate the gaze direction using images of the eye captured by a single high-sensitivity camera. The purpose is to develop wearable devices that enable intuitive eye-based interactions and applications. Indeed, camera-based solutions, as opposed to commercially available infrared-based ones, allow wearable devices not only to obtain natural user responses from eye movements, but also to capture scene images reflected on the cornea, without the need for additional sensors. The proposed method relies on a model approach to evaluate the gaze direction and does not require a frontal camera to capture scene information, making it more socially acceptable if embedded in a glasses-shaped device. Moreover, recent developments in high-sensitivity camera sensors make the proposed method viable even in low-light conditions. Finally, experimental results using a prototype wearable device demonstrate the potential of the proposed method solely based on cornea images captured from a single camera.

  19. F. von Drigalski*, L. El Hafi*, P. M. Uriguen Eljuri*, G. A. Garcia Ricardez*, J. Takamatsu, and T. Ogasawara, "NAIST Drillbot: Drilling Robot at the Airbus Shopfloor Challenge", in Proceedings of 2016 Annual Conference of the Robotics Society of Japan (RSJ 2016), Yamagata, Japan, Sep. 2016. [*Authors contributed equally.]

    Abstract

    We propose a complete, modular robotic solution for industrial drilling tasks in an aircraft fuselage. The main contribution is a custom-made end effector with vibration-reducing feet that rest on the workpiece during the drilling process to increase stability. The solution took 1st place at the Airbus Shopfloor Challenge, an international robotics competition held at ICRA 2016.

  20. L. El Hafi, P. M. Uriguen Eljuri, M. Ding, J. Takamatsu, and T. Ogasawara, "Wearable Device for Camera-Based Eye Tracking: Model Approach Using Cornea Images", in Proceedings of 2016 JSME Robotics and Mechatronics Conference (ROBOMECH 2016), Yokohama, Japan, Jun. 2016.

    Abstract

    The industry's recent growing interest in virtual reality, augmented reality and smart wearable devices has created a new momentum for eye tracking. Eye movements in particular are viewed as a way to obtain natural user responses from wearable devices alongside gaze information used to analyze interests and behaviors. This paper extends our previous work by introducing a wearable eye-tracking device that enables the reconstruction of 3D eye models of each eye from two RGB cameras. The proposed device is built using high-resolution cameras and a 3D-printed frame attached to a pair of JINS MEME glasses. The 3D eye models reconstructed from the proposed device can be used with any model-based eye-tracking approach. The proposed device is also capable of extracting scene information from the cornea reflections captured by the cameras, detecting blinks from an electrooculography sensor as well as tracking head movements from an accelerometer combined with a gyroscope.

  21. L. El Hafi, K. Takemura, J. Takamatsu, and T. Ogasawara, "Model-Based Approach for Gaze Estimation from Corneal Imaging Using a Single Camera", in Proceedings of 2015 IEEE/SICE International Symposium on System Integration (SII 2015), pp. 88-93, Nagoya, Japan, Dec. 2015.

    Abstract

    This paper describes a method to estimate the gaze direction using cornea images captured by a single camera. The purpose is to develop wearable devices capable of obtaining natural user responses, such as interests and behaviors, from eye movements and scene images reflected on the cornea. From an image of the eye, an ellipse is fitted on the colored iris area. A 3D eye model is reconstructed from the ellipse and rotated to simulate projections of the iris area for different eye poses. The gaze direction is then evaluated by matching the iris area of the current image with the corresponding projection obtained from the model. We finally conducted an experiment using a head-mounted prototype to demonstrate the potential of such an eye-tracking method solely based on cornea images captured from a single camera.

  22. A. Yuguchi, R. Matsura, R. Baba, Y. Hakamata, W. Yamazaki, F. von Drigalski, L. El Hafi, S. Tsuichihara, M. Ding, J. Takamatsu, and T. Ogasawara, "Development of Robot Control Components for Ball-catching Task Using Motion Capture Device", in Proceedings of 2015 SICE System Integration Division Annual Conference (SI 2015), Nagoya, Japan, Dec. 2015. [Published in Japanese.]

    Abstract

    This paper describes the design and implementation of RT-middleware components for a ball-catching task by humanoid robots. We create a component to get the position of a thrown reflective ball from a motion capture device. We also create a component to estimate the trajectory and the point where the ball will fall. The estimation is used to catch the ball using an HRP-4 humanoid robot with the control component.
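
    As a hedged illustration of the trajectory-estimation step (not the RT-middleware components from the paper), the landing point of a thrown ball can be predicted from a few tracked positions under a constant-gravity ballistic model:

      # Fit x(t), y(t) linearly and z(t) with a known gravity term, then solve for
      # the time at which the ball reaches the catching height.
      import numpy as np

      G = 9.81  # gravity [m/s^2]

      def predict_landing(times, positions, catch_height=0.8):
          """times: (N,) seconds; positions: (N, 3) metres from the motion capture device."""
          t = np.asarray(times, dtype=float)
          p = np.asarray(positions, dtype=float)
          A = np.stack([np.ones_like(t), t], axis=1)
          (x0, vx), _, _, _ = np.linalg.lstsq(A, p[:, 0], rcond=None)
          (y0, vy), _, _, _ = np.linalg.lstsq(A, p[:, 1], rcond=None)
          # z(t) = z0 + vz*t - 0.5*g*t^2  ->  fit z(t) + 0.5*g*t^2 linearly.
          (z0, vz), _, _, _ = np.linalg.lstsq(A, p[:, 2] + 0.5 * G * t**2, rcond=None)
          # Solve 0.5*g*t^2 - vz*t + (catch_height - z0) = 0 for the later root.
          disc = vz**2 - 2.0 * G * (catch_height - z0)
          if disc < 0:
              return None                       # the ball never reaches catch_height
          t_land = (vz + np.sqrt(disc)) / G
          return x0 + vx * t_land, y0 + vy * t_land, t_land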

  23. L. El Hafi, J.-B. Lorent, and G. Rouvroy, "Mapping SDI with a Light-Weight Compression for High Frame Rates and Ultra-HD 4K Transport over SMPTE 2022-5/6", in Proceedings of VSF VidTrans14 Content in Motion, Annual Technical Conference and Exposition, Arlington, United States, Feb. 2014.

    Abstract

    Considering the necessary bandwidth for the next generation of television with higher resolution video and higher frame rates, live uncompressed transport across a 10 Gb Ethernet network is not always possible. Indeed, uncompressed 4K video at 60 fps requires 12 Gbps or more. A light-weight compression can be optimal to address this challenge. A pure lossless codec would be best. However, in general it is difficult to predict the compression ratio achievable by a lossless codec. Therefore, a light-weight, visually lossless compression with a guaranteed very low compression ratio and no impact on latency seems optimal to perfectly map SDI links over SMPTE 2022-5/6.
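
    The bandwidth argument can be checked with simple arithmetic. The sketch below assumes 4:2:2 10-bit sampling and counts the active picture only; a full 12G-SDI signal (about 11.88 Gbps) also carries blanking and ancillary data, which is why UHD 4K at 60 fps does not fit uncompressed on a 10 Gb Ethernet link:

      # Active-picture bitrate for chroma-subsampled video (4:2:2 -> 2 samples per pixel).
      def active_picture_gbps(width, height, fps, bit_depth=10, samples_per_pixel=2):
          return width * height * fps * bit_depth * samples_per_pixel / 1e9

      for fps in (30, 60, 120):
          print(f"UHD 4K @ {fps:>3} fps: ~{active_picture_gbps(3840, 2160, fps):.1f} Gbps active picture")
      # -> ~5.0, ~10.0 and ~19.9 Gbps respectively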

  24. L. El Hafi* and T. Denison*, "TICO: Study of a Low-Complexity Video Compression Scheme for FPGA", Master's thesis, Université catholique de Louvain (UCLouvain) & intoPIX, Louvain-la-Neuve, Belgium, Jun. 2013. [*Authors contributed equally.][Published in French.]

    Abstract

    The evolution of display technologies, in terms of screen resolution, frame rate, and color depth, requires new compression systems, notably to reduce the power consumed at video interfaces. In response to this problem, the Video Electronics Standards Association (VESA) consortium issued a call for proposals in January 2013 for the creation of a new Display Stream Compression (DSC) standard. This document, produced in collaboration with intoPIX, answers the VESA call and proposes Tiny Codec (TICO), a video compression scheme of low hardware complexity. It describes, on the one hand, the algorithmic study of an entropy coder inspired by Universal Variable Length Coding (UVLC), achieving an 85% efficiency on filmed content, and, on the other hand, the FPGA implementation of a horizontal 5:3 discrete wavelet transform processing 4K video streams at up to 120 frames per second. The implementation consumes 340 slices per color component on Xilinx's low-power Artix-7 platforms.
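
    For reference, a software sketch in Python (not the FPGA hardware described in the thesis) of one level of the reversible LeGall 5:3 lifting transform, using integer arithmetic and symmetric boundary extension as in JPEG 2000 Part 1:

      # One-level horizontal 5:3 lifting on a 1-D integer signal of even length.
      import numpy as np

      def dwt53_row(x):
          """Forward transform: returns approximation s and detail d coefficients."""
          x = np.asarray(x, dtype=np.int64)
          even, odd = x[0::2], x[1::2]
          even_next = np.append(even[1:], even[-1])      # symmetric extension (right)
          d = odd - ((even + even_next) >> 1)            # predict step
          d_prev = np.insert(d[:-1], 0, d[0])            # symmetric extension (left)
          s = even + ((d_prev + d + 2) >> 2)             # update step
          return s, d

      def idwt53_row(s, d):
          """Inverse transform: exact reconstruction of the original samples."""
          d_prev = np.insert(d[:-1], 0, d[0])
          even = s - ((d_prev + d + 2) >> 2)
          even_next = np.append(even[1:], even[-1])
          odd = d + ((even + even_next) >> 1)
          x = np.empty(2 * len(s), dtype=np.int64)
          x[0::2], x[1::2] = even, odd
          return x

      sig = np.random.default_rng(0).integers(0, 1024, size=64)   # 10-bit samples
      s, d = dwt53_row(sig)
      assert np.array_equal(idwt53_row(s, d), sig)                # perfect reconstruction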

Fundings

  1. Research Promotion Program for Acquiring Grants-in-Aid for Scientific Research (KAKENHI)

    Sep. 2019

    by Ritsumeikan University with 200,000 JPY
    in Kyoto, Japan

    Details

    Awarded a grant to stimulate the acquisition of additional competitive funds through applications for Grants-in-Aid for Scientific Research (KAKENHI).

  2. Research Promotion Program for Acquiring Grants-in-Aid for Scientific Research (KAKENHI)

    Sep. 2018

    by Ritsumeikan University with 200,000 JPY
    in Kyoto, Japan

    Details

    Awarded a grant to stimulate the acquisition of additional competitive funds through applications for Grants-in-Aid for Scientific Research (KAKENHI).

  3. CREST AIP Challenge Program

    Jun. 2018

    by Japan Science and Technology Agency (JST) with 1,000,000 JPY
    in Tokyo, Japan

    Details

    Awarded a research grant as a young researcher belonging to a CREST team under the AIP Network Laboratory, to explore and develop original research related to the CREST project objectives.

  4. MEXT Scholarship Program

    Feb. 2014

    by Japan Ministry of Education, Culture, Sports, Science and Technology (MEXT) with 6,060,000 JPY
    in Tokyo, Japan

    Details

    Selected to conduct research in Japan through recommendation by the Japanese diplomatic mission in Belgium.

  5. EXPLORT Program

    Sep. 2013

    by Wallonia Foreign Trade and Investment Agency (AWEX) with 3,000 EUR
    in Brussels, Belgium

    Details

    Selected among more than 70 candidates for intensive training in international business and management, followed by a mission in the United States.

Projects

  • Airbus Shopfloor Challenge 2016

    Mar. 2016 – May 2016

    as Team Member

    Details

    My teammates and I took 1st place at the Airbus Shopfloor Challenge, the biggest robotics challenge of the 2016 IEEE International Conference on Robotics and Automation (ICRA 2016). We designed an advanced lightweight robotic system able to perform autonomous and accurate drilling compliant with aeronautic standards. Our robot performed several rounds of a drilling task on an artefact representing part of an aircraft fuselage with some access constraints. Success was measured by the number of holes drilled within a specified time and their accuracy.

  • TICO: Tiny intoPIX Codec

    Sep. 2012 - Mar. 2014

    as Research Intern

    Details

    Master's thesis carried out at intoPIX, a leading provider of JPEG 2000 video compression IP for FPGA. We assisted intoPIX in the development of TICO: a low-complexity, FPGA-based video compression solution to manage Ultra HD content at low cost. In particular, we successfully designed and implemented a discrete wavelet transform on FPGA to process DCI 4K video at 120 fps.

  • Eurobot 2012: Treasure Island

    Sep. 2011 - Apr. 2012

    as Team Member

    Details

    A year-long mechatronics project to build a robot from scratch in order to compete in the international robotics cup Eurobot. My responsibilities included defining the scope statement, designing mechanical parts using CAD tools, assisting in the assembly, creating a remote Web control interface over Wi-Fi, and providing support for debugging the AI. Team Kraken proudly reached the quarterfinals of the Belgian national qualifications.

Travel