Maritime RobotX Challenge, Sydney 2022


We participated in the 2022 Maritime RobotX Challenge and were awarded 3rd place in the final stage. We built a heterogeneous USV and UAV team to solve multiple tasks, including autonomous docking, navigation, obstacle avoidance, UAV launch & recovery, Scan the Code, racquetball flinging, and acoustic pinging.

  • Won 3rd place out of 20 teams in the competition; served as team leader of Team NYCU
  • Developed a deep reinforcement learning autonomy system using TensorFlow with the
    Gazebo simulator; achieved sim-to-real transfer for goal navigation and collision avoidance
  • Integrated the autonomy system with the perception module and used a behavior tree
    to manage the state of the WAM-V, using Python, C++, and ROS
  • In charge of WAM-V-related missions; responsible for perception and autonomy
    as well as coordination with the UAV
  • Led a fifteen-member research team and served as the point of contact
    with the competition organizers
  • Competition Paper News
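
The behavior-tree approach mentioned above can be illustrated with a minimal sketch. This is not the actual competition code; the node types and the docking scenario (placard detection, then docking) are hypothetical examples of how a tree ticks mission state:

```python
# Minimal behavior-tree sketch (hypothetical nodes, not the competition code).
# A Sequence succeeds only if all children succeed, in order;
# a Fallback succeeds as soon as one child succeeds.
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Sequence:
    def __init__(self, children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    def __init__(self, children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

class Condition:
    """Succeeds if a flag is set in the shared mission state."""
    def __init__(self, key):
        self.key = key
    def tick(self, state):
        return SUCCESS if state.get(self.key) else FAILURE

class Action:
    """Stands in for a real action; here it just sets a state flag."""
    def __init__(self, name, effect):
        self.name, self.effect = name, effect
    def tick(self, state):
        state[self.effect] = True
        return SUCCESS

# Dock only once the placard is identified; otherwise keep scanning.
mission = Fallback([
    Sequence([Condition("placard_identified"), Action("dock", "docked")]),
    Action("scan_placards", "placard_identified"),
])

state = {}
mission.tick(state)  # first tick: scanning identifies the placard
mission.tick(state)  # second tick: the docking branch now succeeds
```

In practice each Action would wrap a ROS action client rather than setting a flag, but the tick-based control flow is the same idea.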



    Curriculum Reinforcement Learning for Navigating among Movable Obstacles

    We ranked the navigation difficulty metrics of various large-scale representative environments and trained DRL policies from scratch within a fixed computation budget. We found that policies in low-difficulty environments achieved high rewards, particularly in a relatively open, tunnel-like environment that required only wall following. To facilitate more complex policies for the NAMO task, we leveraged curriculum learning built upon pre-trained policies and developed pacing functions appropriate to the difficulty of the environment. The proposed scheme proved highly effective in training a local planner capable of clearing movable obstacles from its path.

  • Implemented distributed distributional deterministic policy gradient (D4PG) for navigation among movable obstacles using TensorFlow
  • Applied curriculum learning to guide the deep reinforcement learning agent toward higher rewards; handled complex tasks including passing narrow gates and interacting with doors
  • Paper Website
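
The pacing idea behind the curriculum can be sketched as a function mapping training progress to an environment stage. The thresholds and stage contents below are illustrative, not the values used in the actual experiments:

```python
# Sketch of a curriculum pacing function (thresholds are hypothetical).
# Stage 0 = easiest environment (open corridor, wall following);
# higher stages add movable obstacles, narrow gates, and doors.
import bisect

def pace(progress, thresholds=(0.25, 0.5, 0.75)):
    """Map normalized training progress in [0, 1] to a curriculum stage."""
    return bisect.bisect_right(thresholds, progress)

# The agent starts in the easiest stage and is promoted as training advances;
# each new stage fine-tunes the policy pre-trained on the previous one.
stages = [pace(p / 10) for p in range(11)]
```

A practical variant gates promotion on recent success rate rather than raw progress, so the agent only advances once the current stage is mastered.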



    Millimeter Wave Radar for Robot Navigation in Smoke

    We propose the use of single-chip millimeter-wave (mmWave) radar, which is lightweight and inexpensive, for learning-based autonomous navigation. Because mmWave radar signals are often noisy and sparse, we propose a cross-modal contrastive learning for representation (CM-CLR) method that maximizes the agreement between mmWave radar data and LiDAR data during training, enabling autonomous navigation from radar signals alone.

  • Built and calibrated a sensor system to collect synchronized data for millimeter-wave radar navigation
  • Responsible for the hardware system of the UGV; executed the experiments
  • Paper Website
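
The cross-modal agreement objective can be sketched with an InfoNCE-style contrastive loss: radar and LiDAR embeddings from the same frame form positive pairs, and other frames in the batch serve as negatives. This is a simplified NumPy sketch, not the actual CM-CLR training code, and the embeddings here are random stand-ins:

```python
# InfoNCE-style sketch of cross-modal agreement (illustrative, not CM-CLR itself).
import numpy as np

def info_nce(radar_emb, lidar_emb, temperature=0.1):
    """Each radar embedding should match the LiDAR embedding of the same
    frame (diagonal) and mismatch the other frames in the batch."""
    r = radar_emb / np.linalg.norm(radar_emb, axis=1, keepdims=True)
    l = lidar_emb / np.linalg.norm(lidar_emb, axis=1, keepdims=True)
    logits = r @ l.T / temperature                      # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                 # positives on the diagonal

rng = np.random.default_rng(0)
lidar = rng.normal(size=(8, 16))
# Radar embeddings close to their LiDAR counterparts give a low loss;
# unrelated embeddings give a high loss.
aligned = info_nce(lidar + 0.01 * rng.normal(size=(8, 16)), lidar)
shuffled = info_nce(rng.normal(size=(8, 16)), lidar)
```

Minimizing this loss pulls the radar encoder's representation toward the LiDAR encoder's, so at deployment the navigation policy can run on radar input alone.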



    DARPA SubT Challenge Urban Circuit, Seattle 2020

    The DARPA Subterranean (SubT) Challenge aims to develop innovative technologies that would augment operations underground, exploring new approaches to rapidly map, navigate, search, and exploit complex underground environments. I participated in the Urban Circuit with my team and our robots in a decommissioned nuclear power plant.

  • Built an emergency stop system to meet competition safety criteria; designed sensor brackets for unmanned ground vehicles in SolidWorks
  • Designed a pan-tilt system with correct coordinate transformations, using Dynamixel motors to extend the D435 camera's field of view
  • Built movable spherical communication nodes incorporating mesh Wi-Fi and XBee, using Python and ROS
  • Executed a fireproofing experiment with the spherical robot; the robot remained functional after burning for thirty seconds
  • Paper Video Fireproof Video
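
The pan-tilt coordinate transformation amounts to composing two rotations: a pan (yaw about the base z-axis) followed by a tilt (pitch about the panned y-axis). The frame conventions below are illustrative assumptions, not the actual bracket geometry:

```python
# Sketch of a pan-then-tilt rotation (hypothetical frame conventions:
# x forward, y left, z up; the real mount geometry differs).
import math

def pan_tilt_rotation(pan, tilt):
    """Rotation matrix from camera frame to base frame for a pan-tilt mount."""
    cp, sp = math.cos(pan), math.sin(pan)
    ct, st = math.cos(tilt), math.sin(tilt)
    yaw = [[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]]     # pan about base z
    pitch = [[ct, 0, st], [0, 1, 0], [-st, 0, ct]]   # tilt about panned y
    # Compose R = yaw @ pitch
    return [[sum(yaw[i][k] * pitch[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transform(R, v):
    """Apply a 3x3 rotation to a 3-vector."""
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

# A point straight ahead of the camera (x-forward) after a 90° pan
# ends up along the base frame's y-axis.
p = transform(pan_tilt_rotation(math.pi / 2, 0.0), [1.0, 0.0, 0.0])
```

In a ROS setup this transform would typically be published on `tf` so that detections from the panned camera land in the robot's base frame automatically.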



    Embedded Operating Systems

  • Designed a card-matching game on the PXA270 using sockets, semaphores, multithreading, and timers in C++


  • PyRobot: Pick-and-Place Mission in Simulation

    The goal is to identify an object, pick it up, move to the target location, and place it there.
    Task 1: Object Detection (Mask R-CNN)
    Task 2: Pose Estimation and Pick (DOPE)
    Task 3: Move to Destination (A*)
    Task 4: Place in the Box

  • Responsible for Tasks 2 and 3: used the DOPE algorithm for pose estimation and the A* algorithm for goal navigation
  • Report
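
The A* goal navigation used in Task 3 can be sketched on a small occupancy grid. This is a minimal illustrative implementation, not the project's actual planner:

```python
# Minimal A* sketch on a 4-connected occupancy grid (1 = obstacle).
import heapq

def astar(grid, start, goal):
    """Return the shortest path as a list of (row, col) cells, or None."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, cell, path)
    best_g = {start: 0}
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and best_g.get(nxt, g + 2) > g + 1):
                best_g[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

# The planner routes around the wall of obstacles in the middle row.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

The Manhattan heuristic is admissible on a 4-connected grid, so the returned path is cost-optimal; a real planner would additionally inflate obstacles by the robot's footprint.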