Robot Research and Development

The Robotics team in Markham, Ontario, leads our innovation efforts to make robots smarter, giving them the ability to See, Think & React as humans do.

We develop applications and products that will have a direct impact on the next generation of Epson robotic systems.

Our Research

We are primarily product focused and driven by real-world problems and challenges, including:

  • Factory environments
  • Perception acquisition
  • Vision

Our Innovation

Our work and research ideas include:

  • Deep learning
  • Reinforcement learning
  • Transfer learning
  • Virtual simulations
  • Other machine learning disciplines

Academic Collaboration

To keep abreast of the latest theoretical advances in the field of robotics, our R&D group actively collaborates with academic partners on various projects.

Steven L. Waslander

Associate Professor, Institute for Aerospace Studies
Director, Toronto Robotics and AI Laboratory

In a factory environment, perception of the operating environment is negatively impacted by many factors, including lighting conditions, reflectance from object materials, occlusions, cluttered scenes, and the limited field of view and resolution of the sensors.

The project explores the use of actuated sensors, as an alternative to static multi-sensor clusters, to mitigate these issues.
Dynamic sensor systems (DSSs), that is, systems with actuated sensors, can be stabilized and directed at areas of interest independently of robot motion, yielding higher-value, higher-quality measurements.
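
As a minimal, purely illustrative sketch (not the project's implementation), the snippet below computes pan/tilt commands that keep an actuated camera aimed at a fixed world-frame point of interest while the robot base rotates underneath it; the two-axis gimbal model and all frames are assumptions.

```python
# Illustrative sketch only: aiming a pan/tilt-actuated camera at a fixed
# world-frame point of interest while the robot base moves beneath it.
import numpy as np

def look_at_angles(target_w, cam_pos_w, base_yaw):
    """Pan/tilt angles (rad) that point the camera at target_w.

    target_w   -- 3D point of interest in the world frame
    cam_pos_w  -- camera pivot position in the world frame
    base_yaw   -- current yaw of the robot base; subtracted from the pan
                  command so the line of sight stays fixed in the world
                  as the base rotates underneath the sensor
    """
    d = np.asarray(target_w, float) - np.asarray(cam_pos_w, float)
    pan_world = np.arctan2(d[1], d[0])              # azimuth in world frame
    tilt = np.arctan2(d[2], np.hypot(d[0], d[1]))   # elevation
    pan = pan_world - base_yaw                      # compensate base rotation
    return pan, tilt

# The base rotates, but the commanded line of sight stays on the target.
target = [1.0, 0.5, 0.2]
for yaw in (0.0, 0.3, 0.6):
    pan, tilt = look_at_angles(target, cam_pos_w=[0, 0, 1.0], base_yaw=yaw)
    print(f"base_yaw={yaw:.1f}  pan={pan:.3f}  tilt={tilt:.3f}")
```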

Oliver Kroemer

Assistant Professor, Intelligent Autonomous Manipulation (IAM) Lab

Most industrial robot solutions for manipulation rely on task-specific programs designed by an expert. These programs often follow fixed task plans and trajectories and therefore rely on well-defined environments and exact object models. As a result, they limit possible robot applications to repeatable tasks in structured environments.

The project focuses on using Deep Learning algorithms to create a closed-loop control and planning policy that generalizes across a range of robot tasks while capturing the detailed requirements of each individual task.
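
As an illustration of the idea only (not the project's actual architecture), the sketch below conditions a single policy network on a learned task embedding, so one set of weights can serve several tasks in closed loop; all dimensions and names are assumptions.

```python
# Illustrative sketch: one policy network shared across tasks, conditioned
# on a per-task embedding, and run in a closed perception-action loop.
import torch
import torch.nn as nn

class TaskConditionedPolicy(nn.Module):
    def __init__(self, obs_dim=16, n_tasks=4, task_dim=8, act_dim=7):
        super().__init__()
        self.task_embed = nn.Embedding(n_tasks, task_dim)  # per-task code
        self.net = nn.Sequential(
            nn.Linear(obs_dim + task_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, act_dim),       # e.g. a joint-velocity command
        )

    def forward(self, obs, task_id):
        z = self.task_embed(task_id)       # captures task-specific detail
        return self.net(torch.cat([obs, z], dim=-1))

# Closed loop: at every control step, feed the latest observation back in.
policy = TaskConditionedPolicy()
obs = torch.zeros(1, 16)
for step in range(3):
    action = policy(obs, torch.tensor([2]))  # task 2 of 4
    obs = obs + 0.01 * torch.randn(1, 16)    # stand-in for the next sensor reading
```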

See: The Visual Cortex of the Robotic System

The team seeks to improve Image Sensing capabilities and functionality to generate optimal image data sets and to automatically control and adapt to new environmental lighting, object materials and scenes.

  • Environment Control: Reinforcement Learning methods to generate policies for fast adaptation to new factory environments and robotic tasks.
  • Multi-modality Imaging: Deep Learning approaches to fuse information from different sensor types.
  • Active Vision: Computer Vision & Reinforcement Learning methods to predict camera positions that minimize data-acquisition cycle time while optimizing the quality of image data sets (a minimal sketch of this idea follows the list).
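
As a purely illustrative sketch of the Active Vision item above (not the team's method), the snippet below greedily picks the next camera pose expected to reveal the most still-unseen scene surface while penalizing travel between poses; the geometry, visibility test and cost weights are all assumptions.

```python
# Illustrative next-best-view sketch: greedily trade new coverage against
# the motion cost of reaching each candidate camera pose.
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(-0.2, 0.2, size=(500, 3))   # stand-in scene points
seen = np.zeros(len(points), dtype=bool)

def visible(points, cam_pos, fov_cos=0.8):
    """Crude visibility: point lies within a cone looking at the origin."""
    view_dir = -cam_pos / np.linalg.norm(cam_pos)  # camera looks at scene center
    rays = points - cam_pos
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    return rays @ view_dir > fov_cos

candidates = [np.array([np.cos(a), np.sin(a), 0.6])
              for a in np.linspace(0, 2 * np.pi, 12, endpoint=False)]
cam = candidates[0]
for shot in range(4):
    seen |= visible(points, cam)
    # Greedy next-best-view: new coverage minus a travel penalty.
    def score(c):
        gain = np.count_nonzero(visible(points, c) & ~seen)
        return gain - 50.0 * np.linalg.norm(c - cam)  # weight is an assumption
    cam = max(candidates, key=score)
    print(f"shot {shot}: coverage {seen.mean():.0%}")
```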

Think: The Cerebrum of the Robotic System

The team leverages extensive experience in the fields of Computer Vision & Image Processing to detect objects and estimate their 3D poses in the scene. The technologies target challenging industrial use cases such as bin picking, with support for a large breadth of object types (rigid, flexible) and materials (matte, shiny).

  • Object Detection & Pose Estimation (ODPE): Deep Learning approaches to compute the 3D poses of objects in the scene (a geometric sketch follows).
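
As a sketch of the geometric back end such a pipeline typically relies on (not the team's method), the snippet below recovers a 6-DoF object pose from 2D detections of known 3D model keypoints using OpenCV's PnP solver; the model points, intrinsics and ground-truth pose are synthetic assumptions standing in for a detection network's output.

```python
# Illustrative sketch: 2D keypoint detections of a known 3D model -> 6-DoF
# pose via Perspective-n-Point. A deep network would supply the detections.
import numpy as np
import cv2

model_pts = np.array([[0, 0, 0], [.1, 0, 0], [0, .1, 0],
                      [0, 0, .1], [.1, .1, 0], [.1, 0, .1]], np.float32)
K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], np.float32)  # assumed intrinsics

# Fabricate a "true" pose and project the model points to get 2D detections,
# standing in for the output of a keypoint-detection network.
rvec_true = np.array([[0.2], [0.4], [0.1]], np.float32)
tvec_true = np.array([[0.05], [0.0], [0.5]], np.float32)
img_pts, _ = cv2.projectPoints(model_pts, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(model_pts, img_pts, K, None)
print("recovered rotation:", rvec.ravel())     # should match rvec_true
print("recovered translation:", tvec.ravel())  # should match tvec_true
```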

React: The Motor Cortex of the Robotic System

The team works to improve Robot Control & Grasping capabilities and functionality to minimize setup time and improve the user experience.

  • Motion Optimization: Machine Learning algorithms to optimize robot control parameters, reducing setup time and the need for user expertise while increasing throughput.
  • Robot Grasping: Deep Learning algorithms to predict grasp points via virtual simulations (a minimal scoring sketch follows this list).
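
As an illustration only (not the team's model), the sketch below scores depth-image patches for graspability with a small network; in the approach described above the training labels would come from virtual simulation, and every dimension here is an assumption.

```python
# Illustrative sketch: a small CNN that scores candidate grasp locations
# (depth patches); labels for training would come from virtual simulation.
import torch
import torch.nn as nn

class GraspScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),   # 1-channel depth patch
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 1)  # grasp-success logit for this patch

    def forward(self, depth_patch):
        return self.head(self.features(depth_patch))

scorer = GraspScorer()
patches = torch.randn(64, 1, 32, 32)   # stand-in candidate grasp patches
scores = scorer(patches).squeeze(1)
best = scores.argmax()                 # pick the most promising candidate
print("best candidate:", int(best), "logit:", float(scores[best]))
```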

Complete complex tasks using precise, dual-arm manipulation.

This next-generation robot is easy to set up, adapts quickly to new environments and can use the same tools as a human worker, enabling a flexible, scalable and fast-reacting production line.

It can carry out varied tasks almost autonomously thanks to its arm geometry, which is based on human physiology. With wide-ranging integrated sensors such as cameras, force sensors and accelerometers, even tasks that have so far been difficult to automate can now be performed with ease.

Features Include:
  • Highly flexible design featuring seven-axis dual arms and single-axis waist
  • Handles payloads of 3 kg with each arm, or 6 kg using both arms (E30)
  • Versatile, dexterous hands with force sensors on E30 arms
  • Built-in head and wrist-mounted cameras, with external cameras available as options
  • Compact dimensions: W600 mm x D780 mm x H1464 mm; weight 150 kg

Join the Team

Do you want to ride the next wave of disruptive technologies with us? Check our openings, which range from Research Scientist to Algorithm Developer and Evaluation Specialist.

CURRENT JOB OPENINGS