Objective and Motivation

A new wave of Artificial Intelligence based on Deep Learning, enabled by massive datasets, GPUs, and cloud computing, is being embraced by industry for applications ranging from autonomous driving to digital personal assistants. What are AI's implications for Factory Automation, where labor costs are increasing, product lifecycles are getting shorter, and customized products are increasingly popular?

 

This workshop will explore the latest advances with experts from academia and industry, including national project updates from Japan and China. One area of interest is advances in Deep Learning for Machine Vision, with applications in inspection. Another is Deep Learning from Demonstrations, where robots learn from observing humans performing tasks such as assembly and warehouse order fulfillment.


Scope

Relevant topics of our workshop include, but are not limited to:

  1. Assembly Planning/Motion Planning of Industrial Manipulators
  2. AI Technologies such as Deep Learning and Big Data Used in Automation
  3. Advanced Machine Vision Used in Automation
  4. Human-Machine Collaboration in Automation
  5. Warehouse Automation
  6. Scheduling 

Program

Industrial Application 1


8:30-8:50

Rosen Diankov (MUJIN Co. Ltd.)

Challenges in Deploying Reliable Picking Systems in the Toughest Market in the World


8:50-9:10

Satyandra K. Gupta (Univ. Southern California)

Smart Robotic Assistants for Non-Repetitive Manufacturing Tasks


9:10-9:30

Henrik Christensen (UC San Diego)

AI/ML for Asset Monitoring/Supply Chain Optimization in Manufacturing


Research Project


9:30-9:40

Kensuke Harada (Osaka Univ./AIST)


9:40-10:10

Break


Deep Learning


10:10-10:30

Pieter Abbeel (UC Berkeley)

Deep Reinforcement Learning for High Precision Manipulation


10:30-10:50

Tetsuya Ogata (Waseda Univ.)

End to End Learning Models for Robot Object Manipulation


10:50-11:10

Hironobu Fujiyoshi (Chubu Univ.)

Cloud-based Visual Recognition by Deep Learning


11:10-11:30

Sergey Levine (UC Berkeley)

Deep Robotic Learning


11:30-13:00

Lunch Break


Selected Papers 1


13:00-13:15

A. W. de Jong, J. I. U. Rubrico (Univ. Tokyo), M. Adachi (Yaskawa Electric Corp.), T. Nakamura, and J. Ota (Univ. Tokyo)


13:15-13:30

I. Clavera, D. Held, and P. Abbeel (UC Berkeley)


13:30-13:45

A. K. Singh and Q.-C. Pham (Nanyang Tech. Univ.)

Reactive Path Coordination for Warehouse Automation


13:45-14:00

T. Barbie, R. Kabutan, R. Tanaka, and T. Nishida (Kyushu Inst. Tech.)


14:00-14:30

Break


From the EiC


14:30-14:40

Michael Y. Wang (HKUST, EiC of T-ASE)


Industrial Application 2


14:40-15:00

Toru Nishikawa (Preferred Networks Inc.)

Deep Learning: IoT's Driving Engine


15:00-15:20

Yukiyasu Domae (Mitsubishi Electric Corp.)

Picking robots with factory, warehouse and “the most advanced telescope on Earth”


15:20-15:40

Juan Rojas (Guangdong Univ. of Tech.)

Online Autonomous Manipulation Decision Making


15:40-16:10

Break



Planning


16:10-16:30

Alberto Rodriguez (MIT)

Dexterous Manipulation with non-Dexterous Manipulators


Selected Papers 2


17:10-17:25

C. D. Tsai and M. Kaneko (Osaka Univ.)

Machine Vision for Cell Manipulation and Automation on a Chip



Call for Papers (Closed)

We call for papers to be presented in the interactive session. We encourage submission of extended abstracts/short papers in the standard IEEE ICRA conference format. Abstracts/short papers are a maximum of four pages and should indicate the category of the paper. We strongly encourage participation from industry, whether highlighting the use of AI and automation techniques in industrial applications or describing open problems in this area. Please send a PDF file of the paper as an e-mail attachment to

  

 2017_ai_automation[at]googlegroups.com  (Please replace [at] by @)

  

Schedule

  Deadline of Submission: 1st of March, 2017 (Closed).

  Notification of Acceptance: 7th of March, 2017.


Abstract of Invited Talks

Rosen Diankov (MUJIN Co. Ltd.)

Challenges in Deploying Reliable Picking Systems in the Toughest Market in the World

 

Abstract: Mujin started with a simple core technology and perfected it into a solid robotics product that is starting to accomplish the next-generation manipulation applications of the manufacturing and logistics sectors. Some of these applications include picking up randomized metal parts from bins and loading them into machines, performing e-commerce order fulfillment supporting thousands of products, de-palletizing of boxes of various sizes, and assembling parts. Unfortunately, the complexities of making reliable and fast autonomous picking systems that can compete with human performance are colossal, and solving these challenges requires a perfect understanding of all levels of the system and how information flows throughout it. Every algorithm chosen has to be well understood and predictable so that the system as a whole is safe and trusted. In this presentation, I'll talk about the history of the Mujin PickWorker product and the plethora of unforeseen challenges that had to be solved in order to deploy in the industrial robotics market.

 


Satyandra K. Gupta (Univ. of Southern California)

Smart Robotic Assistants for Non-Repetitive Manufacturing Tasks

Abstract: Traditionally, industrial robots have been used on mass production lines, where the same manufacturing operation is repeated many times. Many sectors of manufacturing, such as aerospace, defense, shipbuilding, and mold and die making, involve small production volumes and non-repetitive tasks. Currently, industrial robots are not used in such applications. The use of robotic assistants can significantly improve human operator productivity in small production volume manufacturing and eliminate the need for human involvement in tasks that pose risks to humans. Recent advances in human-safe industrial robots present an opportunity for creating hybrid work cells, where humans and robots can collaborate in close physical proximity. This capability enables realizing systems that utilize the complementary strengths of humans and robots. Several new low-cost robots have been introduced in the market over the last few years, making them attractive in many new manufacturing applications where robot utilization is not expected to be very high. This makes the idea of hybrid cells economically viable for small volume production. This presentation will describe computational foundations for creating robotic assistants for non-repetitive manufacturing tasks. We will begin with an overview of an integrated decision making approach that brings together concepts from perception, planning, control, and learning to realize robotic assistants that can aid human workers in manufacturing. Traditional off-line robot programming approaches cannot be used on non-repetitive tasks. We will describe a new decision making approach based on the integration of real-time planning and perception for performing non-repetitive tasks using robots. There are many challenging tasks for which a simulation-based planning approach cannot be used to select the optimal process parameters. For such tasks, we will describe a new approach for robots to learn task parameters from self-exploration.
Both humans and robots can make errors in a hybrid cell, hence creating contingency situations. Unless handled promptly, a contingency situation may lead to significant operational inefficiencies. We will describe a decision making approach for detecting and managing contingencies.  Bin picking, assembly, and cleaning tasks will be used as illustrative examples to show how robots can be used on non-repetitive manufacturing tasks.  

 


Pieter Abbeel (UC Berkeley)

Deep Reinforcement Learning for High Precision Manipulation

 

Abstract: Recent applications of deep reinforcement learning (DRL) have demonstrated impressive capabilities in learning robotic skills for various domains, including contact-rich manipulation. In these works, a controller is typically learned from scratch, through repeated trial-and-error interaction of the robot with the environment. By basing control on learning instead of an analytical model of the task, DRL is naturally equipped to handle difficult-to-model aspects such as friction, contacts, and non-rigid deformations, which are typical challenges in robotic assembly tasks.

In many industrial applications, however, we often have substantial prior knowledge about the domain and task, which could potentially improve the learning process. Here we focus on assembly tasks, where a geometric CAD model of the parts and a corresponding assembly plan are often available as part of the product design. We ask: how can we combine the robust skill learning of DRL with the rich geometric knowledge available in the CAD model?

We propose to use classical motion planning with the CAD model to construct a reference trajectory, and then *learn* to track this trajectory using DRL. This approach naturally balances planning on the easy-to-model parts of the trajectory with learning for the more difficult-to-model areas. We empirically evaluate our approach on contact-rich manipulation tasks in both simulated and real environments, such as peg insertion by a PR2 robot.
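The plan-then-track idea above can be sketched as a reward that penalizes deviation from the planned reference trajectory plus a small action penalty; the weights, dimensions, and straight-line reference below are illustrative assumptions, not details from the talk.

```python
import numpy as np

def tracking_reward(state, action, t, reference, w_track=1.0, w_ctrl=1e-3):
    """Negative cost: stay close to the planned reference pose at step t,
    with a small penalty on the magnitude of the corrective action."""
    target = reference[min(t, len(reference) - 1)]   # hold final pose at the end
    err = float(np.linalg.norm(state - target))
    return -w_track * err ** 2 - w_ctrl * float(np.dot(action, action))

# A straight-line 2-D reference, standing in for a CAD-based motion plan:
reference = np.linspace([0.0, 0.0], [1.0, 0.0], num=5)
r = tracking_reward(np.array([0.25, 0.05]), np.zeros(2), 1, reference)
```

A DRL algorithm maximizing this reward learns only the residual corrections needed near contacts, while the planner handles the free-space portion of the motion.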

 


Tetsuya Ogata (Waseda Univ.)

End to End Learning Models for Robot Object Manipulation

 

Abstract: In this talk, I will present two topics from our research on deep learning models for robot manipulation. The first model is a combination of a convolutional neural model and a recurrent neural model which enables a humanoid robot to manipulate various objects, including soft materials. Through end-to-end learning of temporal sequences of raw images and joint angles, the robot can generate object manipulation behaviors from the corresponding image sequences, and vice versa. The other is preliminary work on a recurrent neural model for imitation learning. By utilizing transfer learning, a robot can predict the motions and the viewpoint of the teacher for imitation learning.

 


Hironobu Fujiyoshi (Chubu Univ.)

Cloud-based Visual Recognition by Deep Learning

 

Abstract: Today, deep learning is widely recognized as a very effective technique for visual recognition; however, its high computational demand is a major concern. Cloud robotics is one solution, enabling the inference of deep convolutional neural networks (DCNNs) to be shared between a robot and a cloud server. In this talk, we will introduce a dynamic DCNN partitioning method that considers the latency of cloud-based visual recognition, with the aim of optimizing the division of computation between a robot and a cloud server. We will show how a DCNN split layer is automatically selected on the basis of four factors: client processing time, cloud server processing time, network transmission time, and latency, while reducing the computational cost.
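The split-layer selection described above can be sketched as a brute-force search over candidate split points, trading client time, server time, and transmission time against each other; the per-layer timings below are hypothetical placeholders, not measurements from this work.

```python
def best_split(client_ms, server_ms, xfer_ms):
    """Return (split_index, latency_ms) minimizing end-to-end latency.

    client_ms[i] -- time to run layer i on the robot (client)
    server_ms[i] -- time to run layer i on the cloud server
    xfer_ms[k]   -- time to transmit the tensor crossing split point k
                    (k == 0 sends the raw input; k == n sends the final result)
    """
    n = len(client_ms)
    costs = [sum(client_ms[:k]) + xfer_ms[k] + sum(server_ms[k:])
             for k in range(n + 1)]
    k_best = min(range(n + 1), key=costs.__getitem__)
    return k_best, costs[k_best]

# Hypothetical profile: early layers are cheap on the robot but produce
# large activations; transmission cost shrinks with depth.
client = [5, 20, 20, 2]
server = [1, 2, 2, 1]
xfer = [50, 10, 4, 4, 1]
split, latency = best_split(client, server, xfer)
```

With these numbers the search keeps only the cheap first layer on the robot and ships its (smaller) activation to the cloud, rather than sending the raw input.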

 


Michael Y. Wang (Hong Kong Univ. of Science and Tech.)

Special Interest of IEEE T-ASE in AI in Automation

 

Abstract: I will briefly describe the Topic-Based Special Issues of T-ASE, and our particular interest in AI in Automation, with the aim of a Special Issue that has a well-articulated unifying theme and reflects the best work in areas of significant importance to Automation Science and Engineering research or application.

 


Toru Nishikawa (Preferred Networks Inc.)

Deep Learning: IoT's Driving Engine

 

Abstract: Preferred Networks Inc. (PFN) has been actively working on applications of deep learning to real-world problems. In collaboration with leading companies and research institutes, PFN has been focusing on deep learning in three domains: industrial machinery, including manufacturing robots; smart transportation, including autonomous driving; and life science, including cancer diagnosis and treatment. In December 2016, together with FANUC, a world leader in industrial machinery and industrial robots, we launched the world’s first commercial IoT platform for manufacturing with Deep Learning technology at its core. In IoT, as in other industries, Deep Learning is no longer just a research topic but a key technology driving business.

The dramatic evolution in the functional capabilities of IoT devices, and the fact that data generated by devices is incomparably larger in volume than that generated by humans, are two particularly important factors contributing to the fast-paced innovation in various industries. Similarly, advancement in Deep Learning research is expanding its applications beyond pure data analysis to device actuation and control in the physical world. However, for algorithms to efficiently learn real-time control of real-world devices, a combination of advancement in both Deep Learning and computing is essential. That is the concept of Edge-Heavy Computing: by bringing intelligence close to the network edge devices, the overall system makes it possible for those devices to learn efficiently in a distributed and collaborative manner, while resolving the data communication bottleneck often faced in IoT applications. In this talk, I will introduce some of the work we have been doing at PFN, highlight some results, and give examples of how new computing capabilities boost the value brought by Deep Learning.

 


Yukiyasu Domae (Mitsubishi Electric Corp.)

Picking robots with factory, warehouse and “the most advanced telescope on Earth”

 

Abstract: We have applied advanced machine vision approaches to pick a wide variety of objects: parts with complex shapes in factories, daily items in warehouses, and mirrors of the most advanced telescope on Earth. I will present the details of the systems and the algorithms behind these unique applications.


Juan Rojas (Guangdong Univ. of Tech.)

Online Autonomous Manipulation Decision Making

 

Abstract: Online decision making fundamentally depends on understanding the behaviors the robot is executing before a decision can be made about what to do next. Our presentation focuses on a number of machine learning and nonparametric Bayesian techniques that serve to model current robot behaviors. We will also present possible uses for failure recovery.

 


Alberto Rodriguez (MIT)

Dexterous Manipulation with non-Dexterous Manipulators

 

Abstract: In this presentation I will describe a simple and robust approach to bringing dexterous manipulation to part handling. The key idea is to manipulate grasped objects by pushing them against the environment, harnessing extrinsic dexterity. The precision and control of these actions rely on understanding the contact interaction between the gripper, the part, and the environment, which builds on assumptions of hard-contact, rigid-body, Coulomb-friction interactions. I will describe efforts to model and validate the contact dynamics of these actions, and recent work on developing a planner that respects and exploits the physics of contact dynamics, searching for sequences of continuous pushes with discrete contact switchovers to change an initial grasp into a goal grasp.

 


Jia Pan (City Univ. Hong Kong)

Deep-Learned Collision Avoidance Policy for Distributed Multi-Robot Navigation

 

Abstract: We present an end-to-end framework to generate reactive collision avoidance policy for efficient distributed multi-robot navigation. The learned policy has been validated in a set of simulated and real scenarios with noisy measurements and can be well generalized to scenarios that do not appear in our training data, including scenes with static obstacles and agents with different sizes. (Videos: https://sites.google.com/view/deepmaca)

 


Weiwei Wan (AIST)

Assembly Planning and Execution for Dual-arm Industrial Manipulators

 

Abstract: In this talk, I am going to present an integrated planning system for dual-arm assembly. Given the initial poses of objects, the system automatically plans a sequence of grasp configurations and robot motions to reorient and assemble the objects. The system is composed of a grasp planning component, a placement planning component, and a regrasp planning component. It works as a supporting software platform for a single arm: conventional transfer and transit planning (regrasp planning) for a single arm is inherently implemented by the system, while dual-arm regrasp and dual-arm assembly are available with the help of additional components that connect the planning systems supporting the two arms. The system uses a relational database to save and retrieve data, which improves the performance of data management and allows searching among tens of thousands of automatically planned grasps.
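The relational-database idea can be sketched with SQLite; the schema, table, and column names below are illustrative assumptions, not those of the actual system. Each row stores one automatically planned grasp, so regrasp planning can quickly retrieve ranked candidate grasps per object.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE grasps (
    id      INTEGER PRIMARY KEY,
    object  TEXT NOT NULL,
    quality REAL,   -- planner's grasp-quality score
    pose    TEXT    -- serialized gripper pose, e.g. a flattened 4x4 matrix
)""")
conn.execute("CREATE INDEX idx_object ON grasps(object)")  # fast per-object lookup

# Store a few planned grasps (poses abbreviated here as placeholders).
planned = [("peg", 0.91, "..."), ("peg", 0.42, "..."), ("hole", 0.77, "...")]
conn.executemany("INSERT INTO grasps(object, quality, pose) VALUES (?, ?, ?)",
                 planned)

# Regrasp planning retrieves candidate grasps for one object, best first.
candidates = conn.execute(
    "SELECT quality FROM grasps WHERE object = ? ORDER BY quality DESC",
    ("peg",)).fetchall()
```

An indexed table like this keeps per-object lookup fast even when the grasp set grows to tens of thousands of rows.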

 



Contact

Kensuke Harada (E-mail: harada[at]sys.es.osaka-u.ac.jp)