Machine Learning for Robot Fleet Optimization
The STAR project will bring new capabilities for dynamic and adaptive Autonomous Mobile Robot Fleet management. This will allow such robots to be used in changing environments, without the need for time-consuming reconfiguration.
Modern factories rely increasingly on mobile robots to perform logistics tasks. They are an efficient means of moving goods from one place to another within the factory. However, they currently require a specific configuration to operate safely in an environment shared with human workers. Before a Robot Fleet is installed, a digital map of the environment must be created, and the possible robot paths are designed once and for all. This kind of solution is suitable when the factory layout and working processes are fixed.
STAR is developing new technologies relying on Artificial Intelligence and simulation to bring the adaptive capabilities needed to use Robot Fleets in a wider range of factory environments. Our solution consists of:
- Creating a continuously updated digital view of the environment, thanks to low-cost cameras deployed in the factory and advanced Machine Learning to analyse the situation.
- Anticipating human movements within the factory, thanks to Machine Learning trained on large sets of factory data created by simulation.
- Optimizing Robot Fleet commands “on the fly” to adapt to the current factory layout and the behavior of human workers, thanks to Machine Learning trained by trial and error within simulation (Reinforcement Learning).
To keep the cost of our solution low, we rely on a few standard cameras deployed in the factory instead of adding expensive sensors on board the robots. Moreover, the occlusions that on-board sensors face will not limit our global situational awareness. Having a clear picture of where the obstacles are (items may be temporarily left on possible paths), and analyzing human presence, is essential to optimize Robot Fleet behavior efficiently.
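As a rough illustration of how detections from fixed cameras could be fused into a shared picture of obstacles, here is a minimal occupancy-grid sketch in Python. The grid resolution, decay and hit rates are assumptions for illustration, not the STAR implementation:

```python
import numpy as np

def update_occupancy(grid, detections, decay=0.9, hit=0.3):
    """Fuse obstacle detections from fixed factory cameras into a
    shared occupancy grid (values in [0, 1], higher = more likely
    occupied). `detections` is a list of (row, col) grid cells where
    some camera currently sees an obstacle or a person.

    Illustrative sketch only: cell indexing, decay and hit rates
    are assumed values, not the project's actual parameters.
    """
    grid = grid * decay                       # forget stale observations
    for r, c in detections:
        grid[r, c] = min(1.0, grid[r, c] + hit)
    return grid

# A 4x4 factory patch, initially free, with a pallet seen at cell (1, 2).
grid = np.zeros((4, 4))
grid = update_occupancy(grid, [(1, 2)])
print(grid[1, 2])   # → 0.3 after one detection
```

Repeated detections raise a cell's score toward 1, while the decay term lets the map recover once a temporary obstacle (e.g. a pallet left on a path) is removed.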
Anticipating human presence within the factory is a challenge in its own right. Machine Learning is key here to extrapolate the current situation into the near future. As always in Machine Learning, training is crucial. Thanks to our simulation capabilities, we can simulate a huge variety of factory layouts and working processes, encompassing the situations that will be encountered when our solution is deployed in real factories.
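To illustrate the idea of training a movement predictor on simulated data, the following sketch generates noisy straight-line worker tracks (a toy stand-in for a factory simulator) and fits a simple linear one-step-ahead position predictor. The track model, noise level and feature choice are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_tracks(n=500, length=5):
    """Toy simulator: workers walk in straight lines at constant
    speed, observed with small positional noise. Returns an array
    of shape (n, length, 2) of (x, y) positions."""
    starts = rng.uniform(0, 50, (n, 2))
    vels = rng.uniform(-1.5, 1.5, (n, 2))
    t = np.arange(length)
    tracks = starts[:, None, :] + vels[:, None, :] * t[None, :, None]
    return tracks + rng.normal(0, 0.05, tracks.shape)

tracks = simulate_tracks()
X = tracks[:, :4, :].reshape(len(tracks), -1)   # past 4 positions as features
X = np.hstack([X, np.ones((len(X), 1))])        # bias term
Y = tracks[:, 4, :]                             # position one step ahead
W, *_ = np.linalg.lstsq(X, Y, rcond=None)       # least-squares fit

err = np.abs(X @ W - Y).mean()
print(f"mean absolute prediction error: {err:.3f}")
```

A linear model suffices here because the toy workers move at constant velocity; the point is the workflow: generate simulated trajectories, fit a predictor, then reuse it on live camera tracks.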
Last but not least, we compute optimized Robot Fleet commands, sending the most suitable robot along the most efficient and safest path. The constraints on the fleet can be hard to satisfy, because of the dynamic environment and the need to interfere as little as possible with human movements. Moreover, optimized commands must be very fast to compute, to adapt to the ever-evolving situation within the factory. Once again, Machine Learning, and in particular Reinforcement Learning, is the solution we are developing. This kind of solution was made famous by impressive results in which AI beat professional human players at games such as Chess, Go, and StarCraft II. Here, the AI trains itself in a fast, interactive simulation through a very large number of runs. Thanks to this extensive training, it will be able to cope with the wide diversity of real situations.
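The trial-and-error training loop at the heart of Reinforcement Learning can be sketched with tabular Q-learning on a toy corridor world. States, rewards and hyperparameters below are illustrative assumptions, not the STAR environment or algorithm:

```python
import random

random.seed(0)

# Toy world: a robot starts at cell 0 of a 1-D corridor and must reach
# the goal at cell 5. It gets a small penalty per step (favoring short
# paths) and a reward on arrival.
N, GOAL = 6, 5
ACTIONS = (-1, +1)                      # move left / move right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2       # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        if random.random() < eps:       # explore
            a = random.choice(ACTIONS)
        else:                           # exploit current estimate
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else -0.1
        best_next = 0.0 if s2 == GOAL else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right toward the goal everywhere.
policy = [max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(GOAL)]
print(policy)   # → [1, 1, 1, 1, 1]
```

Real fleet dispatch involves a far richer state (robot positions, task queue, predicted human presence) and deep networks instead of a table, but the loop is the same: act in simulation, observe the reward, improve the policy over many runs.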
By: Bertrand Duqueroie, THALES