VGR is rapidly transforming production processes by enabling robots that are highly adaptable and easier to deploy, while dramatically reducing the cost and complexity of the fixed tooling previously associated with designing and setting up robotic cells, whether for material handling, automated assembly, agricultural applications,[1] life sciences, or other fields.
[2] In one classic but now dated example of VGR in industrial manufacturing, the vision system (camera and software) determines the position of products fed at random onto a recycling conveyor.
VGR is a rapidly evolving technology that is proving economically advantageous in countries with high manufacturing overheads and skilled-labor costs by reducing manual intervention, improving safety, increasing quality, and raising productivity, among other benefits.
[3][4][5] The expansion of vision-guided robotic systems is part of broader growth in the machine vision market, which is projected to reach $17.72 billion by 2028.
In recent years, start-ups have appeared offering software that simplifies the programming and integration of these 3D systems, making them more accessible to industry.
By leveraging 3D vision technologies, robots can navigate and perform tasks in environments with dynamic or uncontrolled lighting, which significantly expands their applications in real-world settings.
3D stationary mount cameras create large image files and point clouds that require substantial computing resources to process.
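One common way to tame this processing load is to downsample the cloud before any pose estimation runs. The following is a minimal sketch, assuming the cloud is a NumPy array of XYZ points; the voxel size and simulated frame size are illustrative, not values from any particular system:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Keep one representative point per occupied voxel of side `voxel_size`."""
    # Quantize each point to the integer index of the voxel containing it.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Keep the first point encountered in each occupied voxel.
    _, keep = np.unique(voxel_idx, axis=0, return_index=True)
    return points[np.sort(keep)]

# A simulated depth frame of one million XYZ points (~12 MB as float32),
# spread over a 1 m cube, stands in for a stationary-camera scene.
rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, size=(1_000_000, 3)).astype(np.float32)

# Downsampling to 5 cm voxels caps the output at 20^3 = 8,000 points.
small = voxel_downsample(cloud, voxel_size=0.05)
print(len(cloud), len(small))
```

Libraries such as Open3D provide equivalent built-in operations; the point is that downstream algorithms then see thousands of points rather than millions.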
An arm-mounted camera has a smaller field of view and can operate successfully at a lower resolution, even VGA, because it only surveys a fraction of the entire work cell at any point in time.
However, arm-mounted cameras, whether 2D or 3D, typically suffer from XYZ disorientation because they are continually moving and have no way of knowing the robot arm's position.
This is visible in essentially all published videos of arm-mounted camera performance, whether 2D or 3D, and can as much as double cycle times.
The company claims a pending patent covering techniques for ensuring the camera knows its location in 3D space without stopping to get reoriented, leading to substantially faster cycle times.
Conversely, Inbolt offers a platform-independent 3D-vision-based robotic guidance system that integrates a 3D camera, advanced algorithms, and what the company describes as the fastest point cloud processing AI currently available.
Parts of varying geometry can be fed to the system in any random orientation and then picked and placed without mechanical changes to the machine, resulting in quick changeover times.