Nine Challenges of Robot Vision

Even as robot vision solutions get easier and easier to use, there are still some tricky issues in putting robot vision to work. Many factors in the environment, the task setting, and the workplace affect what a robot can see.

Here are nine common robot vision challenges:

1. Lighting

If you've ever taken digital photos in low light, you'll know that lighting is critical. Bad lighting can ruin everything. Imaging sensors are not as adaptable or sensitive as the human eye. If the type of lighting is wrong, vision sensors will not be able to detect objects reliably.

There are various ways to overcome lighting challenges. One approach is to incorporate active lighting into the vision sensor itself. Other solutions include the use of infrared illumination, stationary lighting in the environment, or technologies that use other forms of light, such as lasers.
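
As an illustration of the kind of check a vision setup can run before attempting detection, here is a minimal sketch that flags under- or over-exposed frames, assuming OpenCV and NumPy are available; the thresholds are made-up example values, not standard ones.

```python
import cv2
import numpy as np

def lighting_ok(bgr, dark_mean=40, bright_mean=215, max_clipped=0.05):
    """Flag frames that are too dark, too bright, or heavily clipped."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    mean = gray.mean()
    clipped = np.mean((gray <= 5) | (gray >= 250))  # fraction of saturated pixels
    if mean < dark_mean:
        return False, "underexposed (mean %.1f)" % mean
    if mean > bright_mean:
        return False, "overexposed (mean %.1f)" % mean
    if clipped > max_clipped:
        return False, "too many clipped pixels (%.0f%%)" % (100 * clipped)
    return True, "ok"

# Demo on a synthetic, very dim frame.
frame = (np.random.rand(480, 640, 3) * 30).astype(np.uint8)
print(lighting_ok(frame))   # -> (False, 'underexposed (mean ...)')
```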

2. Deformation or articulation

A ball is a simple object to detect with a computer vision setup. You might just detect its circular outline, perhaps using a template matching algorithm. However, if the ball is squashed, its shape changes and the same method no longer works. This is deformation, and it can cause considerable problems for some robot vision technologies.
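
To make the example concrete, here is a minimal sketch of that template-matching idea, assuming OpenCV. The ball and the "squashed" ball are synthetic images; the point is simply that the match score drops once the shape deforms.

```python
import cv2
import numpy as np

ball = np.zeros((300, 300), np.uint8)
cv2.circle(ball, (150, 150), 60, 255, -1)      # synthetic round ball
template = ball[85:215, 85:215]                 # 130x130 crop around the ball
squashed = cv2.resize(ball, (300, 150))         # deformation: circle -> ellipse

for name, scene in [("round", ball), ("squashed", squashed)]:
    scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    print(name, "ball, best match score: %.2f" % scores.max())
```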

Articulation is similar: it refers to shape changes caused by movable joints. For example, when you bend your arm at the elbow, the shape of the arm changes. The individual links (the bones) keep their shape, but the overall outline is deformed. Since many vision algorithms rely on shape outlines, articulation makes object recognition more difficult.

3. Position and orientation

The most common function of robot vision systems is to detect the position and orientation of known objects. As a result, most integrated vision solutions are designed to overcome the challenges of both.

Detecting the position of an object is usually straightforward as long as the entire object is visible within the camera image. Many systems are also robust to changes in object orientation. However, not all orientations are created equal. Detecting an object rotated about a single axis is simple enough, but detecting an object rotated in 3D is more complex.
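
For in-plane rotation, a sketch like the following, assuming OpenCV and using a synthetic rectangular part, recovers position and orientation from a single contour. Full 3D pose estimation, for example with cv2.solvePnP, is considerably more involved.

```python
import cv2
import numpy as np

# Synthetic scene: a white rectangular part rotated by 25 degrees.
scene = np.zeros((400, 400), np.uint8)
box = cv2.boxPoints(((200, 200), (160, 60), 25.0)).astype(np.int32)
cv2.fillPoly(scene, [box], 255)

contours, _ = cv2.findContours(scene, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
(cx, cy), (w, h), angle = cv2.minAreaRect(largest)

print("position: (%.0f, %.0f) px" % (cx, cy))
print("in-plane orientation: %.1f degrees" % angle)
```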

4. Background

The background of the image has a big impact on how easy object detection is. Imagine an extreme example where an object is placed on a sheet of paper on which an image of the same object is printed. In this case, it may not be possible for the robot vision setup to determine which is the real object.

The perfect background is blank and provides good contrast with the object being detected. Its exact properties will depend on the vision algorithm being used. If an edge detector is used, the background should not contain sharp lines. The color and brightness of the background should also differ from those of the object.
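
The effect on an edge detector is easy to demonstrate. The sketch below, assuming OpenCV, runs the same Canny edge detector over a synthetic object placed on a blank, contrasting background and on a noisy, cluttered one.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

plain = np.full((300, 300), 40, np.uint8)                    # blank, contrasting background
clutter = rng.integers(0, 256, (300, 300), dtype=np.uint8)   # busy, noisy background
for scene in (plain, clutter):
    cv2.circle(scene, (150, 150), 60, 230, -1)               # the object to detect

for name, scene in [("plain", plain), ("cluttered", clutter)]:
    edges = cv2.Canny(scene, 80, 160)
    print(name, "background:", np.count_nonzero(edges), "edge pixels")
```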

5. Occlusion

Occlusion means that part of the object is hidden from view. In the previous four challenges, the entire object appeared in the camera image. Occlusion is different because part of the object is missing, and the vision system obviously cannot detect something that does not appear in the image.

Various things can cause occlusion, including other objects, parts of the robot, or poor placement of the camera. Methods to overcome occlusion usually involve matching the visible part of the object to its known model and inferring that the hidden part is still there.
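
One common way to do this is feature-based matching: local features found on the visible part of the object can still be matched against the stored model. Here is a minimal sketch using OpenCV's ORB features on a synthetic textured object whose lower half is occluded; the texture and sizes are arbitrary example values.

```python
import cv2
import numpy as np

rng = np.random.default_rng(1)
model = rng.integers(0, 256, (200, 200), dtype=np.uint8)   # stand-in for a textured object model

scene = np.full((300, 300), 30, np.uint8)
scene[50:250, 50:250] = model                               # place the object in the scene
scene[150:300, :] = 30                                      # occlude the lower half of the image

orb = cv2.ORB_create(nfeatures=500)
kp_model, des_model = orb.detectAndCompute(model, None)
kp_scene, des_scene = orb.detectAndCompute(scene, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = [] if des_model is None or des_scene is None else matcher.match(des_model, des_scene)
print("features matched against the visible part:", len(matches))
```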

6. Scale

In some cases, the human eye is easily fooled by differences in scale, and robot vision systems can be confused by them too. Imagine you have two identical objects, except that one is larger than the other, and you are using a fixed 2D vision setup in which the apparent size of an object determines its estimated distance from the robot. If you train the system to recognize the smaller object, it will falsely detect the larger object as the smaller one placed closer to the camera.
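
A back-of-the-envelope calculation with the pinhole projection model shows the ambiguity; the focal length and object sizes below are made-up numbers.

```python
# Pinhole model: apparent size in pixels = focal length (px) * real size / distance.
FOCAL_PX = 800.0   # assumed focal length in pixels

def apparent_width_px(real_width_m, distance_m):
    return FOCAL_PX * real_width_m / distance_m

# A 50 mm part at 0.5 m and a 100 mm part at 1.0 m look identical to a 2D camera.
print(apparent_width_px(0.05, 0.5))   # 80.0 px
print(apparent_width_px(0.10, 1.0))   # 80.0 px
```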

Another, perhaps less obvious, problem with scale is pixel resolution. If the camera is placed far away, objects in the image are represented by fewer pixels. With some exceptions, image processing algorithms work better when more pixels represent the object.
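
The sketch below, assuming OpenCV, illustrates the point with a synthetic part that has a small identifying hole: viewed "close" there are plenty of pixels to resolve the hole, but in a heavily downsampled "far" view it disappears.

```python
import cv2
import numpy as np

close_view = np.zeros((200, 200), np.uint8)
cv2.rectangle(close_view, (40, 40), (160, 160), 255, -1)   # the part
cv2.circle(close_view, (100, 100), 8, 0, -1)               # small identifying hole

# "Far" camera: the same part covered by far fewer pixels.
far_view = cv2.resize(close_view, (20, 20), interpolation=cv2.INTER_AREA)

for name, img in [("close (200x200)", close_view), ("far (20x20)", far_view)]:
    contours, _ = cv2.findContours(img, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    print(name, ":", len(contours), "contour(s) found")
```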

7. Camera placement

Improper camera placement can cause any of the problems described above, so it is important to position the camera correctly. Place it where the object is well lit and can be seen as clearly as possible without distortion, and as close to the object as possible without causing occlusion. There should be no distracting background and no other objects between the camera and the surface being viewed.
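
One placement question that can be answered on paper is the working distance. The following back-of-the-envelope sketch uses the pinhole model with assumed sensor, lens, and part dimensions to estimate how far back the camera must sit for the whole part to fit in the field of view.

```python
SENSOR_WIDTH_MM = 6.0    # assumed image sensor width
FOCAL_LENGTH_MM = 8.0    # assumed lens focal length
PART_WIDTH_MM = 300.0    # widest dimension of the part
MARGIN = 1.2             # keep roughly 20 % of the frame free around the part

# Pinhole model: field-of-view width at distance d is d * sensor width / focal length,
# so the part (plus margin) fits once d >= margin * part width * focal length / sensor width.
min_distance_mm = MARGIN * PART_WIDTH_MM * FOCAL_LENGTH_MM / SENSOR_WIDTH_MM
print("mount the camera at least %.0f mm from the part" % min_distance_mm)   # 480 mm
```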

8. Motion

Motion can sometimes cause problems for computer vision setups, especially when it blurs the image. This can happen, for example, with objects on a fast-moving conveyor belt. Digital imaging sensors capture an image over a short period of time rather than instantaneously, so if the object moves too far during that time, the image will be blurred. Our eyes might not notice blur in video, but algorithms do. Robot vision works best with a clear, static image.
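
A quick calculation shows why exposure time matters on a conveyor; the belt speed, exposure time, and resolution below are assumed example values.

```python
BELT_SPEED_MM_S = 500.0     # assumed conveyor speed
EXPOSURE_S = 1.0 / 250.0    # assumed 4 ms exposure time
PIXELS_PER_MM = 2.0         # assumed image resolution at the belt surface

# Smear (in pixels) is simply how far the part travels during the exposure.
blur_px = BELT_SPEED_MM_S * EXPOSURE_S * PIXELS_PER_MM
max_exposure_s = 1.0 / (BELT_SPEED_MM_S * PIXELS_PER_MM)   # keeps blur at or under 1 px

print("blur at a 4 ms exposure: %.1f px" % blur_px)                            # 4.0 px
print("longest exposure for <= 1 px blur: %.2f ms" % (max_exposure_s * 1000))  # 1.00 ms
```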

9. Expectations

The final challenge has more to do with how you approach your vision setup than with the technical aspects of the vision algorithm itself. One of the biggest challenges in robot vision is workers' unrealistic expectations of what vision systems can deliver. You will get the most out of the technology by making sure your expectations match its capabilities, and you can do this by making sure your employees are educated about the vision system.

Source of this article: Robot Network
