The Scarlet 3D stereo camera from Nerian (sales: Rauscher) is currently the fastest stereo vision and 3D depth camera with the highest image resolution on the market. Typical applications are found in agricultural engineering, robotics, bin picking and pick-and-place, as well as in autonomous driving and self-driving cars. Details on this and another stereo camera can be found here:

Rauscher stereo camera

 



High-performance stereo vision camera

08.12.2022 | The further developed 3D camera systems of the Ensenso N series from IDS offer twice the resolution and accuracy at the same cost. The new stereo vision image processing systems N31, N36, N41 and N46 have a compact housing made of aluminum or plastic and an integrated pattern projector. In industrial image processing, the systems are equally suitable for capturing static and moving objects.


By far the fastest 3D measuring rate in a stereo camera

18.02.2021 | The new stereo camera delivers exactly the image data and depth data that many machine vision applications require, whether in static environments or in hard, time-critical real-time applications in dynamic environments.



With up to 120 fps and more than 70 million 3D points/s, the Scarlet depth cameras offer by far the fastest 3D measuring rate in machine vision. The resolution is up to 5 megapixels in both the camera image and the depth image. In terms of frame rate, this stereo camera beats its predecessor by a factor of 2.5. In addition, with 512 pixels Scarlet processes a disparity range twice as large as that of Scenescan Pro. This doubles the depth resolution and yields even more precise 3D measurement results.
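
For context, the link between disparity and depth in a rectified stereo system follows the standard pinhole relation Z = f · B / d. The short sketch below uses assumed, purely illustrative values for focal length and base width (not Scarlet's actual calibration) to show why a larger maximum disparity shortens the minimum measurable distance:

# Illustrative stereo geometry only, not vendor-specific code.
# Depth from disparity: Z = f * B / d  (f in pixels, B in metres, d in pixels)

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the depth Z in metres for a given disparity in pixels."""
    return f_px * baseline_m / disparity_px

f_px = 1450.0        # assumed focal length expressed in pixels
baseline_m = 0.10    # 10 cm stereo base width

# The closest measurable distance corresponds to the largest disparity the
# matcher can evaluate, so doubling the disparity range from 256 to 512 pixels
# halves the minimum working distance.
for d_max in (256, 512):
    print(f"d_max = {d_max:3d} px -> Z_min = {depth_from_disparity(f_px, baseline_m, d_max):.2f} m")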


The image data is processed in real time by a powerful FPGA and a modern stereo algorithm. The result is a subpixel-accurate disparity map (inverse depth image), which is transmitted to a computer or an embedded system via 1 or 10 Gigabit Ethernet. During post-processing of the data, incorrect disparities are detected and noise is suppressed. Nerian's open-source, cross-platform API converts the disparity map into a dense 3D point cloud.
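
The reprojection from a disparity map to a point cloud is standard stereo geometry. The following sketch is a generic NumPy implementation with assumed camera intrinsics; it is not Nerian's API, only an illustration of the underlying computation:

import numpy as np

def disparity_to_point_cloud(disparity, f_px, baseline_m, cx, cy):
    """Reproject a subpixel disparity map into a dense 3D point cloud.

    disparity  : HxW float array of disparities in pixels (values <= 0 are invalid)
    f_px       : focal length in pixels
    baseline_m : stereo base width in metres
    cx, cy     : principal point in pixels
    Returns an Nx3 array of (X, Y, Z) points in metres.
    """
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0
    z = f_px * baseline_m / disparity[valid]   # depth from disparity
    x = (u[valid] - cx) * z / f_px             # back-project along the viewing rays
    y = (v[valid] - cy) * z / f_px
    return np.column_stack((x, y, z))

# Hypothetical usage with a small synthetic disparity map (all values assumed):
disp = np.full((480, 640), 64.0, dtype=np.float32)   # flat scene at constant disparity
cloud = disparity_to_point_cloud(disp, f_px=1450.0, baseline_m=0.10, cx=320.0, cy=240.0)
print(cloud.shape)   # (480 * 640, 3)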

Integrated into the stereo cameras are a high-performance FPGA and the second-generation Sony Pregius IMX250 image sensor. The sensor offers a high dynamic range of 73 dB and a quantum efficiency of 67% with a pixel size of 3.45 µm.

Movement rates of up to 400 Hertz

In addition, an extremely fast inertial measurement unit (IMU) is integrated into the stereo camera. It records motion data at up to 400 Hz. Such inertial data is particularly valuable in mobile robotics applications such as Simultaneous Localization And Mapping (SLAM). The Scarlet thus makes a separate IMU superfluous.


The stereo camera has protection class IP67, which makes it very suitable for outdoor applications or use in dusty surroundings. The chemically hardened glass window of the stereo cameras protects the high-resolution optics even in very harsh environments. Automatic recalibration additionally ensures that the system remains functional under mechanical stress and over a long service life.

The stereo cameras are available in two versions: with a base width of 10 cm (distance between the image sensors) for measurements at close range, and with a base width of 25 cm for depth measurements at longer range. Thanks to a flexible selection of lenses with focal lengths from 5 mm to 25 mm, the depth camera can easily be adapted to different field-of-view requirements from 19° to 80° horizontal FOV. By selecting the right camera and lenses, working distances from as close as 0.14 m can also be configured.
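
The quoted field-of-view span can be reproduced from the pinhole relation between focal length and sensor width. The sketch below combines the 3.45 µm pixel size mentioned above with the IMX250's nominal horizontal resolution of 2448 pixels (an assumption, since the article does not state the exact sensor format used in Scarlet):

import math

PIXEL_SIZE_UM = 3.45      # pixel size from the article
H_RESOLUTION_PX = 2448    # assumed horizontal resolution of the IMX250
sensor_width_mm = H_RESOLUTION_PX * PIXEL_SIZE_UM / 1000.0   # about 8.45 mm

def horizontal_fov_deg(focal_length_mm: float) -> float:
    """Horizontal field of view of a pinhole camera for a given focal length."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

for f in (5.0, 25.0):
    print(f"f = {f:4.1f} mm -> horizontal FOV = {horizontal_fov_deg(f):.1f} deg")
# f =  5.0 mm -> horizontal FOV = 80.4 deg
# f = 25.0 mm -> horizontal FOV = 19.2 deg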

3D stereo camera for real-time 3D depth perception

25.02.2019 | Scenescan from Nerian (sales: Rauscher) is a powerful stereo camera for 3D depth perception by means of stereo vision. In contrast to conventional depth cameras, this passive method does not need to emit light in the visible or invisible spectral range in order to obtain robust measured values.

Equipped with a powerful FPGA and a modern stereo algorithm, Scenescan processes the image data from two cameras and uses it to calculate a depth map or 3D point cloud in real time. The system supports up to 100 frames/s and resolutions of up to 3 megapixels. Accurate 3D perception is possible with Scenescan even under difficult conditions such as bright daylight, measurements over long distances, overlapping measurement areas or even measurements under water.

Karmin2 is the name of the latest version of the pre-assembled 3D camera "Karmin", based on two Basler cameras with 1600 x 1200 pixels each. This stereo camera is specially designed for easy use with the Scenescan stereo vision sensor. The combination forms a fully fledged 3D depth camera that enables high-precision distance measurements even in bright ambient light and over long distances.


The system is available with stereo base widths of 10 and 25 cm. The 10 cm model is suitable for depth measurements at close range, at measuring distances from 0.5 m. The 25 cm model is designed for depth measurement at longer range and delivers better depth resolution at greater distances. A wide range of lenses is available for each version to allow the field of view to be tuned precisely.
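
The advantage of the wider base at longer distances follows from the first-order depth-error relation ΔZ ≈ Z² · Δd / (f · B). The sketch below compares both base widths using an assumed focal length in pixels and an assumed disparity uncertainty of 0.25 pixels; these are illustrative values, not Scenescan specifications:

def depth_error_m(z_m: float, baseline_m: float, f_px: float, disp_err_px: float = 0.25) -> float:
    """First-order depth uncertainty dZ ~ Z^2 * dd / (f * B) of stereo triangulation."""
    return z_m ** 2 * disp_err_px / (f_px * baseline_m)

f_px = 1450.0   # assumed focal length in pixels, for illustration only
for z in (1.0, 5.0, 10.0):
    e10 = depth_error_m(z, 0.10, f_px)   # 10 cm base width
    e25 = depth_error_m(z, 0.25, f_px)   # 25 cm base width
    print(f"Z = {z:4.1f} m   dZ(10 cm) = {e10 * 1000:6.1f} mm   dZ(25 cm) = {e25 * 1000:6.1f} mm")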

The stereo vision IP core for FPGAs is at the heart of the Scenescan stereo vision sensor. Users can license this IP core to integrate stereo vision capabilities into their own FPGA-based products. The processing works as follows: the IP core takes two grayscale images and performs stereo matching on them. The images are first rectified in the FPGA to compensate for lens distortion and camera alignment errors.


This is followed by stereo matching optimized for FPGA-based processing to obtain a disparity map with sub-pixel resolution. To improve the quality of the depth data, various post-processing steps can then be applied before the disparity map is output via an AXI4-Stream interface.
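
The chain described here (rectification, stereo matching, post-processing) is the classic stereo pipeline. Purely as a software illustration of the matching step, and not of the licensed IP core itself, a minimal sum-of-absolute-differences block matcher might look like this:

import numpy as np

def sad_block_matching(left, right, max_disp=16, block=5):
    """Naive SAD block matching on rectified grayscale images (illustration only)."""
    h, w = left.shape
    half = block // 2
    disparity = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(ref - right[y - half:y + half + 1,
                                        x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disparity[y, x] = float(np.argmin(costs))
    return disparity

# Tiny synthetic test: the right image equals the left image shifted left by 4 pixels,
# so the recovered disparity in the image interior should be about 4.
left = np.random.rand(40, 60).astype(np.float32)
right = np.roll(left, -4, axis=1)
disp = sad_block_matching(left, right, max_disp=8)
print(np.median(disp[10:30, 20:50]))   # ~4.0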