Camera algorithms

MM Solutions EAD has over 20 years of experience in developing camera solutions. The company started with camera control and image processing solutions, then mastered advanced image processing and calibration algorithms that facilitate image quality tuning, which are offered to clients to this day. Our experts have since extended this expertise to computational photography, multi-frame processing, and computer vision. Automotive ultra HDR, surround view, mirror replacement, and other needs of the industry are all covered by our services and adapted to the top world standards.


CAMERA ALGORITHMS

  • Camera control solutions
  • Basic image processing
  • Calibration services
  • Computational photography, machine vision and multi-frame processing
  • High dynamic range (HDR)
  • RAW low-light filter (MFNR)
  • 360° panorama and 360° stitching
  • Smart Zoom
  • Night
  • Distortion correction
  • Augmented reality
  • Electronic image stabilization (EIS)
  • MMS Surround View

CAMERA CONTROL SOLUTIONS

  • MMS Auto Exposure Compensation (AEC) algorithm
    MMS AEC uses sophisticated heuristic algorithms to calculate the best exposure for a given scene, taking into account not only the brightness of the scene but many additional factors, such as minimizing motion blur in the captured picture, improving details in the shadows, and compensating for backlight.

    AEC for automotive HDR sensors (3 exposures)
  • MMS Auto White Balance (AWB) algorithm

    AWB is designed to handle complex corner cases where no white patch is available in the scene or the scene has a predominant color.

    Special attention is paid to human skin tones and memory colors. Combined with advanced methods to control saturation and color shift, the algorithm produces vivid pictures with excellent color accuracy.

    It performs advanced color analysis based on detection of both potentially white colors and memory colors, and connects to the AE for flash AWB.
  • MMS Auto Focus (AF) algorithm

AF provides excellent focusing accuracy. Specially designed algorithms ensure optimal driving methods, minimizing the drive time and increasing the speed and accuracy, as well as accommodating a broad range of production tolerances.

  • Lens control

Fast and accurate lens positioning for various types of lens actuators.
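MMS's actual AEC heuristics are proprietary; as an illustration of the feedback idea underlying any mean-brightness auto-exposure loop, here is a minimal sketch. The function name, mid-gray target, and damping scheme are our assumptions for the example, not part of the MMS algorithm:

```python
import numpy as np

def aec_step(luma, exposure, target=0.18, damping=0.5):
    """One iteration of a simple mean-brightness AEC feedback loop.

    luma     -- linear-domain luminance image, values in [0, 1]
    exposure -- current exposure time (arbitrary units)
    Returns the exposure moved part-way toward the target brightness.
    """
    mean = float(np.mean(luma))
    if mean <= 0.0:
        return exposure  # avoid division by zero on an all-black frame
    ratio = target / mean
    # Damped update: avoids oscillation when the scene changes abruptly.
    return exposure * ratio ** damping
```

A production AEC would add the scene-analysis terms listed above (motion blur, shadows, backlight) as further inputs to the update.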

BASIC IMAGE PROCESSING

MMS programmable Image Pipe provides state-of-the-art image quality at a very reasonable computational cost. It has been under development for more than 10 years, and most of its building blocks have been used in volume production by Tier 1 handset OEMs and DSC makers.

IT CONSISTS OF THE FOLLOWING BLOCKS:

  • Data pedestal subtraction
  • Defect pixel correction and impulse noise removal
  • Bayer domain scaling
  • Lens and color shading correction
  • Bayer domain noise filter – temporal and spatial adaptive
  • Green imbalance correction
  • Edge-adaptive CFA interpolation
  • Overall image pipe – separate edge enhancement and noise suppression paths
  • Demosaicing – adaptive, edge directed
  • Dynamic IPIPE configuration based on scene characteristics
  • Color conversions – RGB2RGB, RGB2YUV
  • Gamma correction
  • Tone mapping and DRC – GBCE, LBCE
  • RGB domain filtering
  • Color enhancement
  • YUV domain noise filtering
  • Adaptive edge enhancement
  • YUV domain scaling
  • Edge enhancement
  • Denoising – simple edge directed filters, frequency domain filters, optimized bilateral filters, IIR approximations of Gaussian and bilateral
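To make the first and last stages of such a pipe concrete, here is a minimal sketch of two of the listed blocks, pedestal subtraction and gamma correction, on normalized data. The function names and the 2.2 gamma value are illustrative assumptions; the MMS Image Pipe implementations are proprietary:

```python
import numpy as np

def subtract_pedestal(raw, pedestal, white_level):
    """Remove the sensor black level (data pedestal) and normalize to [0, 1]."""
    out = (raw.astype(np.float32) - pedestal) / (white_level - pedestal)
    return np.clip(out, 0.0, 1.0)

def gamma_encode(linear, gamma=2.2):
    """Power-law gamma correction on linear data in [0, 1]."""
    return np.power(linear, 1.0 / gamma)
```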

CALIBRATION SERVICES

  • Lens shading calibration for normal and fish-eye lenses, without an expensive all-angle uniform light source
  • Lens distortion correction and factory calibration
  • Mutual camera position calibration for dual and quad camera modules
  • Stereo pair factory and run-time calibration, as well as rectification

Surround view calibration chart

Multi-functional calibration cube

COMPUTATIONAL PHOTOGRAPHY, MACHINE VISION AND MULTI-FRAME PROCESSING

  • Optical flow
  • Feature detection and tracking using edges as well as corners, on normal, HDR, binary, and ternary images
  • Frame registration and alignment
  • Camera model estimation and projection, from pin-hole to Pannini and arbitrary distortions
  • Camera pose and orientation estimation (RANSAC, ORSA, SCG), using edges as well as point features
  • Dewarp and reprojection
  • Optimized edge extraction, and detection of lines as well as circles
  • Various image segmentations, based on concrete needs
  • Depth map extraction from stereo and from a moving single camera
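Frame registration, one of the building blocks above, is often done in the frequency domain. As an illustration (not the MMS implementation), here is the classic phase-correlation method for recovering an integer translation between two frames:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation of image b relative to a.

    The normalized cross-power spectrum of two shifted images is a pure
    phase ramp whose inverse FFT is a delta at the shift; we locate that
    peak and wrap it to signed offsets.
    """
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12                      # keep phase only
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Sub-pixel accuracy, rotation, and scale require extensions (e.g. peak interpolation or log-polar resampling), but the core idea is this single FFT round-trip.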

If you are considering introducing our technology or products, or if you are interested in co-creation with us, please feel free to contact us.

HIGH DYNAMIC RANGE (HDR)

  • Uses multiple frames from a conventional image sensor, captured with different exposure settings.
  • The exposure times of the input frames are determined automatically based on scene analysis.
  • Detects moving objects and does not fuse the corresponding areas, to avoid ghosting artifacts.
  • Embedded local and global brightness and contrast enhancement (tone mapping) improves the visual quality of the produced image.
  • Low-light HDR combines both technologies, the Low-light filter (MFNR) and HDR, to output high-quality images with a low noise level and well-preserved highlights – a perfect solution for typical nightscape scenes.
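The fusion step of a multi-exposure HDR pipeline can be sketched as a Mertens-style weighted merge, where each pixel is weighted by how well exposed it is. This is a generic single-scale illustration under our own assumptions (mid-gray target 0.5, Gaussian weight width 0.2), not the MMS algorithm, and it omits the deghosting and tone-mapping stages described above:

```python
import numpy as np

def fuse_exposures(frames):
    """Single-scale exposure fusion of grayscale frames with values in [0, 1].

    Each pixel is weighted by its 'well-exposedness' (a Gaussian around
    mid-gray), so highlights come mostly from short exposures and shadows
    from long ones. A production pipeline adds multi-scale blending and
    motion masking.
    """
    stack = np.stack(frames).astype(np.float32)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0) + 1e-12      # normalize across frames
    return (weights * stack).sum(axis=0)
```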

RAW LOW-LIGHT FILTER (MFNR)

  • Noise reduction is essential for quality low-light imaging. Contemporary ISPs incorporate sophisticated noise reduction filters, which behave well at illumination levels down to about 5 lux. Getting satisfactory image quality in even darker conditions, where the image information is much weaker than the noise, requires more sophisticated filters.
  • MMS Low-light filter runs in the Bayer (RAW) domain, utilizing the maximum information, which would otherwise get contaminated and lost while passing through the ISP processing pipe.
  • MMS Low-light filter can work as a single- or multi-frame, single- or multi-scale, or combined filter to cover a wide range of customer and capture-condition requirements regarding quality versus processing time.
  • The produced RAW output is passed through the ISP, allowing processing with all built-in features.

MMS Low-light filter includes additional features.
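The core multi-frame idea is that averaging N aligned frames reduces noise roughly by a factor of sqrt(N), provided moving pixels are excluded. A minimal motion-gated averaging sketch is shown below; the threshold value and function name are our assumptions, and the proprietary MMS filter is considerably more sophisticated (multi-scale, RAW-domain aware):

```python
import numpy as np

def mfnr_average(raw_frames, motion_thresh=0.1):
    """Multi-frame noise reduction by motion-gated temporal averaging.

    Frames are assumed already aligned (registration is a separate step).
    Pixels that differ from the reference by more than motion_thresh are
    excluded from the average so that moving objects do not ghost.
    """
    stack = np.stack(raw_frames).astype(np.float32)
    ref = stack[0]
    mask = np.abs(stack - ref) < motion_thresh   # True where static
    mask[0] = True                               # reference always counts
    return (stack * mask).sum(axis=0) / mask.sum(axis=0)
```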

LOW-LIGHT HDR (LLHDR)

MMS Still-HDR in combination with MMS Low-light filter produces high-quality images with low noise levels and details kept in the highlight parts of the scene.


360° PANORAMA


360° STITCHING

SMART ZOOM

  • Technology allowing for smooth camera-to-camera transitions during zoom in and out.
  • It supports optical and electronic image stabilization.
  • While zooming, MMS Smart Zoom prevents camera-transition artifacts such as live-view shifts, jumps, and other visual differences.
Smooth transition

NIGHT

  • The Night algorithm is used to enhance image quality in very low light conditions.
  • It blends multiple frames captured with different exposures to produce a bright output with well-preserved highlights and details.
  • In this mode the camera can operate with a longer exposure time to produce a less noisy output.

DISTORTION CORRECTION

  • The algorithms can use a GPU for dewarp, or any existing specialized HW for that purpose.
  • MMS offers a few algorithms for lens distortion correction (dewarp):
  • Full-dewarp algorithm for FOV < 140deg
  • Crop-and-dewarp algorithm to crop a user-defined portion of a fish-eye image and dewarp it
  • Warp algorithm to convert a fish-eye image to an equirectangular (world map) projection
  • Calibration tools for estimating the lens distortions – for laboratory or factory calibration
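Dewarp is usually implemented as a lookup-table remap: for each output pixel, compute where to sample in the distorted input. As a generic illustration (the MMS algorithms and their lens models are proprietary), here is a map builder for an idealized equidistant fish-eye, producing coordinates that any remap routine (GPU shader, ISP hardware, or OpenCV's remap) can consume:

```python
import numpy as np

def fisheye_to_rectilinear_map(out_w, out_h, out_fov_deg, fish_f):
    """Build (map_y, map_x) lookups for dewarping an equidistant fish-eye.

    For each pixel of the desired pin-hole output, compute the angle theta
    from the optical axis; an equidistant lens maps theta to the radius
    r = fish_f * theta in the fish-eye image. Returned maps are offsets
    from the fish-eye image center.
    """
    f_out = (out_w / 2) / np.tan(np.radians(out_fov_deg) / 2)
    x = np.arange(out_w) - out_w / 2
    y = np.arange(out_h) - out_h / 2
    xx, yy = np.meshgrid(x, y)
    r_out = np.hypot(xx, yy)
    theta = np.arctan2(r_out, f_out)            # angle from optical axis
    r_fish = fish_f * theta                     # equidistant lens model
    scale = np.divide(r_fish, r_out,
                      out=np.zeros_like(r_out), where=r_out > 0)
    return yy * scale, xx * scale
```

Real lenses deviate from the ideal equidistant model, which is exactly what the factory calibration tools mentioned above measure and correct.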

AUGMENTED REALITY

  • The algorithms referred to as “augmented reality” render artificial objects on the frames captured by a real camera while it is moving.
  • Rendering artificial objects is a well-known task, incorporated in most computer games. In order to render them in the proper places, poses, and orientations, scene analysis is needed:
  • Determining the 3D positions of specific scene features and their positions on successive frames. Such algorithms are known as SLAM (Simultaneous Localization And Mapping). MMS has implemented and optimized one of the popular such algorithms (PTAM) on an embedded platform – TI’s OMAP3 – achieving 25fps.
  • Determining the camera position and orientation in the scene for every frame
  • Analyzing the features in the scene and finding a ground plane – a plane on which the artificial object should lie.

ELECTRONIC IMAGE STABILIZATION (EIS)

  • The MMS EIS algorithm stabilizes preview and video.
  • It uses IMU data from the gyroscope sensor and, optionally, the camera orientation.
  • MMS EIS can use OIS (optical image stabilization) data to combine the motion blur reduction of OIS with the strong stabilization of EIS.
  • “Stabilize horizon” guarantees the horizon will be kept horizontal in the stabilized video.
  • MMS EIS compensates rolling shutter distortions.
  • The Zoom ROI Lock feature helps the user keep the camera view stable while zooming.
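At its core, gyro-based EIS integrates angular rate into a camera trajectory, low-pass filters that trajectory to get the intended smooth path, and warps each frame by the difference. The single-axis sketch below illustrates only that core loop under our own simplifying assumptions (small angles, a one-pole smoother, no rolling-shutter or OIS terms); it is not the MMS implementation:

```python
def stabilizing_rotation(gyro_rates, dt, smooth=0.95):
    """Per-frame correction angles for a single-axis EIS loop.

    gyro_rates -- angular rate sampled once per frame, rad/s
    dt         -- frame interval in seconds
    Integrates the rate into the actual camera angle, tracks a low-passed
    'intended' angle, and returns (intended - actual) for each frame,
    i.e. the rotation to warp that frame by.
    """
    angle, smooth_angle, corrections = 0.0, 0.0, []
    for rate in gyro_rates:
        angle += rate * dt                                  # actual path
        smooth_angle = smooth * smooth_angle + (1 - smooth) * angle
        corrections.append(smooth_angle - angle)            # warp amount
    return corrections
```

A full implementation works with 3D rotations (quaternions), applies the warp per scanline for rolling-shutter compensation, and constrains the correction to the available crop margin.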

CAMERA TYPE AND USE-CASE


Pin-hole input and stabilized cameras

Used with standard cameras with negligible geometrical distortions – field of view is typically up to 65deg.


Camera stabilization for drones (digital gimbal)

Compensates drone tilt and vibrations. Optimized for drones with an IMU data rate >= 1kHz. Requires per-module factory calibration of lens distortions.


Wide angle and fish-eye cameras

Used with distorted cameras. The stabilized video may be either undistorted (pin-hole stabilized camera), fish-eye distorted (equiangular stabilized camera) or equirectangular projection. Requires per-module factory calibration of lens distortions. This algorithm is MMS property.


Dual back-to-back fish-eye cameras for 360-deg panorama stitching

This stabilization algorithm is integrated in the MMS 360deg stitching algorithm and implements simultaneous stitching and stabilization in a single GPU pass per frame. It uses the per-module factory calibration for stitching. This algorithm is MMS property.

MMS SURROUND VIEW

SURROUND VIEW MONITOR

FEATURES

  • Uses GPU for fast and low-power stitching
  • Maintains various views, and a few views in the output window
  • No camera splitting lines (seamless stitching)
  • Draws a 3D car model, animations for doors and lights, parking guide lines, sonar distance info, 3D walls, text, and icons
  • Runtime ground plane estimation and adaptation (GPE)
  • Maintains a Transparent Car Chassis on the top view and 3D views
  • Adaptive stitching seam
  • Adaptive quality for system load
  • Adaptive color and brightness correction (ACC)
  • Inline factory calibration

MMS SURROUND VIEW MONITORING

Runtime ground plane estimation and adaptation (GPE)

  • The car cameras see the ground from very small angles. Hence, even small changes in the car position due to tilt, load, or tire pressure cause visible shifts of the objects as seen by the cameras, so ghosts may occur in the stitched SVM image.
  • GPE finds and tracks points on the ground, constantly estimates the car position with respect to the ground, and adapts the stitching parameters accordingly.

The GPE1 and GPE2 sample videos are taken by an external camera, recording the toy car and the screen, while pushing the toy corners to the ground in order to mimic changes in the car tilt.
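A basic building block of any GPE-style adaptation is fitting a plane to the tracked ground points; the car's tilt can then be read off the plane normal. The least-squares sketch below is a generic illustration under our own parameterization (z = a·x + b·y + c), not the MMS algorithm:

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares fit of the plane z = a*x + b*y + c to ground points.

    points -- (N, 3) array of 3D points believed to lie on the road.
    Returns (a, b, c); the plane normal is proportional to (-a, -b, 1),
    from which the car tilt relative to the ground can be derived and
    the stitching parameters updated.
    """
    pts = np.asarray(points, dtype=np.float64)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs
```

A robust version would wrap this in RANSAC so that points tracked on curbs or obstacles do not bias the fit.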

Adaptive seam

  • Shoes-only view

    A common problem of SVM algorithms is the “shoes-only” issue. When a pedestrian stands exactly on the seam line between two cameras, starting from a car corner, only her/his shoes are visible in the stitched image – the body disappears.

  • Curbstones

    Due to the large distance between the cameras, they see the surrounding objects from very different viewpoints; hence the objects look very different. A typical example is the curbs along the sidewalks. Stitching such different objects inevitably leads to artifacts. To minimize such effects, MMS uses an adaptive seam, which can walk around the artifacts.

The video below shows a side-by-side comparison of a fixed seam (the “shoes-only” issue) and MMS’s adaptive seam. The seam moves when an object passes by the car corners.

The MMS SVM solution supports a few seam modes: 1) fixed; 2) seam angle depending on speed, steering, and sonar; 3) adaptive seam.
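A seam that "walks around" artifacts is commonly found as a minimum-cost path through the overlap region, high cost marking places where the two cameras disagree. The dynamic-programming sketch below (the same recurrence used in seam carving) illustrates the idea; it is our generic example, not the MMS adaptive-seam algorithm:

```python
import numpy as np

def adaptive_seam(cost):
    """Minimum-cost vertical seam through an overlap-region cost map.

    cost -- (H, W) array, high where the two overlapping camera images
    disagree (e.g. their absolute difference), so the seam avoids objects.
    Returns the seam's column index for each row.
    """
    h, w = cost.shape
    dp = cost.astype(np.float64).copy()
    for y in range(1, h):                       # accumulate path costs
        left = np.r_[np.inf, dp[y - 1, :-1]]
        right = np.r_[dp[y - 1, 1:], np.inf]
        dp[y] += np.minimum(np.minimum(left, dp[y - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(dp[-1]))
    for y in range(h - 2, -1, -1):              # backtrack the best path
        x = seam[y + 1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam[y] = lo + int(np.argmin(dp[y, lo:hi]))
    return seam
```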


Adaptive color and brightness correction (ACC)

There are differences in the brightness and color of the same objects as seen by neighboring cameras, because the cameras see the ground around the car from very different positions.

ACC finds and corrects those differences to create uniform, or smoothly changing, brightness and color across the scene.
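One simple way to estimate such a correction, shown purely as an illustration of the principle (the MMS ACC is proprietary and more elaborate), is a per-channel gain that matches the mean brightness of the two cameras in their overlap region:

```python
import numpy as np

def match_overlap_gain(ref_patch, other_patch):
    """Per-channel gain mapping the other camera's overlap region onto the
    reference camera's brightness: gain = mean(ref) / mean(other).

    Patches are (H, W, C) crops of the same ground area as seen by two
    neighboring cameras. Applying the gain to the whole second image
    equalizes the views along the seam; smoothing the gains over time
    avoids visible flicker.
    """
    ref = np.asarray(ref_patch, dtype=np.float64)
    oth = np.asarray(other_patch, dtype=np.float64)
    return ref.mean(axis=(0, 1)) / (oth.mean(axis=(0, 1)) + 1e-12)
```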


Various views

Various views are maintained and new ones can easily be added:

  • top view (bird’s-eye)
  • front/back/left/right
  • outside view (3D, arbitrary)
  • blind-spot
  • etc.

Multiple views on one screen can be defined and rendered by a single SVM API call.


CAR MODEL RENDERING

  • 3D view

    In the top view, the car model is 2D for fastest rendering. Other view directions are also maintained (called “outside” or “3D” views), where a 3D model of the car can be rendered, as shown in the free-view video example.

  • The doors, wheels, and lights can also be animated and rendered according to their status information from the system.

GUIDE LINES

Various guide lines can be rendered. Their curvature follows the vehicle motion and steering info coming through the interface. Distance information from the sonars can be shown too. The shape and colors can be customized. 3D walls can be shown in the 3D views. Arbitrary text and icons can be shown at positions requested via the interface.

The video shows an example of parking guide lines. Different shapes and colors can be implemented and set, based on the customer's preferences.

ADAPTIVE QUALITY-VS-LOAD

The processing in the SVM library is parallelized into a few threads, allowing the most important stage (rendering the output image) to take priority over the less important tasks – the adaptations. Hence, when the system is heavily loaded, the SVM automatically decreases its quality, trying to maintain the frame rate. There are configuration parameters that control the quality-versus-system-load trade-off.

ONLINE FACTORY CALIBRATION

  • SVM needs to be calibrated on the factory line in order to guarantee good stitching regardless of the camera mounting tolerances. The factory calibration module is embedded in the library. Typically it runs in automatic mode, requiring the operator simply to approve the result.
  • There is also a manual mode, entered automatically if the auto mode fails, providing a convenient interface for the operator to coarsely adjust the camera positions. The auto mode then finishes the calibration.

Remark: Calibration of the cameras' intrinsic parameters is not part of the SVM calibration. For best quality, the cameras need to be calibrated on their own production lines.
