Will Apple make multi-lens phones within five years? (includes a look at a 16-lens camera)

With the arrival of the iPhone 7 Plus and its rear dual camera, many phone users (especially Apple fans) see this design as a new "standard" for smartphones, as if a new era were beginning.

However, Apple's new "standard" is not actually new. Some manufacturers have been shipping dual-camera smartphones for years. So why is Apple adopting a dual camera now? And will this design become the phone feature you want most?

Four reasons to use multiple cameras

Let's look at what dual-camera phones were doing before the iPhone 7 Plus was released.

Looking back over the history of smartphones, you will find four reasons for putting multiple cameras on the back of a phone:

First, to take "3D photos."

The idea goes back to the HTC EVO 3D of 2011, which had not only a 3D camera but also a 3D display. The concept was interesting, but many users dismissed it as a gimmick of little practical value, and it quickly disappeared from the market.

The fate of LG's Optimus 3D phones was much the same. Around 2012 these handsets were sold worldwide and tried to attract users with 3D shooting effects, an idea that even made its way into LG's tablets. But manufacturers soon abandoned the design and focused instead on raising pixel counts year after year, and multi-camera phones went back into hibernation.

Second, as a depth camera.

In 2014, HTC revisited the idea with a different dual-sensor concept. The HTC One M8, with a dual camera on its back, revived interest in dual-camera phones. Shortly afterwards, other manufacturers introduced similar handsets, including Huawei's Honor 6 Plus (2014) and, more recently, Xiaomi's Redmi Pro (2016).

These dual-lens setups perceive depth much as our two eyes do. By comparing what the two camera "eyes" see, the phone can compute a "depth map" of the scene. The second, auxiliary camera records this depth-of-field data, enabling features such as "shoot first, focus later."
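As a rough illustration of the principle (not Apple's or HTC's actual pipeline), the sketch below computes a depth map from two simultaneously captured views with OpenCV's block-matching stereo; the file names, focal length, and lens spacing are assumed values.

```python
# A minimal sketch of depth-from-stereo, roughly how a dual-camera phone
# could build a "depth map" -- illustrative only, not a vendor's pipeline.
import cv2
import numpy as np

# Hypothetical pair of frames captured by the two cameras at the same instant.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each pixel, how far it shifted between the two views.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # to pixel units

# Depth is inversely proportional to disparity:
#   depth = focal_length_px * baseline_m / disparity
FOCAL_PX = 2750.0   # assumed focal length in pixels
BASELINE_M = 0.01   # assumed 1 cm spacing between the two lenses
with np.errstate(divide="ignore"):
    depth_m = np.where(disparity > 0, FOCAL_PX * BASELINE_M / disparity, 0)

# Save a normalized visualization of the depth map.
depth_vis = cv2.normalize(depth_m, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("depth_map.png", depth_vis)
```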

The biggest problem with the "depth camera" approach is that processing quality is uneven. Shoot an irregularly shaped object and you may end up with jagged or badly cut edges in the result. The auxiliary depth sensors are generally low quality and do nothing to improve the image itself. Like the 3D camera, the depth camera came to be seen as a gimmick and was not embraced by the market.

Some smartphones, such as the HTC One M8, used the depth data to simulate a variable aperture, that is, to mimic changing the size of the opening through which light passes. Apart from a few old Nokia phones, almost every camera phone has a fixed aperture. A large aperture lets you blur the background and keep only the subject in focus. And the larger the sensor and each individual pixel, the better the image quality: a full-frame sensor has roughly 50 times the area of a phone-camera sensor, and only a sensor of that size can deliver professional, artistic shallow-depth-of-field effects.
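To illustrate the idea (this is not the HTC One M8's actual processing), the sketch below uses a depth map to fake a wide aperture: pixels near an assumed subject depth stay sharp, everything else is blurred. The threshold values and file names are made up for the example.

```python
# Sketch of "simulated aperture": blur everything the depth map says is far
# from the subject, keeping the foreground sharp. Thresholds are illustrative.
import cv2
import numpy as np

photo = cv2.imread("photo.png")                             # color frame
depth = cv2.imread("depth_map.png", cv2.IMREAD_GRAYSCALE)   # from the stereo step

FOCUS_DEPTH = 60   # assumed depth value of the subject (0-255 scale)
TOLERANCE = 25     # how much depth variation still counts as "in focus"

blurred = cv2.GaussianBlur(photo, (31, 31), 0)              # strong background blur
in_focus = np.abs(depth.astype(np.int16) - FOCUS_DEPTH) < TOLERANCE

# Keep original pixels where the subject is, blurred pixels elsewhere.
result = np.where(in_focus[:, :, None], photo, blurred)
cv2.imwrite("bokeh.png", result)
```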

Third, for computational photography.

Not long ago, Huawei introduced the P9 smartphone, whose dual camera greatly improves image processing. This time the dual camera is no longer meant simply as a depth camera, and the dual camera in Apple's iPhone 7 Plus is used in a similar way.

This is the third reason for adopting dual cameras: the "computational photography" that has emerged in recent years. The technique combines the output of multiple small cameras and uses intelligent software algorithms to compensate for the image-quality problems caused by a phone camera's tiny sensor. Applied to smartphones, it is the most practical way to push camera-phone image quality toward the level of an SLR.
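As a toy illustration of the simplest computational-photography trick, the sketch below averages several frames from a small, noisy sensor to suppress noise; real pipelines also align and weight the frames, which is omitted here, and the file names are placeholders.

```python
# Toy example of one computational-photography idea: average several frames
# from a small noisy sensor to reduce noise (alignment is assumed already done).
import cv2
import numpy as np

frames = [cv2.imread(f"frame_{i}.png").astype(np.float32) for i in range(8)]

# Noise is random from frame to frame while the scene is not, so the mean
# suppresses noise (roughly by sqrt(N)) while preserving the signal.
stacked = np.mean(frames, axis=0)

cv2.imwrite("denoised.png", np.clip(stacked, 0, 255).astype(np.uint8))
```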

Fourth, to shoot fast-moving objects.

The phones with the best low-light performance today rely on optical image stabilization (OIS): a tiny motor constantly makes small corrections for the natural shake of the hand, allowing the shutter to stay open longer. But a long exposure is bad for fast-moving subjects. OIS can compensate for the movement of the photographer's hand, not for the movement of what is being photographed. Even the best stabilization systems in the world share this limitation, and dual cameras can help solve it. That is the fourth reason to use a dual camera.

The Light L16 is a stand-alone Android-based digital camera. Its 16 camera modules at three different focal lengths combine into the equivalent of a 28-150mm zoom.

Apple's intentions behind acquiring LinX

In fact, Apple's acquisition of the Israeli company LinX in April 2015 already hinted at what Apple wanted to do next.

Before the acquisition, LinX had published detailed plans for multi-image-sensor modules, with several design options. The first option is what we saw in the HTC One M8: two color image sensors used to build a "depth map." That option can now be set aside.

Another option is most likely the basic technology behind the iPhone 7 Plus dual camera: a pair of cameras, one capturing only monochrome (black-and-white) images and the other capturing color, which is exactly what the Huawei P9 uses. LinX claimed that this setup improves low-light performance and overall image quality while costing less than a single camera of equivalent resolution.

The final image quality depends on how the color and monochrome image data are merged, which in turn depends on how the color camera filters light before it reaches the sensor. A Bayer filter is typically used: the best-known invention of Kodak scientist Bryce Bayer, the Bayer color filter is found in almost every digital camera, camcorder, and phone camera today.

A standard image sensor is made up of millions of tiny light-sensitive "pixels" that respond to incoming light. Just as an LCD TV is built from red, green, and blue sub-pixels lit at different intensities, an image sensor also uses red, green, and blue sub-pixels to determine the color at each point. The Bayer filter ensures that only light of the matching color reaches each sub-pixel.

And this is where the problem arises. Green and blue light headed for a red sub-pixel is rejected; green and red light never reaches a blue sub-pixel; and so on. A great deal of light is therefore wasted by the Bayer filter. A monochrome sensor avoids this: because it does not need to determine color, it needs no Bayer filter at all, and letting far more light reach the sensor significantly improves low-light quality.
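As a quick back-of-envelope check, the sketch below compares how much light a Bayer-filtered sensor keeps versus a monochrome one, assuming each color sub-pixel passes roughly a third of the visible spectrum; the fractions are simplifications, not measured values, but they match the roughly threefold figure cited below.

```python
# Back-of-envelope comparison of light reaching a Bayer sensor vs. a monochrome
# sensor. A Bayer tile is 2x2 (one red, two green, one blue sub-pixel), and each
# filtered sub-pixel is assumed to pass roughly one third of visible light.
bayer_tile = ["R", "G", "G", "B"]
fraction_passed_per_subpixel = 1 / 3          # rough approximation

bayer_light = sum(fraction_passed_per_subpixel for _ in bayer_tile) / len(bayer_tile)
mono_light = 1.0                              # no color filter, nothing rejected

print(f"Bayer sensor keeps ~{bayer_light:.0%} of the light")   # ~33%
print(f"Monochrome sensor keeps ~{mono_light:.0%}")            # 100%
print(f"Ratio: ~{mono_light / bayer_light:.0f}x more light")   # ~3x
```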

So in this dual-camera setup, the monochrome sensor increases dynamic range, adds detail, and lowers noise (a black-and-white sensor gathers roughly three times as much light as a color sensor), while the other sensor is used only to fill in color. After capture, software must intelligently decide how to blend the information from the two cameras, and the result always differs slightly from the real scene; the closer the subject, the more noticeable the difference. Image processing is therefore critical.
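One simplified way to fuse the two captures (not Huawei's or Apple's actual algorithm) is to take luminance from the monochrome camera and chroma from the color camera. The sketch below does this with OpenCV, ignoring the frame alignment a real pipeline would need; the file names are placeholders.

```python
# Simplified mono+color fusion: luminance from the monochrome camera, chroma
# from the color camera. Real pipelines also align the frames and blend more
# carefully; this only illustrates the idea.
import cv2

color = cv2.imread("color_cam.png")                        # Bayer-sensor shot
mono = cv2.imread("mono_cam.png", cv2.IMREAD_GRAYSCALE)    # black-and-white shot

# Split the color image into luminance (Y) and chroma (Cr, Cb) ...
ycrcb = cv2.cvtColor(color, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)

# ... and replace the noisy luminance with the cleaner monochrome capture.
mono = cv2.resize(mono, (y.shape[1], y.shape[0]))
fused = cv2.merge([mono, cr, cb])

cv2.imwrite("fused.png", cv2.cvtColor(fused, cv2.COLOR_YCrCb2BGR))
```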

Now look at the "next-generation" multi-camera designs LinX announced. Perhaps the most interesting is a 2x2 camera array. It follows the same idea as stacking two cameras in a column, but adds more "eyes," further extending dynamic range and cutting image noise at the source.

LinX's final design used two small sensors as a rangefinder for a large image sensor. Because the paired sensors can measure how far away a subject is, autofocus can move directly to the correct position without hunting. This design could replace the "phase detection" focusing the iPhone currently uses.

Multi-camera phones have another important application: cameras with different focal lengths can simulate optical zoom, with software providing a "seamless" transition between them. The LG G5 already offers something similar; when shooting stills, you can switch to the auxiliary camera to zoom in or out.
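A rough sketch of how such a software "seamless zoom" could work is shown below: crop the wide camera digitally until the requested zoom reaches the second lens's focal-length ratio, then hand off to the tele camera. The 28mm/56mm figures mirror the iPhone 7 Plus's advertised pair, but the hand-off logic and file names are illustrative only.

```python
# Rough sketch of "seamless zoom" between two fixed lenses: digitally crop the
# wide camera up to the tele camera's focal length, then hand off to the tele
# module. Illustrative only.
import cv2

WIDE_MM, TELE_MM = 28.0, 56.0
HANDOFF = TELE_MM / WIDE_MM   # 2.0x: beyond this, the tele camera takes over


def zoomed_frame(wide_frame, tele_frame, zoom):
    """Return a frame at the requested zoom factor (1.0 = wide camera's view)."""
    if zoom >= HANDOFF:
        source, crop = tele_frame, zoom / HANDOFF   # tele frame, maybe cropped further
    else:
        source, crop = wide_frame, zoom             # digital crop of the wide frame

    h, w = source.shape[:2]
    ch, cw = int(h / crop), int(w / crop)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    cropped = source[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(cropped, (w, h))              # upscale back to full resolution


wide = cv2.imread("wide.png")
tele = cv2.imread("tele.png")
cv2.imwrite("zoom_1_5x.png", zoomed_frame(wide, tele, 1.5))
```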

The next five years: "next-generation" multi-camera photography

Multi-camera photography continues to advance, and we are likely to see "next-generation" devices with many cameras within the next five years.

One example, much discussed at the moment, is the Light L16, a stand-alone Android-based digital camera with 16 camera modules at three different focal lengths, giving a zoom range equivalent to 28~150mm. According to the company, its image quality rivals or even exceeds that of a full-frame SLR, while the whole device is only slightly thicker than an iPhone. The 16 small modules (each lens roughly the size of a phone-camera lens) fill the back of the camera, which is itself about the size of an iPhone. The L16 also supports super-resolution (SR) and high-dynamic-range (HDR) capture.

The Light L16 is expensive ($1,699), and pre-orders have reportedly sold out well into 2017. It is a brand-new kind of camera: its novelty lies not only in its 16 cameras but also in its advanced, intelligent software. It points to the future direction for cameras and smartphone cameras alike: keep pushing "multi-camera" hardware while moving "computational photography" into ever smarter software (the Apple iPhone 7 already uses "machine learning" algorithms).

According to reports, Apple's camera R&D team, including the former LinX staff, numbers around 800 people. The "extra" cameras they are adding to the iPhone are no longer just for 3D tricks; they are chasing better image quality: stronger night shots, a wider field of view, optical zoom, and more. For people who love shooting with their phones, the dual camera opens up a new world, and mobile photography has become part of everyday life. Forget VR, AR, and MR for a moment; try a dual-camera smartphone first! (This article appears in the October 2016 issue of BT Media and "Commercial Values" magazine, with exclusive first online publication by Titanium Media.)
