Smartphone photography: The battle has shifted from cameras to chips
If you don't believe it, look at Google's Pixel phones: they have only one rear camera, yet they are widely praised as some of the best camera phones on the market (if not the best).
The secret lies in the special-purpose chips used by companies like Google, Huawei, Samsung, and Apple - chips equipped with AI to improve the images captured by the device. For example, Pixel phones from the second generation onwards use a Visual Core chip with Machine Learning functions that automatically adjust the camera settings to suit the lighting conditions and other elements of the scene. It also handles HDR+, which combines multiple shots into the best possible photo.
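The core idea behind HDR+-style burst photography is that averaging several aligned exposures suppresses random sensor noise. The sketch below is a deliberately simplified illustration of that principle, not Google's actual HDR+ pipeline (which also performs alignment, tone mapping, and robust merging):

```python
import numpy as np

def merge_frames_average(frames):
    """Average a burst of pre-aligned frames to reduce noise.

    frames: list of HxWx3 uint8 arrays, assumed already aligned.
    Averaging N frames cuts random noise by roughly a factor of sqrt(N).
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

# Synthetic burst: the same flat gray scene plus fresh noise per frame.
rng = np.random.default_rng(0)
scene = np.full((4, 4, 3), 128, dtype=np.float32)
burst = [np.clip(scene + rng.normal(0, 20, scene.shape), 0, 255).astype(np.uint8)
         for _ in range(8)]
merged = merge_frames_average(burst)
```

With eight frames, the merged image sits visibly closer to the true scene value than any single noisy frame.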
AI/Machine Learning chips have become a new battleground among high-end device manufacturers
During Apple's most recent product launch event, the company mentioned that the A13 Bionic chipset contains a "neural engine" that helps the new iPhones take better photos in low-light conditions.
As the tech giant explained while introducing the iPhone Pro models, this is a form of computational photography, relying on digital image processing rather than the conventional optical approach.
An example of this approach is the Deep Fusion feature, which will come to the new devices through a software update next fall. As you may know, Deep Fusion has the phone capture eight photos before the shutter button is even pressed.
Together with one more image captured when the shutter button is pressed, that makes a total of nine images for the aforementioned neural engine to analyze in under a second and then combine into the best possible picture.
Unlike the HDR+ process, which effectively takes the "average" of many photos, Apple says Deep Fusion merges the nine shots into a single 24MP photo, processing each pixel individually to produce an image with high detail and low noise.
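The distinction the article draws is between averaging all frames and choosing, per pixel, the frame that captured the most detail at that point. The sketch below illustrates per-pixel selection using a crude sharpness proxy (high-frequency energy relative to a blurred copy). It is only an illustration of the concept, not Apple's actual Deep Fusion algorithm:

```python
import numpy as np

def merge_frames_per_pixel(frames):
    """For each pixel, keep the value from the frame that is sharpest there.

    frames: list of HxW float/int grayscale arrays, assumed aligned.
    Sharpness proxy: absolute difference from a cheap box-blurred copy.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])  # N x H x W
    # Cheap blur: average each frame with its four shifted copies.
    blurred = (stack
               + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
               + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)) / 5.0
    detail = np.abs(stack - blurred)     # high-frequency energy per pixel
    best = detail.argmax(axis=0)         # index of the sharpest frame
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)]
```

Given one flat frame and one frame containing fine detail, the merged result takes its values from the detailed frame wherever that frame is sharper.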
According to Ryan Reith, a researcher at IDC, AI/Machine Learning chips have become a new battleground where high-end smartphone makers are fighting hard. Reith emphasized that the manufacturers "engaged" in this area are those able to invest in the chips and the software needed to optimize the cameras on their devices.
In his view, what is inside the chipset matters far more today than it used to, because the outer shell of a phone has become a mere commodity.
IDC's program vice president also noted that these chips could power future devices, pointing to Apple's long-rumored AR headset as a product likely to benefit from the company's work on the neural engine. He said: "It's all being built for something bigger later - augmented reality, starting on the phone and eventually coming to other products."
While adding features like Night Mode and an ultra-wide camera sounds revolutionary the way Apple explained it, the company is really just catching up with some of the more advanced Android manufacturers. And with both Huawei and Google about to launch their latest flagship phones, it will be interesting to see where the major manufacturers stand once the dust settles.
By: Joe Cook