Apple’s new scanning system criticised over privacy concerns

Apple recently announced that it would release a new tool that will scan photo libraries on iPhones in the US for child abuse imagery.

The software, called neuralMatch, will match images uploaded to iCloud Photos against a database of known child abuse imagery. If a photo is flagged, the company can manually review the reported images and then disable the account.
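Apple has not published the details of the matching step, but the reported workflow is fingerprint, compare against a blocklist, then send matches to human review. The Python sketch below is only a conceptual illustration of that flow: it substitutes an ordinary SHA-256 digest for whatever perceptual hash Apple actually uses, and the `KNOWN_HASHES` blocklist and `photos` directory are hypothetical.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of fingerprints of known abusive images.
# Apple's real system reportedly uses a perceptual hash computed on-device;
# a plain SHA-256 digest stands in here purely to illustrate the matching step.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(path: Path) -> str:
    """Return a hex digest standing in for the image's hash."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def flag_for_review(library: list[Path]) -> list[Path]:
    """Collect photos whose fingerprint matches the blocklist.

    Flagged photos would then go to human review before any account
    action is taken, mirroring the workflow described above.
    """
    return [p for p in library if fingerprint(p) in KNOWN_HASHES]

if __name__ == "__main__":
    flagged = flag_for_review(list(Path("photos").glob("*.jpg")))
    print(f"{len(flagged)} photo(s) flagged for manual review")
```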

However, there are concerns that the tool could lead to breaches of privacy, surveillance by government organizations, and wrongful accusations. Security researchers have warned that the system could be used to frame innocent people by sending them images designed to trigger matches for known child abuse material. Moreover, government agencies could use it to scan private content and monitor citizens.

The company also plans to extend the tool by scanning users’ encrypted messages as they are sent and received via iMessage, making it easier to identify sexually explicit images. The feature is intended to warn children and their parents when sexually explicit photos are received or sent.
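Apple has not published how this feature scores photos; the check would presumably run on-device. The following minimal Python sketch works under that assumption, with a hypothetical classifier `explicit_score` and threshold `EXPLICIT_THRESHOLD` as stand-ins for whatever model Apple actually ships.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical cutoff; Apple has not published the score or threshold
# its on-device model would use for the iMessage feature.
EXPLICIT_THRESHOLD = 0.9

def explicit_score(image_bytes: bytes) -> float:
    """Placeholder for an on-device classifier returning a 0.0-1.0 score."""
    return 0.0  # a real implementation would run a local ML model

@dataclass
class Alert:
    notify_child: bool
    notify_parent: bool

def check_photo(image_bytes: bytes, is_child_account: bool) -> Optional[Alert]:
    """Decide whether to warn before a photo is sent or displayed.

    The check runs entirely on the device, so message content would not
    need to leave the phone for this feature to work.
    """
    if explicit_score(image_bytes) >= EXPLICIT_THRESHOLD:
        return Alert(notify_child=True, notify_parent=is_child_account)
    return None
```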

It has also been reported that Apple has been under government pressure for years to allow increased surveillance of encrypted data. With these new measures, Apple must strike a balance between stopping child abuse and protecting the privacy of its users.
