App was launched by Russian developer in 2017 and uses AI to change people's features

The developer of a popular app which transforms users' faces to predict how they will look as older people has insisted they are not accessing users' photographs without permission.

FaceApp, which was launched by a Russian developer in 2017, employs artificial intelligence allowing people to see how they would look with different hair colour, eye colour or as a different gender.

The app has topped download charts again this week, after users homed in on its ageing filter, which has since been used by dozens of celebrities and prominent figures to picture how they will supposedly look in several decades' time.

This surge of interest has in turn raised concerns that FaceApp is systematically harvesting users' images. People who upload their image to the app transfer the picture to a server controlled by the developer, with the photograph processing done remotely, rather than on their phone.

These concerns have been heightened by growing awareness of online privacy issues in recent years, by the fact that the developer is based in Russia, where many high-profile online misinformation campaigns have originated, and by the app's loosely worded privacy policy.

In the US, senior Democrat Chuck Schumer has urged the FBI to investigate, saying FaceApp could pose "national security and privacy risks for millions of US citizens", according to a letter obtained by the Associated Press. He said it would be "deeply troubling" if sensitive personal information was provided "to a hostile foreign power actively engaged in cyber hostilities against the United States".

The FaceApp CEO, Yaroslav Goncharov, said only a single photograph specifically chosen by the user would be uploaded from a phone and the app did not harvest a user's entire photo library, a claim backed by security researchers.

He said the data was never transferred to Russia and was instead stored on US-controlled cloud computing services provided by Amazon and Google. "FaceApp performs most of the photo processing in the cloud. We only upload a photo selected by a user for editing. We never transfer any other images from the phone to the cloud."

The developer insisted that users had the right to request that their photographs be removed from the server. "We might store an uploaded photo in the cloud. The main reason for that is performance and traffic: we want to make sure that the user doesn't upload the photo repeatedly for every edit operation. Most images are deleted from our servers within 48 hours of the upload date."

Goncharov said his company did not sell or share any user data with third parties, and that most features work without logging in, meaning the app does not hold a large amount of data on individual users that it could sell.

However, users ultimately have to rely on the word of the developer that the images are being removed from the system.

FaceApp has previously received attention over the ethics of some of its filters. In April 2017 the app's makers apologised for a feature that whitened people's faces when they selected the "hot" filter, leading to accusations that it considered lighter skin to be synonymous with attractiveness. The developers said this was due to a flaw in the underlying neural network, which was skewed towards Caucasian faces.

Later that year the app pulled a different filter which allowed users to see how they would look if they were a different race, after it was accused of promoting "digital blackface".

Read more: www.theguardian.com