\"\"
When Google released its stand-alone Photos app in May 2015, people were wowed by what it could do: analyze images to label the people, places and things in them, an astounding consumer offering at the time. But a couple of months after the release, a software developer, Jacky Alcine, discovered that Google had labeled photos of him and a friend, who are both Black, as "gorillas," a term that is particularly offensive because it echoes centuries of racist tropes.

In the ensuing controversy, Google prevented its software from categorizing anything in Photos as gorillas, and it vowed to fix the problem. Eight years later, with significant advances in artificial intelligence, we tested whether Google had resolved the issue, and we looked at comparable tools from its competitors: Apple, Amazon and Microsoft.

There was one member of the primate family that Google and Apple were able to recognize - lemurs, the permanently startled-looking, long-tailed animals that share opposable thumbs with humans, but are more distantly related than are apes.

Google's and Apple's tools were clearly the most sophisticated when it came to image analysis.

Yet Google, whose Android software underpins most of the world's smartphones, has made the decision to turn off the ability to visually search for primates for fear of making an offensive mistake and labeling a person as an animal. And Apple, with technology that performed similarly to Google's in our test, appeared to disable the ability to look for monkeys and apes as well.

Consumers may not need to frequently perform such a search - though in 2019, an iPhone user complained on Apple's customer support forum that the software "can't find monkeys in photos on my device." But the issue raises larger questions about other unfixed, or unfixable, flaws lurking in services that rely on computer vision - a technology that interprets visual images - as well as other products powered by AI.

Alcine was dismayed to learn that Google has still not fully solved the problem and said society puts too much trust in technology.

\"I'm going to forever have no faith in this AI,\" he said.

Computer vision products are now used for tasks as mundane as sending an alert when there is a package on the doorstep, and as weighty as navigating cars and finding perpetrators in law enforcement investigations.

Errors can reflect racist attitudes among those encoding the data. In the gorilla incident, two former Google employees who worked on this technology said the problem was that the company had not put enough photos of Black people in the image collection that it used to train its AI system. As a result, the technology was not familiar enough with darker-skinned people and mistook them for gorillas.

As AI becomes more embedded in our lives, it is eliciting fears of unintended consequences. Although computer vision products and AI chatbots like ChatGPT are different, both depend on underlying reams of data that train the software, and both can misfire because of flaws in the data or biases incorporated into their code.

Microsoft recently limited users' ability to interact with a chatbot built into its search engine, Bing, after it instigated inappropriate conversations.

Microsoft's decision, like Google's choice to prevent its algorithm from identifying gorillas altogether, illustrates a common industry approach - to wall off technology features that malfunction rather than fixing them.

\"Solving these issues is important,\" said Vicente Ordonez, a professor at Rice University who studies computer vision. \"How can we trust this software for other scenarios?\"

Michael Marconi, a Google spokesperson, said Google had prevented its photo app from labeling anything as a monkey or ape because it decided the benefit "does not outweigh the risk of harm."

Apple declined to comment on users' inability to search for most primates on its app.

Representatives from Amazon and Microsoft said the companies were always seeking to improve their products.

Bad Vision

When Google was developing its photo app, it collected a large number of images to train the AI system to identify people, animals and objects.

Its significant oversight - that there were not enough photos of Black people in its training data - caused the app to later malfunction, two former Google employees said. The company failed to uncover the "gorilla" problem back then because it had not asked enough employees to test the feature before its public debut, the former employees said.

Google profusely apologized for the gorilla incident, but it was one of a number of episodes in the wider tech industry that have led to accusations of bias.

Other products that have been criticized include HP's facial-tracking webcams, which could not detect some people with dark skin, and the Apple Watch, which, according to a lawsuit, failed to accurately read blood oxygen levels across skin colors. The lapses suggested that tech products were not being designed for people with darker skin. (Apple pointed to a paper from 2022 that detailed its efforts to test its blood oxygen app on a "wide range of skin types and tones.")

Years after the Google Photos error, the company encountered a similar problem with its Nest home-security camera during internal testing, according to a person familiar with the incident who worked at Google at the time. The Nest camera, which used AI to determine whether someone on a property was familiar or unfamiliar, mistook some Black people for animals. Google rushed to fix the problem before users had access to the product, the person said.

However, Nest customers continue to complain on the company's forums about other flaws. In 2021, a customer received alerts that his mother was ringing the doorbell but found his mother-in-law instead on the other side of the door. When users complained that the system was mixing up faces they had marked as \"familiar,\" a customer support representative in the forum advised them to delete all of their labels and start over.

Marconi, the Google spokesperson, said that "our goal is to prevent these types of mistakes from ever happening." He added that the company had improved its technology "by partnering with experts and diversifying our image data sets."

In 2019, Google tried to improve a facial-recognition feature for Android smartphones by increasing the number of people with dark skin in its data set. But the contractors whom Google had hired to collect facial scans reportedly resorted to a troubling tactic to compensate for that dearth of diverse data: They targeted homeless people and students. Google executives called the incident "very disturbing" at the time.

The Fix?

While Google worked behind the scenes to improve the technology, it never allowed users to judge those efforts.

Margaret Mitchell, a researcher and co-founder of Google's Ethical AI group, joined the company after the gorilla incident and collaborated with the Photos team. She said in a recent interview that she was a proponent of Google's decision to remove "the gorillas label, at least for a while."

\"You have to think about how often someone needs to label a gorilla versus perpetuating harmful stereotypes,\" Mitchell said. \"The benefits don't outweigh the potential harms of doing it wrong.\"

Ordonez, the professor, speculated that Google and Apple could now be capable of distinguishing primates from humans, but that they didn't want to enable the feature given the possible reputational risk if it misfired again.

Google has since released a more powerful image analysis product, Google Lens, a tool to search the web with photos rather than text. Wired discovered in 2018 that the tool was also unable to identify a gorilla.

These systems are never foolproof, said Mitchell, who is no longer working at Google. Because billions of people use Google's services, even rare glitches that happen to only one person out of a billion users will surface.

\"It only takes one mistake to have massive social ramifications,\" she said, referring to it as \"the poisoned needle in a haystack.\"

This article originally appeared in The New York Times.

<\/body>","next_sibling":[{"msid":100449332,"title":"Wish you could tweak that text? WhatsApp is letting users edit messages","entity_type":"ARTICLE","link":"\/news\/mvas-apps\/wish-you-could-tweak-that-text-whatsapp-is-letting-users-edit-messages\/100449332","category_name":null,"category_name_seo":"mvas-apps"}],"related_content":[],"msid":100451564,"entity_type":"ARTICLE","title":"Google's photo app still can't find gorillas. And neither can Apple's","synopsis":"In the ensuing controversy, Google prevented its software from categorizing anything in Photos as gorillas, and it vowed to fix the problem. Eight years later, with significant advances in artificial intelligence, we tested whether Google had resolved the issue, and we looked at comparable tools from its competitors: Apple, Amazon and Microsoft.","titleseo":"mvas-apps\/googles-photo-app-still-cant-find-gorillas-and-neither-can-apples","status":"ACTIVE","authors":[],"Alttitle":{"minfo":""},"artag":"NYT News Service","artdate":"2023-05-23 19:14:00","lastupd":"2023-05-23 19:34:42","breadcrumbTags":["google","apple","jacky alcine","google lens","photos","android","microsoft","amazon","google photos","mvas\/apps"],"secinfo":{"seolocation":"mvas-apps\/googles-photo-app-still-cant-find-gorillas-and-neither-can-apples"}}" data-authors="[" "]" data-category-name="MVAS/Apps" data-category_id="16" data-date="2023-05-23" data-index="article_1">

谷歌的照片应用程序仍然不能发现大猩猩。和苹果也不会

在随后的争议,谷歌阻止其软件分类任何照片大猩猩,发誓要解决这个问题。八年后,在人工智能和显著的进步,我们测试了谷歌是否解决了问题,从竞争对手和我们看类似的工具:苹果、亚马逊和微软。

  • 更新2023年5月23日07:34点坚持
谷歌发布了独立的照片应用2015年5月,人们惊叹于它可以做什么:分析图像标签的人,地方和事情,惊人的消费者提供。但几个月发布后,软件开发人员,杰克Alcine发现,谷歌标记他和一个朋友的照片,都是黑人,因为“大猩猩”,尤其是进攻是因为它几个世纪以来的种族主义比喻回声。

在随后的争议,谷歌阻止其软件分类任何照片大猩猩,发誓要解决这个问题。八年后,在人工智能和显著的进步,我们测试了谷歌是否解决了问题,从竞争对手和我们看类似的工具:苹果,亚马逊微软

广告
灵长类动物家族的一个成员,谷歌和苹果能够识别——狐猴,永久startled-looking,长尾的动物与人类分享对生拇指,但比猿是远亲。

谷歌和苹果的工具时显然是最复杂的图像分析。

然而,谷歌的安卓软件支撑着世界上大部分的智能手机,已经决定关闭视觉搜索能力灵长类动物害怕犯了进攻错误和标识一个人作为一个动物。和苹果,技术,执行类似于谷歌的在我们的测试中,似乎禁用寻找猴子和猿的能力。

消费者可能不需要频繁地执行这样的搜索——尽管在2019年,iPhone用户在苹果客户支持论坛上抱怨软件“找不到猴子的照片在我的设备上。”But the issue raises larger questions about other unfixed, or unfixable, flaws lurking in services that rely on computer vision - a technology that interprets visual images - as well as other products powered by AI.

Alcine失望地得知谷歌仍没有完全解决了这个问题,说社会把过多的对技术的信任。

“我要永远不相信这个AI,”他说。

广告
计算机视觉产品现在用于任务发送一个警报当有一样平凡的包在门口,和重要的导航汽车和在执法调查找到肇事者。

错误可以反映种族主义态度这些编码的数据。在大猩猩事件中,两名前谷歌员工参与这一技术问题是说,该公司没有把足够的黑人的照片图像集合,它用来训练人工智能系统。因此,技术不够熟悉的深色肤质和困惑人们的大猩猩。

随着人工智能嵌入在我们的生活中,它是引起意想不到的后果的担忧。虽然计算机视觉产品和人工智能聊天机器人就像ChatGPT不同,都依赖于底层的大量数据,训练软件,并且都可以失败,因为缺陷的数据或偏见纳入他们的代码。

微软最近有限用户的能力与聊天机器人内置搜索引擎,Bing,煽动后不恰当的对话。

微软的决定,就像谷歌选择完全阻止算法识别大猩猩,演示了一个常见的工业方法——故障隔离技术特性,而不是修复它们。

“解决这些问题是很重要的,”文森特说德莱斯大学教授研究计算机视觉。“我们怎么能相信这个软件对于其他场景吗?”

迈克尔·马可尼,谷歌发言人说,谷歌已经阻止了它的照片应用程序标记任何一只猴子或模仿,因为它决定效益”不超过损害的风险。”

苹果拒绝评论用户无法搜索大多数灵长类动物在其应用程序。

亚马逊和微软的代表表示,公司总是寻求改善他们的产品。

不好的视力

当谷歌正在开发其照片应用程序,它收集了大量的图片来训练人工智能系统来识别人,动物和对象。

其重要的监督——没有足够的黑人的照片在训练数据,导致应用程序后故障,两位前谷歌员工说。该公司未能揭示当时的“大猩猩”的问题,因为它没有要求足够的员工测试功能公开首映之前,前雇员说。

谷歌丰富地为大猩猩事件道歉,但这是一个更广泛的科技行业的事件数量,导致偏见的指责。

其他产品已经批评包括惠普facial-tracking网络摄像头,不能检测一些黑皮肤的人,和苹果观察,根据诉讼,未能准确地阅读血氧水平的皮肤颜色。科技产品的失误提出不为深色皮肤的人设计的。(苹果公司指出,从2022年的一篇论文,详细的测试它的血氧的努力应用在一个“广泛的皮肤类型和音调。”)

年之后谷歌图片错误,该公司遇到了一个类似的问题与它的巢家庭安全摄像头内部测试期间,据知情人士曾在谷歌。鸟巢相机,使用人工智能判断某人一个属性是熟悉还是陌生的,把一些动物的黑人。谷歌急于解决问题之前用户已经访问产品,这位人士说。

然而,巢客户继续在公司论坛上抱怨其他缺陷。2021年,一个客户收到警报,他的母亲是按门铃,但发现他的岳母而不是门的另一边。当用户抱怨系统混合了面临他们标记为“熟悉”,客户支持代表在论坛上建议他们删除他们所有的标签和重新开始。

马可尼,谷歌发言人说,“我们的目标是防止这些类型的错误的发生。”He added that the company had improved its technology "by partnering with experts and diversifying our image data sets."

2019年,谷歌试图改善面部识别功能为Android智能手机的人数增加与黑皮肤的数据集,但据报道,谷歌已聘请承包商谁收集面部扫描诉诸于一个令人不安的战术来弥补缺乏多样化的数据:他们有针对性的无家可归者和学生。谷歌高管称对此事“非常令人不安”。

这是固定的吗?

虽然谷歌在幕后改善技术,它从不允许用户判断这些努力。

研究员玛格丽特•米切尔和谷歌的联合创始人的道德AI组,加入公司后,大猩猩事件和与照片的团队。她在最近的一次采访中说,她是一个支持谷歌的决定删除“大猩猩标签,至少一段时间。”

“你要考虑多久有人需要标签大猩猩和延续有害的刻板印象,”米切尔说。“好处不超过的潜在危害做错了。”

德教授,猜测,谷歌和苹果现在能够区分灵长类动物和人类,但他们不想启用的特性给出可能的声誉风险如果它奏效了。

Google已经发布了一个更强大的图像分析产品,谷歌眼镜,一个工具来搜索网络照片而不是文本。连接在2018年发现,该工具也无法确定一个大猩猩。

这些系统没有万无一失,米切尔说,他不再是在谷歌工作。因为数十亿人使用谷歌的服务,甚至罕见故障发生在只有一个人将表面的十亿用户。

“只需要一个错误有大规模的社会后果,”她说,指的是“毒海里捞针”。

这篇文章最初发表在《纽约时报》

  • 发布于2023年5月23日下午07:14坚持
是第一个发表评论。
现在评论

加入2 m +行业专业人士的社区

订阅我们的通讯最新见解与分析。乐动扑克

下载ETTelec乐动娱乐招聘om应用

  • 得到实时更新
  • 保存您最喜爱的文章
扫描下载应用程序
\"\"
<\/span><\/figcaption><\/figure>When Google<\/a> released its stand-alone Photos<\/a> app in May 2015, people were wowed by what it could do: analyze images to label the people, places and things in them, an astounding consumer offering at the time. But a couple of months after the release, a software developer, Jacky Alcine<\/a>, discovered that Google had labeled photos of him and a friend, who are both Black, as \"gorillas,\" a term that is particularly offensive because it echoes centuries of racist tropes.

In the ensuing controversy, Google prevented its software from categorizing anything in Photos as gorillas, and it vowed to fix the problem. Eight years later, with significant advances in artificial intelligence, we tested whether Google had resolved the issue, and we looked at comparable tools from its competitors:
Apple<\/a>, Amazon<\/a> and Microsoft<\/a>.

There was one member of the primate family that Google and Apple were able to recognize - lemurs, the permanently startled-looking, long-tailed animals that share opposable thumbs with humans, but are more distantly related than are apes.

Google's and Apple's tools were clearly the most sophisticated when it came to image analysis.

Yet Google, whose
Android<\/a> software underpins most of the world's smartphones, has made the decision to turn off the ability to visually search for primates for fear of making an offensive mistake and labeling a person as an animal. And Apple, with technology that performed similarly to Google's in our test, appeared to disable the ability to look for monkeys and apes as well.

Consumers may not need to frequently perform such a search - though in 2019, an iPhone user complained on Apple's customer support forum that the software \"can't find monkeys in photos on my device.\" But the issue raises larger questions about other unfixed, or unfixable, flaws lurking in services that rely on computer vision - a technology that interprets visual images - as well as other products powered by AI.

Alcine was dismayed to learn that Google has still not fully solved the problem and said society puts too much trust in technology.

\"I'm going to forever have no faith in this AI,\" he said.

Computer vision products are now used for tasks as mundane as sending an alert when there is a package on the doorstep, and as weighty as navigating cars and finding perpetrators in law enforcement investigations.

Errors can reflect racist attitudes among those encoding the data. In the gorilla incident, two former Google employees who worked on this technology said the problem was that the company had not put enough photos of Black people in the image collection that it used to train its AI system. As a result, the technology was not familiar enough with darker-skinned people and confused them for gorillas.

As AI becomes more embedded in our lives, it is eliciting fears of unintended consequences. Although computer vision products and AI chatbots like ChatGPT are different, both depend on underlying reams of data that train the software, and both can misfire because of flaws in the data or biases incorporated into their code.

Microsoft recently limited users' ability to interact with a chatbot built into its search engine, Bing, after it instigated inappropriate conversations.

Microsoft's decision, like Google's choice to prevent its algorithm from identifying gorillas altogether, illustrates a common industry approach - to wall off technology features that malfunction rather than fixing them.

\"Solving these issues is important,\" said Vicente Ordonez, a professor at Rice University who studies computer vision. \"How can we trust this software for other scenarios?\"

Michael Marconi, a Google spokesperson, said Google had prevented its photo app from labeling anything as a monkey or ape because it decided the benefit \"does not outweigh the risk of harm.\"

Apple declined to comment on users' inability to search for most primates on its app.

Representatives from Amazon and Microsoft said the companies were always seeking to improve their products.

Bad Vision<\/b>

When Google was developing its photo app, it collected a large amount of images to train the AI system to identify people, animals and objects.

Its significant oversight - that there were not enough photos of Black people in its training data - caused the app to later malfunction, two former Google employees said. The company failed to uncover the \"gorilla\" problem back then because it had not asked enough employees to test the feature before its public debut, the former employees said.

Google profusely apologized for the gorillas incident, but it was one of a number of episodes in the wider tech industry that have led to accusations of bias.

Other products that have been criticized include HP's facial-tracking webcams, which could not detect some people with dark skin, and the Apple Watch, which, according to a lawsuit, failed to accurately read blood oxygen levels across skin colors. The lapses suggested that tech products were not being designed for people with darker skin. (Apple pointed to a paper from 2022 that detailed its efforts to test its blood oxygen app on a \"wide range of skin types and tones.\")

Years after the
Google Photos<\/a> error, the company encountered a similar problem with its Nest home-security camera during internal testing, according to a person familiar with the incident who worked at Google at the time. The Nest camera, which used AI to determine whether someone on a property was familiar or unfamiliar, mistook some Black people for animals. Google rushed to fix the problem before users had access to the product, the person said.

However, Nest customers continue to complain on the company's forums about other flaws. In 2021, a customer received alerts that his mother was ringing the doorbell but found his mother-in-law instead on the other side of the door. When users complained that the system was mixing up faces they had marked as \"familiar,\" a customer support representative in the forum advised them to delete all of their labels and start over.

Marconi, the Google spokesperson, said that \"our goal is to prevent these types of mistakes from ever happening.\" He added that the company had improved its technology \"by partnering with experts and diversifying our image data sets.\"

In 2019, Google tried to improve a facial-recognition feature for Android smartphones by increasing the number of people with dark skin in its data set. But the contractors whom Google had hired to collect facial scans reportedly resorted to a troubling tactic to compensate for that dearth of diverse data: They targeted homeless people and students. Google executives called the incident \"very disturbing\" at the time.

The Fix?<\/b>

While Google worked behind the scenes to improve the technology, it never allowed users to judge those efforts.

Margaret Mitchell, a researcher and co-founder of Google's Ethical AI group, joined the company after the gorilla incident and collaborated with the Photos team. She said in a recent interview that she was a proponent of Google's decision to remove \"the gorillas label, at least for a while.\"

\"You have to think about how often someone needs to label a gorilla versus perpetuating harmful stereotypes,\" Mitchell said. \"The benefits don't outweigh the potential harms of doing it wrong.\"

Ordonez, the professor, speculated that Google and Apple could now be capable of distinguishing primates from humans, but that they didn't want to enable the feature given the possible reputational risk if it misfired again.

Google has since released a more powerful image analysis product,
Google Lens<\/a>, a tool to search the web with photos rather than text. Wired discovered in 2018 that the tool was also unable to identify a gorilla.

These systems are never foolproof, said Mitchell, who is no longer working at Google. Because billions of people use Google's services, even rare glitches that happen to only one person out of a billion users will surface.

\"It only takes one mistake to have massive social ramifications,\" she said, referring to it as \"the poisoned needle in a haystack.\"

This article originally appeared in
The New York Times<\/a>.

<\/body>","next_sibling":[{"msid":100449332,"title":"Wish you could tweak that text? WhatsApp is letting users edit messages","entity_type":"ARTICLE","link":"\/news\/mvas-apps\/wish-you-could-tweak-that-text-whatsapp-is-letting-users-edit-messages\/100449332","category_name":null,"category_name_seo":"mvas-apps"}],"related_content":[],"msid":100451564,"entity_type":"ARTICLE","title":"Google's photo app still can't find gorillas. And neither can Apple's","synopsis":"In the ensuing controversy, Google prevented its software from categorizing anything in Photos as gorillas, and it vowed to fix the problem. Eight years later, with significant advances in artificial intelligence, we tested whether Google had resolved the issue, and we looked at comparable tools from its competitors: Apple, Amazon and Microsoft.","titleseo":"mvas-apps\/googles-photo-app-still-cant-find-gorillas-and-neither-can-apples","status":"ACTIVE","authors":[],"Alttitle":{"minfo":""},"artag":"NYT News Service","artdate":"2023-05-23 19:14:00","lastupd":"2023-05-23 19:34:42","breadcrumbTags":["google","apple","jacky alcine","google lens","photos","android","microsoft","amazon","google photos","mvas\/apps"],"secinfo":{"seolocation":"mvas-apps\/googles-photo-app-still-cant-find-gorillas-and-neither-can-apples"}}" data-news_link="//www.iser-br.com/news/mvas-apps/googles-photo-app-still-cant-find-gorillas-and-neither-can-apples/100451564">