和谐英语

The relationship between artificial intelligence and humans

2023-02-15 Source: 和谐英语

Business

Bartleby

Machine Learnings

How do employees and customers feel about artificial intelligence?

If you ask something of ChatGPT, an artificial-intelligence (AI) tool that is all the rage, the responses you get back are almost instantaneous, utterly certain and often wrong.

It is a bit like talking to an economist.

The questions raised by technologies like ChatGPT yield much more tentative answers.

But they are ones that managers ought to start asking.

One issue is how to deal with employees’ concerns about job security.

Worries are natural.

An AI that makes it easier to process your expenses is one thing; an AI that people would prefer to sit next to at a dinner party quite another.

Being clear about how workers would redirect time and energy that is freed up by an AI helps foster acceptance.

So does creating a sense of agency:

research conducted by MIT Sloan Management Review and the Boston Consulting Group found that an ability to override an AI makes employees more likely to use it.

Whether people really need to understand what is going on inside an AI is less clear.

Intuitively, being able to follow an algorithm’s reasoning should trump being unable to.

But a piece of research by academics at Harvard University, the Massachusetts Institute of Technology and the Polytechnic University of Milan suggests that too much explanation can be a problem.

Employees at Tapestry, a portfolio of luxury brands, were given access to a forecasting model that told them how to allocate stock to stores.

Some used a model whose logic could be interpreted; others used a model that was more of a black box.

Workers turned out to be likelier to overrule models they could understand because they were, mistakenly, sure of their own intuitions.

Workers were willing to accept the decisions of a model they could not fathom, however, because of their confidence in the expertise of people who had built it.

The credentials of those behind an AI matter.

How people respond differently to humans and to algorithms is a burgeoning area of research.

In a recent paper Gizem Yalcin of the University of Texas at Austin and her co-authors looked at whether consumers responded differently to decisions—to approve someone for a loan, for example, or a country-club membership—when they were made by a machine or a person.

They found that people reacted in the same way when they were rejected.

But they felt less positively about an organisation when they were approved by an algorithm rather than a human.

The reason?

People are good at explaining away unfavourable decisions, whoever makes them.

It is harder for them to attribute a successful application to their own charming, delightful selves when assessed by a machine.

People want to feel special, not reduced to a data point.

In a forthcoming paper, meanwhile, Arthur Jago of the University of Washington and Glenn Carroll of the Stanford Graduate School of Business investigate how willing people are to give rather than earn credit—specifically for work that someone did not do on their own.

They showed volunteers something attributed to a specific person—an artwork, say, or a business plan—and then revealed that it had been created either with the help of an algorithm or with the help of human assistants.

Everyone gave less credit to producers when they were told they had been helped, but this effect was more pronounced for work that involved human assistants.

Not only did the participants see the job of overseeing the algorithm as more demanding than supervising humans, but they did not feel it was as fair for someone to take credit for the work of other people.

Another paper, by Anuj Kapoor of the Indian Institute of Management Ahmedabad and his co-authors, examines whether AIs or humans are more effective at helping people lose weight.

The authors looked at the weight loss achieved by subscribers to an Indian mobile app, some of whom used only an AI coach and some of whom used a human coach, too.

They found that people who also used a human coach lost more weight, set themselves tougher goals and were more fastidious about logging their activities.

But people with a higher body mass index did not do as well with a human coach as those who weighed less.

The authors speculate that heavier people might be more embarrassed by interacting with another person.

The picture that emerges from such research is messy.

It is also dynamic: just as technologies evolve, so will attitudes.

But it is crystal-clear on one thing.

The impact of ChatGPT and other AIs will depend not just on what they can do, but also on how they make people feel.
