Smile: AI will analyse you during this meeting…

This article was also posted on LinkedIn. It’s an extended version of something I wrote in my Antonym newsletter.

A few months ago, apps like Otter and Fireflies started appearing in more meetings. Now there’s a bloom of similar apps. Being an inveterate tester of tech, I’ve turned up to Zoom meetings where AI bots have outnumbered the humans (many of them join automatically if you forget to turn them off).

We’ve used Otter.ai, a transcription app, at Brilliant Noise for years to analyse research interviews and create notes and content from webinars.

These apps are obviously useful – quick summaries and action points make follow-up more likely to happen, and key decisions more likely to be communicated (and remembered).

However, we need to agree on rules for their use. Right now, we show up with them and expect everyone to be happy that we have a robot sidekick tagging along. But it’s important to check whether everyone is comfortable with the session being recorded, so we’ve started mentioning it with a friendly disclaimer in meeting invitations:

“We use an AI note-taking bot to record these meetings. Recordings and transcripts are used to prepare notes and are deleted regularly. It’s not everyone’s cup of tea to have a robot in the meeting, though. If you don’t want this for any reason, we will declare the meeting ‘Bot-free’. We won’t mention who or why.”

Our policy on including bots in meetings:

  • Everyone consents and gets the data.
  • Discreet opt-out available.
  • Align with company privacy policies and local laws.

AI etiquette isn’t just about personal preference. There are serious security and ethical considerations waiting a little way down the road.

Some AI helpers aren’t just writing up minutes. They are analysing us.

My current favourite is Fathom Video. Fathom has a little feature called “Monologuing”, which tells you if you – or someone else – are speaking for a long time without breaks. Useful?

At first, this was slightly irritating, as I was indeed monologuing and felt told off. I can see how this prompt might be a useful nudge to shut up and listen (although I think I was pitching an idea that first time, so cut me some slack, bot!).

I saw one the other day, Read AI, that hinted at something less cheerful. Read AI brings data and analytics to note-taking and will tell you your tone and sentiment. This could be helpful, but only in some contexts and for some personalities.

In the screenshot from the company’s recruitment use case demo, you can see the analysis of all the participants’ engagement and sentiment.

An AI note-taking app offering metrics on a job interviewee — it scores “charisma” and “engagement”.

On one hand, comparing human interviewers’ impressions of a candidate with an AI’s is useful. But there are dangers – people deferring to the “data view” without accounting for bias. Facial recognition and similar technologies are often biased or mistaken when looking at non-white faces.

I’m often told I look angry during meetings, when I just have resting-bitch-face syndrome and the lesser-known thinking-ogre-face syndrome.

The descent from time-saving boon to “voluntary” psych evaluation can be imagined as a slippery slope of questions:

  • “Do you mind if my AI bot takes notes in this meeting?”
  • “Can our app measure your sentiment and interest in the meeting by analysing your facial expressions?”
  • “Hey everyone. I should mention that it will flag lies and evasion, but we’re okay with that, right? No secrets in this team.”
  • “Can I use this recording to create a virtual version of you to read your proposal back to you?”

AI-generated meeting notes are a new thing, but they’re getting to a point where we’ll need to negotiate, or perhaps legislate, for fair use.
