“Okay, let’s get started.” Matt guided me to a chair in front of a wall of monitors.
And by a wall of monitors, I really and truly meant a wall of monitors. He had nine flat-screen computer monitors mounted to the wall, seven of which displayed what I assumed was code being compiled by some background process.
He sat next to me, his jean-clad thigh brushing my bare skin. He leaned close to point to a graph of some sort—basically a bunch of dots—on one of the two closest monitors.
“When do I get to see your prototypes?”
“You mean the AI?”
“That’s right.”
He frowned. “Not this time. What I have to show you won’t make sense, not for your purposes.”
“What does that mean?”
“It means I can show you code, I can show you our design for the neural networks, but you can’t interact with it in any meaningful way.”
“Oh.” I cast him a suspicious glance. “You’re not trying to get out of our deal, are you?”
He gave me a small smile and shook his head. “Nope. I want your questionnaire data, and this is the only way I can get it.”
“Ha!”
“Look.” He turned toward his monitor. “I thought we’d start here. This is a scatterplot of women in their thirties, displaying trends in responses. And you can see here how the responses are clustered, giving us prototypical subsets. There are four main types of respondents, represented by four different colors. Now, down here, below, you can see how the responses to our interview are also clustered, except the colors are mixed.”
“What does that mean?”
“That means a woman’s demographics and responses via the dating website data—which determine the original cluster—don’t allow us to predict how she will respond to our interview, and therefore what she most values in a partner.”
“Is that bad?”
He tilted his head back and forth in a considering motion. “No. Not bad. There’s not really a bad. Just surprising.”
Matt continued showing me scatterplot graphs, analyses, some raw—de-identified—data, all the while munching on my macaroons. I didn’t detect any of his baiting and belligerence from two weeks ago. Perhaps the cookies had effected this change in attitude. Or maybe he really did want my questionnaire data that badly. Whatever the reason, I was relieved by his easygoing manner.
He showed me how his team was attempting to create personality algorithms for their AI, based on how a woman responded to the interview. It was fascinating, though I wasn’t sure I comprehended all of it, and by the time we were wrapping up, my brain was exhausted.
“We’re not pursuing a DeepMind AI, not yet. Emotional intelligence is our primary aim.”
“DeepMind? What’s that?” I glanced up from my notes.
“That’s—well, how do I explain this—that’s Google’s AI.” His expression became conflicted. “It’s . . . well, it’s advanced. And the simulations they’ve run so far have shown fascinating, if disturbing, results, none of which have been published in a peer-reviewed journal as of yet.”
“What do you mean, disturbing?”
“It becomes aggressive when competing for scarce resources, but cooperative when cooperation is in its best interest,” he said starkly. “It wasn’t taught that behavior; DeepMind learned it. Self-taught.”
“Interesting.”
“Right. Our prototype won’t learn to protect itself from harm, or compete for resources. It won’t be self-serving, like DeepMind. We’ve specifically designed it to eschew ego.”
“But without ego, will it have self-worth?”
“No,” he responded simply.
I frowned, wincing slightly. “Don’t you think that’s a bad idea?”
“Why?” He looked curious.
“I mean, assuming you meet your aims, the implications for the people, the humans, who own this robot are somewhat concerning. People who choose this robot as a companion, as a life partner, won’t have any demands placed upon them. They’ll never have to be unselfish.”
“Exactly.” Matt acted as though I’d just answered my own question.
“No. Not exactly,” I argued, feeling deep down that the idea of creating substitutes for humans that were devoid of self-worth was dangerous. “What if people start mistreating their robots? Purposefully?”
“Mistreating a robot?” Matt echoed, as though I’d spoken a different language, and then a sly grin spread over his features. “You mean like, pushing its buttons? Get it?”
I had a hard time fighting my smile at his goofiness. “No. I mean—”
“Or playing something other than its favorite music, which everyone knows is heavy metal.”
I groaned, laughing and shaking my head. “Oh wow. That was impressive.”
“Thank you, thank you.” As he examined my face, his smile deepened and his eyes warmed, as though he was both surprised and pleased by my laughter. “Sorry for interrupting, I just have a million robot jokes and no one lets me tell them.”
“You can tell them to me, anytime.”
“Good to know.” He nodded slowly, inspecting me with his lingering smile, like I was something different. We swapped stares for a few protracted seconds, during which I admired how humor, being funny on purpose, did something wonderful for his features.
Eventually, he shook himself, clearing his throat and nodding once deferentially. “I’m sorry, I interrupted you. You were saying, about mistreating robots.”
“Oh, yes. What about ethics? Have you or any of your colleagues considered developing a regulatory board or oversight system for the treatment of robots or AI?”
Matt flinched back, his eyes wide, and stared at me like I was nuts. “No. Why would there be?”
“Are you serious?”
“Yes. Regulation only slows down technological advancement. Why would anyone want to be regulated?”
“To ensure that AI are being used ethically—”
He shook his head. “You can’t mistreat a blender. If you break it, that’s on you. You haven’t done anything ethically questionable.”
“Fine. Not all robots. I’m talking specifically about your AI. Its entire point is compassion, correct? So what about mistreating it? Taking it for granted. Beating it. Insulting it. Whatever.”
“If a person damages their Compassion AI they’ll have to get it fixed or buy a new one.”
“That’s not what I mean.” What did I mean?
“I suppose we could make the cost prohibitive, to discourage damaging the device,” he suggested haltingly, still looking at me with concern. “But, Marie, you do understand that artificial intelligence is, in fact, artificial. Right? It doesn’t have actual feelings.”
I glowered at him, but before I could respond, Derek interrupted.
“Hey, are you two finished? Want to grab lunch?” Derek stuck his head in the door. His eyes bounced between us.
I stirred, glancing at my watch. It was now past lunch; we’d been reviewing and talking about Matt’s data for over three hours.
“Oh no.” I stood, shoving my notepad in my bag. “I have to go.”