Good Bot, Bad Bot | Part I: Mental Health and Bot Therapy

For the next few weeks, the Endless Thread team will be sharing stories about the rise of bots. They’re all over social media platforms, chatrooms, phone apps, and more. How are these pieces of software — which are meant to mimic human behavior and language — influencing our daily lives in sneaky, surprising ways?

First up, our co-hosts delve into the history of ELIZA, the world’s first chatbot therapist. Why did this computer program’s creator have a lot of complicated feelings about the development of AI? We also ponder the bigger question: can AI help us cope with mental health issues?

Present notes

Assist the present: 

We love making Countless Thread, and we wish to have the ability to hold making it far into the longer term. If you’d like that too, we might deeply recognize your contribution to our work in any quantity. Everybody who makes a month-to-month donation will get entry to unique bonus content material. Click on right here for the donation web page. Thanks!

Full Transcript:

This content material was initially created for audio. The transcript has been edited from our unique script for readability. Heads up that some parts (i.e. music, sound results, tone) are more durable to translate to textual content. 
Ben Brock Johnson: I’m back in school for the fall semester. Or at least for today. Trying to figure out where I’m going in order to meet a professor during office hours.

Hey can you tell me, is this the new computer science building? Do you know?

Helpful campus pedestrian: Yes it’s this building. This one that you’re right here.

Amory Sivertson: I wouldn’t have pegged you for an office hours student Ben, I gotta say.

Ben: Yeah, but I love hanging out with everyone. Even professors. And especially professors who work in nice new buildings.

Ben: Beautiful building.

Helpful campus pedestrian: Have you been here before?

Ben: No.

Amory: Hm, okay, that does check out. I didn’t like school…per se.

Ben: You’re a “school drools” kind of person?

Amory: All the time. So much drool. What were you talking to Dartmouth College assistant professor Soroush Vosoughi about?

Ben: Well first, how cool the atrium of his building is.

Ben: What does it look like to you?

Soroush Vosoughi: Huh. Like a beehive. (Laughs.)

Ben: It does, right? Yeah.

Soroush: Yeah.

Ben: Lots of individual cells that make up a whole or something.

Soroush: Exactly. Yeah. A whole that is bigger than the sum of the individuals. Yeah.

Amory: A nice sounding comment, but from his answer I’m guessing Soroush is not a professor of architecture.

Ben: Correct. Though in a way, he does deal with certain kinds of architecture. The careful assembly of things.

Soroush: I work on machine learning and natural language processing, and I do a lot of work with social media data.

Ben: I’ve come here to get Soroush to tell me about a project he and some grad students recently worked on that’s kind of on the academic bleeding edge of what machines can do with social media data. A program designed to predict the onset of serious mental health challenges.

He takes me up a floating staircase to the top floor of the building. Past floors of hardware labs and software labs. An expensive looking remote controlled sailboat.

Soroush: It’s actually an autonomous sailboat.

Ben: Is it really?

Soroush: Yes. It learns how to sail itself.

Ben: We get into Soroush’s office, where the central air is on. Never good for an audio interview.

Ben: Is there any way to turn that air off?

Soroush: Umm.

Ben: The air is controlled by software. Centrally. And ironically, this computer science professor is currently powerless to change it. He says there’s an angry email thread about this very issue in the Computer Science faculty’s listserv right now.

I start to do what any student does when they’re looking for extra credit with a professor. Complimenting and asking him about the objects on his bookshelf. Important books about algorithms, machine learning, and Isaac Asimov’s Foundation series. Which any sci-fi nerd knows. He’s got something made by a 3-D printer on there.

Soroush: This is a prototype of a Hodor holding the door. Like it’s a doorstop, actually.

Ben: Solid GameStop memes reference.

Soroush: Yes, exactly.

Ben: There’s a homemade radar, built with coffee cans, brain puzzles, a Stirling engine, which uses automatic temperature differential detection to turn heat into energy.

Soroush also has this beautifully designed hand-sized box. With a simple mechanical switch on it. No explanation. When you flip the switch on, a robotic hand pops up immediately and flips the switch back off.

Ben: (Laughs.) I love these.

Soroush: There are a lot of metaphors around this one is, you know, maybe the uselessness of technology. You fixing a problem that doesn’t exist, right? I mean, just goes back.

Ben: It’s a reminder of what not to do.

Soroush: What not to do, exactly, is a useless box. That’s why it’s, you know.

Amory: Alright, I think you’ve got your extra credit, Ben.

Ben: True true true true true. Time to get down to business. We start with 100-level stuff.

Ben: What’s a robotic?

Soroush: Hmm.

Ben: Amory, care to reply earlier than Soroush does?

Amory: Hm. I’d say a robotic performs a activity mechanically and routinely and perhaps generally extra effectively than we are able to?

Ben: Not dangerous, not dangerous.

Soroush: So a robotic, I feel most individuals will consider a mechanical being, however the definition of a robotic is definitely extra basic than that. Something that does a activity {that a} human does, however in an automatic vogue, I might name a robotic.

Ben: Soroush began out engaged on mechanical beings at MIT. Robots that lifted issues, carried out bodily feats. However now, he’s extra targeted on a selected a part of robots. What he would perhaps describe because the mind. And this complete host of packages which frequently get referred to as by barely shorter identify, bots.

Earlier this 12 months, Soroush and a few of his college students began scraping information off of Reddit. An enormous variety of feedback, from hundreds and hundreds of actual Reddit customers, to search for indicators of psychological sickness amongst these customers. They have been doing this factor within the on-line world due to one thing Soroush was seeing in his offline world.

Soroush: As a professor at Dartmouth, I’ve had a variety of conversations with college students, each graduate and undergraduate, who, have advised me that the tradition they arrive from is such that they nonetheless do not feel comfy speaking about psychological well being points, and so they really feel stigmatized to truly even say that, hey, I would, you recognize, be feeling anxious or perhaps barely, barely depressed.

Ben: Are you able to say extra concerning the particular cultures, or would you slightly hold it basic, it is as much as you.

Soroush: Effectively, I can, I can provide you — so typically talking, I feel a variety of Asian cultures and I imply each East and West Asia, not simply East Asian. So folks in Center East, in East Asia, South Asia.

Amory: Soroush and different researchers constructed a bot to assist folks from Asian cultures acknowledge they have been having a psychological wellness problem? By looking their posting information and in search of indicators of psychological stress? That’s wild. And likewise difficult.

Ben: Sure. And one of many issues that’s so fascinating about this. Is that hundreds of thousands of individuals all around the web are going round spending their days, I feel largely pondering that they’re interacting with different folks on-line. Certain all people’s heard somebody like Elon Musk complain that there may be too many Twitter bots. However increasingly persons are a part of this sophisticated, large, teeming ecosystem of people and digital machines interacting with one another. In apparent methods and type of sneaky methods. For higher and for worse. And we wish to discuss that.

Amory: I’m Amory “definitely not a robot” Sivertson.

Ben: I’m Ben “not a robot” Johnson and you’re listening to Endless Thread.

Amory: We’re coming to you from WBUR, Boston’s NPR station. And we’re bringing you a new series about the rise of the machines. Good bot.

Ben: Bad bot. Today’s episode: Bot therapy.

Amory: OK Ben. If a bot lives on the internet, is it really a robot?

Ben: I think by Soroush’s definition, yes. A robot does something a human does but in an automated fashion. But Soroush, who works at the school where the term Artificial Intelligence was first coined, might not even call his creation a bot. He might call it a model.

Soroush: The model itself is the core of the bot. The other part, the input and output, is just plugging it into some kind of a, you know, platform and have it run in real time. So yeah, go ahead.

Ben: Are these, are these the three parts of this kind of bot: input, output, model?

Soroush: That is right.

Ben: And is the model kind of like a road map or an instruction manual or something like that? How would you further describe the model?

Soroush: Yeah, that is a really good question. The, a model, the simplest way to think about it is as a mathematical function that maps the input to the output. So here the input is raw data collected from the real world. You have a mathematical model, that is what we call the model, that can then map it through some transformation to a meaningful output.

Amory: I don’t know man. I don’t know. Model? Meaningful output? Input? Bleep bloop.

Ben: OK so think of one of those really complicated flowcharts right? The input is the beginning of that complicated flowchart and the output is the very end of it. The model is the middle.
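That input-model-output pipeline can be sketched in a few lines of Python. This is purely illustrative; the word-counting “model” below is invented for the example and has nothing to do with the Dartmouth team’s actual system:

```python
def model(comment: str) -> float:
    """A 'model' is just a function mapping input to output.

    Here: raw text in, a toy negativity score (0.0 to 1.0) out.
    """
    negative_words = {"sad", "tired", "hopeless", "anxious", "lazy"}
    words = [w.strip(".,!?") for w in comment.lower().split()]
    if not words:
        return 0.0
    return sum(w in negative_words for w in words) / len(words)

# Input (raw data from the real world) -> model -> meaningful output.
score = model("I feel sad and tired today.")
```

Everything interesting lives in the middle of the flowchart; swap the toy function for a trained language model and the shape of the pipeline stays the same.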

Amory: Bleep bloop boop.

Ben: So let’s actually take a look at a real world example. And it’s a relevant one. It’s called ELIZA. ELIZA was a natural language processing computer program that was created by this legendary MIT professor named Joseph Weizenbaum.

Amory: Natural language processing is…

Soroush: Natural language processing, which all that means is you’re basically creating models that deal with naturally spoken languages, uh, has really advanced in the last decade.

Amory: And ELIZA was a computer that you talked to. But not just any old Regular Joe computer program. A therapist computer. And it wasn’t built in the last decade. It was built in 1964.

Ben: So to try it we need to get into a time machine, but just kidding.

Amory: I mean, I wish. But I would go into the future. Which is female, for the record.

Ben: Fair fair, but so was ELIZA. Supposedly. And people are still running her code on the web. Amory, shall we go to the therapist?

Amory: Let’s go to the therapist.

You sent me an Eliza link. Okay, it says talk to Eliza by typing your questions and answers in the input box. She says, “Hello, I’m Eliza. I’ll be your therapist today.”

Ben: What should we say?

Amory: (Typing.) Hi, Eliza. I’m feeling pretty lazy today. Okay, she says, “Do you believe it is normal to be feeling pretty lazy today?” What do you want to say back?

Ben: No.

Amory: (Typing.) No, not one bit. She says, “Are you saying no. Just to be negative?”

Ben and Amory: (Laugh.)

Ben: Let’s say, maybe I don’t know.

Amory: (Typing.) Maybe, I don’t know. That’s just how I feel.

She says, “Do you wish to be able to know that’s just how you feel? Question mark.” All right, I’m onto you ELIZA. This is the thing where you’re gonna say, you keep asking a question back. (Typing.) Yes, I would like to know the source of what’s making me feel lazy, and like I can’t complete things.

Wow, that was a rapid fire response. She says, “How do you know you can’t complete things.”

Ben: Just ask my editor.

Amory: (Laughs.) Just ask my boss. She says, “Would you say that you have psychological problems?”

Ben: Well, how about you say, would you say that you have psychological problems?

Amory: (Typing.)

Ben: What do you got?

Amory: Okay. She says, “Oh, dot dot dot. Let’s move on to something else for a bit.”
Say that, “I have psychological problems? Question mark.”

Ben: How about, you make me depressed Eliza?

Amory: (Typing.) Oh my God, “What makes you think I am making you depressed, Eliza?” The bot is self destructing as far as I’m concerned.

Ben: (Laughs.)

Amory: Like, she doesn’t know her name. She’s, you know, it’s like I know I am. But what are you? What’s going on?

Ben: She’s kind of negative in this therapy session. Kind of a negative vibe, no?

Amory: Yeah. I mean, we weren’t necessarily giving her the best material to work with, but the most helpful thing that I read in this interaction is is her saying, “How do you know you can’t complete things?” Yeah, maybe I’ll just say that to myself throughout the day.
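A session like the one above takes surprisingly little machinery to reproduce. ELIZA worked by keyword matching plus pronoun “reflection,” with no understanding underneath. Here is a minimal Python sketch of the technique; the handful of rules is illustrative, not Weizenbaum’s original script:

```python
import re

# First-person words get "reflected" into second-person ones.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# (pattern, response template) pairs, tried in order.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Do you believe it is normal to be {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bno\b", re.I), "Are you saying no just to be negative?"),
]

def reflect(fragment: str) -> str:
    # Swap "I" for "you", "my" for "your", and so on.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            groups = [reflect(g.rstrip(" .!?")) for g in match.groups()]
            return template.format(*groups)
    return "Please go on."  # stall when no keyword matches
```

The trick, as the hosts discovered, is that every reply is either a template echo of your own words or a stalling question; there is no state and no model of the user at all.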

Ben: OK. And we’ll get back to ELIZA and why that experience is not great. But think of Soroush’s project as an evolution of this decades-old idea, that humans in dialogue with chatbots might be helpful. Because maybe a bot can help us see things that we wouldn’t normally see ourselves.

Amory: And if ELIZA was built something like 60 years ago, then bots should be amazing experts at this! Right? Except no, absolutely not. In fact they suck at it. Because we humans are nuanced as hell. And while robots have been processing human language for a while, really understanding meaning from that language is much more complicated.

Soroush: So it’s easy to, well, relatively easy, I’m going to put that in quotes, to analyze what people say in terms of what they actually say explicitly. But it’s a much harder scientific question to use what people are saying to infer what is the internal psychological state? People know how to infer other people’s states based on the way they talk and emotions and facial expressions, bots don’t. And so that’s a crucial way for bots to learn to infer people’s internal states.

Ben: That’s really interesting. So in a way, you’re talking about a foundational need that bots have, which is deciphering and understanding humans’ underlying emotions.

Soroush: This is known as, in cognitive science, as people sometimes refer to that as theory of mind. And so humans, of course, evolved to do that. So did monkeys, for example. And other primates.

Ben: Over a really, really, really, really, really, really, really long time.

Soroush: Exactly.

Ben: Soroush points back to his office bookshelf where there’s a rock polisher. A gadget that accelerates a natural process somewhat unnaturally.

Soroush: We’re doing something very similar. Where we’re doing what evolution does in hundreds of millions of years. But in a few years, basically.

Amory: Some might say this feels a little like playing God. Accelerating a piece of software’s understanding of the mental state of humans. It’s a bit … yikes?

But we’ve been reaching for the stars on this stuff for a long time, ever since we imagined the future, or imagined people imagining the future.

Soroush: I’m a big science fiction fan, so pretty much all of my research is actually inspired at some level, you know, by science fiction. But this particular line of research, mental states and more importantly, being able to predict people’s behavior. It actually was inspired mainly by reading the Foundation series by Asimov. The core of the series is that there’s a mathematician called Hari Seldon who develops a model called, or a field of study actually called, Psychohistory…

[Foundation clip audio: Psychohistory is a predictive model designed to predict the behavior of very large populations]

Soroush: That is able to predict how societies will evolve in the future based on past historical data.

Amory: In a minute, how Soroush is following in the footsteps of Hari Seldon, making psychohistory real with the individual commenting histories of Redditors.

[SPONSOR BREAK]

Ben: Something is really important to say here: Soroush and his graduate students stopped short of assembling a bot because of the potential implications of building a bot that might detect mental health or mental illness challenges in individual users.

Amory: That’s good! We’re learning. Don’t build Skynet. Maybe just write a paper that imagines what might happen if we did build Skynet.

[Terminator movie clip audio: If we upload now, Skynet will be in control of your military…but you’ll be in control of Skynet, right?!]

Ben: What Soroush did instead was chart how to build the bot. Run the model, the input, the output, and also how to tune that output.

Amory: Tune?

Ben: We’ll get there. For now know this. The team at Dartmouth looked at tons of Reddit users’ publicly available data over time.

Soroush: It’s thousands.

Ben: Okay. Tens of thousands or just thousands?

Soroush: Tens of thousands.

Amory: But their goal wasn’t to have a bot or computer model tell if a group of people were having mental health challenges in the aggregate. Rather, at an individual level. Which again, is hard. Because we’re all, well, individuals. In this computer science area of study, natural language processing, the model has to account for different people talking differently. For example, sarcasm.

Ben: Sarcasm is super hard. Which is why Soroush’s team was applying natural language models in a really specific way.

Soroush: So the model learns that idiosyncratic use of language by each person.

Amory: This admittedly is both similar to what an individual therapist might do over time, learn the complexities of communication in a given patient. But also, something that let’s be honest is a big, big use of time. Hence that computational speeding up of evolution.

Ben: The first thing the team’s model, or bot, does with these big data sets on an individual’s entire Reddit posting history is remove certain kinds of things, like references to particular events, and people.

Amory: Like say, a pandemic.

Soroush: Because we want to make sure we’re not capturing emotions directed towards particular events. But, you know, we want to capture the person’s internal emotions,

Amory: Then, the model uses some pretty complex natural language analysis to discern meaning, or the signal, from the posts.
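That first scrubbing step can be pictured with a toy filter. The team’s actual preprocessing is not spelled out in the episode, and a real pipeline would use a trained named-entity recognizer; this sketch just masks terms from a small, invented blocklist:

```python
import re

# Invented, illustrative blocklist of event/person references to strip,
# so the remaining text reflects internal emotion, not reactions to news.
ENTITY_TERMS = {"pandemic", "covid", "election", "musk"}

def mask_entities(post: str) -> str:
    """Replace blocklisted words with a placeholder, leaving the rest intact."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        return "[ENTITY]" if word.lower() in ENTITY_TERMS else word
    return re.sub(r"[A-Za-z]+", repl, post)
```

After masking, "The pandemic has me feeling anxious." keeps its emotional content ("feeling anxious") while the event reference is gone.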

Ben: This is an area where natural language processing in computer science has really leapt forward in the last decade or so. And Soroush’s team is using the latest and greatest programs to help the bot understand what the user is really saying.

Amory: Where earlier computer programs could detect keywords and phrases, the new computer programs are much more sophisticated.

Soroush: Words and phrases are, of course, informative, but we can actually look at, for instance, the syntactic structure of a post and uh, look at long range dependencies between words and what that means that you might say a word at the beginning of the sentence. Language is complicated like that.

Ben: Right, but there’s a big difference between saying, “I am thinking about killing myself.” And, “Wow, this, uh, high quality gif-maker is really killing it. He reminds me of myself.”

Soroush: Exactly.

Ben: Here’s a big question though.

How do you know if the bot you built works?

Soroush: Yes, that’s a really good question. So for these kind of projects, evaluating, uh, your, your bot is probably actually the most challenging.

Ben: Before the team looked at measuring success they did a lot of testing and tuning of the model. They gave the bot test inputs and waited for the model to give outputs, and if the outputs were off, they actually applied another layer of calculation on the outputs after the model to get more accurate results. Then they looked at two measures of success: whether the bot predicted a user had a mental health issue and later that user joined a mental health focused subreddit, and they also looked at users self-reporting mental health challenges.

Soroush: Surprisingly, a lot of people self-report after a while saying that, hey, I just got diagnosed by, you know, they go to these forums and these subreddits and they say, I got diagnosed bipolar, with bipolar for instance. Right.

Ben: So, the two markers for success from your standpoint are: user joins a mental health related subreddit; user self-reports that either they were diagnosed with a mental health disorder or they’re dealing with a mental health issue.

Soroush: Exactly. And our model would have been successful if we predicted that way before the user actually reports. Again, if we detect it afterwards, it’s meaningless, of course. So it’s about how far in advance you can detect that.
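So the scoring rule amounts to a lead-time check: a prediction only counts if it lands before the user’s own signal. A sketch of that logic, with function and field names invented for illustration:

```python
from datetime import date
from typing import Optional

def lead_time_days(predicted_on: date, reported_on: date) -> Optional[int]:
    """Days of advance warning the model provided.

    reported_on is when the user joined a mental-health subreddit or
    self-reported a diagnosis. A prediction made on or after that date
    is meaningless, so it scores None.
    """
    delta = (reported_on - predicted_on).days
    return delta if delta > 0 else None
```

A January 1 prediction against a March 1 self-report scores 59 days of warning; the reverse ordering scores nothing.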

Amory: This data is of course anonymized in the team’s work. And because Soroush and his team needed to get clearance from an ethics board to even do the work, we did not look at specific users or ask to interview any of them. The team chose Reddit in part because the user post history is publicly available and Reddit provides this data in easy ways for researchers to use without strings attached, a key difference between Reddit and Meta’s Facebook. But you do have to wonder a bit how people might feel about being part of this study.

Ben: To be clear, Soroush isn’t really trying to replace therapists. Create the latest, greatest ELIZA. He’s trying to connect the challenges he spoke about earlier in certain cultures and build a bot that might help counteract what he and some of his students see as unhealthy cultural norms around discussing mental illness or acknowledging it. It could be more of an early warning system.

Soroush: I came to the conclusion that having a, a way for people to not have to voluntarily say, hey, I feel depressed, would be a big help to people coming from these cultures.

Ben: Amory, how would you feel about getting a nudge that you might be depressed by a bot that was reading your entire history of posting on social media?

Amory: Honestly, I’m not as wary of the kind of Big Brother thing that most people are, and maybe that’s a bad thing. But I don’t think it would hurt to have a light shine on my posting behaviors and just to take another look back at them and go oh yeah, I did post some things or say some things, because we just don’t have that perspective ourselves, you know?

Ben: So let’s actually go back to ELIZA for a minute. And ELIZA’s creator.

Amory: Hmm. Joseph Weizenbaum.

Dag Spicer: Joseph Weizenbaum and his family emigrated to the United States in the 1930s. They saw what was coming with the Nazi Party and Hitler.

Ben: That’s Dag Spicer. Who we hung out with for a while. He’s actually not in Dartmouth, New Hampshire. He’s on the opposite side of the country as Soroush.

Dag: I’m Dag Spicer, senior curator at the Computer History Museum. And we’re in Mountain View, California, right now.

Amory: Dag is kind of a special guy with kind of a special name. He’s been at the Computer History Museum for almost 30 years. And he knows everything about computers. And he also knows a really good bit about ELIZA and about ELIZA’s creator, Joseph Weizenbaum, who worked on a few computers which had a large influence on how we live and interact with machines. Even before ELIZA.

Dag: Weizenbaum and others worked on this computer called ERMA, which was a machine for processing checks. Well, how did it do that? Well, the really cool thing they came up with was this font called MICR, magnetic ink character recognition, that we can all still see on the bottom of our checks. It’s those weird little shape numbers that you see at the bottom of your check. Those come from ERMA, circa 1953.

Ben: Dag says that ERMA’s influence wasn’t just on those little funny numbers on the bottom of a check. It also put thousands and thousands of check processors, human check processors, out of work.

Amory: And Dag says this had an influence on Weizenbaum.

Dag: He was a technologist who really cared how his work was being used and how the discipline that he was a part of was being used.

Amory: Weizenbaum, who became a foundational mind in artificial intelligence and human computer communication, was worried about the problems we might try to solve, or build, with tech.

Ben: And here’s the funny part. ELIZA, which has been called the very first chatbot, wasn’t actually a serious project. ELIZA was built as a satire. Meant to demonstrate to humans how chatterbots, as they were originally called, might behave poorly.

Amory: Mind. Blown.

Ben: That’s why our therapy session didn’t go so well, Amory!

Amory: We have been played!

Ben: Joseph Weizenbaum died in 2008, a year after the iPhone was released. But Dag says this skepticism of technology was a running theme throughout Weizenbaum’s life.

Dag: It really started with most notably with, with Robert Oppenheimer, who said, you know, or after he created the atomic bomb, lived the rest of his life in regret at what he had done. Right? And he said, you know, technologists have to be on their guard for what he called technologically sweet, quote unquote, problems, because they actually attract you with their challenge. But, if you look at them from a more humane perspective, they might be actually quite harmful.

Amory: We asked Dag what Weizenbaum might think about Soroush Vosoughi’s project scraping Reddit post histories to get a sense of whether users were struggling with mental health issues. He didn’t want to speak out of turn on behalf of Weizenbaum. So we asked him just what he thought.

Dag: My first gut reaction is it is, it’s a bit scary because they’re essentially mood watching. And, you know, there are AIs now that read people’s faces and do the same thing. They’re like, oh, you’re in a bad mood today. You know, they just look at your face. And it’s just such a slippery slope, you know, from there to intervention by, by the state or by anybody. So, you know, it’s always the tradeoff, right? Well, if it saves one life, is it worth—? But, you know, I think I think in this case, I don’t think it’s a good idea.

Ben: Soroush built the model scraping Reddit to find signs of mental illness in individual users’ posts. So he’s not so skeptical. But he does have a big caveat.

Soroush: It shouldn’t be the platforms or government or any other external entity that is running these things and, you know, telling people to go see a therapist or whatnot. It should be a choice by people to run these things privately, and the communications should be private between that tool and the person.

Ben: Whether or not you support Soroush’s team in imagining a world where an opt-in program could help people recognize their own mental health needs and challenges, or you’re more cynical about how a program like that could be used, like Dag Spicer and even, maybe, Joseph Weizenbaum. This stuff is already happening.

Amory: Bots are already dutifully harvesting massive, publicly available datasets, interacting with users, and much more. Sometimes we don’t even realize that our experience of the internet isn’t just people talking to people. It’s increasingly mediated by little pieces of software, trained on the latest and greatest programs. To do all kinds of things. Today: practicing how to predict your mental health issues. Tomorrow: running for political office?

Ben: Next week…

[Preview audio: And of course, being digital, you can keep a record of, of everything that you say and do. So it creates a level of accountability that the current politicians just don’t, don’t have.]

Ben: Good bot.

Amory: Bad bot.

Ben: Endless Thread is a production of WBUR in Boston.

Amory: This episode was written and produced by my co-host Ben Brock Johnson with help from Dean Russell. And co-hosted by yours truly. Mix and sound design by Paul Vaitkus.

Ben: Our web producer is Megan Cattel. The rest of our team is Nora Saks, Quincy Walters, Grace Tatter, Matt Reed, and Emily Jankowski.

Amory: Endless Thread is a show about the blurred lines between digital communities and a useless box. If you’ve got an untold history, an unsolved mystery, or a wild story from the internet that you want us to tell, hit us up. Email Endless Thread at WBUR dot ORG.