Social media posts reveal a lot about the posters. That is why some agencies look at job candidates’ or security clearance applicants’ social media accounts. Now research shows how monitoring social media posts can reveal indicators of suicide … and therefore help prevent it. The Federal Drive with Tom Temin spoke with the man doing the research: Harvard psychology professor Matthew Nock, Ph.D.
Interview Transcript:
Tom Temin And you’ve been studying suicide for quite a long time. So this sounds like it joins a long list of indicators that, if people are sensitive to them, can maybe help prevent suicide. What are some of the indicators besides social media? And we’ll get to the details there in a moment.
Matthew Nock So I’ve been at Harvard studying suicide for 20 years now. We’ve been trying to understand suicide in the general population, among adolescents, among military service members, veterans and so on. And there are some key risk factors across all groups we’ve examined, probably most prominently the presence of mental illness. About 90 to 95% of people who die by suicide have some form of mental disorder before their death. And usually it’s what we call co-morbidity or multimorbidity, which refers to having two, three, four mental disorders at once. Depression is the one that people think of most, and it has the strongest relationship with suicidal thinking. But other disorders, like alcohol use, substance use and aggressive behavior, what we call intermittent explosive disorder, are more predictive of suicidal behavior, that is, of acting on those thoughts. And we think it’s really the combination of those factors that puts people at elevated risk.
Tom Temin All right. So looking at social media postings, that’s kind of once removed from observation of the person themselves. And it may be that the people closest to the person don’t even see those posts, right? They can see the outbursts. They can see the alcohol consumption. They can hear direct statements, “I’d like to kill myself,” or something along those lines. So tell us about the research in social media. And you used Army subjects here.
Matthew Nock Yes. So this work with social media, trying to better understand and predict suicide, builds on decades of work on how people talk about suicide. For decades, we’ve known that about two-thirds of people who die by suicide told someone ahead of time that they were thinking about death or dying or wanting to kill themselves. So we’ve long known that people often give us signals that they’re thinking about suicide, that they’re at risk, and people don’t really know how to respond. Research has also shown that about 80% of people who die by suicide denied suicidal thoughts or intentions in their last communication before dying, which makes it understandable that people wouldn’t know what to do. When should we act on someone’s talking about suicide, and when shouldn’t we? What social media does is provide a platform for capturing all of that information and systematizing how we scan for it and how we respond to it. And so for the past few years we’ve been working with a social media platform, I think this is what you’re referring to, called Rally Point, which is sort of Facebook meets LinkedIn for military service members and veterans. It’s a place where people can go and post about what they’re experiencing, or how they enjoy fishing and hunting, and questions about the military and so on. And occasionally people will post about suicide. So we’ve been building machine learning classifiers, basically algorithms that sweep over posts and, in an automated, computerized way, identify the posts of people who are having suicidal thoughts and posting about them, or posting hints about them, and intervening in real time to try and reach out to those folks and save them.
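To make the kind of automated sweep Nock describes more concrete, here is a minimal, hypothetical sketch of a text classifier that scores posts for concerning language. Nothing here reflects the researchers’ actual model; the training examples, phrases, and library choices (scikit-learn’s TF-IDF features and logistic regression) are illustrative assumptions only.

```python
# Illustrative sketch only, not the researchers' actual model: score posts
# for possibly concerning language so that high-scoring ones can be routed
# to a human reviewer. The tiny training set and phrases are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts: 1 = concerning, 0 = not concerning.
train_posts = [
    "I don't want to be here anymore",
    "thinking about ending it all tonight",
    "great weekend fishing with my unit",
    "any tips on transitioning to civilian jobs?",
]
train_labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression, a common text-classification baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_posts, train_labels)

# Sweep over new posts and print a risk score for each; in a real system a
# threshold tuned on held-out data would decide which posts a human reviews.
new_posts = [
    "anyone else love hunting season?",
    "I can't keep doing this anymore",
]
for post, prob in zip(new_posts, model.predict_proba(new_posts)[:, 1]):
    print(f"{prob:.2f}  {post}")
```

In practice a model like this would be trained on far more data and evaluated carefully before any post is acted on; the sketch only shows the sweep-and-score pattern.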
Tom Temin And what are some of the things that you can see on Rally Point? I mean, when people discuss suicide, they could say, gee, one thing we want to do is help our comrades avoid suicide. That’s one thing. But do people express thoughts on Rally Point, as postings, that could be indicators that the poster is thinking about suicide?
Matthew Nock Absolutely. So we’ve built these classifiers that are Rally Point-specific, in a way, in that we capitalize on the fact that many military service members and veterans will use language that’s a little bit different from what people use in the general population. For instance, there’s a longstanding statistic that there are 22 suicides per day among veterans. And so someone might post, “I’m about to be one of the 22.” In the general population, that doesn’t mean so much. But on this site, we know that number is often used to refer to suicide. And so those are posts that we will flag for humans to take a look at and determine: is this poster at risk of suicide, or are they just posting about suicide? And right now, when the models flag a post, humans from Rally Point will intervene in real time. We’ve had dozens of cases so far where we found people who were suicidal and intervened to try and keep them safe. Our next research steps are building automated interventions that suggest to other people that they reach out to the person at risk, or suggest to the person at risk that they take steps to try and keep themselves safe, so we can really scale this up and potentially share it with other platforms.
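As a rough illustration of the platform-specific flagging Nock mentions, the sketch below layers a hypothetical phrase rule (references to “the 22”) on top of a generic model score and routes anything flagged to a human reviewer. The patterns, function names, and threshold are assumptions made for illustration, not Rally Point’s actual rules.

```python
# Illustrative sketch only, not Rally Point's actual system: combine
# domain-specific phrase rules with a statistical score, and queue anything
# flagged for a human reviewer rather than acting on it automatically.
import re

# Hypothetical patterns capturing military/veteran-specific references to
# suicide; real patterns would come from clinicians and labeled data.
DOMAIN_PATTERNS = [
    re.compile(r"\bone of the 22\b", re.IGNORECASE),
    re.compile(r"\b22 a day\b", re.IGNORECASE),
]

def needs_human_review(post: str, model_score: float, threshold: float = 0.5) -> bool:
    """Flag a post if either a domain-specific rule fires or the model score is high."""
    rule_hit = any(pattern.search(post) for pattern in DOMAIN_PATTERNS)
    return rule_hit or model_score >= threshold

# Example: the phrase alone triggers review even if a generic model scores it low.
print(needs_human_review("I'm about to be one of the 22", model_score=0.1))  # True
```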
Tom Temin We’re speaking with Dr. Matthew Nock. He is chairman of the psychology department at Harvard University and a research scientist at Massachusetts General Hospital. Which leads to my question: Can this be operationalized in a way that doesn’t violate the person’s privacy? Because the best intentions in the world can run afoul of privacy statutes. And whom would you alert, given the linked nature of these accounts? Almost no one would not be connected in some way, even if a couple of times removed from the original poster.
Matthew Nock It’s a very good point you raise. And there are some well-known efforts where well-intended steps have gone awry. There was a platform that would find people on Twitter who were at risk for suicide or in crisis and reach out to people in their network. And the effort was stopped in just a few days because it backfired. You know, someone would say, I’m being bullied, I’m going to kill myself. And the app would reach out to the bully and say, hey, this person’s going to kill themselves. And the bully might say, good, you should kill yourself. So it was well-intended in terms of finding people and trying to help them, but it didn’t work the way it was intended. So we’re really mindful of the fact that we have to do research, we have to do experiments, and we have to see: is this thing that we think is helping people actually helping, or is it doing harm? We want to make sure to prevent that. We’re consistently motivated by two things. One is that people at risk often don’t have access to care. This is true of veterans. It’s in a way true of military service members: they have access to care, but they’re often not using it, because they’re encouraged not to communicate to other people that they’re having mental distress, that they’re having thoughts of suicide, for fear of being discharged from the service. So they’re not getting ready access to care. And the second is peer support. Service members and veterans often say that when they’re in distress, they don’t want to go to sort of traditional clinical channels. They don’t want to go to their doctor. They don’t want to go to their supervisor. They want to talk to peers. They want to talk to their friends, their comrades, their partners. So what we’re trying to do is capitalize on that and get people access to care through their peer support network.
Tom Temin And just a detailed question. You mentioned, you know, “I could be one of the 22 today” as an obvious post whose meaning is known to people in the military. What are some other signs for those who might have a suspicion about someone, or who just care about other people and are reading these posts? What are some other things, maybe a little bit more subtle, to look out for?
Matthew Nock Yeah, that’s a great question. There was a work group a few years back of suicide researchers who were focused on understanding the warning signs for suicide. There are all sorts of efforts and little cards that are handed out saying, here are the things to look out for: a person becomes disheveled in their appearance or acts a little down. And what the group realized is there’s really no evidence for any of those really subtle things. The big warning signs to look out for are people talking about not wanting to be around anymore, thinking about death, thinking about dying or talking about suicide. So I would encourage people to look out for those things. And it’s often difficult. You know, having studied suicide for well over 20 years, it’s difficult for me sometimes if I have a friend, coworker or patient who suddenly mentions something about not wanting to be around anymore. It can be stressful just to broach the subject and ask. But I would encourage people to ask: Are things so bad that you’re thinking about suicide? There’s lots of evidence showing that asking people about suicide, talking about suicide, does not make people more suicidal. That’s a big concern: if I ask the question, maybe I’ll put the idea in someone’s head. And that’s just not the case. So given my earlier statistic that two-thirds of people who die by suicide told someone about it, I would really encourage people, if someone around you suddenly hints, or explicitly states, that they’re thinking about suicide, follow up and ask them about it. And I often kind of lead into it: Have you been feeling down? Have you been feeling depressed? Have things been so bad that you’ve thought about not wanting to be alive anymore, or that you’ve thought about suicide? And if so, allow the person to share what they’re experiencing. Listen in an open, concerned way, talk to them about the importance of getting help, and offer to bring them to a local emergency department or a hotline or a local mental health professional for a thorough evaluation. It could save a life.
Tom Temin And what you say about the language means it’s really important to design those artificial intelligence algorithms correctly, because someone could say, “I don’t want to be around,” and they’re referring to a circus that’s making noise, you know, down the street.
Matthew Nock Absolutely.
Tom Temin The context is maybe the harder thing to get at than the specific words.
Matthew Nock Right. When we, or anyone, build these classifiers, these machine learning classifiers, you want to balance false positives and false negatives, as we say. A false positive, you know, is when the flag goes up saying here’s something we’re concerned about, but that’s false; the person is talking about the circus. This is the same logic that is used in creating spam filters: is this spam or is it not? And they work okay. You give the algorithm feedback; you look into your spam folder and say, yep, spam, spam, spam, not spam, and the algorithm learns. We do the same thing here. We try and figure out, is this post from someone in distress? Are they at risk for suicide? And we’re much more okay with false positives than we are with what we call false negatives, where we miss a post about suicide. We don’t want to miss them. So what we do is flag them, and then have a human look through to catch the ones that are false positives and not respond, and only respond to the ones that look like real people in real distress.
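Here is a small sketch of the threshold trade-off Nock describes, using made-up scores and labels: lowering the flagging threshold drives false negatives toward zero at the cost of more false positives, which is tolerable because a human reviews every flag before anyone is contacted. The numbers are illustrative assumptions, not results from the study.

```python
# Sketch of the false positive / false negative trade-off with hypothetical
# numbers: a lower flagging threshold misses fewer genuinely at-risk posts
# but produces more flags for human reviewers to screen out.
def confusion_counts(scores, labels, threshold):
    """Count false negatives and false positives at a given flagging threshold."""
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return fn, fp

# Hypothetical model scores and true labels (1 = genuinely at-risk post).
scores = [0.95, 0.62, 0.41, 0.45, 0.15, 0.08]
labels = [1,    1,    1,    0,    0,    0]

for threshold in (0.7, 0.5, 0.35):
    fn, fp = confusion_counts(scores, labels, threshold)
    print(f"threshold={threshold}: false negatives={fn}, false positives={fp}")

# A lower threshold drives false negatives toward zero; the extra false
# positives are filtered out by the human reviewers, not by the algorithm.
```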