Article Published: 9/29/2025
Counselors play a vital role in supporting students in crisis. As schools increasingly adopt artificial intelligence (AI)–based monitoring systems to identify suicide risk, it is essential for counselors to understand the opportunities and challenges these technologies present. While AI offers proactive capabilities, it also raises important concerns around privacy, equity, and clinical appropriateness. A trauma-informed, ethical response begins with a deep understanding of how these tools function—and how frequently they may generate false positives.
Several companies, including Gaggle, GoGuardian (maker of Beacon), Securly, Lightspeed, and ManagedMethods, provide software that monitors student activity on school-issued devices and networks. These tools use AI to scan for keywords or behavioral patterns linked to self-harm, suicidal ideation, or violence. Alerts are then sent to designated staff members, and some systems also provide built-in real-time supports, such as pop-up mental health resources.
“This is about early detection and proactive intervention,” says Jeneifer Threadcraft, NCC, LPC, owner of Positive Peering in Snellville, Georgia. “The goal of these systems is to identify the early warning signs of a student at risk for self-harm or suicide before a crisis can occur.”
Threadcraft points to Bark, a free program that scans school-issued emails, chats, and files for a wide range of potential issues, including threats of violence, cyberbullying, and references to drugs or alcohol.
Beyond monitoring, Bark provides essential tools for school administrators. Its web-filtering feature allows schools to control the online content students can access, blocking specific websites or entire categories, from social media to online gaming. These filtering policies can be enforced whether a student is on the school network or at home.
Threadcraft is a proponent of Bark and similar platforms because they pair AI with human safety experts who review flagged content, improving accuracy and enabling a swift, informed response.
According to a systematic review available through the National Library of Medicine (NLM), AI tools demonstrate between 72% and 93% effectiveness in identifying suicide risk when analyzing data from social media and health records; however, ethical and implementation concerns persist (Sherekar & Mehta, 2025).
AI monitoring can create a sense of surveillance among students and compromise their privacy. Some students may avoid searching for mental health information online for fear of being flagged, which can erode the trust that counselors work diligently to establish (Collins et al., 2021).
“The use of AI-powered software for student monitoring is a significant source of conflict with student privacy,” says Threadcraft. “It forces a complex balancing act between the desire to ensure student safety and the fundamental right to privacy, a debate with ongoing legal, ethical, and social implications.”
Furthermore, marginalized students, including LGBTQ+ youth, English language learners, and those from low-income backgrounds, are at higher risk of being misidentified due to algorithmic bias and cultural insensitivity embedded in data sets or decision-making logic. Innocuous activity, such as researching a paper on depression, may trigger alerts and create false positives. In the absence of proper context or clinical support, such incidents can lead to unnecessary stress or mislabeling rather than therapeutic intervention.
Technology providers often highlight individual success stories, but data on long-term outcomes and accuracy is limited. Counselors must advocate for responsible implementation, transparency, and adequate follow-up procedures. Many schools lack the clinical infrastructure needed to act on alerts appropriately, and this is where counselors’ leadership becomes essential (Collins et al., 2021).
Beyond monitoring software, several clinically validated suicide risk assessments are available in digital formats. Counselors should ensure these tools are used within a comprehensive support framework—not merely to flag concerns, but to connect students with meaningful care. Two examples are the Computerized Adaptive Screen for Suicidal Youth (CASSY) and the Columbia-Suicide Severity Rating Scale (C-SSRS).
CASSY is a cloud-based tool that draws on a bank of 72 questions, adapting to student responses in real time to screen efficiently for suicide risk.
C-SSRS is a standardized questionnaire used to assess the severity and immediacy of suicide risk. It is available in different versions, from a short screener (2–6 questions) to more detailed versions for clinical and research use.
Traditional anonymous reporting channels also remain a valuable component of a comprehensive prevention strategy. Many states offer anonymous tip lines or web portals where trained staff evaluate submissions and escalate urgent cases as needed.
Threadcraft recommends a blended approach that combines AI with more traditional methods of monitoring students for suicide risk. Counselors can partner with school leaders to integrate these modern modalities with existing mental health services, ensuring students are aware of both digital tools and in-person supports.
The most significant advantage of AI, according to Threadcraft, is its ability to process vast amounts of data at incredible speed. Unlike traditional monitoring, which relies on limited staff time and resources, AI can instantly scan millions of emails, chats, and documents for concerning content. A student expressing suicidal thoughts can potentially be identified in minutes rather than days. Traditional monitoring, she notes, can be a “logistical nightmare,” consuming time that is better spent on instruction and student support.
By contrast, Threadcraft believes the power of human connection is the biggest benefit of traditional methods. In-person assessments create a safe space built on trust and rapport, and a counselor can pick up on subtle cues that technology misses, such as body language, tone of voice, and nonverbal expressions of distress.
“The biggest mistake to avoid is a one-size-fits-all approach,” concludes Threadcraft.
Jeneifer Threadcraft, NCC, LPC, is the founder and CEO of Positive Peering, Inc., which she established in 2004. She received a master's degree in psychology from Grand Canyon University in 2016. Threadcraft specializes in trauma and suicide prevention and is a member of the Georgia Murder Suicide Response Network Team, where she provides group and individual therapy. Her passion lies in serving the community by providing food and clothing to families in need.
Opinions and thoughts expressed in NBCC Visions Newsletter articles belong to the interviewees and do not necessarily reflect the opinions or practices of NBCC and Affiliates.
References
Collins, S., Park, J., Reddy, A., Sharifi, Y., & Vance, A. (2021, September). The privacy and equity implications of using self-harm monitoring technologies. Student Privacy Compass. https://studentprivacycompass.org/resource/self-harm-monitoring/
Sherekar, P., & Mehta, M. (2025). Harnessing technology for hope: A systematic review of digital suicide prevention tools. Discover Mental Health, 5(1), 101. https://doi.org/10.1007/s44192-025-00245-y