1. Why telepractice LSA is now a permanent part of the SLP service-delivery landscape
Telepractice has stopped being the 2020 emergency stopgap it was originally framed as and has settled into a permanent service-delivery option for a meaningful fraction of the speech-language pathology workforce in 2026. The post-pandemic ASHA practice surveys show that roughly one in four school-age SLPs delivers at least part of their caseload through a video platform, with the proportion meaningfully higher in rural states, geographically large school districts, and private practices serving medically fragile children who cannot reliably travel to a clinic. The rural-state effect is the load-bearing one for school districts: in a state where the nearest qualified bilingual school SLP is two hundred miles from the child, a video-platform service-delivery model is not a downgrade from the in-person standard; it is the only realistic way the child receives services at all.
The clinical reality the practice surveys describe is that telepractice LSA — the activity of collecting, transcribing, scoring, and writing up a language sample on a child the SLP has only ever met through a video link — is now a routine part of the assessment workload for the SLPs in the rural-state, large-district, and medically-fragile-private-practice segments. The methodological question the published telepractice literature has been working on for the last six years is therefore not "is telepractice LSA acceptable" but "what protocol decisions make a defensible telepractice LSA possible." The literature has converged on a set of explicit answers to that question, and the answers map onto the six load-bearing problems this pillar walks through one by one.
The 2026 honest framing for the rest of this pillar is that telepractice LSA is methodologically defensible when the SLP makes explicit protocol decisions about audio recording quality, platform choice, parent coaching, environmental setup, the fully-remote transcription-and-scoring workflow, and the defensible remote eligibility report. Skipping any of these decisions — trusting the default platform settings, treating the parent as a passive observer, or omitting the platform-and-environment paragraph from the report — produces a sample that fails on inter-rater reliability, on the procedural-safeguards review, or on both. ConductSpeech and the deterministic calculators on this site are the load-bearing tools that make the protocol affordable on a real caseload, and they are the tools every section of this pillar will refer back to.
The 2026 honest line
Telepractice LSA is methodologically defensible when the SLP makes explicit protocol decisions about recording quality, platform choice, parent coaching, environmental setup, the fully-remote scoring workflow, and the eligibility report. The rural-state, large-district, and medically-fragile-private-practice segments have converged on this protocol because the alternative is no service at all, and the published telepractice literature backs every step.
2. Problem 1: Audio recording quality and the platform compression that destroys downstream transcription
The first methodological problem in telepractice LSA is the one most clinicians underestimate when they first move a sampling activity onto a video platform: the audio that arrives at the SLP’s laptop is not the audio the child actually produced. Every consumer video platform applies aggressive lossy compression to the audio stream, with bandwidth-adaptive bitrate algorithms that drop quality further when the network is congested, and the compression artefacts that result — syllable smearing, sibilant distortion, dropped voiceless consonants, occasional whole-word dropouts on poor connections — destroy the downstream transcription accuracy that the entire LSA workflow depends on. A clinician transcribing from a compressed Zoom recording is not transcribing the child’s actual speech; they are transcribing a degraded reconstruction of it, and the metrics computed from that transcription inherit the degradation.
The corrective the published telepractice literature has converged on is to never rely on the video platform’s built-in recording for the audio that gets transcribed and scored. The 2026 best practice is a parallel local recording on the child’s end — the parent or an in-person e-helper records the child’s audio on a phone or a separate laptop microphone next to the child during the elicitation, while the video platform handles the visual interaction. The locally-recorded audio is then uploaded to the SLP after the session through a HIPAA-compliant file-sharing channel, and that uncompressed file is what gets transcribed. This produces audio quality comparable to an in-person session because it is in fact an in-person recording — the SLP just was not in the room.
For SLPs who cannot get a parallel local recording (the parent does not have a second device, the child will not tolerate an extra mic, the platform is the only option), the 2026 fallback is to choose a platform whose built-in recording uses a higher-bitrate codec and to confirm the recording is happening on the child’s end of the call rather than the SLP’s end. Recording from the child’s end captures the audio before it goes through the network compression layer, which preserves significantly more of the original signal than recording from the SLP’s end after the audio has round-tripped through the platform’s codec. The fallback is still a meaningful step down from the local-recording protocol, however, and the report should acknowledge it.
- Video platform audio is aggressively compressed — the audio at the SLP’s laptop is a degraded reconstruction of the child’s actual speech.
- The 2026 best practice is a parallel local recording on the child’s end — phone, separate laptop mic, or dedicated USB mic.
- Locally-recorded audio is then uploaded after the session through a HIPAA-compliant channel.
- When local recording is not possible, record from the child’s end of the platform call, never the SLP end.
- Document the recording channel in the eligibility report so the reader can audit the audio chain.
- Compression artefacts destroy syllable boundaries, sibilants, and voiceless consonants — exactly the segments the calculators score.
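The audio-chain checks above can be made concrete. The sketch below is an illustrative pre-flight check a clinic script might run on a parent-uploaded recording before it enters transcription; it is not part of ConductSpeech, it assumes an uncompressed WAV upload, and the sample-rate and duration thresholds are placeholder values rather than published standards.

```python
# Illustrative pre-flight check on a parent-uploaded local recording before
# it enters the transcription pipeline. Assumes an uncompressed WAV upload;
# the thresholds below are placeholders, not published standards.
import wave

MIN_SAMPLE_RATE_HZ = 16_000   # below this, sibilant detail is already compromised
MIN_DURATION_SEC = 15 * 60    # placeholder minimum for a full sampling session

def audio_preflight(path: str) -> list[str]:
    """Return a list of human-readable warnings; an empty list means pass."""
    warnings = []
    with wave.open(path, "rb") as wav:
        rate = wav.getframerate()
        duration = wav.getnframes() / rate
        if rate < MIN_SAMPLE_RATE_HZ:
            warnings.append(f"sample rate {rate} Hz below {MIN_SAMPLE_RATE_HZ} Hz")
        if wav.getnchannels() > 1:
            warnings.append("multi-channel recording; confirm the child-mic channel")
        if duration < MIN_DURATION_SEC:
            warnings.append(f"recording is {duration / 60:.1f} min; protocol expects >= 15 min")
    return warnings
```

A passing file returns an empty list; any warning is a cue to re-record or to fall back to the child's-end platform recording and document it.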
3. Problem 2: Which video platform actually supports a HIPAA-compliant LSA workflow
The second methodological problem is platform choice, and it is one where the consumer-grade defaults that work for an internal team meeting do not work for clinical service delivery. A defensible telepractice LSA requires a video platform with three load-bearing properties: a signed Business Associate Agreement (BAA) with the practice or district, end-to-end encryption of the video and audio streams, and an audit-logging capability that lets the SLP demonstrate the session was conducted on a HIPAA-compliant channel if the procedural-safeguards reviewer asks. The free-tier consumer versions of Zoom, Google Meet, and Microsoft Teams do not provide all three; the paid clinical / enterprise tiers of all three do, but only after the practice or district has executed the BAA and configured the account in HIPAA mode. The configuration step is not optional and the default settings are not HIPAA-compliant.
The 2026 platform landscape settled into a small number of options after several false starts. The dominant choices in the school-district segment are Zoom for Healthcare and Google Meet (Workspace edition with the BAA executed), both of which have explicit HIPAA documentation and both of which scale to the per-seat licensing model school districts already use. The dominant choice in the private-practice segment is a dedicated telehealth platform such as Doxy.me, SimplePractice, or TheraNest, all of which bundle the HIPAA-compliant video channel with scheduling, billing, and documentation. The dominant choice in the medically-fragile / hospital-affiliated segment is the hospital’s existing telehealth platform (Epic-integrated, Cerner-integrated, etc.), which inherits the parent organisation’s HIPAA compliance but typically lacks the audio quality and recording flexibility of the clinical-tier consumer platforms.
The clinical recommendation for an SLP setting up a new telepractice LSA workflow is to start from the platform the practice or district has already executed a BAA on, rather than from the platform the SLP personally prefers, because the BAA execution is the load-bearing legal step and re-executing it on a different platform takes months. The next-most-important step is to confirm the platform is in HIPAA-compliant mode on the SLP’s account specifically — the BAA covers the institution, but the SLP’s personal account may or may not be linked to the institutional license, and the audit-logging requirement only applies when it is. Once both steps are confirmed, the platform-choice problem is solved and the rest of this pillar applies.
- A defensible telepractice LSA platform needs a signed BAA, end-to-end encryption, and audit logging — not just one of the three.
- Free-tier Zoom, Google Meet, and Teams do NOT meet these requirements; the clinical / enterprise paid tiers do, after BAA execution and HIPAA-mode configuration.
- Schools converge on Zoom for Healthcare or Google Meet (Workspace + BAA).
- Private practice converges on Doxy.me, SimplePractice, or TheraNest — dedicated telehealth platforms.
- Hospital-affiliated SLPs use the parent organisation’s telehealth platform (Epic / Cerner integrated).
- Start from the platform the practice already has a BAA on — BAA execution is the slow legal step.
The platform configuration trap
A signed BAA covers the institution, not the individual user account. An SLP whose personal Zoom account is not linked to the institutional HIPAA license is delivering services on a non-compliant channel even though the institution has the BAA in place. Confirm HIPAA mode is active on the specific account the LSA session will be conducted from before the first appointment, not after the procedural-safeguards reviewer asks.
4. Problem 3: The parent / e-helper coaching protocol that makes a remote elicitation reliable
The third methodological problem in telepractice LSA is that the SLP is not in the room with the child, which means everything an in-person SLP would do with their physical presence — holding up a picture book, redirecting attention, repositioning a microphone, providing the wait-time pause without filling it — has to be delegated to the adult who is in the room with the child. That adult is usually the parent in the medically-fragile / private-practice case, the school nurse or paraprofessional in the school-district case, and an "e-helper" (the published telepractice term for a trained on-site adult) in the rural-state case. The success of the elicitation depends almost entirely on whether that adult has been coached on the protocol before the session, and a clinician who skips the coaching step and trusts the parent to figure it out in real time is the clinician whose remote LSA samples are systematically thinner and less informative than their in-person samples.
The 2026 best-practice parent / e-helper coaching protocol has four explicit components. First, a 10-minute pre-session orientation call (or video) the day before the appointment, in which the SLP walks the on-site adult through the elicitation activities, the wait-time rule, the no-prompting rule, and the recording-device setup. Second, a written one-page protocol cheat sheet the parent can keep next to the child during the session, with the elicitation prompts spelled out and the explicit "do not interrupt" reminder bolded. Third, a 5-minute audio and video check at the start of the actual session, in which the SLP confirms the child can be heard clearly and the camera is positioned correctly. Fourth, a brief debrief at the end of the session, in which the SLP asks the on-site adult what they noticed and whether anything in the protocol felt awkward or impossible — this is the feedback loop that improves the protocol over the next several sessions.
The single most important coaching point is the wait-time rule. An untrained parent will fill any silence longer than two seconds with a prompt or an answer or a "go on, sweetie," and the resulting transcript over-counts adult contributions and under-counts the child’s independent productive language. The bilingual pillar in this cluster covered the same problem from a different angle, and the answer is the same in both cases: the on-site adult’s job is transparent recording, not co-elicitation, and the SLP’s coaching has to be explicit about the difference. A clinician who trains their parents and e-helpers on the wait-time rule produces telepractice LSA samples that are methodologically comparable to in-person samples; a clinician who does not produces samples that systematically under-represent the child.
- The on-site adult — parent, paraprofessional, or e-helper — is the load-bearing operator of the elicitation, not a passive observer.
- 10-minute pre-session orientation call the day before the appointment.
- Written one-page protocol cheat sheet the parent keeps next to the child during the session.
- 5-minute audio and video check at the start of the session, before the elicitation begins.
- End-of-session debrief to capture what felt awkward or impossible — this is the protocol-improvement feedback loop.
- The wait-time rule is the single most important coaching point — silence is data, not a problem to solve.
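Adherence to the wait-time rule can also be audited after the fact. The sketch below assumes a speaker-labelled, utterance-per-line transcript with hypothetical `CHI:`/`PAR:` prefixes and computes the on-site adult's share of utterances; the 30% flag threshold is an illustrative placeholder, not a published cutoff.

```python
# Illustrative post-session audit of the wait-time rule: flag sessions where
# the on-site adult's share of utterances suggests co-elicitation rather than
# transparent recording. The "CHI:"/"PAR:" speaker codes and the 0.30
# threshold are placeholders for whatever convention the clinic uses.
ADULT_SHARE_FLAG = 0.30

def adult_share(transcript: str) -> float:
    """Fraction of labelled utterances spoken by the on-site adult."""
    lines = [l for l in transcript.splitlines() if l.strip()]
    adult = sum(1 for l in lines if l.startswith("PAR:"))
    child = sum(1 for l in lines if l.startswith("CHI:"))
    total = adult + child
    return adult / total if total else 0.0

def wait_time_flag(transcript: str) -> bool:
    """True when the adult share exceeds the placeholder threshold."""
    return adult_share(transcript) > ADULT_SHARE_FLAG
```

A flagged session is a coaching conversation, not a discarded sample: the debrief step is where the SLP revisits the wait-time rule with the on-site adult.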
5. Problem 4: The environmental setup checklist parents need before the session
The fourth methodological problem is the environmental setup at the child’s end of the video link, which the published telepractice literature treats as a separate problem from the parent-coaching protocol because it solves a separate failure mode. Parent coaching addresses what the on-site adult does during the session; environmental setup addresses what the room looks like, where the camera and microphone are placed, how the lighting is configured, and what distractions are removed before the session starts. A perfectly coached parent in a poorly-set-up room produces a sample contaminated by background noise, sibling interruptions, or the dog barking at the FedEx truck; a well-set-up room with an uncoached parent produces a sample with clean audio but inflated adult contributions. Both problems have to be solved, and they have to be solved separately.
The 2026 environmental setup checklist sent to the parent ahead of the session has six explicit items. First, a quiet room with the door closed and other family members and pets in a different part of the house. Second, the recording device (phone or laptop mic) within two to three feet of the child, on a stable surface, with the microphone pointed at the child’s mouth. Third, the video camera at the child’s eye level, framing the child from the chest up, with the parent or e-helper visible in the same frame so the SLP can see what the on-site adult is doing. Fourth, the room lit from in front of the child rather than from behind — backlighting silhouettes the child and makes the lip-reading cues the SLP needs invisible. Fifth, the elicitation materials (picture book, toy set, art supplies) within reach of the child but out of the camera frame until the elicitation starts. Sixth, a phone or backup channel for the parent to reach the SLP if the platform crashes mid-session.
The same checklist gets re-confirmed at the start of every session in the audio and video check, because parents move things around between appointments and the room that worked perfectly last week may have become a backlit cluttered mess by the time the next session starts. The 2026 best practice is to keep the checklist as a one-page printable that the parent receives by email the day before the appointment and keeps next to the child during the session, so the cognitive load on the parent is to consult a list rather than to remember a verbal briefing from the day before. ConductSpeech and several of the dedicated telehealth platforms ship printable parent-environment checklists; the ones on this site are accessible from the Language Sample Worksheet page and are designed to be printed and tucked into the elicitation kit the parent keeps for the recurring sessions.
- Environmental setup is a SEPARATE problem from parent coaching — both have to be solved.
- Quiet room, door closed, other family members and pets in a different part of the house.
- Recording device (phone or laptop mic) within 2-3 feet of the child on a stable surface.
- Camera at child’s eye level, child framed chest-up, on-site adult visible in the same frame.
- Lighting from in FRONT of the child, not behind — backlighting kills the lip-reading cues.
- Backup phone channel in case the video platform crashes mid-session.
- Re-confirm the checklist at the start of every session — parents move things around between appointments.
The backlighting failure mode
The single most common environmental failure in telepractice LSA is a child sitting in front of a window, which silhouettes the child and makes facial cues invisible to the SLP. Lip-reading is one of the cues clinicians use when interpreting borderline phonetic productions, and losing it on a remote articulation case is a meaningful clinical loss. The fix is a 30-second lighting check at the start of every session, and the parent checklist needs to spell it out.
Get the full analysis
The defensible telepractice LSA workflow, end to end, in a HIPAA-compliant environment
ConductSpeech wraps the deterministic LSA calculators on this site with HIPAA-compliant transcription and telepractice-specific IEP goal drafting: the clinician reviews every score and owns the eligibility recommendation, the platform-and-environment paragraph, and the comparison-to-norms acknowledgement.
6. Problem 5: How the transcription, scoring, and IEP-goal drafting steps change when the SLP is fully remote
The fifth methodological problem in telepractice LSA is the workflow downstream of the elicitation: how the transcription, the scoring, and the IEP-goal drafting steps change when the SLP never had physical access to the child and is working entirely from the audio and video files captured during the session. The good news is that the changes are smaller than most clinicians initially expect, because the deterministic calculators on this site and the AI-assisted transcription pipeline in ConductSpeech do not care whether the audio came from an in-person session or a video-platform session — they care about the audio quality, which the platform-and-recording protocol from problems 1 and 2 has already solved. The downstream workflow is therefore largely unchanged once the upstream audio is right.
The transcription step works exactly as it does for in-person sessions: the audio file (locally-recorded by the parent under the protocol from problem 1, or recorded from the child’s end of the platform under the fallback protocol) is uploaded to ConductSpeech, which produces a HIPAA-compliant transcript in the standard utterance-per-line format the calculators expect. The scoring step works exactly as it does for in-person sessions: the transcript is run through the MLU Calculator, the Lexical Diversity Calculator, the PGU Calculator, the DSS or IPSyn Calculator depending on the case, and the Brown’s Stages Lookup or the SUGAR Norms Lookup for the normative comparison. For school-age narrative cases the same transcript also goes through the Narrative Scoring Scheme Calculator and the Story Grammar Scorer from the previous pillar in this cluster. None of the calculators have a "telepractice mode" because none of them need one — the math is the same, and the deterministic property holds end to end.
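For readers unfamiliar with the underlying arithmetic, a minimal word-based MLU sketch on the utterance-per-line format looks like this. The MLU Calculator described above counts morphemes under standard counting rules, so this word-only illustration understates MLU for inflected utterances; it is here only to show that the math is deterministic and platform-agnostic.

```python
# A minimal sketch of the MLU arithmetic on an utterance-per-line transcript.
# Word-based for illustration only; the morpheme-based calculation the
# calculator performs applies additional counting rules (e.g. "runned"
# counts as two morphemes).
def mlu_words(transcript: str) -> float:
    """Mean length of utterance in words over non-empty lines."""
    utterances = [u.split() for u in transcript.splitlines() if u.strip()]
    if not utterances:
        return 0.0
    return sum(len(u) for u in utterances) / len(utterances)
```

The same transcript string produces the same value regardless of whether the audio came from an in-person session or a telepractice session, which is the sense in which the downstream workflow is unchanged.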
The IEP-goal drafting step is also largely unchanged, with one telepractice-specific modification: the goal language has to specify whether the goal will be measured in the in-person setting (during a periodic on-site reassessment) or in the remote setting (during routine telepractice sessions), because the procedural-safeguards reviewer needs to know that the measurement protocol matches the service-delivery model. A goal written as "Marcus will produce four-word utterances at 80% accuracy across three consecutive sessions" is incomplete in a telepractice context; the same goal written as "Marcus will produce four-word utterances at 80% accuracy across three consecutive telepractice sessions, as measured by the MLU Calculator on transcripts of locally-recorded session audio" is the version that survives review. The IEP Goal Generator on this site supports the telepractice-measurement modifier as a standard option for goals on remote-only caseloads.
- Once the upstream audio is right, the downstream workflow is largely unchanged from in-person LSA.
- ConductSpeech transcription works on the locally-recorded audio file — same HIPAA-compliant pipeline.
- Every deterministic calculator on this site (MLU, NDW, PGU, DSS, IPSyn, NSS, Story Grammar) is platform-agnostic.
- The IEP-goal drafting step adds a telepractice-measurement modifier — specify the setting in the goal language.
- "Telepractice sessions, as measured by the MLU Calculator on locally-recorded session audio" — the procedural-safeguards-defensible phrasing.
- The IEP Goal Generator on this site supports the telepractice-measurement modifier as a standard option.
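The telepractice-measurement modifier can be thought of as a template slot. The sketch below assembles the review-surviving phrasing from the section above; the function name and parameters are hypothetical and do not reflect the IEP Goal Generator's internals.

```python
# Illustrative goal-template assembly with the telepractice-measurement
# modifier. Field names and wording are placeholders, not the IEP Goal
# Generator's actual templates; the clinician owns the final goal language.
def telepractice_goal(student: str, behavior: str, criterion: str,
                      sessions: int, metric_tool: str) -> str:
    return (
        f"{student} will {behavior} at {criterion} accuracy across "
        f"{sessions} consecutive telepractice sessions, as measured by "
        f"the {metric_tool} on transcripts of locally-recorded session audio."
    )
```

Calling `telepractice_goal("Marcus", "produce four-word utterances", "80%", 3, "MLU Calculator")` reproduces the example goal from the text, with the measurement setting and recording channel spelled out.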
7. Problem 6: Writing a defensible eligibility report when every piece of evidence was collected remotely
The sixth and last methodological problem in telepractice LSA is the eligibility report. A defensible remote-LSA report is structurally similar to the in-person version it derives from, but it has to add three explicit paragraphs that the in-person report does not need, and a procedural-safeguards reviewer who finds those paragraphs missing will flag the report on a technicality even when the underlying clinical reasoning is sound. The 2026 honest framing is that telepractice eligibility reports stand or fall on whether they pre-empt the procedural-safeguards reviewer’s questions explicitly, because the reviewer’s default question is "how do I know this evidence is reliable" and the report has to answer the question before the reviewer asks it.
The first additional paragraph is the platform-and-environment paragraph. It names the video platform used, confirms the BAA is in place, names the recording channel (parallel local recording on the child’s end vs platform recording from the child’s end), and notes the environmental setup checklist was reviewed at the start of the session. Two to three sentences is enough; the goal is to demonstrate the procedural decisions were made, not to litigate them. The second additional paragraph is the on-site adult paragraph. It names the adult who was in the room with the child (parent, paraprofessional, e-helper), confirms the pre-session coaching was completed, and notes the adult’s role during the elicitation (transparent observer following the wait-time rule, not co-elicitor). One to two sentences is enough.
The third additional paragraph is the comparison paragraph. It explicitly states whether the resulting metrics are comparable to published age-banded norms collected from in-person samples, and if there is a known telepractice-vs-in-person discrepancy in the literature for the specific metric, the report acknowledges it. For most LSA metrics on the calculators on this site, the published telepractice literature finds no meaningful difference between remote and in-person samples when the recording-and-coaching protocol is followed; for a small number of metrics (typically the percent-consonants-correct articulation metrics on younger children) the literature finds a meaningful penalty from compressed audio that the report has to acknowledge. Naming the comparison directly, including any known discrepancy, is the move that elevates the report from "an LSA report that happens to have been done remotely" into "a defensible telepractice LSA report that anticipates the procedural-safeguards review."
- Telepractice LSA reports need three additional paragraphs the in-person version does not.
- Paragraph 1: platform and environment — platform name, BAA confirmation, recording channel, environmental checklist review.
- Paragraph 2: on-site adult — who they were, that pre-session coaching happened, that they followed the transparent-observer role.
- Paragraph 3: comparison to published norms — explicit acknowledgement of any known telepractice-vs-in-person discrepancy.
- For most LSA metrics, the published telepractice literature finds no meaningful in-person-vs-remote difference when the protocol is followed.
- For PCC and similar fine-grained articulation metrics on young children, the literature finds a meaningful penalty from compressed audio — acknowledge it.
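A first draft of the three additional paragraphs can be assembled from structured session metadata. The sketch below is illustrative only: every field name and every sentence of wording is a placeholder, and the clinician edits and owns the final report language.

```python
# Illustrative first-draft assembly of the three telepractice-specific
# report paragraphs from session metadata. All field names and wording are
# placeholders; the clinician reviews and rewrites before anything is filed.
def telepractice_report_paragraphs(meta: dict) -> list[str]:
    platform = (
        f"The assessment was conducted over {meta['platform']} under the "
        f"practice's executed BAA. Audio was captured via "
        f"{meta['recording_channel']}, and the environmental setup checklist "
        f"was reviewed at the start of the session."
    )
    adult = (
        f"{meta['onsite_adult']} was present with the student, completed the "
        f"pre-session coaching, and followed the transparent-observer role, "
        f"including the wait-time rule."
    )
    comparison = (
        "Language sample metrics were compared to published age-banded norms "
        "collected from in-person samples. "
        + (meta["known_discrepancy"] or
           "No telepractice-vs-in-person discrepancy is reported in the "
           "literature for the metrics used, given the recording protocol "
           "described above.")
    )
    return [platform, adult, comparison]
```

For a PCC case on a younger child, the `known_discrepancy` field is where the compressed-audio penalty acknowledgement would go instead of the no-discrepancy default.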
8. Common mistakes in telepractice LSA (and how to avoid them)
The published telepractice reliability and feasibility literature, plus six years of clinical implementation experience from the SLPs who have stayed on remote caseloads since 2020, has identified a small set of recurring mistakes that show up in real telepractice LSA reports and that lead to either bad clinical data or eligibility decisions that do not survive procedural-safeguards review. The good news is that all of these mistakes are avoidable with the explicit protocol decisions from the previous six sections, which are the decisions the published literature has converged on. This section lists the mistakes directly so a telepractice SLP doing a self-audit on a recent report can check their own work against the documented failure modes.
The first mistake is trusting the video platform’s built-in recording for the audio that gets transcribed. Compressed platform audio destroys the syllable boundaries the calculators score, and the corrective is the parallel local recording protocol from problem 1. The second mistake is using a platform that does not have a signed BAA, or has one that does not cover the SLP’s personal account. The corrective is to confirm the platform configuration on the specific account the LSA session will be conducted from, not to assume the institution-level BAA covers the user-level account automatically. The third mistake is treating the parent as a passive observer rather than as the on-site operator of the elicitation. The corrective is the four-component coaching protocol from problem 3.
The fourth mistake is omitting the environmental setup checklist and trusting the parent to set up the room well by default. The corrective is the six-item checklist from problem 4 sent the day before the appointment. The fifth mistake is writing IEP goals without the telepractice-measurement modifier, which leaves the procedural-safeguards reviewer guessing whether the measurement protocol matches the service-delivery model. The corrective is the explicit "as measured on telepractice session audio" phrasing from problem 5. The sixth mistake is writing the eligibility report without the three additional paragraphs from problem 6 — platform-and-environment, on-site adult, and the comparison-to-norms acknowledgement — which leaves the report exposed on the procedural-safeguards review even when the clinical reasoning is sound.
- Mistake 1: Trusting the video platform’s built-in recording. Fix: parallel local recording on the child’s end.
- Mistake 2: Using a platform without a per-user BAA confirmation. Fix: confirm HIPAA mode on the SLP’s specific account.
- Mistake 3: Treating the parent as a passive observer. Fix: the four-component coaching protocol.
- Mistake 4: Omitting the environmental setup checklist. Fix: the six-item checklist sent the day before the appointment.
- Mistake 5: Writing IEP goals without the telepractice-measurement modifier. Fix: explicit "telepractice session audio" phrasing.
- Mistake 6: Eligibility report missing the three additional paragraphs. Fix: platform-and-environment, on-site adult, and comparison-to-norms paragraphs.
9. Where ConductSpeech fits on the telepractice LSA workflow
ConductSpeech is built to support the telepractice LSA workflow described in this pillar in the same way it supports the other LSA workflows on the rest of the SLP caseload, and the telepractice case is in fact one of the cases where the time saving is largest. A telepractice SLP without ConductSpeech carries the same three-to-four-hour transcription-and-paperwork burden per child as an in-person SLP, plus the coaching and platform-management overhead that is unique to telepractice, on a caseload that is typically larger because telepractice service-delivery models are deployed precisely in the rural-state and large-district segments where the in-person SLP supply is too thin. The result is the burnout pattern that has driven a meaningful fraction of the post-2020 telepractice cohort out of the field, and the fix is a workflow that collapses the transcription, scoring, and first-draft paperwork burden so the additional telepractice overhead becomes affordable on a real caseload.
The positioning matches the honest framing of every other pillar in this cluster exactly. ConductSpeech does not produce eligibility recommendations — the multi-metric pattern interpretation is a clinical judgement the SLP makes from the data, and the tool surfaces the data without pre-empting the decision. ConductSpeech does not replace the elicitation step — the locally-recorded audio still has to be captured by the parent or e-helper following the protocol from problem 3, and a tool cannot substitute for the on-site adult’s presence in the room. ConductSpeech does not replace the BAA execution — the platform-compliance step from problem 2 is a legal step the practice or district has to handle separately, and the tool plugs into the resulting compliant infrastructure rather than replacing it. What ConductSpeech does is collapse the transcription, scoring, and first-draft paperwork steps into a workflow that takes 30 minutes total instead of three to four hours, which is the time saving that makes the additional telepractice overhead practical to absorb on a real caseload.
For an SLP evaluating ConductSpeech on a remote-only or hybrid caseload, the diagnostic questions are the same as the other LSA cases plus three telepractice-specific ones: (1) Does the transcription pipeline accept the locally-recorded parent-uploaded audio file format? (2) Does the IEP Goal Generator support the telepractice-measurement modifier as a standard goal-template option? (3) Does the report drafting step include the three additional paragraphs (platform-and-environment, on-site adult, comparison-to-norms) the procedural-safeguards reviewer is looking for? ConductSpeech answers yes to all three today. The honest framing for the telepractice case is the same as the honest framing for every other case in this cluster: the clinician owns the judgement call, the calculator owns the math, and the AI saves the clinician the hours that would otherwise be spent on transcription and first-draft paperwork.