
IRB and Participant Privacy in Human-Centered Computer Vision Research: A Practical Guide for Early Investigators

Quick Guide

ABSTRACT

Computer vision (CV) studies involving human participants raise unique ethical and regulatory challenges because video and image data are inherently identifiable. For new investigators, navigating Institutional Review Board (IRB) review can feel opaque and intimidating. This guide offers plug-and-play tools such as boilerplate protocol language, checklists, and case exercises, while clarifying the risks most frequently flagged in IRB review of CV research. Topics include participant consent, privacy safeguards, confidentiality versus anonymity, device oversight, and the fairness–privacy tension in dataset construction. The goal is to provide early-career investigators with a structured, practical framework for designing studies that protect participants, meet regulatory expectations, and clear IRB review efficiently.

WHY IRB APPROVAL IS CRITICAL FOR COMPUTER VISION

Any study involving video or image recordings of humans requires IRB oversight because such data are inherently identifiable [1]. Visual data encode biometric and contextual identifiers including faces, scars, gait, posture, and environmental details that persist despite anonymization attempts [2]. Even blurred or cropped images may be re-identified when cross-referenced with other datasets, underscoring that visual data carry a higher baseline risk than most other research modalities [3].
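
To make the limits of anonymization concrete, the sketch below shows a typical face-blurring step in Python using OpenCV's bundled Haar cascade detector (file names and parameters are illustrative, not prescriptive). Even after blurring, gait, posture, clothing, and scene context remain visible, which is exactly why steps like this cannot be treated as full de-identification.

    # Minimal face-blurring sketch (illustrative only): detects frontal
    # faces and applies a Gaussian blur. Gait, posture, clothing, and
    # scene context remain visible, so this does NOT de-identify footage.
    import cv2

    def blur_faces(frame):
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
            roi = frame[y:y + h, x:x + w]
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
        return frame

    frame = cv2.imread("participant_frame.jpg")  # hypothetical file
    cv2.imwrite("participant_frame_blurred.jpg", blur_faces(frame))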

Video breaches can expose identities as well as sensitive settings such as healthcare, family environments, or private routines [4]. In healthcare, ambient intelligence systems raise concerns about "decisional privacy," or the right of individuals to control how, when, and where they are observed [1]. Continuous monitoring often extends into intimate and legally sensitive spaces such as homes, patient rooms, or restrooms, where breaches of confidentiality carry profound implications [3].

Another IRB consideration is device classification. When computer vision systems are used in healthcare or research adjacent to clinical care, IRBs must determine whether the software constitutes a medical device [4]. Unless investigators explicitly describe the system as research-only, non-significant risk, or IDE-exempt, reviewers may escalate oversight to FDA-level regulation. Failure to clarify this distinction can delay approval or expand regulatory burden. At the same time, CV raises fairness versus privacy tensions. Protecting participants by excluding them from datasets may inadvertently exacerbate algorithmic bias, leading to systematic misclassification that disproportionately harms underrepresented groups [5]. For this reason, IRBs increasingly expect investigators to address privacy protections and fairness safeguards together, rather than treating them as separate silos [2].

IRBs themselves are not uniform in how they assess these risks. Studies demonstrate significant variability in timelines, consent requirements, and classifications of minimal versus more-than-minimal risk across institutions, even for identical protocols [6]. Machine learning analyses of IRB workflows confirm that delays and inconsistencies are common [7]. Proactive clarity in the application, especially around data security and device oversight, reduces reviewer uncertainty and expedites approval [8]. Computer vision research is inherently high-risk for privacy, identifiability, and fairness, and requires explicit, anticipatory IRB protocols. Investigators who clearly document risks, emphasize strong protections, and frame their projects as observational and minimal risk are more likely to obtain efficient approval [3].

CORE IRB REQUIREMENTS FOR COMPUTER VISION RESEARCH

Protocols for computer vision research must be unusually thorough because video and image data raise more complex questions than traditional survey or lab-based studies [3]. Investigators should begin with a clear background and significance section that justifies why computer vision is necessary, citing prior studies to demonstrate feasibility and scientific value [2]. Vague or overly ambitious protocols often raise reviewer concerns about "scope creep," which is especially problematic when artificial intelligence technologies are involved, since risk–benefit calculations are more difficult for novel tools [8]. This connects directly to the specific aims of the study, which should be narrow, measurable, and framed in a way that highlights the minimal-risk nature of participation [6].

The study design must be presented in plain language, clarifying whether the project is observational or interventional, and whether the risks should reasonably be considered minimal [4]. Because IRBs vary significantly in how they interpret categories of risk, even for identical protocols, investigators should justify their classification explicitly [6]. The section on subject selection and recruitment should go beyond inclusion and exclusion criteria by also addressing equity and representation [5]. In computer vision, unbalanced recruitment has a direct impact on algorithmic bias, and reviewers increasingly expect applicants to demonstrate how their sampling will mitigate unfair outcomes for underrepresented groups [2].
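
As a minimal illustration of how recruitment balance might be monitored as enrollment accrues (the records and group labels below are hypothetical), a simple tally per demographic group catches imbalance before it propagates into the training data:

    # Tally enrollment by self-reported demographic group so recruitment
    # imbalance is visible early, before it biases the trained model.
    from collections import Counter

    enrolled = [  # hypothetical enrollment log
        {"code": "P001", "group": "A"},
        {"code": "P002", "group": "B"},
        {"code": "P003", "group": "A"},
    ]
    counts = Counter(rec["group"] for rec in enrolled)
    total = sum(counts.values())
    for group, n in counts.items():
        print(f"group {group}: {n} ({n / total:.0%})")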

A strong consent process is critical. Participants must be explicitly told that they will be recorded, that video data are inherently identifiable, and that de-identification cannot be guaranteed [9]. Consent forms should outline how recordings will be stored, who will have access, whether secondary uses such as model training or data sharing are planned, and what withdrawal rights participants retain [10]. Privacy, confidentiality, and anonymity are distinct protections and should not be conflated. Without clear communication, participants may assume anonymity where only confidentiality is promised, creating an ethical and regulatory gap [10].

Detailed study procedures are another common sticking point for IRBs. Investigators must explain what exactly will be recorded, such as faces, movements, or postures, how software will analyze the footage, and whether procedures differ from standard care [4]. Without such detail, IRBs are more likely to request revisions, which prolongs approval. The risks and benefits section should acknowledge that while physical risks are negligible, psychosocial harms are not, since being recorded can cause discomfort, and breaches of confidentiality can have serious consequences [1].

Because of these risks, the privacy and confidentiality plan must be robust [9]. Protocols should emphasize the need for encryption in transit and at rest, pseudonymization or coding of identifiers, access control restricted by role, and explicit policies for retention and deletion of raw video [10]. Device oversight is another crucial area [4]. To avoid unnecessary escalation into FDA regulation, investigators should explicitly state that the software is being used for research analysis only and will not influence medical decision-making, which supports classification as non-significant risk or IDE-exempt [8]. Finally, the IRB application should describe the data analysis plan, including statistical methods and approaches for detecting and mitigating bias [5], as well as how missing or incomplete data will be handled [3]. Reviewers will also expect procedures for monitoring and quality assurance, particularly around breach reporting and compliance checks [7], and a clear data sharing plan that complies with NIH policies, employs Data Use Agreements where appropriate, and honors participants’ right to withdraw [10].
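
As one concrete reading of "pseudonymization or coding of identifiers," the sketch below (file names and layout are hypothetical) assigns each participant a random study code and keeps the code-to-identity linking key in a separate, access-restricted file, apart from the video data:

    # Assign each participant a random study code; the code-to-identity
    # map lives separately from the video data and is restricted by role.
    import json, secrets

    def assign_code(name, keyfile="linking_key.json"):
        try:
            with open(keyfile) as f:
                key = json.load(f)
        except FileNotFoundError:
            key = {}
        code = "P" + secrets.token_hex(4)
        key[code] = name
        with open(keyfile, "w") as f:
            json.dump(key, f)
        return code  # use the code, never the name, in filenames and analyses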

HOW TO WRITE AN IRB PROTOCOL THAT CLEARS REVIEW

The most effective way to secure timely IRB approval for computer vision research is to present the study in plain, unambiguous language [3]. Protocols should consistently describe the project as observational, minimal risk, and research-only, avoiding jargon that could imply diagnostic or therapeutic intent [4]. Investigators should resist the temptation to overemphasize technical novelty [8]. Reviewers are more likely to approve quickly when risk is clearly minimized and participation is framed as non-invasive [6].

A strong application also foregrounds data protections [9]. For example, investigators might include language such as: "Recordings will be stored on encrypted institutional servers, retained for no longer than six months, and permanently deleted thereafter…" [10]. Equally important is an explicit description of the consent process. Participants must be informed of the nature of being recorded, the potential risks of re-identification, and any plans for secondary use of their footage [9]. Research on ambient intelligence shows that participants often underestimate the implications of continuous monitoring, particularly in private or clinical spaces, making transparency essential [1]. IRBs will look for evidence that participants' decisional privacy is respected and that withdrawal procedures are simple and enforceable [10].
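
Retention promises of this kind are easier to defend when enforced mechanically rather than by memory. A minimal sketch, assuming a 180-day window and a flat directory of raw recordings (both assumptions, to be matched to the approved protocol):

    # Delete raw recordings older than the stated retention window so the
    # protocol's deletion promise is enforced by routine, not by memory.
    import time
    from pathlib import Path

    RETENTION_DAYS = 180  # ~six months; match whatever the protocol states
    cutoff = time.time() - RETENTION_DAYS * 24 * 3600

    for video in Path("raw_recordings").glob("*.mp4"):  # hypothetical path
        if video.stat().st_mtime < cutoff:
            video.unlink()  # permanent deletion per the approved protocol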

Successful applications also anticipate common reviewer concerns. These include how the study will handle incidental findings in footage, whether the software should be considered a medical device [4], and how fairness and privacy will be balanced in dataset construction [5]. Excluding individuals from datasets may protect their privacy but can also exacerbate bias, leading to systematic misrecognition, an ethical tension IRBs increasingly expect to see addressed. Likewise, the use of public datasets without consent or with demographic imbalance should be openly acknowledged, with proposed mitigation strategies such as rebalancing or fairness audits [2].
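
A fairness audit of the kind reviewers expect can begin with something as simple as disaggregating an accuracy metric by demographic group; the sketch below uses hypothetical labels and groups:

    # Disaggregate model accuracy by demographic group; a large gap
    # between groups is evidence of the systematic misrecognition
    # discussed above and a flag for rebalancing or further audit.
    from collections import defaultdict

    def per_group_accuracy(y_true, y_pred, groups):
        hits, totals = defaultdict(int), defaultdict(int)
        for t, p, g in zip(y_true, y_pred, groups):
            totals[g] += 1
            hits[g] += int(t == p)
        return {g: hits[g] / totals[g] for g in totals}

    # hypothetical example: group B is misclassified more often
    print(per_group_accuracy([1, 0, 1, 1], [1, 0, 0, 1],
                             ["A", "A", "B", "B"]))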

Finally, investigators should avoid ethical pitfalls that have undermined prior research. A review of adversarial machine learning experiments found that many projects involving human participants failed to report IRB approval or informed consent, violating basic principles of autonomy, beneficence, and justice [11]. Highlighting how the proposed study differs from these lapses can help reassure reviewers.

EDUCATIONAL TOOLS FOR FELLOWS

Because computer vision research intersects technical innovation with sensitive human-subject protections, early investigators benefit from structured training in how to prepare IRB protocols [3]. One effective strategy is the use of annotated IRB examples [7]. By presenting fellows with real or mock protocols that highlight CV-specific risks such as inadequate consent language, poorly defined retention policies, or unacknowledged device oversight, educators can demonstrate how these gaps directly contribute to delays or rejections [10].

Training should also provide boilerplate templates for key sections, including consent language, privacy and confidentiality safeguards, and disclaimers clarifying that software is for research purposes only [9]. These templates give fellows a practical starting point and reinforce the expectation that IRB language must be specific, plain, and anticipatory of reviewer concerns [6]. Another valuable method is the case exercise, in which fellows are presented with a deliberately flawed IRB draft and asked to identify and correct its deficiencies [11]. Finally, incorporating structured discussion prompts helps fellows internalize the ethical dilemmas unique to computer vision [5]. Questions such as "Should bystanders in video recordings be considered participants?" encourage critical reflection [2].

CONCLUSION

Computer vision research involving human participants is inherently sensitive because images and video recordings cannot be reliably de-identified and carry unique risks of re-identification, surveillance, and bias [1]. Investigators who anticipate these challenges and address them directly in their IRB protocols are more likely to obtain approval efficiently [6]. The strongest applications emphasize identifiability risks, provide detailed plans for privacy and confidentiality protections, and demonstrate how the software environment itself supports participant safeguards [9,10]. Just as importantly, protocols must acknowledge the fairness–privacy tension, ensuring that efforts to protect participants do not inadvertently worsen bias or inequities [2,5].

Embedding these considerations into training for fellows and early investigators is essential [7]. By teaching how to write clear, plain-language protocols, how to select secure and compliant software, and how to anticipate reviewer concerns, educators can raise the overall standard of IRB literacy in computer vision research [3,8,10]. Such preparation aligns with emerging calls to adapt IRB review processes for AI and big data studies and ensures that investigators enter the field with a foundation in both regulatory compliance and ethical reflection [11]. Ultimately, fostering this dual technical and ethical literacy will allow human-centered AI research to advance responsibly, transparently, and in ways that protect the dignity and rights of participants [1].

APPENDIX A. BOILERPLATE TEMPLATES FOR IRB PROTOCOLS IN COMPUTER VISION RESEARCH

Privacy and Confidentiality Statement
All video and image recordings will be stored on institutionally managed, encrypted servers. Files will be encrypted both in transit and at rest. Raw video data will be retained only as long as needed for analysis and permanently deleted once the study is completed. Only de-identified, coded data will be used for research purposes.
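
For teams implementing the "encrypted at rest" language above, a minimal sketch of file-level encryption using the widely used cryptography package; in practice the key would live in institutional key-management infrastructure, not alongside the data, and file names here are hypothetical:

    # Encrypt a raw recording at rest with a symmetric key. The key
    # belongs in an institutional secrets manager, never beside the data.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # store via institutional key management
    fernet = Fernet(key)

    with open("session_01.mp4", "rb") as f:   # hypothetical recording
        ciphertext = fernet.encrypt(f.read())
    with open("session_01.mp4.enc", "wb") as f:
        f.write(ciphertext)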

Consent Language (Video Recording)
Your participation involves being video recorded, which may capture identifiable features such as your face, body movements, or voice. Only authorized members of the research team will view the recordings. Recordings will be securely stored and deleted once the study is completed. You may withdraw from the study at any time, and upon request your recordings will be permanently deleted.
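
To make the withdrawal promise in this template operational, a minimal sketch of a withdrawal routine; the file layout and linking-key file mirror the pseudonymization sketch earlier in this guide and are hypothetical:

    # On withdrawal, permanently delete every recording tied to the
    # participant's study code and remove the code from the linking key.
    import json
    from pathlib import Path

    def withdraw(code, data_dir="recordings", keyfile="linking_key.json"):
        for f in Path(data_dir).glob(f"{code}_*"):  # e.g. P3fa1_session01.mp4
            f.unlink()
        key = json.loads(Path(keyfile).read_text())
        key.pop(code, None)
        Path(keyfile).write_text(json.dumps(key))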

Device Disclaimer
The software used in this study is strictly for research analysis. It will not be used to make clinical decisions or guide your medical care.

Risk and Benefit Statement
The risks associated with this study are minimal. They include potential discomfort from being recorded and the possibility of a confidentiality breach. Safeguards such as encryption, time-limited retention, pseudonymization, and secure storage are in place to minimize these risks. There are no direct medical benefits to participants; however, the knowledge gained may contribute to the advancement of safer and more ethical computer vision research.

Case Exercise for Fellows
Scenario: A draft IRB protocol proposes storing video files on a consumer cloud platform (Dropbox) and does not mention data deletion or withdrawal rights.
Task: Revise the consent and data storage sections using the boilerplate language above, ensuring compliance with institutional standards for encryption, retention, and participant autonomy.

APPENDIX B. IRB CHECKLIST FOR COMPUTER VISION RESEARCH

Twelve Essentials for CV Protocols

  1. Background and significance with justification for CV.
  2. Clearly defined specific aims.
  3. Study design classification with explicit risk justification.
  4. Recruitment and representation addressing equity.
  5. Consent language clarifying recording and withdrawal rights.
  6. Detailed study procedures and data processing description.
  7. Risks and benefits acknowledging psychosocial harms.
  8. Privacy and confidentiality protections with encryption and deletion policies.
  9. Device oversight clarifying research-only use.
  10. Data analysis and bias mitigation plans.
  11. Monitoring and quality assurance procedures.
  12. Data sharing and withdrawal rights compliant with NIH policies.

REFERENCES

  1. Martinez-Martin, N., et al. (2021). Ethical issues in using ambient intelligence in health-care settings. Lancet Digital Health, 3(2), e115–e123. https://doi.org/10.1016/S2589-7500(20)30275-2
  2. Tahir, G. A. (2024). Ethical Challenges in Computer Vision: Ensuring Privacy and Mitigating Bias in Publicly Available Datasets. arXiv. https://doi.org/10.48550/arXiv.2409.10533
  3. Ferretti, A., Ienca, M., Hurst, S., & Vayena, E. (2020). Big Data, Biomedical Research, and Ethics Review: New Challenges for IRBs. Ethics & Human Research, 42(5), 17–28.
  4. Office for Human Research Protections. (2022). IRB Considerations on the Use of Artificial Intelligence in Human Subjects Research. HHS/SACHRP.
  5. Xiang, A. (2022). Being "Seen" vs. "Mis-Seen": Tensions between Privacy and Fairness in Computer Vision. Harvard JOLT, 36(1). SSRN: https://ssrn.com/abstract=4068921
  6. Abbott, L., & Grady, C. (2011). A systematic review of the empirical literature evaluating IRBs: What we know and what we still need to learn. JERHRE, 6(1), 3–19. https://doi.org/10.1525/jer.2011.6.1.3
  7. Shoenbill, K., Song, Y., Cobb, N. L., Drezner, M. K., & MendonƧa, E. A. (2017). IRB Process Improvements: A Machine Learning Analysis. JCTS, 1(3), 176–183.
  8. Makridis, C. A., et al. (2023). Informing the Ethical Review of Human Subjects Research Utilizing Artificial Intelligence. Front. Comput. Sci. https://doi.org/10.3389/fcomp.2023.1235226
  9. University of Southern California, HRPP. (n.d.). Privacy, confidentiality, and anonymity in human subjects research.
  10. Office for Research Protections, Penn State University. (n.d.). IRB Guideline X – Guidelines for Computer- and Internet-Based Research Involving Human Participants.
  11. Albert, K., Delano, M., Penney, J., Rigot, A., & Sivakumar, R. S. (2020). Ethical Testing in the Real World: Evaluating Physical Testing of Adversarial Machine Learning. arXiv. https://doi.org/10.48550/arXiv.2012.02048

Written by researchers, for researchers — powered by Conduct Science.

Author:

Amy Avakian, MD

Dr. Avakian is a transitional year resident physician pursuing a career in diagnostic neuroradiology with a background in biochemistry. She is the founder of a collaborative initiative focused on radiology, artificial intelligence, and deep learning, connecting students and physicians through shared opportunities, interdisciplinary research, and mentorship. Beyond her research, she explores the integration of creative technologies such as virtual reality and 3D modeling into medical education and clinical practice. A VR enthusiast and artist, she believes that creativity and compassion should remain at the heart of patient care.
