
Artificial Intelligence (AI) Tools

Guidance on the Use of AI in Human Subjects Research

  1. Overview

With the emergence of artificial intelligence (AI), investigators have a unique opportunity to use AI in their own research studies. Best practices for the use of AI in human subjects research are not yet well formed, nor is there general consensus on appropriate use. Given this fluidity, this guidance is subject to change as this novel technology and best practices for its use develop. This guidance covers the use of AI, machine learning, deep learning, and related AI techniques in research activities, as well as other activities that may govern or affect the use of AI tools within the context of research.

  2. AI Use in Human Subjects Research

In the conduct of research, an AI tool is considered engaged in human subjects research when:

  • The AI tool collects data from humans through interaction or intervention;
  • The AI tool is used to obtain informed consent from human participants;
  • The AI tool is used to obtain, analyze, or otherwise access identifiable data about human research participants; or
  • The AI tool acts as an extension or representative of the investigator(s) by answering questions for potential, current, or past human research participants.

The Human Research Protection Program (HRPP) considers AI tools providing transcription services to be integral to human research. Zoom’s transcription service has been reviewed for privacy, confidentiality, and security by UT’s Office of Information Technology (OIT) and, unless noted during Institutional Review Board (IRB) review, is not subject to limitations on transcription that are set forth in this guidance.

The data to which other AI tools have access should be carefully protected. Unless the AI tool provider’s terms of use clearly state otherwise, there is no guarantee that information provided to an AI tool will remain confidential. Investigators should therefore exercise caution and avoid sharing sensitive or private information when using these tools, especially data that is legally protected. More broadly, the UT HRPP recommends that the collection of identifiable participant data be limited to the minimum necessary to complete the research activity, whether or not an AI tool is used.

  3. IRB Review

Whenever an AI tool is used to interact with or obtain data generated by or from human research participants, that AI tool is considered engaged in human subjects research and the IRB will review the use of the AI tool in accordance with the applicable ethical and regulatory standards.

When research involves the use of an AI tool as described in this document, the HRPP and IRB will follow the Standard Operating Procedures (SOPs) governing review of the overall study, including relevant indications in this guidance. This includes the use of certain exemption categories, expedited review procedures, and full board review. When necessary, the IRB will obtain review from outside parties (e.g., consultants), similar to the ancillary reviewers and ancillary review committees employed for biomedical research (e.g., radiation safety and biosafety committees).

For all research, the HRPP and IRB will review the use of an AI tool in the same context as any other research mechanism, under the same criteria for approval used for all other research. For example, review of non-exempt research that uses AI tools will consider risk minimization, data safety monitoring, informed consent, equitable selection of subjects, and the adequate protection of vulnerable populations. As with many other contextual issues in an individual research protocol, the use of an AI tool is subject to the relevant criteria for approval; as such, the HRPP will not provide template protocols for AI research.

  4. Informed Consent

The HRPP recommends and, in many cases, will require that investigators notify participants of the use and context of AI in all consent documents, so that participants may make their own determination as to whether they wish to participate in the research. Investigators and consent documents should explain in lay terms how the AI tool will be used in the study in relation to the points listed in section 2 of this guidance document, including the data to which it will have access. This explanation should include what limitations, if any, will be placed on the AI tool and on participants, as well as a description of what the AI tool will do with any data it receives as a result of the research activities. The IRB recognizes that investigators may not fully know or understand the scope of an AI tool’s ability to utilize or re-access data it has previously been provided; in those cases, the IRB will require that investigators disclose this uncertainty to potential participants. Similarly, investigators must inform participants in the consent document if and when their data cannot be removed from the AI tool.

The IRB does not permit AI tools to obtain informed consent from participants automatically; that is, a human investigator who is listed on the study and has completed all relevant training(s) must be present. As with all consent processes, it is the responsibility of the Principal Investigator to ensure that the appropriate staff are in place and that participants adequately understand the study in order to provide ongoing and meaningful informed consent. The UT IRB will continue to approve electronic consent processes (e.g., the use of Qualtrics to obtain informed consent) so long as the consent process aligns with applicable regulations and requirements, including those outlined in this guidance.

AI tools should not have access to informed consent documents, especially when an individual declines to consent to some or all of the research. In situations where informed consent is obtained without an investigator’s direct interaction with participants (e.g., survey research employing Qualtrics to obtain informed consent), researchers should have mechanisms in place to ensure consent is obtained from participants and that consent documents are kept separate from any research data provided to the AI tool.
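One way such a separation mechanism might be sketched is below. This is an illustrative example only, not an HRPP-prescribed implementation; the field names (`consent_given`, `signature`, etc.) are hypothetical.

```python
# Hypothetical sketch: keeping consent records separate from the research
# data that will be shared with an AI tool. Field names are illustrative.

CONSENT_FIELDS = {"consent_given", "consent_timestamp", "signature"}

def split_consent_from_data(responses):
    """Return (consent_log, research_data) so that only research_data
    is ever provided to the AI tool."""
    consent_log, research_data = [], []
    for record in responses:
        # Consent documentation is retained separately by the research team.
        consent_log.append({k: v for k, v in record.items()
                            if k in CONSENT_FIELDS})
        # Only consenting participants' de-coupled responses go forward.
        if record.get("consent_given"):
            research_data.append({k: v for k, v in record.items()
                                  if k not in CONSENT_FIELDS})
    return consent_log, research_data
```

The key design point, consistent with the guidance above, is that a participant who declines never has any record, consent or otherwise, passed to the AI tool.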

The IRB will review requests for a waiver of consent as described in the federal regulations or in relationship to an exempt protocol on a case-by-case basis.

Situation 1:  An investigator intends to collect informed consent for survey research by using a consent form posted on Qualtrics.

IRB Response:  In this situation, the IRB will approve the use of Qualtrics to deliver the informed consent. The IRB considers electronic platforms that are not connected to an AI tool to be appropriate mechanisms to obtain informed consent. The IRB will continue to consider, as appropriate, privacy and confidentiality as they relate to the study.

Situation 2:  An investigator uses ChatGPT to create a consent form. The investigator intends to print the consent form and obtain signatures in-person.

IRB Response:  The IRB can approve these procedures, so long as the consent form meets the regulatory or ethical requirements of informed consent and the signed consent forms are not provided to the AI tool.

Situation 3:  An investigator intends to conduct a study on student interactions with an AI chatbot. The study will take place remotely, and the AI chatbot obtains informed consent directly from participants before the study interactions begin.

IRB Response:  The IRB will require changes to the study procedures for the informed consent process. To ensure that informed consent is effective and ongoing, investigators must have a mechanism to be actively engaged in the informed consent process. Additionally, the IRB will request information about the oversight and monitoring of the interaction with the chatbot.

Situation 4:  An investigator wishes to conduct survey research. Participants will input their data into a survey tool that will conduct real-time analysis for the investigator. Informed consent will be obtained using REDCap and, subsequently, those who agree to participate will be redirected to the survey platform.

IRB Response:  The IRB can approve this consent process if the completed consent forms are not shared with the AI tool. Similar to Situation 1, the consent process occurs separately from the AI tool.

Situation 5:  An investigator conducting interview research intends to use otter.ai for transcription services. The investigator will obtain verbal consent prior to the interview and plans to record the consent. The investigator plans to provide otter.ai with all recordings for transcription.

IRB Response:  The IRB will require changes to the consent procedures. Because the recordings include the recorded verbal consent, providing all recordings to otter.ai would share consent information with the AI tool. Consent information should be kept separate from AI tools, especially when a participant does not consent to the research.

  5. Identifiable Data

Investigators should be aware of, and cautious about, an AI tool’s ability to create indirect identifiers within a dataset. The HRPP recommends that investigators, to the extent allowed within their research, limit an AI tool’s access to demographic information and other data points that could potentially reidentify an individual (e.g., combinations of data points that together could identify an individual). The HRPP recommends and, in certain situations, will require that investigators place limitations and parameters on the AI tool’s data use and submit documentation of those limitations.
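A simple screen for risky combinations of indirect identifiers can be sketched as below. This is an illustrative k-anonymity-style check, not an HRPP-endorsed tool; the column names (`zip_code`, `birth_year`, `gender`) are hypothetical examples of quasi-identifiers.

```python
# Illustrative sketch: flag records whose combination of indirect
# (quasi-)identifiers is rare enough to single out an individual,
# before any data is shared with an AI tool. Column names are examples.
from collections import Counter

QUASI_IDENTIFIERS = ["zip_code", "birth_year", "gender"]  # hypothetical

def risky_records(rows, quasi_identifiers=QUASI_IDENTIFIERS, k=5):
    """Return rows whose quasi-identifier combination is shared by
    fewer than k records (a basic k-anonymity check)."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return [r for r in rows
            if combos[tuple(r[q] for q in quasi_identifiers)] < k]
```

Records flagged by such a check could be generalized, suppressed, or withheld from the AI tool before submission.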

  6. Bias

Research has shown that AI tools can reflect or propagate biases arising from a number of factors, including the data used to train and operate the AI tool. The HRPP recommends that investigators develop and submit plans to routinely evaluate the AI tools used in their protocols for bias. Such testing may include, but is not limited to, test cases or replication tests.

  7. Limitations

Regardless of the level of review (e.g., exempt, expedited, full board) applied to a study, the HRPP has determined that:

The AI tool may not have access to the following data in identifiable format, including the combination of indirect identifiers that could reasonably identify a participant:

  • Data derived from biospecimens, including blood samples;
  • Genomic data;
  • Family Educational Rights and Privacy Act (FERPA)-protected data; or
  • Data that could reasonably put participants at risk:
    • Example: Audio/video recordings about sensitive or illegal topics.


  8. Health Insurance Portability and Accountability Act (HIPAA)-Protected Data

The IRB will consider, on a case-by-case basis, permitting AI tools to have access to identifiable health record data, provided participants have given express consent, including signing the relevant HIPAA Authorization Forms. In addition to all information required by HIPAA, the HIPAA Authorization Forms must disclose the use of the AI tool in a manner equivalent to the informed consent document (see above).

The HRPP will not approve research with AI tools that have been given direct access to an electronic health record (“EHR”) system within the UT system. Rather, UT researchers must curate data to remove the 18 HIPAA identifiers prior to providing it to the AI tool. In cases where another institution has provided the AI tool with access to its EHR, the HRPP will require written documentation of approval of access from the owner or administrator of the EHR.
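The curation step above might be sketched as follows. This is a hedged illustration only: it covers removal of direct-identifier fields, while the full HIPAA Safe Harbor standard also restricts dates, small geographic units, and ages over 89. The field names are hypothetical, not drawn from any UT system.

```python
# Illustrative sketch: dropping direct-identifier fields from a curated
# EHR extract before it is provided to an AI tool. Field names are
# hypothetical; this does NOT by itself satisfy HIPAA Safe Harbor,
# which also constrains dates, geography, and extreme ages.

DIRECT_IDENTIFIER_FIELDS = {
    "name", "street_address", "phone", "fax", "email", "ssn",
    "mrn", "health_plan_id", "account_number", "license_number",
    "vehicle_id", "device_id", "url", "ip_address", "biometric_id",
    "photo",
}

def strip_direct_identifiers(record):
    """Return a copy of the record without direct-identifier fields."""
    return {k: v for k, v in record.items()
            if k not in DIRECT_IDENTIFIER_FIELDS}
```

Curation of this kind should happen on UT systems, so the AI tool never receives the identified source data.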

  9. Data Scraping

The use of AI tools to scrape data from websites is increasingly common. The HRPP may approve these techniques in the context of human subjects research provided the overall research meets the applicable criteria for approval. Investigators will be required to submit, as part of their research proposal, the appropriate limitations and parameters placed on the AI tool.
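The kinds of "limitations and parameters" an investigator might document can be sketched as below. This is an illustrative example, not an HRPP requirement; the user-agent string and URLs are placeholders.

```python
# Illustrative sketch of documented scraping limits: the scraper honors
# robots.txt and rate-limits its requests. User agent and URLs are
# placeholders, not real identifiers.
import time
import urllib.robotparser

USER_AGENT = "ut-research-bot-example"  # placeholder identifier
DELAY_SECONDS = 2.0                     # example rate limit per request

def build_policy(robots_txt_lines):
    """Parse a site's robots.txt content into a policy object."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt_lines)
    return rp

def fetch_allowed_pages(policy, urls, fetch):
    """Fetch only URLs that robots.txt permits, pausing between requests."""
    results = {}
    for url in urls:
        if policy.can_fetch(USER_AGENT, url):
            results[url] = fetch(url)
            time.sleep(DELAY_SECONDS)  # documented rate limit
    return results
```

Documenting parameters like the allowed paths, request rate, and data fields collected gives the IRB a concrete basis for review.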

  10. Interventions

In certain cases, AI tools may be used to engage in intervention-based research with participants. Such interactions between AI tools and participants are approvable, provided the research meets the criteria for approval applied by the IRB. As part of the submission to the IRB, investigators must submit the following:

  • A full description of the planned interaction between the participant and the AI tool;
  • A description of the data that the AI tool will be designed to collect;
  • Documentation of the parameters or limits placed on the AI tool for the intervention, data collection, and (if applicable) data analysis;
  • Scripts or texts of instructions that will be read or provided to participants as part of the interaction with the AI tool; and
  • A plan to monitor the safety of participants and their data during and after the intervention.

The HRPP strongly recommends and, in some cases, will require that investigators directly monitor any direct interaction between AI tools and participants.

  11. Exemptions

The HRPP has reviewed, and will continue to review, research that utilizes AI as exempt when the appropriate criteria are met. The exempt categories are available in SOP 5.2; for studies that use AI, all other portions of the exempt criteria must be met, in addition to the limitations placed on each exemption by the HRPP, as set out below.

Exempt Category 1:  Research, conducted in established or commonly accepted educational settings, that specifically involves normal educational practices that are not likely to adversely impact students’ opportunity to learn required educational content or the assessment of educators who provide instruction. This includes most research on regular and special education instructional strategies, and research on the effectiveness of or the comparison among instructional techniques, curricula, or classroom management methods.

Limitation:  The HRPP will allow this exemption only when all of the following are met:

  • The AI tool will not access identifiable data about children;
  • The AI tool will only have access to data that is not sensitive in nature; and
  • The AI tool will only have access to data that does not reasonably place participants at risk.

Exempt Category 2:  Research that only includes interactions involving educational tests (cognitive, diagnostic, aptitude, achievement), survey procedures, interview procedures, or observation of public behavior (including visual or auditory recording).

Limitation:  The HRPP will allow this exemption only when at least one of the following is met:

  • The AI tool’s access is limited to deidentified data;
  • The AI tool’s access to identifiable data is limited to data that would not reasonably place participants at risk; or
  • When data could reasonably place participants at risk, all of the following must be met:
    • The AI tool is not accessible by the public;
    • The researcher presents documentation that the AI tool adequately protects the privacy and confidentiality of participants (e.g., the AI tool is only accessible by the research team for the purpose of the research); and
    • The participant agrees or consents to the use of the AI tool.

Exempt Category 3:  Research involving benign behavioral interventions in conjunction with the collection of information from an adult subject through verbal or written responses (including data entry) or audiovisual recording, if the subject prospectively agrees to the intervention.

Limitation:  The HRPP will allow this exemption only if all of the following are met:

  • When the study involves a direct interaction between the AI tool and the participant, the intervention guidance above is followed; and
  • One of the following criteria is met:
    • The AI tool’s access is limited to deidentified data;
    • The AI tool’s access to identifiable data is limited to data that would not reasonably place participants at risk; or
    • When data could reasonably place participants at risk, all of the following must be met:
      • The AI tool is not accessible by the public;
      • The researcher presents documentation that the AI tool adequately protects the privacy and confidentiality of participants (e.g., the AI tool is only accessible by the research team for the purpose of the research); and
      • The participant agrees or consents to the use of the AI tool.

Exempt Category 4(i):  Secondary research for which consent is not required when the analysis is limited to identifiable private information that is publicly available.

Limitation:  The HRPP will approve this exemption only when the investigator can provide documentation that the AI tool’s access to data is limited to publicly available data.

Exempt Category 4(ii):  Secondary research for which consent is not required when the analysis is recorded by investigators or the AI tool in such a manner that the identity of the human subjects cannot be readily ascertained directly or through identifiers linked to the subjects, the investigator or the AI tool does not contact the subjects, and the investigator or the AI tool will not re-identify the subjects.

Limitation:  The HRPP will approve this exemption only when all of the following are met:

  • The AI tool will not have access to identifiers, including previously existing code keys; and
  • Documentation exists and is provided that confirms the AI tool is not permitted to attempt to re-identify participants within the dataset.

Exempt Category 4(iii):  Secondary research for which consent is not required when the analysis is limited to the use of identifiable health information for the purposes of “health care operations” or “research”.

Limitation:  The HRPP will not approve this exemption category for protocols using AI tools. Please note that the IRB will consider this type of research using AI tools only under expedited or full board review.

Exempt Category 4(iv):  Secondary research for which consent is not required when the research is conducted by, or on behalf of, a Federal department or agency using government-generated or government-collected information obtained for non-research activities.

Limitation:  The HRPP will not approve this exemption category for protocols using AI tools. Please note that the IRB will consider this type of research using AI tools only under expedited or full board review.

Exempt Category 5:  Research and demonstration projects that are conducted or supported by a Federal department or agency, or otherwise subject to the approval of department or agency heads (or the approval of the heads of bureaus or other subordinate agencies that have been delegated authority to conduct the research and demonstration projects), and that are designed to study, evaluate, improve, or otherwise examine public benefit or service programs, including procedures for obtaining benefits or services under those programs, possible changes in or alternatives to those programs or procedures, or possible changes in methods or levels of payment for benefits or services under those programs.

Limitation:  The HRPP will review these projects on a case-by-case basis.

Exempt Category 6:  Taste and food quality evaluation and consumer acceptance studies.

Limitation:  The HRPP will allow this exemption only if all of the following are met:

  • When the study involves a direct interaction between the AI tool and the participant, the intervention guidance above is followed; and
  • One of the following criteria is met:
    • The AI tool’s access is limited to deidentified data; or
    • The AI tool’s access to identifiable data is limited to data that would not reasonably place participants at risk.
      • NOTE: If a study is subject to limited IRB review under this category, the AI tool may not have access to any identifiable data.

  12. Non-Exempt Research

The HRPP and IRB will review the use of an AI tool in non-exempt research on a case-by-case basis, taking into account the relevant criteria for approval in relationship to the study procedures and the use of the AI tool. For all studies, relevant limitations described in sections 2-11 of this guidance apply. Biomedical studies or clinical trials must follow all relevant guidance provided in section 13 of this document.

  13. Food and Drug Administration (FDA)-Regulated Research

Investigators should be aware that the FDA’s regulations on mobile medical applications and medical devices may apply to research using AI tools; investigators are strongly encouraged to contact the HRPP when planning research protocols that may be considered clinical trials.

  14. Collaborative Research

Investigators should be aware of this guidance when engaging in collaborative research using AI tools, wherein multiple institutions and/or researchers are engaged in the research activities. When UT is the reviewing IRB, the UT IRB will apply this guidance to the extent applicable in its review of the research study. When another IRB is the reviewing IRB, UT will utilize this guidance as part of its “local context” review of the study.