This FAES course covers advanced SAS coding concepts, including the SAS Macro facility, SAS SQL, and the combination of the two. The course also introduces students to SAS/STAT coding for common statistical tests (such as the t-test, ANOVA, and linear regression). Students have the opportunity to practice in class using sample datasets, and homework and project assignments are provided as well.
The public funding of research includes many discrete components: setting research priorities; securing funds; funding research infrastructure; selecting and funding meritorious projects; conducting research; monitoring research progress; communicating research findings; and training researchers. This FAES survey course is designed to review theories, methods, and practices in program and policy evaluation as they relate to research, particularly publicly funded biomedical research. The full range of the evaluation hierarchy (needs assessment and program planning, feasibility and implementation evaluation, process evaluation, and outcome and impact evaluation) is considered, and students are guided to develop a comprehensive framework for the evaluation of federally funded biomedical research.
In her webinar, Dr. Cara Lewis provides an overview of findings from the Society for Implementation Research Collaboration (SIRC) Enhanced Systematic Review and Synthesis of Measures, and discusses several measurement issues that threaten the field of implementation science, as well as directions for possible solutions.
Dr. Diez Roux is internationally known for her research on the social determinants of population health and the study of how neighborhoods affect health. Her work on neighborhood health effects has been highly influential in the policy debate on population health and its determinants. Her research areas include social determinants and health disparities, environmental health, urban health, psychosocial factors in health, cardiovascular disease epidemiology, and the use of multilevel methods.
This FAES course gives a broad, conceptual overview of the most popular machine learning algorithms, followed by examples of how and when to apply them to real data. Best practices in designing machine learning analyses are emphasized and reviewed, along with how to avoid common pitfalls and how to interpret analysis results.
In this Methods: Mind the Gap webinar, Dr. David MacKinnon describes mediation analysis methods, with attention to solutions for some of their limitations. He also discusses future directions in mediation theory and statistical analysis.
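As a minimal sketch of the product-of-coefficients idea that underlies single-mediator analysis (this code and its toy data are illustrative assumptions, not material from the webinar): the mediator M is regressed on X to obtain path a, the outcome Y is regressed on X and M to obtain paths c′ and b, and the indirect effect is estimated as a·b.

```python
from statistics import fmean

def mediation_effects(x, m, y):
    """Estimate single-mediator paths via ordinary least squares.

    Model: M = i1 + a*X;  Y = i2 + c'*X + b*M
    Returns (a, b, c_prime, indirect), where indirect = a*b.
    """
    def dev(v):
        mu = fmean(v)
        return [vi - mu for vi in v]

    dx, dm, dy = dev(x), dev(m), dev(y)
    sxx = sum(d * d for d in dx)
    smm = sum(d * d for d in dm)
    sxm = sum(p * q for p, q in zip(dx, dm))
    sym = sum(p * q for p, q in zip(dy, dm))
    syx = sum(p * q for p, q in zip(dy, dx))

    a = sxm / sxx                               # path a: M regressed on X
    denom = sxx * smm - sxm ** 2                # nonzero when X and M are not collinear
    b = (sym * sxx - syx * sxm) / denom         # path b: effect of M on Y, adjusting for X
    c_prime = (syx * smm - sym * sxm) / denom   # direct effect of X on Y, adjusting for M
    return a, b, c_prime, a * b

# Deterministic toy data built so that M = 2X + e and Y = 3M + X exactly
x = [0, 1, 2, 3]
m = [1, 1, 3, 7]
y = [3, 4, 11, 24]
a, b, c_prime, indirect = mediation_effects(x, m, y)
print(a, b, c_prime, indirect)  # 2.0 3.0 1.0 6.0
```

Note that the total effect decomposes as c = c′ + a·b (here 1 + 6 = 7), which matches the simple regression slope of Y on X for these data.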
Applying Models and Frameworks to Dissemination and Implementation (D&I) Research: An Overview & Analysis
As part of a joint presentation, Dr. Rachel Tabak presents a review that uses snowball sampling to develop an inventory of models, synthesizes this information, and provides guidance on how to select a model. Dr. Ted Albert Skolarus discusses an examination of the citation frequency and impact of D&I models using citation analysis.
Approaches to Evidence Synthesis in Systematic Reviews of Public Health Interventions: Methods and Experiences of the Community Preventive Services Task Force
In this Methods: Mind the Gap webinar, Dr. David Hopkins discusses the conceptual decisions to emphasize a broad consideration of available evidence in reviews of public health interventions; the methods required to ensure a balanced assessment of mixed bodies of evidence; and the factors weighed by the CPSTF in translating evidence into conclusions on effectiveness and recommendations regarding use.
This webinar presents insights from a National Academies report exploring how reports on obesity prevalence and trends differ and what these differences mean for interpretation and application. Speakers provide an overview of the various data collection and analysis approaches that have been used across population groups, particularly as they relate to children and adolescents.
In this Methods: Mind the Gap webinar, Dr. Jason Moore reviews the new discipline of automated machine learning (AutoML). The goal of AutoML is to simplify the process of combining different types of algorithms and methods in an analytical pipeline and to make machine learning more accessible.
Balancing Fidelity & Adaptation: If We Want More Evidence-Based Practice, We Need More Practice-Based Evidence
In this webinar, Drs. Larry Green and Rachel Gold deliver a joint presentation on fidelity and adaptation, which concern the manner in which the evidence from a research study is brought to practice. There is fidelity if the program is implemented in a way that is very similar to how it was originally designed, and there is adaptation when changes are made to the process and content of the program to fit a particular context. In most cases, contextual factors influence both the ability to maintain fidelity and the need for adaptation.
A collection of online chapters that provide an introduction to selected behavioral and social science research approaches, including theory development and testing, survey methods, measurement, and study design. eSource was developed in 2010, and these chapters have not been updated to reflect advances in the past decade. However, they can still be used as supplementary teaching materials.
A series of six webinars related to designing clinical trials to include patient-reported outcomes. The videos in the series may be viewed in any order.
A report that provides guidance to NIH investigators on how to rigorously develop and evaluate mixed methods research applications.
Big Data and the Promise and Pitfalls When Applied to Disease Prevention and Promoting Better Health
How disruptive will Big Data be in the long run to biomedical research and health care? In his Methods: Mind the Gap webinar, Dr. Philip Bourne addresses this question in light of the Big Data to Knowledge (BD2K) initiative and other trans-NIH data science programs.
This FAES Graduate School course introduces students to the theory and practice of cancer screening in the United States. Students learn about the methodology used to assess cancer screening tests; how to interpret cancer screening data; and how to identify potential benefits and harms of cancer screening. They also become familiar with the evidence in favor of and against population-based screening for breast, colorectal, lung, cervical, and prostate cancer, as well as the controversies that surround mass screening for these diseases.
In this Methods: Mind the Gap webinar, Dr. Stuart G. Baker focuses on the method of latent class instrumental variables (often called the LATE or CACE approach). He discusses how to estimate the effect of treatment received (in the complier latent class) in a randomized trial with all-or-none compliance.
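A minimal sketch of the standard Wald-type estimator for the complier average causal effect (CACE) in a randomized trial with all-or-none compliance, under the usual instrumental-variable assumptions: the intention-to-treat effect on the outcome is divided by the effect of assignment on treatment receipt. The function name and toy data are invented for illustration; this is not code from the webinar.

```python
from statistics import fmean

def cace_estimate(assigned, received, outcome):
    """Wald-type CACE estimator for all-or-none compliance.

    assigned: 0/1 randomized assignment (the instrument)
    received: 0/1 treatment actually received
    outcome:  observed outcome

    CACE = (intention-to-treat effect on outcome)
           / (effect of assignment on treatment receipt)
    """
    y1 = fmean(y for z, y in zip(assigned, outcome) if z == 1)
    y0 = fmean(y for z, y in zip(assigned, outcome) if z == 0)
    d1 = fmean(d for z, d in zip(assigned, received) if z == 1)
    d0 = fmean(d for z, d in zip(assigned, received) if z == 0)
    return (y1 - y0) / (d1 - d0)

# Toy trial: 4 assigned to treatment (3 comply), 4 assigned to control (none treated)
z = [1, 1, 1, 1, 0, 0, 0, 0]
d = [1, 1, 1, 0, 0, 0, 0, 0]
y = [10, 10, 10, 4, 4, 4, 4, 4]
print(cace_estimate(z, d, y))  # 6.0
```

The intention-to-treat effect here is 4.5, diluted by the 75% compliance rate; dividing recovers the effect of 6.0 among compliers.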
This archive provides a collection of webinars on methodology. The topics include HIV prevention, implementation methods, personalized medicine, complexity, and longitudinal data. In 2017, the Office of Disease Prevention (ODP) provided co-funding to the Center for Prevention Implementation Methodology to help create this archive.
In this Methods: Mind the Gap webinar, Dr. Evan Mayo-Wilson discusses the consequences of “multiplicity” for clinical investigators, systematic reviewers and guideline developers, and clinical decision-makers. He highlights some potential solutions to these challenges, including prospective registration and core outcome sets.
A collection of training modules that came out of the NIH's initiative to enhance rigor and reproducibility in the research endeavor. The modules were developed by the NIH or NIH-funded grantees and focus on a variety of topics, including integrating sex and gender into research, the design and analysis of group-randomized trials, and computational analyses.