ORIGINAL ARTICLE
J Edu Health Promot 2020,  9:233

Psychometric evaluation of a questionnaire to evaluate organizational capacity development for faculty development programs


1 Department of Medical Education, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
2 Department of Medical Education, Edge Hill University Medical School, Edge Hill University, Ormskirk, UK
3 Department of Medical Education, School of Medicine, Tehran University of Medical Sciences, Tehran; Education Development Center, Zanjan University of Medical Sciences, Zanjan, Iran
4 Department of Medical Education, School of Medicine; Health Professions Education Research Center; Department of Internal Medicine, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
5 Department of Medical Education, School of Medicine; Education Development Center, Tehran University of Medical Sciences, Tehran, Iran

Date of Submission04-Apr-2020
Date of Acceptance27-May-2020
Date of Web Publication28-Sep-2020

Correspondence Address:
Dr. Roghayeh Gandomkar
Department of Medical Education, Tehran University of Medical Sciences, No. 57, Hojjatdust Alley, Naderi St., Keshavarz Blvd., Tehran 141663591
Iran

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jehp.jehp_292_20

Abstract


Background: Organizational capacity development is an important outcome of faculty development programs, but an appropriate instrument for its evaluation is lacking.
Aims and Objectives: The aim of this study was to develop a questionnaire to evaluate organizational capacity development for faculty development programs and to test its psychometric properties.
Materials and Methods: The initial capacity development for faculty development questionnaire (CDQ-FD) of 26 items was developed based on a literature review and the opinion of experts. Content validity ratio (CVR), content validity index (CVI), content validity index for items (I-CVI), and content validity index for scales (S-CVI) were computed for content validity. Confirmatory factor analysis (CFA) and exploratory factor analysis (EFA) were performed for construct validation.
Results: The scores for CVR, CVI, I-CVI, and S-CVI were 0.71, 0.83, 0.87, and 0.90, respectively. EFA resulted in a three-factor model with a total extracted variance of 64%. Cronbach's alpha and the Spearman-Brown coefficient were computed for reliability assessment: the Cronbach's alpha of the overall scale was 0.80 and its test-retest reliability was 0.78. The final CDQ-FD contained 21 items in three categories.
Conclusions: The CDQ-FD appears to be a valid and reliable instrument for the evaluation of organizational capacity development for faculty development in medical education.

Keywords: Capacity building, empowerment, faculty, medical education, program evaluations, psychometric, questionnaire, staff development


How to cite this article:
Salajegheh M, Sandars J, Norouzi A, Mirzazadeh A, Gandomkar R. Psychometric evaluation of a questionnaire to evaluate organizational capacity development for faculty development programs. J Edu Health Promot 2020;9:233

How to cite this URL:
Salajegheh M, Sandars J, Norouzi A, Mirzazadeh A, Gandomkar R. Psychometric evaluation of a questionnaire to evaluate organizational capacity development for faculty development programs. J Edu Health Promot [serial online] 2020 [cited 2020 Oct 22];9:233. Available from: https://www.jehp.net/text.asp?2020/9/1/233/296388




Introduction


Faculty development programs are an essential component of the academic success of individual faculty members as well as their institution.[1],[2] One significant step in maintaining the effectiveness of faculty development programs is the evaluation of their outcomes. However, most research has focused on measuring only short-term outcomes, especially at an individual level. Examples include evaluations of participant satisfaction,[3],[4],[5] exploration of participant attitude, knowledge, or skills,[6],[7],[8] and assessment of changes in participant behaviors.[9],[10] Despite increasing demands for the evaluation of faculty development programs at a much broader level beyond individual aspects, little has been published on the impact of such programs on organizations in medical education. One important impact of these programs may be on promoting and developing the capacities of the organization in which teachers work.[11],[12],[13]

Capacity development can be considered to be the changes in the behavior of both individuals and organizations, such as the growth of new knowledge, skills, attitudes, values, and relationships, that lead to improved organizational performance.[14] These new capabilities engage individual faculty members with the various members of the wider organizational system, including other educators and administrators, to empower changes in the organization at both individual and collective levels.[15],[16],[17] One of the difficulties in evaluating capacity development is that each educational program may use a unique set of approaches and strategies,[18] and therefore requires specific evaluation tools.[19]

Few studies have explored capacity development for faculty development in medical education. Capacity development was identified by Frantz et al. as one of the five key themes in participant perceptions of a faculty development program in sub-Saharan Africa.[20] Another study by Frantz et al. used participant interviews to investigate the contribution of a faculty development program to individual and collective capacity development in sub-Saharan Africa.[21] To the best of our knowledge, no previous study has validated a questionnaire for evaluating organizational capacity development for faculty development programs. Exploring organizational capacity development for faculty development programs is essential, since it helps policy-makers of faculty development programs to understand the strengths and limitations of the capacity development process, informing their future planning to reinforce or modify subsequent programs.

Given the importance of faculty development programs having an impact at organizational level and because of the lack of a specific instrument for evaluating capacity development for these programs in medical education, this study aimed to develop and test the psychometric properties of a questionnaire to evaluate the organizational capacity development for faculty development programs.


Subjects and Methods


Setting

The research was conducted at Tehran University of Medical Sciences (TUMS) in Iran between 2017 and 2019. The TUMS's institutional review board approved the study (No.IR.TUMS.IKHC.REC.1396.4122). The participants did not receive any incentives, and participation was voluntary.

The “Basic Teaching Skills Course” is one of the faculty development programs implemented at TUMS to help new faculty members fulfill their teaching roles. The course has been running since 2003 and covers essential subjects for teaching effectiveness, such as instructional design, teaching methods, and student assessment. It is delivered in an interactive format with lectures, group work, and practice-based assignments.

Item development

The items in the capacity development for faculty development questionnaire (CDQ-FD) were developed based on a previous literature review[22] and also the opinion of experts to ensure that they were relevant to the specific context of medical education.[23]

A comprehensive literature review was performed to identify a list of indicators of organizational capacity development for faculty development. Studies were included in the review if they met the following criteria: (1) focused on capacity development for faculty development programs in higher education and medical education, (2) published in the English language, and (3) published between 1980 and 2017. The literature was searched using Medline, ERIC (EBSCO), Scopus, Embase, Web of Science, and Google Scholar with the key words: staff/faculty/teacher development, faculty/teacher/staff continuous professional development, organizational capacity development/building, and enhancement.

An expert panel session with nine key informants from faculty development program providers at TUMS was conducted in 2017 using the nominal group technique to elicit the indicators of organizational capacity development for faculty development programs. The expert group was not provided with the items from the literature review. The group members suggested the indicators inductively through a brainstorming process. After these two steps, the researchers merged the common indicators from the literature review and the expert group. The indicators from each source were conceptually similar but used different terminology. The researchers chose to prefer the vocabulary of the experts in developing the items of the CDQ-FD to ensure greater potential content validity. Some indicators appeared in only one source; the researchers retained these as well.

Psychometric evaluation

Content validation

The content validity of the initial CDQ-FD was investigated both quantitatively and qualitatively through expert opinion. Ten experts were recruited based on their experience in the management and administration of faculty development programs and their expertise in organizational capacity development; they were selected from several universities of medical sciences in Iran. Experts were asked to consider each item of the CDQ-FD against the criteria of “essential,” “relevance,” “clarity,” and “simplicity.” Each item was assessed using Likert scales: a three-point scale for “essential” (1 – unessential, 2 – useful but not essential, 3 – essential) and four-point scales for “relevance” (1 – not relevant, 2 – rather relevant, 3 – relevant, 4 – completely relevant) and “clarity” (1 – not simple, 2 – rather simple, 3 – simple, 4 – completely simple). In addition, the experts were asked to comment on the “simplicity” of each item (fluency and the use of simple, understandable words) as well as the most appropriate placement and order of the items.

We examined content validity by computing the content validity ratio (CVR) and content validity index (CVI) using the item relevancy ratings provided by the content experts.[24] Furthermore, some studies have shown that the chosen method may influence which items are deleted.[25] Hence, we used additional indices for investigating the CVI: the content validity index for items (I-CVI) and the content validity index for scales (S-CVI).[26]

Given that ten experts evaluated the items, the minimum acceptable CVR was 0.62 based on Lawshe's table. In the Waltz and Bausell method, the CVI for each item is the number of experts who scored 3 or 4 on the “relevancy,” “clarity,” and “simplicity” criteria divided by the total number of respondents. An item with a score above 0.79 is retained in the questionnaire; if the CVI is between 0.70 and 0.79, the item is questionable and needs correction and revision; and if it is less than 0.70, the item is unacceptable and must be deleted. In Lynn's method, the CVI of an item (I-CVI) is the number of experts who scored 3 or 4 on the “relevancy” criterion for that item divided by the total number of respondents. An item with an I-CVI above 0.78 remains in the questionnaire; an item scoring below 0.78 is questionable and needs correction and revision. To calculate the S-CVI, the average approach (S-CVI/Ave) was used: the I-CVI scores on the relevancy criterion were averaged, and the resulting S-CVI/Ave should be 0.90 or higher.
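These thresholds can be made concrete with a short Python sketch. The ratings below are invented for illustration and are not the study's data; the functions implement the standard Lawshe CVR and Lynn I-CVI/S-CVI-Ave calculations.

```python
# Invented expert ratings for illustration (not the study's data).

def cvr(n_essential, n_experts):
    """Lawshe's content validity ratio: (ne - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

def i_cvi(relevance_ratings):
    """Lynn's item-level CVI: proportion of experts rating 3 or 4."""
    return sum(1 for r in relevance_ratings if r >= 3) / len(relevance_ratings)

def s_cvi_ave(items_ratings):
    """Scale-level CVI (average approach): mean of the items' I-CVIs."""
    return sum(i_cvi(r) for r in items_ratings) / len(items_ratings)

# 9 of 10 experts rate an item "essential": CVR = (9 - 5) / 5 = 0.8,
# above the 0.62 cutoff that Lawshe's table gives for 10 experts.
print(cvr(9, 10))

# One expert rates relevance below 3: I-CVI = 9/10 = 0.9 > 0.78, so retain.
print(i_cvi([4, 4, 3, 3, 4, 3, 2, 4, 3, 4]))
```

An item endorsed by all ten experts yields a CVR of 1.0, while an item endorsed by exactly half yields 0 and would fall below the 0.62 cutoff.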

Construct validation

The CDQ-FD, modified based on the content validity results, was sent to 311 faculty members of TUMS who had participated in the Basic Teaching Skills Course. It was redistributed two more times at approximately 4-week intervals via e-mail and followed up through social media.

For investigating construct validity, a confirmatory factor analysis (CFA) was first performed to examine the assumed five-factor structure of the CDQ-FD with LISREL software (version 8.8, New Jersey). Several fit indices were used to assess the fit of the hypothesized model to the data: the comparative fit index (CFI), goodness of fit index (GFI), and adjusted goodness of fit index (AGFI), with values of about 0.9 considered adequate; and the standardized root mean square residual (SRMR) and root mean square error of approximation (RMSEA), which should be approximately equal to or less than 0.08 to indicate adequate fit of the model to the data.[27]
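As an illustration of how one of these indices is obtained, RMSEA can be computed from the model chi-square, its degrees of freedom, and the sample size. This is a generic sketch of the standard formula with hypothetical numbers, not output from the study's LISREL analysis.

```python
def rmsea(chi_square, df, n):
    """Root mean square error of approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return (max(chi_square - df, 0.0) / (df * (n - 1))) ** 0.5

# A hypothetical model: chi-square = 100 on 50 df with n = 201 respondents.
print(round(rmsea(100.0, 50, 201), 3))  # 0.071; values <= 0.08 suggest adequate fit
```

A model whose chi-square does not exceed its degrees of freedom yields an RMSEA of 0, the best possible value under this formula.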

In the next step, exploratory factor analysis (EFA) followed by varimax rotation was applied to determine the factorial structure of the questionnaire. We applied the Kaiser-Meyer-Olkin (KMO) measure and Bartlett's test to assess the sample adequacy and sphericity of the CDQ-FD, respectively. A KMO value equal to or above 0.70 and a significant Bartlett's test of sphericity were considered acceptable criteria for sample adequacy and factorability of the correlation matrix. The criteria for retaining items and factors were extraction (loading) values above 0.32 and eigenvalues above 1.0.
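The eigenvalue-above-1.0 retention rule can be sketched with numpy on a small correlation matrix. The matrix below is invented for illustration (two pairs of strongly correlated items) and is not the study's data.

```python
import numpy as np

# Invented correlation matrix for four items: items 1-2 and items 3-4
# form two correlated pairs (r = 0.7) with weak cross-pair correlation.
R = np.array([
    [1.0, 0.7, 0.1, 0.1],
    [0.7, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.7],
    [0.1, 0.1, 0.7, 1.0],
])

eigenvalues = np.linalg.eigvalsh(R)[::-1]        # sorted descending
retained = eigenvalues[eigenvalues > 1.0]        # Kaiser criterion
explained = retained.sum() / eigenvalues.sum()   # proportion of total variance

print(retained.size)                  # number of factors kept under the rule
print(round(float(explained), 2))     # variance explained by those factors
```

For this matrix the two pair-structure factors have eigenvalues above 1 and together account for 85% of the total variance, mirroring how the scree plot and eigenvalue rule were used in the study.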

Reliability assessment

The internal consistency of the CDQ-FD was investigated with Cronbach's alpha; an internal consistency of more than 0.7 was considered suitable. For determining instrument stability, the test-retest method was used. The CDQ-FD was administered to 15 faculty members of TUMS under similar conditions, with a 7-day interval between the first and second assessments. This group was not included in the subsequent phase and differed from the construct validity participants. The two sets of scores were compared with the Spearman-Brown coefficient, and the minimum acceptable correlation coefficient was set at 0.7. The overall CDQ-FD development and validation process is shown in [Figure 1].
Figure 1: Overall development and validation process of capacity development for faculty development questionnaire

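The two reliability statistics used in this study can be sketched in pure Python. The scores below are invented; the stability estimate here applies the Spearman-Brown prophecy formula to the Pearson correlation between the two administrations, which is an assumption about the exact procedure used.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list of respondent scores per item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(item_scores)
    totals = [sum(person) for person in zip(*item_scores)]
    item_var = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

def pearson_r(x, y):
    """Pearson correlation between two score lists of equal length."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_brown(r):
    """Spearman-Brown prophecy formula applied to a correlation r."""
    return 2 * r / (1 + r)

# Invented data: 3 items answered by 5 respondents, and two administrations
# of a total score one week apart.
items = [[4, 3, 5, 4, 2], [4, 4, 5, 3, 2], [3, 3, 4, 4, 2]]
alpha = cronbach_alpha(items)                       # internal consistency
first = [10, 12, 9, 14, 11]
second = [11, 12, 10, 13, 11]
stability = spearman_brown(pearson_r(first, second))
```

As a sanity check, perfectly parallel items (identical score lists) yield an alpha of exactly 1.0, and a test-retest correlation of 0.5 steps up to about 0.67 under Spearman-Brown.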



Results


Demographic data

All 10 experts completed the content validation form. The majority (70%) were women; 50% were assistant professors, 30% were associate professors, one participant was a professor, and one was an instructor. Of the 311 faculty members recruited, 203 completed the CDQ-FD for the construct validity investigation, yielding a response rate of 64.9%. The sample size appeared sufficient given the recommendation of 5–10 persons per questionnaire item for factor analysis.[28] Female participants (49.5%) were almost equal in number to male participants. Most participants were assistant professors (88.8%), and 71.2% were affiliated with clinical science departments. Most (71.7%) had 1–5 years' experience as faculty members, and the majority (69.8%) were from the school of medicine.

Content validity

The initial CDQ-FD consisted of 26 items divided into five categories [Online Supplemental [Appendix 1], English version of the CDQ-FD]. Experts' corrective comments about the wording of items, such as fluency, the use of simple and understandable words, and the suitable placement of words, were applied. Five items were revised to make their wording easier to understand. For example, based on the experts' comments, the item “enthusiasm and self-confidence in teaching” was separated into “enthusiasm in teaching” and “self-confidence in teaching.”

The overall CVR was 0.71, which was acceptable. The CVI for all items was 0.83 using the Waltz and Bausell method (relevance 0.80, clarity 0.81, and simplicity 0.88). Three items with CVI <0.70 were removed, as they were identified as vague or similar to other items. Nine items were corrected and accepted.



Calculating the I-CVI led to the removal of four items with a score of 0.5. One item was corrected and accepted, and the remaining items, all with scores of 0.87, were retained. The scores gained by each item are presented in [Table 1]. The S-CVI/Ave was 0.90, which is appropriate.
Table 1: Item's content validity index for item score



Construct validity

The results of the CFA showed an inadequate fit for the five-factor structure of the questionnaire (RMSEA: 0.13, GFI: 0.70, AGFI: 0.63, CFI: 0.89, NNFI: 0.87, and SRMR: 0.073).

EFA and sample size adequacy were examined using SPSS software. The results showed that item D5Q3 prevented the correlation matrix from being positive definite; after this item was deleted, the KMO and Bartlett's test results indicated adequate sample size and factorability of the correlation matrix for conducting EFA (KMO index = 0.923, Bartlett's test = 3645.222, df = 210, P < 0.001).

Inspection of the scree plot revealed three factors with eigenvalues greater than 1, which together explained 61.4% of the total variance. The item D1Q1 did not load on any of the extracted factors; this item was therefore deleted, and the remaining items were re-examined by EFA. After deletion of D1Q1, based on the scree plot, total variances, and rotated factor matrix, two factors with eigenvalues > 1 explained a total of 51.65% of the variance. The next factor, with an eigenvalue of 0.98, was then included, increasing the total variance explained to 64%. The first factor included 13 items, and the second and third factors each included 4 items. In summary, the EFA identified a three-factor structure: the first factor was named “development and innovation in teaching and learning process and communications,” the second “development and sustaining faculty development programs,” and the third “development of educational leadership and management.” The results of the EFA are presented in [Table 2].
Table 2: Results of exploratory factor analysis of the capacity development for faculty development questionnaire



Reliability assessment

The Cronbach's alpha coefficient for all items of the CDQ-FD was 0.80. The coefficients for “development and innovation in teaching and learning process,” “development and sustaining faculty development programs,” and “development of educational leadership and management” were 0.80, 0.82, and 0.78, respectively, which were suitable. The Spearman-Brown coefficient was 0.78, indicating acceptable instrument stability.

Production of the final questionnaire

After investigating reliability and validity, the CDQ-FD was finalized with 21 items in three categories: “development and innovation in teaching and learning process” with 13 items, “development and sustaining faculty development programs” with 4 items, and “development of educational leadership and management” with 4 items [English version of the final CDQ-FD].


Discussion


This study described the development and psychometric testing of the first instrument to evaluate organizational capacity development for faculty development at TUMS. The initial CDQ-FD included 26 items; after content validation through the two methods of Waltz and Bausell and Lynn, 23 items were retained. All CVIs were appropriate. The results of the EFA indicated that the three-factor model fits the data reasonably well. These categories included “development and innovation in teaching and learning process,” “development and sustaining faculty development programs,” and “development of educational leadership and management.” Two items were deleted through EFA, and the final questionnaire consisted of 21 items. Although no studies in any language have reported the development and validity evidence of a questionnaire for capacity development of faculty development in medical education, our results are closely aligned with previously published work on the conceptualization of capacity development. The indicators highlight the importance of individual and collective development, with the evolution of professional identity as an educator and the empowerment of faculty members, to enable the organization to change and effectively cope with the complexity of factors in the wider organizational system.[29] About 65% of the participants answered all items for construct validation, which may indicate the future potential usefulness and functionality of the CDQ-FD.

The “development and innovation in teaching and learning process” category had a focus on developing competencies in the teaching and learning process, including various teaching and student assessment methods. The category of “development and sustaining faculty development programs” represented the interest of teachers in medical education and their support and collaboration with colleagues, which is essential to sustain and develop the programs. “Development of educational leadership and management” category referred to involvement in the development, implementation, and evaluation of the medical education institution. Further analyses showed acceptable internal consistency and reliability for CDQ-FD.

In the present study, the factor “development and innovation in teaching and learning process,” explained 34.6% of the total variance. These findings are consistent with the results of previous studies, with most faculty development initiatives having an emphasis on teaching and learning aspects[30],[31] and improving communication within the organizational systems.[32]

Lee et al. found that faculty development programs were also effective for improving faculty's teaching and learning competencies,[10] and some studies have claimed that these programs enhance humanistic capabilities such as professionalism, communications skills, group networking, and teamwork.[33],[34]

The factor “development and sustaining faculty development programs,” the second factor with 17.0% of the total variance, has received little discussion in previous studies of faculty development programs. This new understanding of organizational capacity development is important for the future evaluation of the effectiveness of faculty development programs. The factor “development of educational leadership and management,” with 12.3% of the total variance, is consistent with prior studies. Some researchers have reported that participation in faculty development programs produced more positive attitudes towards teaching, as well as greater involvement in organizational roles such as leader and manager.[35],[36]

The item “I have obtained the competencies to design a course plan based on educational principles” did not load on any of the extracted factors and was deleted. A reason might be that course planning was embedded in other topics of the “Basic Teaching Skills Course” rather than being taught specifically. Another deleted item was “I motivate to attend seminars and conferences related to medical education.” A reason for its elimination might be that seminar formats are unfamiliar to the faculty and that the faculty development course may not provide participants with sufficient information about this educational approach.

Examining content validity through different indices provided a variety of evidence for the content validity of the CDQ-FD.[28] Analyzing content validity with two different methods showed no difference in the indices or the deleted items, which further assured us of the CDQ-FD's content validity. The internal consistency results, with a Cronbach's alpha coefficient of 0.80 for all items and 0.80, 0.82, and 0.78 for the categories, demonstrated acceptable levels. Our findings are consistent with prior studies: the Cronbach's alpha for the questionnaire that Jacobs et al. used to explore the effects of efforts to improve evidence-based decision-making capacity ranged from 0.67 to 0.94.[37] The test-retest results and the Spearman-Brown coefficient indicated acceptable tool stability. Therefore, given that the Cronbach's alpha coefficient was above 0.7, the reliability of the CDQ-FD was considered suitable, supporting the results of the EFA.

There are some limitations to this study. First, all evaluations are based on faculty development participants' perceptions, which is a potential source of bias regarding the capacity development results. We therefore recommend incorporating other perspectives, such as those of policy-makers, funding agencies, and students.

Second, with test-retest administration of questionnaires there is always a risk that respondents are influenced by answering the first questionnaire, so that answers to the second differ due to an intervention effect. Some participants may have become more familiar with the impact of faculty development programs on capacity development after the first questionnaire and changed their answers before the retest; however, this was not evident in our results. Third, the generalizability of our findings is limited: to be used in another context, the CDQ-FD needs further validation in groups speaking other languages, in different cultures, and at other universities. Finally, no other capacity development for faculty development questionnaire is available, so it was not possible to validate the new questionnaire against a gold standard or to test criterion validity. Future research could examine how institutions experience the benefit of the questionnaire in faculty development interventions and organizational development.


Conclusions


This is the first questionnaire for evaluating organizational capacity development for faculty development programs, and it appears to be a valid and reliable instrument for this purpose in medical education. The questionnaire was developed and evaluated psychometrically by a variety of methods. All CVIs and the test-retest reliability were appropriate. The results of the EFA indicated that the three-factor model fits the data reasonably well. Overall, the three categories of indicators in the final questionnaire are closely aligned with previously published work on the conceptualization of capacity development. The indicators highlight the importance of individual and collective development of faculty members to enable the organization to change and effectively cope with the complexity of factors in the wider organizational system.

Acknowledgments

We would like to thank the faculty members who participated in this study for their support and involvement.

Financial support and sponsorship

This work was a part of a PhD thesis at the Tehran University of Medical Sciences and was funded by the National Agency for Strategic Research in Medical Education, Tehran, Iran. Grant No. 970080.

Conflicts of interest

There are no conflicts of interest.



 
References

1. Steinert Y, Mann K, Anderson B, Barnett BM, Centeno A, Naismith L, et al. A systematic review of faculty development initiatives designed to enhance teaching effectiveness: A 10-year update: BEME Guide No. 40. Med Teach 2016;38:769-86.
2. Guraya SY, Chen S. The impact and effectiveness of faculty development program in fostering the faculty's knowledge, skills, and professional competence: A systematic review and meta-analysis. Saudi J Biol Sci 2019;26:688-97.
3. Sarikaya O, Kalaca S, Yegen BC, Cali S. The impact of a faculty development program: Evaluation based on the self-assessment of medical educators from preclinical and clinical disciplines. Adv Physiol Educ 2010;34:35-40.
4. Moore P, Montero L, Triviño X, Sirhan M, Leiva L. Impact beyond the objectives: A qualitative study of a faculty development program in medical education. Rev Med Chil 2014;142:336-43.
5. Abu-Rish Blakeney E, Pfeifle A, Jones M, Hall LW, Zierler BK. Findings from a mixed-methods study of an interprofessional faculty development program. J Interprof Care 2016;30:83-9.
6. Julian K, Appelle N, O'Sullivan P, Morrison EH, Wamsley M. The impact of an objective structured teaching evaluation on faculty teaching skills. Teach Learn Med 2012;24:3-7.
7. Abigail LK. Do communities of practice enhance faculty development? Health Prof Educ 2016;2:61-74.
8. Saiki T, Imafuku R, Pickering J, Suzuki Y, Steinert Y. On-site observational learning in faculty development: Impact of an international program on clinical teaching in medicine. J Contin Educ Health Prof 2019;39:144-51.
9. Nor MZ. Contribution of faculty development programs to professional identity development of medical educators in Malaysia: A phenomenological study. J Taibah Univ Med Sci 2019;14:324-31.
10. Lee SS, Dong C, Yeo SP, Gwee MC, Samarasekera DD. Impact of faculty development programs for positive behavioural changes among teachers: A case study. Korean J Med Educ 2018;30:11-22.
11. Steinert Y. Faculty development: From rubies to oak. Med Teach 2020;42:429-35.
12. Olupeliyawa AM, Venkateswaran S, Wai N, Mendis K, Flynn E, Hu W. Transferability of faculty development resources. Clin Teach 2020;17:86-91.
13. Ambarsarie R, Mustika R, Soemantri D. Formulating a need-based faculty development model for medical schools in Indonesia. Malays J Med Sci 2019;26:90-100.
14. Watkins KD. Faculty development to support interprofessional education in healthcare professions: A realist synthesis. J Interprof Care 2016;30:695-701.
15. Fernandez N, Audétat MC. Faculty development program evaluation: A need to embrace complexity. Adv Med Educ Pract 2019;10:191-9.
16. Venner M. The concept of 'capacity' in development assistance: New paradigm or more of the same? Global Change Peace Security 2015;27:85-96.
17. MacKintosh R. Capacity in maritime archaeology: A framework for analysis. J Maritime Archaeol 2019;14:391-408.
18. Bezboruah KC. Community organizing for health care: An analysis of the process. J Community Pract 2013;21:9-27.
19. Haynes A, Rowbotham SJ, Redman S, Brennan S, Williamson A, Moore G. What can we learn from interventions that aim to increase policy-makers' capacity to use research? A realist scoping review. Health Res Policy Syst 2018;16:31.
20. Frantz JM, Bezuidenhout J, Burch VC, Mthembu S, Rowe M, Tan C, et al. The impact of a faculty development programme for health professions educators in sub-Saharan Africa: An archival study. BMC Med Educ 2015;15:28.
21. Frantz J, Rhoda A, Sandars J, Murdoch-Eaton DB, Marshall M, Burch VC. Understanding faculty development as capacity development: A case study from South Africa. Afr J Health Prof Educ 2019;11:53-6.
22. Salajegheh M, Gandomkar R, Mirzazadeh A, Sandars J. Capacity development indicators for faculty development programs: A narrative review. Annual Association for Medical Education in Europe Congress, Basel, Switzerland; 25-29 August 2018.
23. Salajegheh M, Gandomkar R, Mirzazadeh A, Sandars J. Identification of capacity development indicators for faculty development programs: A nominal group technique study. BMC Med Educ 2020;20:1-8.
24. Kääriäinen M, Mikkonen K, Kyngäs H. Instrument development based on content analysis. In: The Application of Content Analysis in Nursing Science Research. Cham: Springer; 2020. p. 85-93.
25. Glasberg SB. Scientific comparison and validity requires appropriate use. Plast Reconstr Surg 2017;140:750e-1e.
26. Yusoff MS. ABC of content validation and content validity index calculation. Educ Med J 2019;11(2):49-54.
27. Schmitt TA. Current methodological considerations in exploratory and confirmatory factor analysis. J Psychoeduc Assess 2011;29:304-21.
28. Wolf EJ, Harrington KM, Clark SL, Miller MW. Sample size requirements for structural equation models: An evaluation of power, bias, and solution propriety. Educ Psychol Meas 2013;76:913-34.
29. Vallejo B, Wehn U. Capacity development evaluation: The challenge of the results agenda and measuring return on investment in the global south. World Dev 2016;79:1-3.
30. Lim LA, Choy LF. Preparing staff for problem-based learning: Outcomes of a comprehensive faculty development program. IJRSE 2014;3:53-68.
31. Zheng M, Bender D, Nadershahi N. Faculty professional development in emergent pedagogies for instructional innovation in dental education. Eur J Dent Educ 2017;21:67-78.
    
32.
Jones M, Schuer KM, Ballard JA, Taylor SA, Zephyr D, Jones MD. Outcomes of an immersive pilot faculty development program for interprofessional facilitation: A mixed methods study. J Interprof Educ Pract 2015;1:83-9.  Back to cited text no. 32
    
33.
Branch WT Jr., Chou CL, Farber NJ, Hatem D, Keenan C, Makoul G, et al. Faculty development to enhance humanistic teaching and role modeling: A collaborative study at eight institutions. J Gen Intern Med 2014;29:1250-5.  Back to cited text no. 33
    
34.
Fleming GM, Simmons JH, Xu M, Gesell SB, Brown RF, Cutrer WB, et al. A facilitated peer mentoring program for junior faculty to promote professional development and peer networking. Acad Med 2015;90:819-26.  Back to cited text no. 34
    
35.
Khoshhal KI, Guraya SY. Leaders produce leaders and managers produce followers. A systematic review of the desired competencies and standard settings for physicians' leadership. Saudi Med J 2016;37:1061-7.  Back to cited text no. 35
    
36.
Burdick WP, Diserens D, Friedman SR, Morahan PS, Kalishman S, Eklund MA, et al. Measuring the effects of an international health professions faculty development fellowship: The FAIMER Institute. Med Teach 2010;32:414-21.  Back to cited text no. 36
    
37.
Jacobs JA, Duggan K, Erwin P, Smith C, Borawski E, Compton J, et al. Capacity building for evidence-based decision making in local health departments: Scaling up an effective training approach. Implement Sci 2014;9:124.  Back to cited text no. 37
    
Figures

[Figure 1]

Tables

[Table 1], [Table 2]