https://dx.doi.org/10.24016/2025.v11.446

ORIGINAL ARTICLE

 

 

Exploratory Analysis of Psychological Competencies in the Clinical Domain of University Students from an Interbehavioral Perspective

 

Jonathan Zavala Peralta1*, Virginia Pacheco Chávez1

1 Universidad Nacional Autónoma de México, State of Mexico, Mexico.

 

* Correspondence: zavala@iztacala.unam.mx

 

Received: February 12, 2025 | Revised: June 05, 2025 | Accepted: July 17, 2025 | Published Online: July 25, 2025

 

CITE IT AS:

Zavala Peralta, J., & Pacheco Chávez, V. (2025). Exploratory Analysis of Psychological Competencies in the Clinical Domain of University Students from an Interbehavioral Perspective. Interacciones, 11, e446. https://dx.doi.org/10.24016/2025.v11.446

 

 

ABSTRACT

Background: Higher Education Institutions (HEIs) seek to foster in students the development of professional competencies, that is, behaviors that meet the criteria of their respective disciplinary fields. From an interbehavioral perspective, evaluating these competencies involves defining behavioral indicators that allow their learning to be verified. The present study focuses on three fundamental competencies that comprise a large part of the clinical psychologist's repertoire: identification of relevant cases, diagnostic evaluation, and intervention planning. Objective: To evaluate professional competencies in the clinical domain among psychology students at FES Zaragoza, UNAM. Method: Forty psychology students participated (16 from the fifth semester and 24 from the seventh semester). A modified version of the Virtual Environment for the Development and Evaluation of Professional Competencies in Psychologists (EVACOMPS; Cruz, 2022) was used, comprising twenty-one exercises organized by type of performance. Results: On average, seventh-semester students obtained 60% correct responses and fifth-semester students 50%, a difference that was statistically significant. In the identification of relevant cases, both groups showed their highest percentage of incorrect responses. Conclusions: While the sample needs to be expanded, the modified EVACOMPS is a relevant tool for evaluating professional performance in the clinical domain of psychology. The low percentages of correct responses show the need to develop strategies to address the academic deficiencies that students carry from the beginning of their studies.

Keywords: Professional competencies, performance, undergraduate students, clinical setting, assessment.

 

 

INTRODUCTION

Training Psychologists at FES Zaragoza: Progress and Challenges

The Universidad Nacional Autónoma de México (UNAM) stands as Mexico’s foremost public university. Beyond its central campus, Ciudad Universitaria, UNAM encompasses additional academic divisions dedicated to higher education, most notably the Facultad de Estudios Superiores (FES). Currently, the FES collectively accounts for over 40% of UNAM’s undergraduate enrollment, distributed across five campuses. Among their key contributions are the expansion of student enrollment, the growth of postgraduate programs, and the promotion of interdisciplinary collaboration through multidisciplinary clinics (Lona & Marín, 2014). FES Zaragoza, one of these faculties, offers an undergraduate degree in Psychology, designed to train professionals capable of enhancing quality of life through prevention, guidance, intervention, and rehabilitation initiatives (Facultad de Estudios Superiores Zaragoza, 2010). This objective aligns with the professional competencies outlined by the National Council for Teaching and Research in Psychology (Abad & Betancourt, 2013).

FES Zaragoza divides undergraduate psychologist training into three stages: foundational, professional, and complementary. The professional stage encompasses four knowledge areas, each with two modules. From the third semester, students must complete three of the four areas, one of which is Clinical and Health Psychology. This area includes modules such as Approaches in Clinical and Health Psychology and Health-Illness: Perspectives and Processes, covering subjects like Clinical and Health Psychology Research, Development, and Evaluation and Intervention. By the fifth semester, students should have developed the skills required to tackle clinical challenges (Facultad de Estudios Superiores Zaragoza, 2010).

Despite significant advancements in psychology at FES Zaragoza, challenges remain, including the need to assess whether the program objectives are being met and whether students are prepared to address professional demands (Martínez, 2019; Mercado-Ruíz, 2016; Abad & Betancourt, 2013).

Most evaluations of psychology students rely on feedback from professors, employers, or psychologists regarding their skills and performance (Cabrera et al., 2010; Herrera et al., 2009; Ramírez et al., 2019). However, such assessments may not accurately reflect actual performance (Yan et al., 2023; Radović, 2024; Cruz, 2022). Thus, there is a need for methodological strategies to evaluate students' real performance to determine their readiness for professional practice (Castañeda, 2006; Cruz, 2022; Pacheco, 2021).

From an interbehavioral perspective, a skill is defined as the specific way an individual organizes activity to solve a problem (Carpio et al., 2007). For example, a psychologist in therapy must gather relevant psychological information, tailoring their approach to the context (e.g., using toys with children). If they consistently demonstrate the ability to gather behavioral information effectively, they are regarded as skilled in this area.

Problem-solving often requires varied and effective behavioral responses, referred to as competency (Carpio et al., 2007; Ribes, 2006). Competency is context-specific; it is not an abstract quality. For example, it is accurate to say, "Jaime is skilled in drafting intervention objectives," but not to claim, "Jaime is competent" without specifying the domain. Carpio et al. (2007) extend the concept of competency to the professional field, defining it as a disposition to solve discipline-relevant problems through varied and effective performance. It should be noted that current perspectives, such as Ibáñez's (2024), conceptualize competency as a set of skills that meet specific performance criteria within a domain, without requiring variability in performance. Despite these differences, both approaches enable the identification and evaluation of psychology students' skills (Ribes, 2006, 2011). The theoretical model defines the content required for psychologists to effectively solve disciplinary problems, emphasizing the critical link between theory and professional practice.

 

Assessment of Psychologists' Professional Competencies

Although scholarly literature reveals a growing interest in assessing professional competencies among psychology students, studies that incorporate virtual systems for this purpose remain limited. For instance, Rogers et al. (2020) examined the use of virtual reality (VR) to train counseling skills; however, the brevity of the simulated interactions constrained the evaluation of its formative impact. Similarly, Hakelind and Sundström (2022) investigated a digital Objective Structured Clinical Examination (OSCE) implemented during the pandemic, emphasizing its perceived realism and validity according to both students and examiners. In another contribution, Zalewski et al. (2023) compared the use of simulated patients (SP) and virtual patients (VP) in clinical assessment training, highlighting the creation of realistic clinical profiles, immediate feedback, and the capacity to simulate specific clinical phenomena. Likewise, Renn et al. (2021) reported the development of an Intelligent Tutoring System (ITS) for training clinical skills in social work education, with findings indicating a positive correlation between students' progression within the system and improvements in clinical competencies as assessed by expert evaluators, underscoring its potential as a scalable educational tool.

Taken together, these studies demonstrate the feasibility of leveraging technological resources to assess and promote the development of essential skills and competencies required for effective psychological practice, particularly within clinical settings.

Nonetheless, such efforts remain fragmented and tend to focus on specific components of student performance. This is exemplified in the studies by Renn et al. (2021) and Zalewski et al. (2023), which, although centered on clinical practice, assess competencies that—while at times overlapping—differ in scope and cannot be extrapolated to other domains of professional practice.

These findings demonstrate the need to implement assessment systems grounded in a coherent conceptual-methodological framework, focused on developing transversal professional competencies. This approach would enable a comprehensive evaluation of the skills acquired by students during their training.

Adopting an interbehavioral approach as a conceptual framework is justified insofar as it coherently integrates psychological theory, curriculum design, and educational practice. In other words, while many competency-based curriculum models operate in a fragmented manner, separating the formulation of professional profiles, instructional design, and learning assessment, the interbehavioral framework allows for the integration of these levels through a unified functional logic. This logic is grounded in the analysis of behavior within specific contexts, ensuring that curricular objectives, pedagogical strategies, and assessment instruments are designed based on observable and desirable behavioral repertoires. Consequently, educational research can be directly linked to formative intervention, and both can be aligned with the real demands of professional practice (Ribes, 2011; Irigoyen et al., 2011; Ibáñez, 2007).

In line with this idea and based on the contributions of Ribes (1993), Carpio et al. (1998, 2007), and Cruz (2022), a list of professional competencies for clinical psychologists was developed from a behavioral perspective. This list includes five professional competencies and thirteen specific skills, along with an assessment system called the Virtual Environment for the Development and Assessment of Psychologists’ Professional Competencies (EVACOMPS, by its Spanish acronym).

This multimedia system emphasizes real-time performance evaluation of students, avoiding the limitations of subjective data collection methods, such as self-reports or retrospective opinions from students or professors about their behavior, that are typical of other evaluation types (Zalewski et al., 2023; Silva & Méndez, 2022; Renn et al., 2021).

Using EVACOMPS, Cruz (2022) evaluated seventh-semester psychology students from FES-Iztacala, finding correct response rates below 70% in the competencies assessed, with the best performances observed in exercises where the adjustment criteria were less complex. To expand the findings, Cruz et al. (2023) evaluated third- and fourth-year psychology students (i.e., fifth-sixth and seventh-eighth semesters) and graduates from FES-Iztacala. The results of this second evaluation showed correct response rates below 60% across all competencies, irrespective of whether participants had taken clinical courses from a behavioral perspective, had completed the course, or were still enrolled.

It is worth noting that performance rates below 60% were also observed among graduates, even though one initial assumption was that this group, having completed their training process, would perform better compared to third- and fourth-year students. However, their performance analysis revealed that they obtained the lowest correct response rates, at 46% (Zavala et al., 2023).

While Cruz's work (2022; Cruz et al., 2023) is a step towards understanding the state of training for psychology students at FES-Iztacala, the researcher herself acknowledges the need to analyze the limitations of her evaluation system. One of the main issues observed is the inconsistent number of evaluation exercises assigned to each competency. For instance, as seen in Table 1, the number of exercises assessing the competency Intervention Planning is more than five times that for Detection and Delimitation of Relevant Factors in Psychological Problems. The need to advance the design of a more appropriate evaluation tool led to adjustments being made to EVACOMPS.

 

 

Table 1. Number of evaluation exercises per competency in EVACOMPS (Cruz, 2022).

Competency | Number of exercises evaluating the competency
Identification of psychological events and their factors | 3
Detection and delimitation of relevant factors for psychological problems | 2
Problem diagnosis | 4
Intervention planning | 11

 

 

To achieve this, the functional performance domains of the psychologist's technological practice and the main activities carried out in each, generally and specifically from a behavioral perspective, were first taken as references (Ribes et al., 1980; Ribes, 1982; Carpio et al., 1998; Ibáñez, 2007; Carpio et al., 2007; Zavala et al., 2023). Based on this, the competencies in EVACOMPS were reorganized, resulting in three professional competencies: Identification of Relevant Cases, Diagnostic Evaluation, and Intervention Planning.

Identification of Relevant Cases involves relating the tacit and explicit demands of users to a psychological theory; in other words, it determines the psychological dimension of what a user reports.

Diagnostic Evaluation consists of generating strategies to characterize and detect the variables that sustain the psychological behavior the user defines as problematic.

Intervention Planning involves designing and selecting procedures that foster the alteration of the psychological events (factors and relationships) presented as the problem case, including measurement systems and ways of transferring knowledge to professionals and non-professionals (Zavala et al., 2023).

 

While these competencies do not encompass the countless functions performed by psychologists in clinical practice, they do address some of the main activities that psychologists engage in, regardless of the theoretical approach they adopt. For instance, Simms (2011) argues that, from a person-centered perspective, the therapist may translate the client’s presenting concerns into a theoretical framework—such as conditions of worth or incongruence between the real and ideal self—but always collaboratively and with respect for the client’s internal frame of reference (Identification of Relevant Cases). Furthermore, the author highlights that it is possible to identify conditions contributing to the problem (e.g., denial of authentic experiences or the internalization of external values [Diagnostic evaluation]), which can be addressed through non-directive strategies. These strategies, rooted in empathy, unconditional positive regard, and congruence, create a secure therapeutic relationship that enables the client to autonomously discover pathways to transform these conditions (Intervention Planning). These competencies can also be extended to other functional performance domains in psychology (Carpio et al., 1998). Based on these competencies, the relationship between the evaluation exercises and the corresponding competencies is shown in Table 2.

 

 

Table 2. Number of evaluation exercises in EVACOMPS for each professional competency of the psychologist proposed by Zavala et al. (2023).

Competency | Number of exercises evaluating the competency
Identification of relevant cases | 8
Diagnostic evaluation | 5
Intervention planning | 8

 

 

Once the new competencies were established, the number of exercises evaluating each one was standardized. For this purpose, some existing exercises were removed, and others were added.

The assessment exercises included in EVACOMPS underwent a content validation process based on expert judgment. The panel was composed of eight experts: three psychology professors from FES-Iztacala with extensive experience in behavioral approaches, and five clinical psychologists with at least five years of professional practice within the same theoretical framework. As an exclusion criterion, individuals practicing from psychological perspectives other than behaviorism were not considered.

To quantify content validity, Osterlind’s index was calculated using the classical method (Osterlind, 1989). A three-point scale was used: +1 indicated that the item met all predefined criteria, 0 indicated partial compliance (two characteristics met), and -1 indicated minimal or no compliance (one or none of the criteria met). The resulting values ranged from 0.75 to 1.0, reflecting a satisfactory degree of agreement between the items and the dimensions they were intended to assess, according to expert consensus.

To further strengthen the content validity analysis, Content Validity Indices (CVIs; Pedrosa et al., 2014) were calculated for supplementary exercises. The obtained CVI scores ranged from 0.75 to 0.87, reflecting moderate to high validity according to established thresholds.
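As an illustration of how an item-level index of this kind is obtained, the short Python sketch below (hypothetical ratings; the helper name is ours, not from the study) treats the I-CVI of one exercise as the proportion of the eight panelists who gave it the top rating (+1) on the three-point scale described above:

```python
# Illustrative sketch: item-level content validity index (I-CVI) for one
# exercise, computed as the share of experts giving the top rating (+1).
# The rating vector and helper name are hypothetical, not study data.
def item_cvi(ratings):
    """Proportion of experts rating the item +1 on the +1/0/-1 scale."""
    return sum(1 for r in ratings if r == 1) / len(ratings)

# Eight-member panel, as in the study; 6 of 8 experts rate the item +1:
cvi = item_cvi([1, 1, 1, 1, 1, 1, 0, -1])
print(cvi)  # 0.75, the lower bound of the reported 0.75-0.87 range
```

Under this reading, the reported range of 0.75 to 0.87 corresponds to between six and seven of the eight experts endorsing each supplementary exercise.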

 

Evaluation of the Psychologist’s Competencies Developed at FES Zaragoza: An Approach

Authors such as Castañeda (2006) have pointed out that tools for evaluating the learning of university students are insufficient in both quantity and quality. In their view, the existing literature reflects a poor understanding of the teaching-learning process required at this educational level. This situation arises because many of these works lack a solid and well-defined theoretical framework to anchor them, which is essential for guiding research and ensuring consistency in the findings.

In the same vein, designing evaluation systems appropriate for university students’ learning also promotes the strengthening of theoretical frameworks that explain the role of variables involved in the development of effective behavior (Castañeda et al., 2012; Peñalosa-Castro & Castañeda-Figueiras, 2008; Peñalosa et al., 2010).

Researchers such as Díaz-Barriga (2019), García & García (2022), and Pacheco (2021) agree that evaluating professional competencies requires analyzing and designing evaluation situations linked to activities where students must exercise their developed competencies.

In short, a system for evaluating the psychologist’s professional competencies must meet at least two criteria. First, it must stem from a theoretical framework capable of identifying the relationships between the components of the teaching-learning process in a discipline. Second, it must focus on situations where the developed competencies enable the solution of disciplinary problems.

Developing evaluation systems that do not meet these characteristics risks generating data that cannot retroactively and significantly influence learning and teaching methods (Castañeda et al., 2012). It should not be overlooked that evaluating psychologists' competencies allows for the assessment of whether they can effectively solve disciplinary problems, thereby confirming the success of their academic training and ensuring their readiness for professional practice (Zabalza & Lodeiro, 2019).

In line with the above, there is a need to refine evaluation systems based on robust theoretical frameworks, ensuring that their observations are sensitive and relevant to the contexts in which psychology students' disciplinary performance is demonstrated.

The 2.0 version of EVACOMPS represents a psychological approach to university students' learning, concretized in the analysis of varied and effective behavior for solving disciplinary problems, or, in other words, professional competencies. This system is designed to record performance in situations similar to those psychology students will encounter in their professional practice. These two characteristics—relevance to real-world scenarios and alignment with professional demands—constitute its significance.

To verify the suitability of EVACOMPS version 2.0, psychology students from FES Zaragoza were evaluated. In this academic curriculum, students are trained to address major psychological problems in various professional settings, including clinical practice, educational environments, and community interventions.

It is important to highlight that, although studies such as those by García et al. (2021) or Cervantes et al. (2020) have sought to identify variables affecting psychology students at FES Zaragoza in exercising acquired professional competencies, they have not directly evaluated the development of behaviors ultimately aimed at solving disciplinary problems. Therefore, the aim of this study was to evaluate the professional competencies developed by fifth- and seventh-semester psychology students at FES Zaragoza, specifically in the areas of relevant case identification, diagnostic assessment, and intervention planning.

 

METHODS

Design

Our study is cross-sectional and was reported following the STROBE guidelines (see Supplementary Material 1).

 

Participants

Forty psychology students from FES-Zaragoza participated: 16 from the fifth semester and 24 from the seventh semester, aged between 18 and 26 years. This exploratory study evaluated professional competencies in psychology students with the aim of refining measurement instruments prior to large-scale implementation. A strategically selected sample was used to identify and address potential methodological limitations, without seeking generalizable results.

Prioritizing quality over quantity, the sample included students with varying levels of training and experience, which allowed for the identification of relevant patterns. This approach is consistent with the nature of exploratory research, whose primary objective is to validate and refine methods before proceeding to more extensive investigations.

 

Instruments

Participants used internet-connected laptops and smartphones to complete the tasks and access the evaluation platform. The inclusion of both device types was based on the premise that, when properly configured, they provide comparable levels of reliability, standardization, and measurement precision. This approach aligns with APA guidelines (2020), which state that the validity of an assessment depends on its appropriate use rather than the technological medium through which it is delivered. Moreover, allowing mobile access may enhance inclusivity by reducing access barriers, provided that data security and confidentiality are ensured through robust encryption and authentication protocols.

An updated version of the EVACOMPS, referred to as version 2.0, was used. The exercises incorporated in this version were validated through expert judgment, with Osterlind indices (1989) ranging from 0.75 to 1.0, supporting their relevance for the purposes of the evaluation. Content Validity Indices (CVIs; Pedrosa et al., 2014) were calculated for supplementary exercises. The obtained CVI scores ranged from 0.75 to 0.87, reflecting moderate to high validity according to established thresholds. In total, version 2.0 comprises 21 exercises. For a more detailed explanation of the evaluation exercises, see Cruz (2022).

 

Procedure

The evaluation took place in regular classroom settings within the facilities of the undergraduate Psychology program at FES-Zaragoza, UNAM. Data collection was conducted during scheduled class hours, under the supervision of the research team and in coordination with faculty members, who facilitated access to their student groups. The researchers entered the classrooms, briefly explained the purpose of the study, and invited students to participate voluntarily, emphasizing that their participation would have no academic consequences. Table 3 presents a summary of the key aspects of the evaluation.

 

 

Table 3. Overview of the Study Design

Aspect | Description
Study type | Exploratory, cross-sectional
Primary objective | To evaluate clinical competencies in Psychology students at FES Zaragoza, UNAM
Participants | 40 students: 16 from 5th semester and 24 from 7th semester (18–26 years old). Non-probability convenience sample.
Instrument | EVACOMPS 2.0 (modified version): virtual platform with 21 clinical activities. Osterlind index applied (values 0.75–1.0), demonstrating content congruence.
Competencies evaluated | (1) Identification of relevant cases; (2) Diagnostic evaluation; (3) Intervention planning
Independent variable | Semester (5th vs. 7th)
Dependent variable | Percentage of correct responses (CR), partially correct responses (PCR), and incorrect responses (IR)

 

 

 

Participants who agreed to participate were assigned a username and password to access the EVACOMPS digital platform, developed to assess professional competencies in simulated environments. Access was granted via personal mobile devices or laptops provided by the research team. Upon logging in, the system displayed an interface with five main evaluation sections (see Figure 1), each corresponding to a distinct type of clinical competency.

 

   

Figure 1. Sections of the EVACOMPS professional competencies evaluation system.

 

 

Usage instructions were standardized across all participants. They were told:

“On the left side of the screen, you will see five sections. You must complete the exercises in each section. After finishing one section, the system will return you to the main screen to proceed with the next one.”

 

These instructions ensured consistent and autonomous navigation throughout the assessment. Each section included interactive resources. Some exercises incorporated multimedia materials, such as short clinical interaction videos, while others began with clinical cases, either narrated or written, that required participants to perform specific actions. An example of the exercise format is shown in Figure 2, which depicts a clinical analysis task featuring multiple-choice response options.

   

 

Figure 2. Example of multimedia content, Clinical case and evaluation exercise in EVACOMPS 2.0.

 

 

 

Throughout the process, researchers oversaw the administration directly in classrooms without interfering with participants’ responses. The use of websites, notes, external aids, or peer interaction was strictly prohibited. Technical questions were addressed prior to the start, and any operational issues were resolved without compromising the validity of the responses.

To ensure internal validity, multiple controls were implemented. All participants completed the same set of exercises under identical conditions, with continuous supervision to prevent access to external sources or peer communication. Additionally, the platform automatically recorded participants’ responses, allowing for accurate analysis of student performance.

To reduce selection and survivorship biases, full classroom groups were invited, including all students present regardless of academic level or experience. By involving both fifth- and seventh-semester students, the sample captured variation in training stages, limiting bias linked to academic progression. Although non-probabilistic, this approach supported the exploratory aims of the study.

 

Ethical aspect

This study was approved by the Ethics Committee of the Faculty of Higher Studies Iztacala (CE/FESI/052025/1929). All participants provided informed consent prior to inclusion in the study.

 

RESULTS

To assess students’ performance across the evaluated competencies, a rubric was used to classify responses into three categories: correct (CR), partially correct (PCR), and incorrect (IR). The results revealed notable differences based on academic semesters. Seventh-semester students achieved 60% correct responses, compared to 50% among fifth-semester students. The proportion of partially correct responses was similar between groups (29% and 26%, respectively). However, a substantial discrepancy emerged in incorrect responses: 11% in the seventh semester versus 24% in the fifth. These results suggest a progression in performance as students advance in their academic training, as reflected by a higher proportion of correct responses and a reduction in incorrect ones.

To statistically contrast these discrepancies, an independent samples Student's t-test was applied. Prior to analysis, the assumption of normality was verified via the Shapiro–Wilk test (p = 0.093 for fifth semester; p = 0.342 for seventh semester) and homogeneity of variances with the F test (F = 1.5594; p = 0.217). The results confirmed significant differences (t(38) = 5.03; p < 0.001) with a large effect size (r = 0.63). Seventh-semester students showed higher mean scores (M = 15.56, Variance = 5.44) compared to fifth-semester students (M = 11.38, Variance = 8.48), supported by a 95% confidence interval for the difference in means of [−5.87, −2.50].
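The reported statistics can be recovered from the summary values alone. The Python sketch below recomputes the pooled-variance t statistic, effect size r, and confidence interval from the group sizes, means, and variances given above; the critical value 2.024 for t(38) at the two-tailed .05 level is an assumption taken from standard t tables, not from the article:

```python
import math

# Summary statistics as reported in the Results section
n5, m5, var5 = 16, 11.38, 8.48   # fifth semester
n7, m7, var7 = 24, 15.56, 5.44   # seventh semester

df = n5 + n7 - 2                                   # degrees of freedom: 38
sp2 = ((n5 - 1) * var5 + (n7 - 1) * var7) / df     # pooled variance
se = math.sqrt(sp2 * (1 / n5 + 1 / n7))            # SE of the mean difference

t = (m5 - m7) / se                                 # about -5.03, as reported
r = math.sqrt(t**2 / (t**2 + df))                  # effect size r, about 0.63

t_crit = 2.024                                     # assumed t(38), two-tailed .05
ci = ((m5 - m7) - t_crit * se,
      (m5 - m7) + t_crit * se)                     # about (-5.86, -2.50)
```

The recomputed interval matches the reported [−5.87, −2.50] up to rounding, and the t and r values reproduce the published figures exactly at two decimals.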

Figure 3 shows that seventh-semester students achieved higher correct response rates than fifth-semester students across all competencies: Identification of Relevant Cases (55% vs. 45%), Diagnostic Evaluation (64% vs. 51%), and Intervention Planning (65% vs. 49%). They also demonstrated lower rates of incorrect responses: 16% vs. 24%, 8% vs. 20%, and 7% vs. 29%, respectively.

 

Figure 3. Accumulated percentage of correct responses, partially correct responses, and incorrect responses for each of the evaluated competencies of fifth- and seventh-semester Psychology students from FES Zaragoza.

In both semesters, the lowest percentage of Correct Responses (CR) appears in the competency of identifying relevant cases. Within this competency, seventh-semester students showed their highest percentage of Incorrect Responses (IR) across the three competencies, while fifth-semester students showed their highest percentage of Partially Correct Responses (PCR). This may be due to the type of exercises included in this section of the evaluation, where students were required to perform a psychological analysis of the demand, that is, to account for its psychological dimension using the appropriate tools and theoretical frameworks.

 

Figure 4. Accumulated percentage of correct responses, partially correct responses, and incorrect responses for the competency "Identification of Relevant Cases" for fifth- and seventh-semester Psychology students from FES Zaragoza.

The analysis of the eight exercises comprising the Identification of Relevant Cases competency (see Figure 4) reveals a distinct performance pattern between fifth- and seventh-semester students. In the first three exercises, focused on the objective identification of problematic behaviors, functionally involved individuals, and the consequences of those behaviors, and completed by selecting the correct answer from multiple options, both groups demonstrated high levels of accuracy. The seventh-semester group achieved 92%, 96%, and 100% correct responses, respectively, while the fifth-semester group obtained 81%, 100%, and 94%. Incorrect responses were minimal (0% or 4%), and partially correct answers appeared only in the fifth-semester group, with 19% in the first exercise.

In contrast, the remaining five exercises, which required students to construct written responses, proved more challenging. For instance, in Exercise 4 (selecting the most appropriate explanation for the user’s problem), the seventh-semester group scored 28% correct, 28% partially correct, and 44% incorrect responses. The fifth-semester group performed less well: 6% correct, 69% partially correct, and 25% incorrect. This trend did not hold in Exercise 5 (identifying the appropriate method for recording simulated behaviors), where the seventh-semester group achieved only 16% correct responses, compared to 25% in the fifth-semester group, which also recorded 56% partially correct and 25% incorrect responses.

In Exercise 6 (qualitative aspects of simulated behavior), fifth-semester students appeared to struggle more, achieving 44% correct, 56% partially correct, and 0% incorrect responses. In contrast, the seventh-semester group recorded 52% correct, 32% partially correct, and 16% incorrect.

Performance in Exercise 7 (technical description of the problem) was notably low: fifth-semester students obtained 11% correct responses, while seventh-semester students achieved 16% correct, along with 28% partially correct and 56% incorrect responses. Finally, Exercise 8 (linking complaints to emotional states) was the most challenging for the fifth-semester group, with only 6% correct, 25% partially correct, and 69% incorrect responses. Although the seventh-semester group performed better, they also encountered difficulty: 24% correct, 44% partially correct, and 32% incorrect.
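The per-exercise figures reported above are simple category shares: each student's graded answer is tallied into one of the three scoring categories and expressed as a percentage of the group. The sketch below illustrates that tally; it is not part of the EVACOMPS platform, and the grade labels and sample data are hypothetical.

```python
from collections import Counter

def category_percentages(grades):
    """Share of correct (CR), partially correct (PCR), and incorrect (IR)
    responses among the graded answers to one exercise."""
    counts = Counter(grades)
    n = len(grades)
    return {cat: round(100 * counts[cat] / n) for cat in ("CR", "PCR", "IR")}

# Hypothetical grades for one exercise answered by eight students
example = ["CR", "CR", "CR", "CR", "PCR", "PCR", "IR", "IR"]
```

With this hypothetical data, `category_percentages(example)` yields 50% CR, 25% PCR, and 25% IR; rounding to whole percentages, as in the text, can make a group's three shares sum to slightly more or less than 100%.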

 

DISCUSSION

The objective of this study was to assess the professional competencies of psychologists in the clinical field among fifth- and seventh-semester students at FES Zaragoza, UNAM. The main findings were as follows: 1) both groups of students obtained correct response percentages equal to or less than 60%; 2) seventh-semester students obtained 60% correct responses and 11% incorrect responses, while fifth-semester students reached 50% correct responses and 24% incorrect responses; 3) a t-test conducted to determine whether the observed differences were statistically significant indicated significant differences (t(38) = 5.03, p < .001), with a large effect size (r = 0.63): seventh-semester students scored significantly higher (M = 15.56) than fifth-semester students (M = 11.38), with a 95% confidence interval for the mean difference (fifth minus seventh semester) ranging from -5.87 to -2.50; 4) the competency of identifying relevant cases showed the lowest correct response percentages for both groups, particularly in exercises where students were required to formulate their answers rather than simply select from available options.
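The reported statistics follow the standard independent-samples t-test with pooled variance, together with the conversion r = √(t² / (t² + df)), which for t = 5.03 and df = 38 gives r ≈ 0.63. The sketch below is a minimal pure-Python illustration of that computation, not the authors' analysis code; the score lists in the usage note are hypothetical.

```python
import math

def independent_t_test(group_a, group_b):
    """Student's t for two independent samples (pooled variance),
    plus the r effect size, r = sqrt(t^2 / (t^2 + df))."""
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = sum(group_a) / na, sum(group_b) / nb
    # Unbiased sample variances
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    df = na + nb - 2
    pooled_var = ((na - 1) * var_a + (nb - 1) * var_b) / df
    se = math.sqrt(pooled_var * (1 / na + 1 / nb))
    t = (mean_a - mean_b) / se
    r = math.sqrt(t ** 2 / (t ** 2 + df))
    return t, df, r
```

Called with the fifth-semester scores as the first argument, a negative t statistic (consistent with the sign of the reported confidence interval) indicates that the seventh-semester mean is higher.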

These findings are consistent with those reported by Cruz (2022) and Cruz et al. (2023), who identified an average of 50% correct responses when evaluating clinical competencies in fifth- and seventh-semester psychology students at FES Iztacala. The consistency observed across studies—where students consistently exhibit modest accuracy rates (≤60%)—suggests that acquiring the competencies assessed here constitutes an enduring challenge in the professional training of psychologists. This alignment can be explained from both theoretical and methodological perspectives.

Theoretically, the results appear to reflect structural limitations in training environments, particularly with respect to students’ exposure to tasks that simulate real-world professional scenarios and require diverse, functional repertoires, as proposed by Ribes (2006) and Carpio et al. (2007). From an interbehavioral standpoint, professional competencies involve solving disciplinary problems through varied and context-sensitive performances. When students are not routinely engaged in such conditions, limitations tend to persist—even in advanced stages of their training.

Methodologically, the observed consistency may also result from the use of performance-based assessment systems like EVACOMPS 2.0, grounded in interbehavioral principles. Unlike instruments based on self-reports or subjective evaluations, these systems require students to demonstrate concrete problem-solving behaviors in simulated contexts. As a result, they more accurately reveal the challenges students face when transferring theoretical knowledge to applied settings. Thus, the low accuracy rates may reflect not conceptual deficiencies, but rather insufficient mastery in applying knowledge across variable conditions.

In a similar vein, Rogers et al. (2022) found that while students rated tools such as virtual reality role-play positively, these innovations did not lead to significant learning gains, likely due to insufficient rehearsal in realistic settings. Zalewski et al. (2023) likewise noted that evaluation stress and a lack of standardization in formats (e.g., patient simulations) disproportionately affect novice students, which may partially account for the present results.

However, these findings should be contrasted with those of Escobar et al. (2023), who assessed diagnostic and intervention competencies via an asynchronous, unsupervised online clinical simulation and reported a global success rate of 79.4%. This discrepancy likely reflects key methodological differences: Escobar et al.'s format may have allowed access to external materials, whereas the present study and Cruz's research (2022, 2023) employed more controlled conditions with restricted access to auxiliary resources. This variability highlights the need to standardize assessment protocols—particularly in virtual environments—to ensure ecological validity and cross-study comparability.

On the other hand, regarding students’ performance in the competency of identifying relevant cases, several points are worth noting. The exercises in this section required students to approach the problems from a specific theoretical standpoint—namely, the behavioral perspective. However, given that the curriculum at FES-Zaragoza is based on multiple theoretical frameworks and does not consistently emphasize the importance of adhering to a defined epistemological stance, this may have affected the students’ performance. Metaphorically speaking, they were not analyzing the cases through "the lens of a behavioral psychologist."

Another possible explanation is that the EVACOMPS instructions did not explicitly state that the assessment would be conducted from a behavioral framework. This may have led to theoretical interpretations that differed from those originally intended. This potential confounding variable will be controlled for in future studies.

It is important to note that although EVACOMPS 2.0 was designed following an interbehavioral rationale and this particular evaluation was framed from a behavioral perspective, its content is adaptable to any theoretical orientation. As previously mentioned, the competencies assessed are generic and transversal, making them applicable across different theoretical paradigms and professional contexts.

It should be noted that the work reported here represents a preliminary approach to evaluating professional competencies in the clinical field, focusing on performance. While it provides important indicators about the formative quality of this population, more data is required to identify the development of skills and professional competencies more clearly. This limitation underscores the need for further research and evaluations to better understand the progress and challenges faced by students in their academic training.

Even though the data presented here are preliminary, they are noteworthy: only a small percentage of the evaluated Psychology students can display varied and effective behavior in solving disciplinarily relevant problems. As Carpio et al. (2023) emphasized, the legitimacy of psychology as a discipline contributing to solving social problems depends on students' ability to respond to the demands society places on them.

In this sense, it is important to emphasize that the training of psychologists is a social issue in which various elements at different analytical levels interact simultaneously. Some of the variables that may affect the development of competencies in FES Zaragoza students include: 1) the performance of the instructors who have taught them; 2) their involvement in extracurricular activities such as research and/or discussion groups; 3) basic skills and competencies such as writing, reading, and/or proficiency in other languages; 4) preference for the clinical field (Carpio and Pacheco, 2023; Carpio and Irigoyen, 2005; Castañeda, 2006; Castañeda et al., 2012; Cruz, 2022; Ibáñez, 2007; Lomelí et al., 2021).

This work is considered an initial approach to this complex area of study. In this same line of research, further studies are needed to identify the role of these and other variables in student performance. Additional data and analyses will help clarify the factors that influence the development of professional competencies and provide a more comprehensive understanding of how to improve academic training and evaluation methods in the field of psychology.

The main contribution of this work is to present a coherent and relevant evaluation system for analyzing the performance of Psychology students in professional fields. This system, hosted on the EVACOMPS platform, is sensitive to relevant aspects of the performance of Psychology students at FES Zaragoza. Unlike other evaluation methodologies (e.g., Rosenbluth et al., 2016; Yañez-Galecio, 2005; Castro, 2004), it does not rely on indirect data such as verbal reports from teachers, students, or employers, but on students' actual performance.

In line with this, the organization of EVACOMPS follows a logic in which performance in prototypical situations may serve as a potential indicator of how a student could behave in real-world contexts. The system is based on the working assumption that varied and effective performance in solving disciplinary problems might reflect professional competence. This approach derives from the theoretical perspective that practical skills and theoretical knowledge, when applied successfully across diverse scenarios, could provide useful insights about a student's preparedness for professional practice.

As authors such as Díaz-Barriga (2019) and García and García (2022) have pointed out, the evaluation of professional competencies requires situations in which the competencies to be evaluated are actually exercised. In this way, a bidirectional advance is possible: explaining the teaching-learning process of psychology while improving both instructional strategies and the evaluation of the capacities required for effective practice (Castañeda et al., 2012; Peñalosa-Castro and Castañeda-Figueiras, 2008; Peñalosa et al., 2010).

It is important to highlight that the competencies evaluated cover the main activities conducted in professional practice. The classification used in this work (Zavala et al., 2023) is a synthesis of a large number of studies on the analysis of intelligent behavior and its promotion (Ribes, 1990; Ribes and Varela, 1994), the critical examination of the psychologist's activity in different fields of practice (Ribes, 1993; Carpio et al., 1998), and the definition of competencies as a structural axis of formal teaching at different educational levels (Carpio et al., 2007; Ibáñez, 2007; Cruz, 2022; Ribes, 2006; 2011; Pacheco, 2021).

The professional competencies of identifying relevant cases, diagnostic evaluation, and intervention planning do not encompass all the activities that a psychologist performs, as there are specificities inherent to each area of practice. However, they reflect the main activities conducted in different professional contexts, regardless of the theoretical approach. Therefore, they are considered appropriate for predicting whether students have the necessary capabilities for effective professional practice.

If in future research the data obtained aligns with the findings reported in this work, it will be necessary to consider implementing educational programs that address deficiencies in the training of psychologists. Consequently, the evaluations presented here provide information on aspects to consider in the preparation of highly qualified psychologists.

Taken together, the findings have important implications for psychology education. The consistently low accuracy rates observed—even among advanced students—suggest that current pedagogical strategies may not be sufficiently fostering the development of functional and socially relevant professional competencies. To address this, curricular restructuring should prioritize experiential learning through performance-based assessments, realistic simulations, and supervised practicums that mirror real-world contexts. Competency development should be made explicit and progressive throughout the academic program.

In parallel, faculty training should focus on the design of instructional activities involving diverse, socially meaningful tasks. Additionally, the incorporation of active learning methodologies—such as theoretical-methodological seminars and case-based discussions—may enhance students’ ability to apply knowledge effectively in clinical settings. These adaptations could better prepare students to meet the complex demands of professional practice.

Furthermore, it is important to emphasize that, as some authors have pointed out, competencies can be configured at diverse levels of complexity (Carpio et al., 2007; Carpio and Irigoyen, 2005). Although the effect of exposing students to educational situations in which this level of complexity varies was not evaluated in this approach, it is plausible that doing so would promote differential learning performances. It will be important to analyze this variable in subsequent evaluations.

 

Limitations

This study has several limitations that should be acknowledged. First, the sample size was relatively small and limited to fifth- and seventh-semester students from a single institution (FES Zaragoza, UNAM), which restricts the generalizability of the findings. While the results offer valuable insights into the clinical competencies of psychology students at this campus, they may not reflect the broader population of psychology students across other faculties, universities, or regions in Mexico. Second, the assessment was conducted in a simulated environment using the EVACOMPS 2.0 platform, rather than in real-world professional contexts. Although performance-based simulations offer greater ecological validity than self-reports or purely academic evaluations, they still differ from actual clinical practice. The absence of real patient interactions, time constraints, and contextual variability may limit the extent to which these results fully capture the complexities of professional functioning. Future research should incorporate more diverse and representative samples, as well as field-based evaluations, to better approximate the demands of actual professional practice.

 

Conclusions

Performance-based assessment revealed an insufficient acquisition of key clinical competencies among psychology students, even at advanced stages of their training. These results underscore the need to strengthen educational strategies that promote the development and transfer of functional skills to real-world problem-solving contexts. While traditional academic metrics, such as passing grades, may obscure these deficits, competency-based evaluations offer a clearer picture of students' actual readiness for professional practice. Continued empirical research of this nature is essential to identify specific gaps in training and to inform the curricular and pedagogical adjustments needed to ensure that future psychologists are equipped to respond effectively to the social demands of their profession.

 

ORCID

Jonathan Zavala Peralta: https://orcid.org/0000-0003-1891-6204

Virginia Pacheco Chávez: https://orcid.org/0000-0001-9316-1070

   

AUTHORS’ CONTRIBUTION

Jonathan Zavala Peralta: Conceptualization, investigation, writing, review, supervision, translation and approval of the final version.

Virginia Pacheco Chávez: Conceptualization, investigation, writing, review, supervision, and approval of the final version.

 

FUNDING SOURCE

The study was conducted as part of the first author’s academic work at Universidad Nacional Autónoma de México (UNAM), without the support of a specific funding program.

 

CONFLICT OF INTEREST

The authors declare that there were no conflicts of interest in the collection of data, analysis of information, or writing of the manuscript.

 

ACKNOWLEDGMENTS

The first author acknowledges the support received for this research from the Consejo Nacional de Humanidades, Ciencias y Tecnologías (CONAHCYT), currently coordinated by the Secretaría de Ciencia, Humanidades, Tecnología e Innovación (SECIHTI), through National Grant 780838.

 

REVIEW PROCESS

This study has been reviewed by external peers in double-blind mode. The editor in charge was David Villarreal-Zegarra. The review process is included as supplementary material 2.

 

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article are provided as supplementary material 3.

 

DECLARATION OF THE USE OF GENERATIVE ARTIFICIAL INTELLIGENCE

The authors declare that they did not use generative artificial intelligence tools in the creation of the manuscript, nor technological assistants in its writing.

 

DISCLAIMER

The authors are responsible for all statements made in this article.

 

REFERENCES

Abad, L. & Betancourt, O.S. (2019). Adquisición de habilidades intelectuales y desarrollo de funciones profesionales del estudiante de psicología en FES Zaragoza. [Tesis de licenciatura, Facultad de Estudios Superiores Zaragoza, Universidad Nacional Autónoma de México]. Dirección General de Bibliotecas y Servicios Digitales de Información.

Cabrera, R., Hickman, H., & Mares, G. (2010). Perfil profesional del psicólogo requerido por empleadores en entidades federativas con diferente nivel socioeconómico en México. Enseñanza e investigación en Psicología, 15(2), 257-271.

Carpio, C., Díaz, L., Ibáñez, C. & Obregón, F.J. (2007). Aprendizaje de competencias profesionales en psicología: un modelo para la planeación curricular en la educación superior. Enseñanza e Investigación en Psicología, 12, 27-34.

Carpio, C. e Irigoyen, J. (2005). Psicología y Educación: Aportes desde la teoría de la conducta. UNAM.

Carpio, C., & Pacheco, V. (2023). Formación de psicólogos en tiempos de la COVID-19. En Pacheco, C., Cruz, J., & Carpio, C. (Eds.), Docencia e investigación durante la pandemia por COVID-19 (pp. 3-23). Facultad de Estudios Superiores Iztacala.

Carpio, C., Pacheco, V., Canales, C., & Flores, C. (1998). Comportamiento inteligente y juegos de lenguaje en la enseñanza de la psicología. Acta Comportamentalia, 6 (1), 47-60.

Castañeda, S. (2006). Evaluación del aprendizaje en educación superior. En S. Castañeda (Ed.), Evaluación del Aprendizaje en el nivel Universitario. Elaboración de exámenes y reactivos objetivos (pp. 3-27). UNAM-CONACyT.

Castañeda, S., Peñalosa, E., & Austria, F. (2012). El aprendizaje complejo. Desafío a la educación superior. Revista de Investigación en Educación Médica, 1(3), 140-145.

Castro, A. (2004). Las competencias profesionales del psicólogo y las necesidades de perfiles profesionales en los diferentes ámbitos laborales. Interdisciplinaria, 21(2), 117-152.

Cervantes, A., Ramos, B., Rincón, S. & Figueroa, C. G. (2020). Efecto de autoeficacia general percibida en el uso de competencias transversales en estudiantes de pregrado del área de la salud. Vertientes Revista Especializada en Ciencias de la Salud, 23(1-2), 13-21.

Cruz, E., Carpio, C., Zavala, J. & Pacheco, V. (2023). Evaluación de habilidades y competencias profesionales del psicólogo clínico en FES Iztacala. En V. Pacheco, E. Cruz & C. Carpio (Coords.), Competencias profesionales psicológicas: Docencia e investigación durante la pandemia por COVID-19 (pp. 49-71). Universidad Nacional Autónoma de México, Facultad de Estudios Superiores Iztacala.

Cruz, E. (2022). Evaluación de competencias profesionales del psicólogo: una propuesta interconductual aplicada en la FES Iztacala. [Tesis de doctorado, Facultad de Estudios Superiores Iztacala Universidad Nacional Autónoma de México]. Dirección General de Bibliotecas y Servicios Digitales de Información.

Díaz-Barriga, F. (2019). Evaluación de competencias en educación superior: experiencias en el contexto mexicano. Revista iberoamericana de evaluación educativa, 12(2), 49-66. https://doi.org/10.15366/riee2019.12.2.003

Escobar, B. A., Escandón-Nagel, N. I., Barrera-Herrera, A. L., & García-Hormazábal, R. A. (2023). La evaluación auténtica como herramienta para evidenciar el logro de competencias en la carrera de psicología. Formación Universitaria, 16(2), 35–48. https://doi.org/10.4067/S0718-50062023000200035

Facultad de Estudios Superiores Zaragoza. (2010). Plan de estudios de la licenciatura en psicología. https://www.zaragoza.unam.mx/wpcontent/portalfesz2019/Licenciaturas/psicologia/plan_estudios_psicologia.pdf

García, J. G., & García, M. (2022). La evaluación por competencias en el proceso de formación. Revista Cubana de Educación Superior, 41(2). https://doi.org/10.15366/riee2019.12.2.003

García, J.M., Rojas, A.T., Contreras, E.A., Mercado, A.A. & Contreras M.S. (2021). Evaluación de competencias y actitudes vinculadas al proceso de titulación en estudiantes de Licenciatura. Know and Share Psychology. 2(2), 7-26 https://doi.org/10.25115/kasp.v2i2.4707

Hakelind, C., & Sundström, A. E. (2022). Examining Skills and Abilities During the Pandemic – Psychology Students’ and Examiners’ Perceptions of a Digital OSCE. Psychology Learning & Teaching, 21(3), 278-295. https://doi.org/10.1177/14757257221114038

Herrera, A., Restrepo, M. F., Uribe, A. F., & López, C. N. (2009). Competencias académicas y profesionales del psicólogo. Diversitas: Perspectivas en Psicología, 5(2), 241-254.

Ibáñez, C. (2024). Observaciones críticas sobre la noción de competencia en la teoría de la conducta. Acta Comportamentalia, 32(1), 93–105.

Ibáñez, C. (2007). Diseño curricular basado en competencias profesionales: una propuesta desde la psicología interconductual. Revista de Educación y Desarrollo, 6(2), 45-54.

Irigoyen, J. J., Jiménez, M. Y., & Acuña, K. F. (2011). Competencias y educación superior. Revista mexicana de investigación educativa, 16(48), 243-266.

Lomelí, D. G., Figueiras, S. C., Martínez, M. C. J., Noriega, M. D. L. A. M., Hernández, L. F. B., & Valenzuela, V. I. M. (2022). Perfil de Autorregulación, Estrategias de Aprendizaje y Ejecución Académica de estudiantes universitarios. Informes Psicológicos, 22(2), 253-268. https://doi.org/10.18566/infpsic.v22n2a15

Lona, M.B. & Marín, P.V. (2014). Análisis historiográfico de Psicología en FES Zaragoza. [Tesis de licenciatura, Facultad de Estudios Superiores Zaragoza, Universidad Nacional Autónoma de México]. Dirección General de Bibliotecas y Servicios Digitales de Información.

Martínez, D.D. (2019). Significado de la trayectoria escolar en la gestión institucional de dos carreras de la FES Zaragoza: la perspectiva de sus actores. [Tesis de licenciatura, Universidad Nacional Autónoma de México].

Mercado-Ruíz, A.A. (2016). Desarrollo de competencias de estudiantes de Psicología en FES Zaragoza. [Tesis de licenciatura, Facultad de Estudios Superiores Zaragoza, Universidad Nacional Autónoma de México]. Dirección General de Bibliotecas y Servicios Digitales de Información.

Osterlind, S. J. (1989). Constructing test items. Kluwer Academic Publishers.

Pacheco, V. (2021). “Presencialidad y promoción de habilidades profesionales en psicología” conferencia magistral impartida en el XI Congreso Estudiantil y VI Congreso Internacional de Investigación en Psicología, 6, 7 y 8 de octubre de 2021.

Pedrosa, I., Suárez-Álvarez, J., & García-Cueto, E. (2014). Evidencias sobre la validez de contenido: Avances teóricos y métodos para su estimación [Content validity evidence: Theoretical advances and estimation methods]. Acción Psicológica, 10(2), 3-20. https://doi.org/10.5944/ap.10.2.11820

Peñalosa-Castro, E., & Castañeda-Figueiras, S. (2008). Generación de conocimiento en la educación en línea: un modelo para el fomento de aprendizaje activo y autorregulado. Revista mexicana de investigación educativa, 13(36), 249-281.

Peñalosa-Castro, E., Landa Durán, P., & Castañeda Figueiras, S. (2010). La pericia de los estudiantes como diferenciador del desempeño en un curso en línea. Revista mexicana de investigación educativa, 15(45), 453-486.

Radović, S., Seidel, N., Haake, J. M., & Kasakowskij, R. (2024). Analysing students' self‐assessment practice in a distance education environment: Student behaviour, accuracy, and task‐related characteristics. Journal of Computer Assisted Learning, 40(2), 654-666. https://doi.org/10.1111/jcal.12907

Ramírez, L. A., Sagarduy, J. L., & Reyes, D. (2019). Competencias en la práctica del psicólogo clínico en Tamaulipas. Revista de psicología y Ciencias del Comportamiento de la Unidad Académica de Ciencias Jurídicas y Sociales, 10(2), 145-155. https://doi.org/10.29059/rpcc.20190602-96 

Renn, B. N., Arean, P. A., Raue, P. J., Aisenberg, E., Friedman, E. C., & Popović, Z. (2021). Modernizing training in psychotherapy competencies with adaptive learning systems: Proof of concept. Research on Social Work Practice, 31(1), 90–100. https://doi.org/10.1177/1049731520964854

Ribes, E. (2011). El concepto de competencia: su pertinencia en el desarrollo psicológico y la educación. Bordón: Revista de Pedagogía, 63(1), 33-45.

Ribes, E. (2006). Competencias conductuales: su pertinencia en la formación y práctica profesional del psicólogo. Revista Mexicana de Psicología, 23, 19-26.

Ribes, E. (2000). Las psicologías y la definición de sus objetos de conocimiento. Revista Mexicana de análisis de la conducta, 26(3), 367-383.

Ribes, E., & Varela, J. (1994). Evaluación interactiva del comportamiento inteligente: desarrollo de una metodología computacional. Revista Mexicana de Análisis de la Conducta, 20(1), 83-97.

Ribes, E. (1993). La práctica de la investigación científica y la noción de juego de lenguaje. Acta Comportamentalia, 1, 63-82.

Ribes, E. (1990). Problemas conceptuales en el análisis del comportamiento humano. México: Trillas.

Ribes, E. (1982). Reflexiones sobre una caracterización profesional de las aplicaciones clínicas del análisis conductual. Revista Mexicana de Análisis de la Conducta, 8(2), 87-96.

Ribes, E., Fernández, C., Rueda, M., Talento, M., & López-Valadez, F. (1980). Enseñanza, ejercicio e investigación de la psicología: un modelo integral. México: Trillas

Rogers, S. L., Hollett, R., Li, Y. R., & Speelman, C. P. (2022). An evaluation of virtual reality role-play experiences for helping-profession courses. Teaching of Psychology, 49(1), 78-84. https://doi.org/10.1177/0098628320983231

Rosenbluth, A., Cruzat-Mandich, C., & Ugarte, M. L. (2016). Metodología para validar un instrumento de evaluación por competencias en estudiantes de psicología. Universitas Psychologica, 15(1), 303-314. https://doi.org/10.11144/Javeriana.upsy15-1.ppmp

Silva, R. B., & Méndez, I. I. (2022). Intelligent system for customizing evaluation activities implemented in virtual learning environments: Experiments and results. Computación y Sistemas, 26(1), 473–484. https://doi.org/10.13053/CyS-26-1-4182

Simms, J. (2011). Case formulation within a person-centred framework: An uncomfortable fit? Counselling Psychology Review, 26(2).

Yan, Z., Panadero, E., Wang, X., & Zhan, Y. (2023). A systematic review on students’ perceptions of self-assessment: usefulness and factors influencing implementation. Educational Psychology Review, 35(3), 81. https://doi.org/10.1007/s10648-023-09799-1

Yañez-Galecio, J. (2005). Competencias profesionales del psicólogo clínico: un análisis preliminar. Terapia psicológica, 23(2), 85-93.

Zabalza, M.A. & Lodeiro, L. (2019). El desafío de evaluar por competencias en la universidad. Reflexiones y experiencias prácticas. Revista iberoamericana de evaluación educativa12(2), 29-48. https://doi.org/10.15366/riee2019.12.2.002

Zalewski, B., Guziak, M., & Walkiewicz, M. (2023). Developing Simulated and Virtual Patients in Psychological Assessment–Method, Insights and Recommendations. Perspectives on Medical Education12(1), 455. https://doi.org/10.5334/pme.493

Zavala, J., Carpio, C., Cruz, E., Serrano, A., Trejo, A. & Pacheco, V. (2023). Evaluación y promoción de competencias profesionales: Identificación de casos pertinentes, evaluación diagnóstica e intervención. Modalidad presencial y a distancia. Revista Electrónica de Psicología Iztacala, 26(4).

Zavala, J., Cruz, E., Serrano, A., Barberena, I., Trejo, A. & Pacheco, V. (2023). Competencias profesionales del ámbito clínico de estudiantes de Psicología de la UNAM [Ponencia]. XI Congreso de la Sociedad para el Avance del Estudio Científico del Comportamiento sala Arapiles 16, Madrid, España.


 

Análisis exploratorio de las competencias psicológicas en el ámbito clínico de estudiantes universitarios desde una perspectiva interconductual.

 

 

RESUMEN

Antecedentes: Las Instituciones de Educación Superior (IES) buscan fomentar en los estudiantes el desarrollo de competencias profesionales, es decir, comportamientos que se ajusten al cumplimiento de los criterios de sus respectivos campos disciplinares. Desde una perspectiva interconductual, la evaluación de estas competencias implica definir indicadores conductuales que permitan verificar su aprendizaje. El presente estudio se centra en tres competencias fundamentales que comprenden gran parte de las habilidades del psicólogo del ámbito clínico (identificación de casos pertinentes, evaluación diagnóstica, planeación de intervenciones). Objetivo: Evaluar las competencias profesionales del ámbito clínico en estudiantes de la UNAM. Método: Participaron 40 estudiantes de psicología (16 de quinto y 24 de séptimo semestre). Se utilizó una versión modificada del Entorno Virtual para el Desarrollo y Evaluación de Competencias Profesionales en Psicólogos (EVACOMPS; Cruz, 2022) compuesto de veintiún ejercicios, organizados en diferentes tipos de desempeño. Resultados: En promedio, los estudiantes de séptimo semestre obtuvieron un 60 % de respuestas correctas, y los de quinto un 50 %, diferencias que fueron estadísticamente significativas. En identificación de casos pertinentes se encontró el mayor porcentaje de respuestas incorrectas de ambos grupos. Conclusiones: Si bien es necesario ampliar la muestra, el EVACOMPS modificado resulta pertinente para evaluar el desempeño profesional en el ámbito clínico de la psicología. Los bajos porcentajes de respuestas correctas evidencian la necesidad de generar las estrategias para atender las deficiencias académicas que acarrean los estudiantes desde que inician la carrera.

Palabras clave: Competencias profesionales, desempeño, estudiantes universitarios, ámbito clínico, evaluación.