AEFIS for Engineering Programs
Science, technology, engineering, and mathematics (STEM) education faces unique challenges in assessment and accreditation. Accrediting bodies, including ABET, seek specific pieces of information every few years, but it can be difficult to sustain the culture of assessment between accreditation reviews.
Are your programs ready for accreditation review today? AEFIS provides a framework for your assessment activities so that accreditation reporting becomes a byproduct of your continuous improvement initiatives.
How we help Engineering Programs
Developing the self-study document for each program is time consuming, and it is often approached as a project to be completed in the year or months before it is due. The document is complex, comprising many reports, graphics, and statistics. AEFIS provides the assessment framework to shift the self-study from a one-time project to an ongoing process.
Accreditors request that self-studies include full catalogs of course syllabi in a specific format. These formats are simplified and require heavy editing of the rich syllabi instructors develop. Multiple syllabus formats can be generated as PDFs from syllabus content entered in AEFIS. Check out an “ABET Syllabus Export” sample >>
The self-study includes summaries of the assessment activities on your campus. Accrediting groups, such as ABET, use standard terminology that may conflict with your program’s vocabulary. AEFIS can support the terminology instructors and students are most comfortable with, within a structure that meets assessment needs: organizing goals, assessing student performance, and evaluating collected data to make effective programmatic changes.
Student Outcomes may be adopted verbatim from accrediting bodies or redeveloped by individual programs. In either case, these outcomes can be associated with External Outcomes in the Student Learning Outcomes Management modules in AEFIS.
Student Outcomes may be too broad to assess student performance effectively. Breaking Student Outcomes down into Performance Indicators that relate to course-specific objectives allows instructors and students to better understand how outcomes are covered in the curriculum.
Evaluative rubrics can be established for each Performance Indicator to standardize scoring, enabling goal tracking and benchmarking over time.
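To make the structure above concrete, here is a minimal sketch of how Student Outcomes might be broken into Performance Indicators, each with an evaluative rubric. All class names, indicator descriptions, and rubric levels are hypothetical illustrations, not AEFIS’s actual data model.

```python
# Hypothetical model: a Student Outcome decomposed into Performance
# Indicators, each carrying a rubric that maps levels to scores.
from dataclasses import dataclass, field

@dataclass
class PerformanceIndicator:
    description: str
    # Rubric levels map a label to a numeric score for benchmarking
    # (a commonly used 1-4 scale, assumed here for illustration).
    rubric: dict = field(default_factory=lambda: {
        "unsatisfactory": 1, "developing": 2,
        "satisfactory": 3, "exemplary": 4})

@dataclass
class StudentOutcome:
    name: str
    indicators: list = field(default_factory=list)

# An ABET-style outcome broken into course-level indicators
outcome = StudentOutcome(
    name="An ability to identify, formulate, and solve "
         "complex engineering problems",
    indicators=[
        PerformanceIndicator("Formulates a problem statement from given constraints"),
        PerformanceIndicator("Selects an appropriate solution method"),
        PerformanceIndicator("Validates results against expectations"),
    ],
)

for pi in outcome.indicators:
    print(pi.description)
```

Because each indicator is narrower than the outcome it belongs to, instructors can score student work against the rubric levels directly, and those scores remain comparable across courses and terms.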
Program Educational Objectives (PEOs)
PEOs are goals that alumni are expected to accomplish within a few years of graduation. These goals can be documented and presented in the context of programs using the Program Design & Management Tools. Providing these expectations to instructors and students throughout a curriculum supports their accomplishment after graduation.
Administrators can implement alumni surveys using Survey & Course Evaluation Tools to learn more about graduate populations and their success in the STEM workforce.
Direct assessment is complex and requires careful consideration. Direct Assessment Measures Management provides the framework for scheduling collection points across future terms. The tools in AEFIS are flexible, allowing for various data collection formats, including aggregate, individual student, and group evaluator settings.
Assessment data is collected in relationship to specific Performance Indicators and their evaluative rubrics. This data can be reported against the associated Student Outcomes.
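The roll-up described above can be sketched as a simple aggregation: rubric scores collected per Performance Indicator are averaged under the Student Outcome each indicator maps to. The indicator IDs, mapping, and scores below are illustrative only.

```python
# Hypothetical roll-up of rubric scores (1-4 scale, assumed) from
# Performance Indicators to their associated Student Outcomes.
from statistics import mean

# Scores collected per Performance Indicator
indicator_scores = {
    "PI-1.1": [3, 4, 2, 3],
    "PI-1.2": [2, 3, 3, 4],
    "PI-2.1": [4, 4, 3, 3],
}

# Which indicators belong to which Student Outcome
outcome_map = {"SO-1": ["PI-1.1", "PI-1.2"], "SO-2": ["PI-2.1"]}

# Average all scores under each outcome for reporting
outcome_report = {
    so: round(mean(s for pi in pis for s in indicator_scores[pi]), 2)
    for so, pis in outcome_map.items()
}
print(outcome_report)  # {'SO-1': 3.0, 'SO-2': 3.5}
```

Reporting at the outcome level this way keeps the fine-grained indicator data intact, so a low outcome average can always be traced back to the specific indicators that produced it.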
Often students develop strong skills at elementary levels but never grasp higher-level concepts because the same introductory skills are presented over and over again in their coursework. Developing Student Outcomes, Performance Indicators, and a direct assessment schedule provides opportunities to discover such redundancies in curricula. Applying a taxonomy to curricular design encourages sequences that introduce, emphasize, and reinforce Performance Indicators throughout the educational process.
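One way to picture this taxonomy is a curriculum map recording which courses Introduce (I), Emphasize (E), or Reinforce (R) each Performance Indicator. A quick scan over such a map can flag the redundancy described above: indicators that are introduced repeatedly but never advanced. The course numbers and indicator IDs here are made up for illustration.

```python
# Hypothetical curriculum map: course -> taxonomy level per indicator.
coverage = {
    "PI-1.1": {"ENGR 101": "I", "ENGR 201": "E", "ENGR 401": "R"},
    # Re-introduced three times, never emphasized or reinforced:
    "PI-1.2": {"ENGR 101": "I", "ENGR 102": "I", "ENGR 205": "I"},
}

# Flag indicators whose only coverage level across courses is "Introduce"
redundant = [
    pi for pi, courses in coverage.items()
    if set(courses.values()) == {"I"} and len(courses) > 1
]
print(redundant)  # ['PI-1.2']
```

A flagged indicator like `PI-1.2` signals a sequence that should be restructured so later courses emphasize and reinforce the skill rather than reintroduce it.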
For any direct assessment data collected, Intervention Plans can be created to address student- and curriculum-centric issues. This evaluative structure documents how programs are closing the loop and finding ways to improve on a regular basis.