In an effort to evaluate computer science education using more modern, automated data science techniques, we consider Hattie's work in Visible Learning and then define a comprehensive framework for automatically generating a quantitative meta-analysis using predefined moderators (e.g., age, grade) with data derived from multiple individual research studies. To define the initial criteria, we developed a list of critical questions the framework must address, including which moderators are most important to include, how to assess homogeneity across studies, how to define categories of influencing factors, and how to compute a summary effect size. This initial framework describes how the meta-analysis is derived from effect sizes calculated from the means and standard deviations reported in experimental and quasi-experimental studies. Since the goal of this foundational research is to create a tool that auto-generates meta-analyses, we define a basic user experience that allows users to select moderators and predefined levels of heterogeneity (such as "include only randomized control group studies" or "include only studies reported in journal articles") for inclusion in the meta-analysis. We conducted a feasibility study of the framework using data (number of participants, mean, standard deviation) collected from 21 data samples curated from eight articles with a primary or secondary focus on computer science education, drawn from ten venues (2012–2018). We discuss lessons learned from the study, including the need for full system transparency, issues related to data integrity, and challenges in defining and selecting appropriate formulas for differing sets of data.
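To make the computation concrete, the following is a minimal sketch of how per-sample effect sizes and a summary effect could be derived from the reported statistics (number of participants, mean, standard deviation). It assumes a standardized mean difference (Hedges' g with a pooled standard deviation) and fixed-effect inverse-variance weighting, which are common meta-analytic choices; the framework's actual formulas may differ, and all names in the sketch are illustrative.

```python
import math
from dataclasses import dataclass

@dataclass
class StudySample:
    """Summary statistics extracted from one study sample:
    participant counts, means, and standard deviations for the
    treatment and control groups."""
    n_treat: int
    mean_treat: float
    sd_treat: float
    n_ctrl: int
    mean_ctrl: float
    sd_ctrl: float

def hedges_g(s: StudySample) -> tuple[float, float]:
    """Return (effect size, variance) as Hedges' g.

    Assumption: a pooled-SD standardized mean difference with the
    small-sample correction; the paper does not specify its formula.
    """
    df = s.n_treat + s.n_ctrl - 2
    pooled_sd = math.sqrt(
        ((s.n_treat - 1) * s.sd_treat**2
         + (s.n_ctrl - 1) * s.sd_ctrl**2) / df
    )
    d = (s.mean_treat - s.mean_ctrl) / pooled_sd   # Cohen's d
    j = 1 - 3 / (4 * df - 1)                       # small-sample correction
    g = j * d
    var_g = j**2 * ((s.n_treat + s.n_ctrl) / (s.n_treat * s.n_ctrl)
                    + d**2 / (2 * (s.n_treat + s.n_ctrl)))
    return g, var_g

def summary_effect(samples: list[StudySample]) -> float:
    """Fixed-effect summary: inverse-variance weighted mean of the
    per-sample effect sizes."""
    effects = [hedges_g(s) for s in samples]
    weights = [1 / var for _, var in effects]
    return sum(w * g for (g, _), w in zip(effects, weights)) / sum(weights)
```

Under a random-effects model (e.g., DerSimonian-Laird), the weights would additionally incorporate a between-study variance estimate; which model is appropriate would depend on the heterogeneity level the user selects when configuring the meta-analysis.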