Accepted Papers
Call for Papers
Aims and Scope
The 19th annual ACM Conference on International Computing Education Research (ICER) aims to gather high-quality contributions to the Computing Education Research discipline. The “Research Papers” track invites submissions describing original research results related to any aspect of teaching and learning computing, from introductory through advanced material. Submissions are welcome from across the research methods used in Computing Education Research and related fields. Each contribution will be assessed based on the appropriateness and soundness of its methods, its relevance to teaching or learning computing, and the depth of its contribution to the community’s understanding of the question at hand.
Research areas of particular interest include:
- design-based research, learner-centered design, and evaluation of educational technology supporting computing knowledge or skills development,
- discipline-based education research (DBER) about computing, computer science, and related disciplines,
- informal learning experiences related to programming and software development (all ages), ranging from after-school programs for children, to end-user development communities, to workplace training of computing professionals,
- learnability of programming languages and tools,
- learning analytics and educational data mining in computing education contexts,
- learning sciences work in the computing content domain,
- measurement instrument development and validation (e.g., concept inventories, attitude scales, etc.) for use in computing disciplines,
- pedagogical environments fostering computational thinking,
- psychology of programming,
- rigorous replication of empirical work to compare with or extend previous empirical research results,
- teacher professional development at all levels.
The above list is not exhaustive. If in doubt about the suitability of their work for this track, authors are invited to consider the calls for papers for the “Lightning Talks & Posters” and “Work-in-Progress” tracks.
This year, ICER will include a new “clarification” step in the reviewing workflow: if reviewers need clarification on a few details in order to make a recommendation on a paper, concrete clarification questions will be sent to the authors, who will have 72 hours to submit responses. These responses will then be considered during the program committee meetings to finalize decisions.
Please see the Submission Instructions for details on how to prepare your submission. They include links to the relevant ACM policies, including the ACM Policy on Plagiarism, Misrepresentation, and Falsification, as well as (new in 2022) the ACM Publications Policy on Research Involving Human Participants and Subjects.
All questions about this call should go to the ICER 2023 program committee chairs at pc-chairs@icer.acm.org.
Important Dates
All submission deadlines are “anywhere on Earth” (AoE, UTC-12).
What | When |
---|---|
Titles, abstracts, and authors due. (The chairs will use this information to assign papers to PC members.) | Friday, March 17th, 2023, AoE |
Full paper submission deadline | Friday, March 24th, 2023, AoE |
Clarification questions sent to authors | Saturday, April 29th, 2023, AoE |
Clarification responses due | Tuesday, May 2nd, 2023, AoE |
Decisions announced | Tuesday, May 16th, 2023 |
“Conditional Accept” revisions due | Thursday, May 25th, 2023 |
“Conditional Accept” revisions approval notification | Thursday, June 1st, 2023 |
Final versions due to TAPS | Thursday, June 8th, 2023, AoE |
Published in the ACM Digital Library | The official publication date is the date the proceedings are made available in the ACM Digital Library. This date will be the first day of the conference. The official publication date may affect the deadline for any patent filings related to published work. |
Tue 8 Aug (displayed time zone: Central Time, US & Canada)

All talks below, on all three days, are 25-minute presentations in the “Research Papers” track.

08:15 - 08:30 | Registration and set up (Catering)
08:30 - 09:00 | Opening Remarks

09:00 - 10:15
- 09:00 | The search for meaning: Inferential Strategic Reading Comprehension in Programming | Maria Kallia (University of Glasgow)
- 09:25 | Examples of Unsuccessful Use of Code Comprehension Strategies: A Resource for Developing Code Comprehension Pedagogy | Colleen M. Lewis (University of Illinois at Urbana-Champaign)
- 09:50 | Chronicles of Exploration: Examining the Materiality of Computational Artifacts | Michael J. Johnson (Georgia Institute of Technology), Francisco Castro (New York University), Betsy Disalvo (Georgia Institute of Technology), Kayla DesPortes (New York University)

10:40 - 11:30
- 10:40 | Engagement and Anonymity in Online Computer Science Course Forums | Mrinal Sharma, Hayden McTavish, Zimo Peng, Anshul Shah, Vardhan Agarwal, Caroline Sih, Emma Hogan, Ismael Villegas Molina, Adalbert Gerald Soosai Raj, Kristen Vaccaro (all University of California, San Diego)
- 11:05 | Uncovering the Hidden Curriculum of University Computing Majors via Undergraduate-Written Mentoring Guides: A Learner-Centered Design Workflow

13:00 - 14:15
- 13:00 | Thrilled by Your Progress! Large Language Models (GPT-4) No Longer Struggle to Pass Assessments in Higher Education Programming Courses | Jaromir Savelka, Arav Agarwal, Marshall An, Christopher Bogart, Majd Sakr (all Carnegie Mellon University)
- 13:25 | Exploring the Responses of Large Language Models to Beginner Programmers’ Help Requests | Arto Hellas (Aalto University), Juho Leinonen (The University of Auckland), Sami Sarsa (Aalto University), Charles Koutcheme (Aalto University), Lilja Kujanpää (Aalto University), Juha Sorva (Aalto University)
- 13:50 | From "Ban It Till We Understand It" to "Resistance is Futile": How University Programming Instructors Plan to Adapt as More Students Use AI Code Generation and Explanation Tools such as ChatGPT and GitHub Copilot

14:40 - 15:30
- 14:40 | Exploring Models and Theories of Spatial Skills in CS through a Multi-National Study | Jack Parkinson (University of Glasgow), Sebastian Dziallas (University of the Pacific), Fiona McNeill (University of Edinburgh), Jim Williams (University of Wisconsin-Madison)
- 15:05 | Understanding Spatial Skills and Encoding Strategies in Student Problem Solving Activities

16:55 - 17:45
- 16:55 | Navigating a Blackbox: Students' Experiences and Perceptions of Automated Hiring | Lena Armstrong (University of Pennsylvania), Jayne Everson (University of Washington), Amy Ko (University of Washington)
- 17:20 | Am I Wrong, or is the Autograder Wrong? Effects of AI Grading Mistakes on Learning | Tiffany Wenting Li, Silas Hsu, Max Fowler, Zhilin Zhang, Craig Zilles, Karrie Karahalios (all University of Illinois at Urbana-Champaign)
Wed 9 Aug (displayed time zone: Central Time, US & Canada)

08:15 - 08:45 | Registration and set up (Catering)

09:00 - 10:15
- 09:00 | Designing Ethically-Integrated Assignments: It's Harder Than it Looks | Noelle Brown (University of Utah), Koriann South (University of Utah), Suresh Venkatasubramanian (Brown University), Eliane Wiese (University of Utah)
- 09:25 | Funds of Knowledge used by Adolescents of Color in Scaffolded Sensemaking around Algorithmic Fairness | Jean Salac (University of Washington, Seattle), Alannah Oleson (University of Washington), Lena Armstrong (University of Pennsylvania), Audrey Le Meur (University of Minnesota, Morris), Amy Ko (University of Washington)
- 09:50 | Using a sociological lens to investigate computing teachers’ culturally responsive classroom practices | Yujeong Hwang (University of Cambridge), Anjali Das (University of Cambridge), Jane Waite (Raspberry Pi Foundation), Sue Sentance (University of Cambridge)

10:40 - 11:30
- 10:40 | How are Elementary Students Demonstrating Understanding of Decomposition within Elementary Mathematics? | Maya Israel (University of Florida), Jiehan Li (University of Florida), Wei Yan (University of Florida), Noor A. Elagha (University of Illinois Chicago), Corinne A. Huggins-Manley (University of Florida), Feiya Luo (University of Alabama), Diana Franklin (University of Chicago)
- 11:05 | An Analysis of Gallery Walk Peer Feedback on Scratch Projects from Bilingual/Non-Bilingual Fourth Grade Students | Jen Tsan (WestEd), Chloe Butler (Texas State University), David Gonzalez-Maldonado (University of Chicago), Jonathan Liu (University of Chicago), Cathy Thomas (Texas State University), Diana Franklin (University of Chicago)

15:15 - 16:05
- 15:15 | Exploring Barriers in Productive Failure | Phil Steinhorst (University of Münster), Andrew Petersen (University of Toronto), Bogdan Simion (University of Toronto Mississauga), Jan Vahrenhold (Westfälische Wilhelms-Universität Münster)
- 15:40 | Developing Novice Programmers' Self-Regulation Skills with Code Replays | Benjamin Xie (Stanford University), Jared Ordona Lim (University of Washington), Paul Pham (University of Washington), Min Li (University of Washington), Amy Ko (University of Washington)

16:05 - 16:55
- 16:05 | Evaluating the Utility of Notional Machine Representations to Help Novices Learn to Code Trace | Veronica Chiarelli, Nadia Markova, Kasia Muldner (all Carleton University)
- 16:30 | Evaluating Beacons, the Role of Variables, Tracing, and Abstract Tracing for Teaching Novices to Understand Program Intent | Mohammed Hassan, Kathryn Cunningham, Craig Zilles (all University of Illinois at Urbana-Champaign)
Thu 10 Aug (displayed time zone: Central Time, US & Canada)

08:15 - 08:45 | Registration and set up (Catering)

09:00 - 10:15
- 09:00 | Inequities of Enrollment: A Quantitative Analysis of Participation in High School Computer Science Coursework Across a 4-Year Period | Ryan Torbey (American Institutes for Research)
- 09:25 | "A field where you will be accepted": Belonging in student and TA interactions in post-secondary CS education | Leah Perlmutter (University of Washington), Jean Salac (University of Washington, Seattle), Amy Ko (University of Washington)
- 09:50 | CS Teaching and Racial Identities in Interaction: A Case for Discourse Analytic Methods | Aleata Hubbard Cheuoua (WestEd)

10:40 - 11:30
- 10:40 | How Do Computing Education Researchers Talk About Threats and Limitations? | Kate Sanders (Rhode Island College), Robert McCartney (University of Connecticut, Emeritus), Jan Vahrenhold (Westfälische Wilhelms-Universität Münster)
- 11:05 | Taking Stock of Concept Inventories in Computing Education: A Systematic Literature Review | Murtaza Ali, Sourojit Ghosh, Prerna Rao, Raveena Dhegaskar, Sophia Jawort, Alix Medler, Mengqi Shi, Sayamindu Dasgupta (all University of Washington)

13:00 - 14:15
- 13:00 | Say What You Meme: Exploring Memetic Comprehension Among Students and Potential Value of Memes for CS Education Contexts | Briana Bettin, Andrea Sarabia, Maritza Chiolino Gonzalez, Isabella Gatti, Chethan Magnan, Noah Murav, G. Vanden Heuvel, Duncan McBride, Sophia Abraham (all Michigan Technological University)
- 13:25 | "I Don’t Gamble To Make My Livelihood": Understanding the Incentives For, Needs Of, and Motivations Surrounding Open Educational Resources in Computing | Max Fowler (University of Illinois), David Smith (University of Illinois at Urbana-Champaign), Binglin Chen (University of Illinois), Craig Zilles (University of Illinois at Urbana-Champaign)
- 13:50 | An eye tracking study assessing the impact of background styling in code editors on novice programmers' code understanding | Kang-il Park (University of Nebraska-Lincoln), Pierre Weill-Tessier (King's College London), Neil Brown (King's College London), Bonita Sharif (University of Nebraska-Lincoln), Nikolaj Jensen (King's College London), Michael Kölling (King's College London)

15:15 - 16:30
- 15:15 | Investigating the Impact of On-Demand Code Examples on Novices' Open-Ended Programming Experience | Wengran Wang, John Bacher, Amy Isvik, Ally Limke, Sandeep Sthapit, Yang Shi, Benyamin Tabarsi, Keith Tran, Veronica Catete, Tiffany Barnes, Chris Martens, Thomas Price (all North Carolina State University)
- 15:40 | An Empirical Evaluation of Live Coding in CS1 | Anshul Shah, Emma Hogan, Vardhan Agarwal, John Driscoll, Leo Porter, William G. Griswold, Adalbert Gerald Soosai Raj (all University of California, San Diego)
- 16:05 | Evaluating Distance Metrics for Program Repairs | Charles Koutcheme (Aalto University), Sami Sarsa (Aalto University), Juho Leinonen (The University of Auckland), Lassi Haaranen (Aalto University), Arto Hellas (Aalto University)

16:30 - 17:00 | Closing Session and Awards (Catering)
Submission Instructions
Submission Process
Submit at the ICER 2023 HotCRP site.
When you submit the abstract or full version ready for review, you need to perform the following actions:
- Check the checkbox “ready for review” at the bottom of the submission form. (Otherwise it will be marked as a draft.)
- Check the checkbox “I have read and understood the ACM Publications Policy on Research Involving Human Participants and Subjects”. Note: “Where such research is conducted in countries where no such local governing laws and regulations related to human participant and subject research exist, Authors must at a bare minimum be prepared to show compliance with the above detailed principles.”
- Check the checkbox “I have read and understood the ACM Policy on Plagiarism, Misrepresentation, and Falsification; in particular, no version of this work is under submission elsewhere.” Make sure to disclose possible overlap with your own previous work (“redundant publication”) to the ICER Program Committee co-chairs.
- Check the checkbox “I have read and understood the ICER Anonymization Policy” (see below).
ICER Anonymization Policy
ICER research paper submissions will be reviewed using a double-anonymous process: the authors do not know the identity of the reviewers and the reviewers do not know the identity of the authors. To ensure this:
- Avoid titles that indicate a clearly identifiable research project.
- Remove author names and affiliations. If you are using LaTeX, you can start your document declaration with \documentclass[manuscript,review,anonymous]{acmart} to anonymize these easily (see the sketch after this list).
- Avoid referring to yourself when citing your own work.
- Redact (just for review) portions of positionality statements that would identify you within the community (perhaps due to demographics shared by few others).
- Avoid references to your affiliation. For example, instead of naming your actual university, such as “Auckland University of Technology (AUT)”, write “A Large Metropolitan University (ALMU)”.
- Redact any other identifying information, such as contributors, course numbers, IRB names and numbers, and grant titles and numbers, from the main text and the acknowledgements.
- Omit author details from the PDF you generate, such as the author name or the name of the source document. These are often automatically inserted into exported PDFs, so be sure to check your PDF before submission.
Do not simply cover identifying details with a black box, as the text can easily be seen from under the box by dragging the cursor over it, and will still be read by screen readers.
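For LaTeX users, a minimal sketch of an anonymized source file may help; the title, author, and affiliation below are hypothetical, and the “anonymous” class option is what suppresses the author block in the review PDF:

\documentclass[manuscript,review,anonymous]{acmart}
\begin{document}
\title{A Hypothetical ICER Submission}
% The metadata below stays in your source but renders as
% "Anonymous Author(s)" under the "anonymous" option; restore it
% for the camera-ready version.
\author{Ada Lovelace}
\affiliation{\institution{A Large Metropolitan University}\country{Nowhere}}
\maketitle
% When citing your own prior work, write in the third person:
% "Lovelace [3] showed..." rather than "in our prior work [3]...".
\end{document}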
Work that is not sufficiently anonymized will be desk-rejected by the PC chairs without offering an option to redact and resubmit.
Authoring Guidelines
The ICER conference maintains an evolving author guide, full of recommendations about scope, statistics, qualitative methods, theory, and other concerns that may arise when drafting your submission. These guidelines are a ground truth for reviewers; study them closely as you plan your research and prepare your submission.
Conflict of Interests
The SIGCSE Conflict of Interest policy applies to all submissions. You can review how conflicts will be managed by consulting our reviewer training, which details our review process.
Submission Format and Publication Workflow
Papers submitted to the research track of ICER 2023 have to be prepared according to the ACM TAPS workflow system. Read this page carefully to understand the new workflow.
The most notable change from ICER conferences prior to 2023 is that we have introduced a “clarification” step into the reviewing process. If reviewers need clarification on a few details in order to make a recommendation on a paper, concrete clarification questions will be sent to the authors, who will have 72 hours to submit responses. These responses will then be considered during the program committee meetings to finalize decisions.
Starting in 2021, ICER switched to a publication format (called TAPS) that separates content from presentation in support of accessibility. This means that the submission format and the publication format differ. For submission, we standardize on a single-column presentation.
- The submission template is either the single-column Word Submission Template or the single-column LaTeX template (using the “manuscript,review,anonymous” style options, as in the sample-manuscript.tex example in the LaTeX master template samples). Reviewers will review in this single-column format. You can download these templates on the ACM Master Article Templates page.
- The publication template is either the single-column Word Submission Template or the LaTeX template using the “sigconf” style in acmart. You can download the templates on the ACM TAPS workflow page, where you can also see example papers using the TAPS-compatible Word and LaTeX templates. If your paper is accepted, you will use the TAPS system to generate your final publication outputs. This involves more than just submitting a PDF: you will submit your Word or LaTeX source files and fix any errors in your source before the final version deadline listed above. The final published versions will be in the ACM two-column conference PDF format (as well as XML, HTML, and ePub formats in the future).
For LaTeX users, be aware that there is a list of approved LaTeX packages for use with ACM TAPS. Not all packages are allowed.
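Putting the two stages side by side, for LaTeX authors the difference comes down to the acmart document class options; a minimal sketch (assuming the acmart class from the ACM template distribution, with everything else in the file unchanged):

% For submission: single-column manuscript, anonymized, with review line numbers.
\documentclass[manuscript,review,anonymous]{acmart}

% For the final version prepared through TAPS: two-column conference format.
\documentclass[sigconf]{acmart}

Only one \documentclass line appears in a real file, of course; you switch the options when moving from review to publication.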
This separation of submission and publication format results in several benefits:
- Improved quality of paper metadata, improving ACM Digital Library search.
- Multiple paper output formats, including PDFs, responsive HTML5, XML, and ePub.
- Improved accessibility of paper content for people with disabilities.
- Streamlined publication timelines.
One consequence of this new publication workflow is that it is no longer feasible to limit papers by page count, as the single-column submission format and the final two-column format result in hard-to-predict differences in length. When this workflow was introduced in 2021, the 2021 PC chairs and the ICER Steering Committee considered several policies for managing length, and decided to limit length by word count instead. As there is no single established way to count words, ICER uses the following process: authors may submit papers up to 11,000 words in length, excluding acknowledgements, references, and figures, but including all other text (tables included). The PC chairs will use the following procedures for counting words in the TAPS-approved formats:
- For papers written in the Microsoft Word template, Word’s built-in word-count mechanism will be used, selecting all text except acknowledgements and references.
- For papers written in the LaTeX template, the document will be converted to plain text using the “ExtractText” functionality of the Apache pdfbox suite (see here) and then post-processed with a standard command-line word count tool (“wc -w”, to be precise). Line numbers added by the “review” class option for LaTeX will be removed prior to counting using “grep -v -E ‘^[0-9]+$’” (thanks to N. Brown for this). A sketch of this pipeline appears after this list.
- We acknowledge that many authors may want to use Overleaf to avoid dealing with command-line tools and, consequently, may be less enthusiastic about using another command-line tool to assess the word count. As configured by default, Overleaf does not count text in tables, captions, and math formulae, and is thus very likely to significantly underestimate the number obtained through the tool described above. To obtain a more realistic word count while writing the manuscript, authors need to take these additional steps:
- Add the following lines at the very beginning of your Overleaf LaTeX document:
%TC:macro \cite [option:text,text]
%TC:macro \citep [option:text,text]
%TC:macro \citet [option:text,text]
%TC:envir table 0 1
%TC:envir table* 0 1
%TC:envir tabular [ignore] word
%TC:envir displaymath 0 word
%TC:envir math 0 word
%TC:envir comment 0 0
- Make sure to write math formulae delimited by \begin{math} ... \end{math} for in-line math and \begin{displaymath} ... \end{displaymath} for equations. Do not use dollar signs or \[ \]; these will result in Overleaf not counting math tokens (unlike Word and pdfbox) and thus underestimate your word count.
- The above flags will ensure that in-text citations, tables, and math formulae will be counted but that comments will be ignored.
- The above flags do not cover more advanced LaTeX environments, so if authors use such environments, they should interpret the Overleaf word count with care (then again, if authors know how to work with such environments it is very reasonable to assume that they also know how to work with command-line tools such as pdfbox).
- Authors relying on Overleaf word count should be advised that the submission chairs will not have access to the source files and cannot re-run or verify any counting mechanism done by the submitting authors. To provide a fair treatment across all submission types, only the approved tools mentioned above will be used for word count. That said, submission chairs will operate under a bona fide assumption when it comes to extreme borderline cases.
- Papers in either format may not use figures to render text in ways that work around the word count limit; papers abusing figures in this way will be desk-rejected.
A paper under the word count limit with either of the above approved tools is acceptable. The submissions chairs will evaluate each submission using the procedures above, notify the PC chairs of papers exceeding the limit, and desk-reject any papers that do.
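For LaTeX authors who do have access to command-line tools, the commands quoted above can be chained into a small pipeline to preview the official count. This is a sketch only: the PDFBox jar name and version, and the file names, are assumptions, and acknowledgements and references would still need to be excluded before counting.

# Extract plain text from the submission PDF using Apache PDFBox (2.x app jar).
java -jar pdfbox-app-2.0.27.jar ExtractText submission.pdf submission.txt
# Strip the line numbers added by the LaTeX "review" option, then count words.
grep -v -E '^[0-9]+$' submission.txt | wc -w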
We expect papers to vary in word count. Abstracts may vary in length; fewer than 300 words is a good guideline for conciseness. A submission's length should be commensurate with its contributions; we expect most papers to be under 9,000 words by the rules above, though some may use up to the limit to convey details the authors deem necessary to evaluate the work. Papers may be judged as too long if they are repetitive or verbose, violate formatting rules, or use figures to save on word count. Papers may be judged as too short if they omit critical details or ignore relevant prior work. See the reviewer training for more on how reviewers will be instructed to assess conciseness.
All of the procedures above, and the TAPS workflow, will likely undergo continued iteration in partnership with ACM, the ICER Steering Committee, and the SIGCSE board. Notify the chairs of questions, edge cases, and other concerns to help improve this new workflow.
Clarifications Prior to Review
Sometimes, reviewers wish for answers to clarifying questions prior to recommending a decision on a paper. In cases where such questions arise during the committee’s discussion period, the PC chairs will send concrete clarification questions to the authors. Authors will have 72 hours within which to submit written responses (through HotCRP); the reviewers, Senior Program Committee, and PC chairs will consider these responses while making recommendations and decisions on papers.
Only submissions for which the committee has clarifying questions will receive them. Many papers will be accepted or rejected without the need for such questions. The clarification round is NOT a rebuttal period: authors will receive only specific questions—not full reviews—as part of the clarification round.
Acceptance and Conditional Acceptance
All papers recommended for acceptance after the Senior PC meetings are either accepted or conditionally accepted. For accepted papers, no resubmission is required other than the final camera-ready version. For conditionally accepted papers, meta-reviews will indicate one or more minor revisions that are necessary for final acceptance; authors are responsible for submitting these revisions to HotCRP prior to the “Conditional Accept revisions due” deadline in the Call for Papers. The Senior PC and Program Chairs will review the final revisions; if they are acceptable, the paper will be officially accepted, and authors will have one week to submit an approved camera-ready version to TAPS for publication. If the Senior PC and Program Chairs judge that the requested revisions were not suitably addressed, the paper will be rejected.
Because the turnaround time for conditional acceptance is only one week, requested revisions will necessarily be minor: they may include presentation issues or requests for added clarity or details helpful for future readers of the archived paper. New results, new methodological details that change the interpretation of the results, or other substantially new content will neither be asked for nor allowed to be added.
Conditional Acceptance is independent of the clarification round, though some authors who receive clarifying questions may be asked to address them during the conditional acceptance period.
Kudos
After a paper has been accepted and uploaded into the ACM Digital Library, authors will receive an invitation from Kudos to create an account and add plain-language text into Kudos on its platform. The Kudos “Shareable PDF” integration with ACM will then allow an author to generate a PDF to upload to websites, such as author homepages, institutional repositories, and preprint services, such as ArXiv. This PDF contains the author’s plain-text summary of the paper as well as a link to the full-text version of an article in the ACM Digital Library, adding to the DL download and citation counts there, as well as adding views from other platforms to the author’s Kudos dashboard.
Using Kudos is entirely optional. Authors may also use the other ACM copyright options to share their work (retaining copyright, paying for open access, etc.).
Author Guidelines
If you are reading this page, you are probably considering submitting to ICER. Congratulations! We are excited to review your work. Whether your research is just starting or nearly finished, this guide is intended to help authors meet the expectations of the computing education research community. It reflects a community-wide perspective on what constitutes rigorous research on the teaching and learning of computing.
Read on for our community’s current guidelines, and if you like, read our reviewer guidelines to understand our review process and review criteria.
What’s in scope at ICER?
ICER’s goal is to be an inclusive conference, both with respect to epistemology (how we know we know things) and with respect to phenomena (who is learning and in what context). Therefore, any research related to the teaching and learning of computing is in scope, using any definition of computing and any methods. We particularly encourage work that goes beyond the community’s past focus on introductory programming courses in post-secondary education: for example, work on primary and secondary education, work on more advanced computing concepts, informal learning in any setting, or learning amongst adults. (However, note that simply using computing technology to perform research in an educational setting is not in itself enough; the focus must be on the teaching or learning of computing topics.) If you have not seen a particular topic published at ICER, or a particular method used, that is okay. We value new topics, new methods, new perspectives, and new ideas just as much as more broadly accepted ones.
That said, under the current review process, we cannot promise that we have recruited all the necessary expertise to our program committee to fairly review your work. Check who is on the program committee this year, and if you do not see a lot of expertise on your methods or phenomena, make sure your submission spends a bit of extra time explaining theories or methods that reviewers are unlikely to know. If you have any questions regarding this, email the program chairs (pc-chairs@icer.acm.org).
Note that we used the word “research” above. Research is hard to define, but we can say that ICER is not a place to submit practical descriptions of courses, curriculum, or instruction materials you want to share. If you’re looking to share your experiences at a conference, consider submitting to the SIGCSE Technical Symposium’s Experience Report or Position and Curricula Initiatives tracks. Research, in contrast, should meet the criteria presented throughout this document.
What makes a good computing education research paper?
It’s impossible to anticipate every kind of paper that might be submitted. The current ICER review criteria are listed in the reviewer guidelines. These will evolve over time as the community grows. There are many other criteria that reviewers could discuss in relation to specific types of research contributions, but the criteria listed there are generally inclusive of many epistemologies and contribution types. This includes empirical studies that answer research questions, replicate prior results, or present negative research results, as well as other, non-empirical types of research that provide novel or deepened insights into the teaching and learning of computer science content.
What prior work should be cited?
As with any research work, your submission should cite all significant publications that are relevant to your research questions. With respect to ICER submissions, this may include not only work that has been published in ACM-affiliated venues like ICER, ITiCSE, SIGCSE, Koli Calling, but also the wide range of conferences and journals in the learning sciences, education, educational psychology, HCI, and software engineering. If you are new to research, consider guides on study design and surveys of prior work like the 2019 Cambridge Handbook of Computing Education Research, which attempts to survey most of what we know about computing education up to 2018.
Papers will be judged on how adequately they are grounded in prior work published across academia. They will also be assessed on the accuracy of their citations: read what you cite closely and ensure that the findings of published work actually support your claims; many of the authors of the works you are likely to cite are members of the computing education research community and may be your reviewers. Finally, papers will also be expected to return to prior work in a discussion of the paper’s contributions. All papers should explain how their contributions advance upon prior work, cause us to reinterpret prior work, or reveal conflicts with prior work.
How might theory be used?
Different disciplines across academia vary greatly on how they use and develop theory. At the moment, the position of the community is that theory can be a useful tool for framing research, connecting it to prior work, and interpreting findings. Papers can also contribute new theories, or refine them. However, it may also be possible for papers to be atheoretical, discovering interesting new relationships or interventions that cannot yet be explained. All of these uses of theory are appropriate.
It is also possible to misuse theory. Sometimes the theories used are too general for a question, where a theory more specific to computing education might be appropriate. In other cases, a theory might be wrongly applied to some phenomena, or a paper might use a theory that has been discredited. Be careful when using theory to understand its history, its body of evidence in support of and against its claims, and its scope of relevance.
Note that our community has discussed the role of theory multiple times, and that conversations about how to use theory are evolving:
- Nelson and Ko (2018) argued that there are tensions between expectations of theory building and innovative exploration of design ideas, and that our field’s theory building should focus on theories specific to computing education.
- Malmi et al. (2019) found that while computing education researchers have widely cited many dozens of unique theoretical ideas about learning, behavior, beliefs, and other phenomena, the use of theory in the field remains somewhat shallow.
- Kafai et al. (2019) argued that there are many types of theories, and that we should more deeply leverage their explanatory potential, especially theories about the sociocultural and societal factors at play in computing education, not just the cognitive factors.
In addition to using theories when appropriate, ICER encourages the contribution of new theories. There is not a community-level consensus on what constitutes a good theory contribution, but there are examples you might learn from. Papers proposing a new theoretical model should consider including concrete examples of said model.
How should educational contexts be described?
If you’re reporting empirical work in a specific educational context or set of contexts, remember that our research community is global and that education systems across the world are structured differently. This is particularly important when describing research that took place in primary and secondary schools. Keep in mind that not all readers will be familiar with your educational context. Describe the structure of the educational system. Define terminology related to your education system. Characterize who is teaching, and what prior knowledge and preparation they have. When describing learners, at a minimum, describe their gender, race, ethnicity, age, level in school, and prior knowledge (assuming collecting and publishing this type of data is legal in the context in which the study was conducted; see also the ACM Publications Policy on Research Involving Human Participants and Subjects). Include information about other structural factors that might affect how the results are interpreted, including whether courses are required or elective, what incentives students have to enrol in courses, and how students in courses vary. For authors in the United States, common terminology to avoid includes “elementary school”, “middle school”, “high school”, and “college”, which do not have well-defined meanings elsewhere. Use the more globally inclusive terms “primary”, “secondary”, and “post-secondary”. Given the broad spectrum of, e.g., introductory computing courses that run under the umbrella of “CS1”, make sure to provide enough information on the course content rather than relying on an assumed shared understanding.
What details should we report about our methods?
ICER values a wide range of methods of all kinds, including quantitative, qualitative, design, argumentation, and more. It is critical to describe your methods in detail, both so that reviewers and readers can understand how you arrived at your conclusions, and so they can evaluate the appropriateness of your methods both to the work and, for readers, to their own contexts.
Some contributions might benefit from following the Center for Open Science’s recommendations to ensure replicable, transparent science. These include practices such as:
- Data is posted to a trusted repository.
- Data in that repository is properly cited in the paper.
- Any code used for analysis is posted to a trusted repository.
- Results are independently reproduced.
- Materials used for the study are posted to a trusted repository.
- Studies and their analysis plans are pre-registered prior to being conducted.
Our community is quite far from adopting any of these standards as expectations. Additionally, pursuing many of these goals might impose significant barriers to conducting research ethically, as educational data can often not be sufficiently anonymized to prevent disclosing identity. Therefore, these supplementary materials are not required for review, but we encourage you to include them where feasible and ethical.
The ACM has adopted a new policy on Research Involving Human Participants and Subjects that requires research to be conducted in accordance with ethical and legal standards. In accordance with the policy, your methods description should briefly describe how these standards were met. This can be as simple as a sentence that your study design was reviewed by a local review board (IRB), or a few sentences with key details if you engaged with human subjects and an IRB review was not appropriate to your context or work. Read the ACM policy for additional details.
How should we report statistics?
The world is moving beyond p-values, but computing education, like most of academia, still relies on them. When reporting the results of statistical hypothesis tests, it is critical to report:
- The test used.
- The rationale for choosing the test, including a discussion of the data characteristics that allowed this test to be used.
- The test statistic computed.
- The actual p-value (not just whether it was greater than or less than an arbitrary threshold).
- An effect size and its confidence interval.
Effect sizes are especially relevant, as they indicate the extent to which something impacts or explains some phenomenon in computing education; a statistically significant result with a small effect size may matter little for learning. The above data should be reported regardless of whether a hypothesis test was significant. Chapters introducing statistical methods can be found in the Cambridge Handbook of Computing Education Research.
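To make the list above concrete, a complete report of a single test might read as follows. This is only a sketch: the numbers are invented for illustration and the choice of test is arbitrary. (The math delimiters follow the Overleaf word-count guidance in the submission instructions.)

% All values below are hypothetical and for illustration only.
We compared exam scores across the two conditions using a
Mann--Whitney \begin{math}U\end{math} test, chosen because the
scores were ordinal and not normally distributed. Scores differed
significantly (\begin{math}U = 1204\end{math},
\begin{math}p = .013\end{math}), with a medium effect size
(\begin{math}r = .31\end{math}, 95\% confidence interval
\begin{math}[.07, .52]\end{math}).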
Do not assume that reviewers or future readers have a deep understanding of statistical methods (although they might). If you’re using more advanced or non-standard techniques, justify them in detail, so that the reviewers and future readers understand your choice of methods. We recognize that length limits might prevent a detailed explanation of methods for entirely unfamiliar readers; reviewers are expected to not criticize papers for excluding extensive explanations when there was not space to include them.
How should we report on qualitative methods?
Best practices in other fields for addressing the reliability of qualitative methods suggest providing detailed arguments and rationale for qualitative approaches and analyses. Some fields that rely on qualitative methods have moved toward a recoverability criterion which, like replicability in quantitative methods, aims to ensure that a study’s core methods are available for inspection and interpretation. However, recoverability does not imply repeatability, as qualitative methods rely on interpretation, which may not be repeatable.
When qualitative data is counted and used for quantitative methods, authors should report on the inter-rater reliability (IRR) of the qualitative judgements underlying those counts. There are many ways of calculating inter-rater reliability, each with tradeoffs. However, note that IRR analysis is not ubiquitous across social sciences, and not always appropriate; authors should make a clear soundness argument for why it was or was not performed.
Another challenge in reporting qualitative results is that they require more space in a paper; an abundance of quotes, after all, may take considerably more space than a table full of aggregate statistics. Be careful to provide enough evidence of your claims, while being mindful with your use of space.
What makes a good abstract?
A good abstract should summarize the question your paper asks and what answers it found. It is not enough to just say “We discuss our results and their implications”; say what you actually discovered, so future readers can learn that from your summary.
If your paper is empirical in nature, ICER recommends (but does not require) using a structured abstract that contains the following sections, each 1-2 sentences:
- Background and Context. What is the problem space you are working in? Which phenomena are you considering, and why are they relevant and important for an ICER audience?
- Objectives. What research questions were you trying to answer?
- Method. What did you do to answer your research questions?
- Findings. What did you discover? Both positive and negative results should be summarized.
- Implications. What implications does your discovery have for prior and future research, and for the practice of computing education?
Not all papers may fit this structure, but if yours does, it will greatly help reviewers and future readers understand your paper’s research design and contribution.
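For LaTeX authors, a plain skeleton of such a structured abstract might look like the following; the run-in bold headings are a suggestion, not a requirement of the template:

\begin{abstract}
\textbf{Background and Context.} One or two sentences on the problem
space and why it matters to an ICER audience.
\textbf{Objectives.} The research questions the paper set out to answer.
\textbf{Method.} What was done to answer them.
\textbf{Findings.} The main results, both positive and negative.
\textbf{Implications.} What the findings mean for prior and future
research and for the practice of computing education.
\end{abstract}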
What counts as plagiarism?
Read ACM’s policy on Plagiarism, Misrepresentation, and Falsification; these criteria will be applied during review. In particular, attention will be paid to avoiding redundant publication.
Who should be an author on my paper?
ICER follows ACM’s Authorship Policy and the Publications Policy on the Withdrawal, Correction, Retraction, and Removal of Works from ACM Publications and ACM DL. These state that any person listed as an author on a paper must (1) have made substantial contributions to the work, (2) have participated in drafting or revising the paper, (3) be aware that the paper has been submitted, and (4) agree to be held accountable for the content of the paper. Note that this policy enables enforcement of plagiarism sanctions, but it could also impact people who work in large, collaborative research groups and postgraduate advisors who have not contributed directly to a paper.
Must submissions be in English?
At the moment, yes. Our reviewing community’s only lingua franca is English, and any other language would greatly limit the pool of expert reviewers to evaluate your work. We recognize that this is a challenging barrier for many authors globally, and that it greatly limits the diversity of voices in global discourse on computing education. Therefore, we wish to express our support of other computing education conferences around the world that you might consider submitting papers to. To mitigate this somewhat, papers will not be penalized for minor English spelling and grammar errors that can easily be corrected with minor revisions.
Resources
American Educational Research Association. (2006). Standards for reporting on empirical social science research in AERA publications. Educational Researcher, 35(6), 33–40. http://edr.sagepub.com/content/35/6/33.full.pdf+html.
Decker, A., McGill, M. M., & Settle, A. (2016). Towards a Common Framework for Evaluating Computing Outreach Activities. In Proceedings of the 47th ACM Technical Symposium on Computing Science Education (SIGCSE ’16). ACM, New York, NY, USA, 627-632. DOI: https://doi.org/10.1145/2839509.2844567.
Fincher, S. A., & Robins, A. V. (Eds.). (2019). The Cambridge Handbook of Computing Education Research. Cambridge University Press. DOI: https://dx.doi.org/10.1017/9781108654555.
Petre, M., Sanders, K., McCartney, R., Ahmadzadeh, M., Connolly, C., Hamouda, S., Harrington, B., Lumbroso, J., Maguire, J., Malmi, L., McGill, M.M., Vahrenhold, J. (2020). Mapping the Landscape of Peer Review in Computing Education Research, In: ITiCSE-WGR ’20: Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, ACM. New York, NY, USA, 173–209. DOI: https://doi.org/10.1145/3437800.3439207.
Reviewer Guidelines
ICER 2023 Review Process and Guidelines
Version 2.0 - March 15, 2023
Kathi Fisler & Paul Denny ICER 2023 Program Co-Chairs
This document is a living document intended to capture the reviewing policies of the ICER community. Please email the Program Co-Chairs at pc-chairs@icer.acm.org with comments or questions; all will be taken into account when updating this document for ICER 2024.
Based on the ICER 2020/2021 Reviewing Guidelines (Amy Ko, Anthony Robins, and Jan Vahrenhold) as well as the ICSE 2022 Reviewing Guidelines (Daniela Damian and Andreas Zeller). We are thankful for the input on these earlier documents provided by members of the ICER community.
Table of Contents
- Goals of the ICER Reviewing Process
- Action Items
- Submission System
- Roles in the Review Process
- Principles Behind ICER Reviewing
- Conflicts of Interest
- The Reviewing Process
- Review Criteria
- Award Recommendations
- Possible Plagiarism, Misrepresentation, and Falsification
- Practical Suggestions for Writing Reviews
1. Goals of the ICER Reviewing Process
The ICER Reviewing Process as outlined in this document is designed to support reaching the following goals:
- Accept high quality papers
- Give clear feedback to papers of insufficient quality
- Evaluate papers consistently
- Provide transparency in the review process
- Embrace diversity of perspectives, but work in an inclusive, safe, collegial environment
- Drive decisions by consensus among reviewers
- Strive for manageable workload for PC members
- Do our best on all of the above
2. Action Items
Prior to continuing to read this document, please do the following:
- Read the call for papers at https://icer2023.acm.org/track/icer-2023-papers. This is the ground truth for scope and submission requirements. We expect you to account for these in your reviews.
- Read the author guidelines at https://icer2023.acm.org/track/icer-2023-papers#Author-Guidelines. We expect your reviews and meta-reviews to be consistent with these guidelines.

After having read this document, please block off a number of time slots in your calendar:
- [Reviewers and Meta-Reviewers:] Saturday, March 18, 2023 through Friday, March 24, 2023: Reserve at least two hours to read all abstracts and bid for papers to review (see Step 2: Reviewers and Meta-Reviewers Bid for Papers).
- [Reviewers:] Wednesday, March 29, 2023 through Friday, April 21, 2023: Reserve enough time to review 5-6 papers (see Step 6a: Reviewers Review Papers). In general, it is highly recommended to spread the reviews over the full four weeks instead of trying to write them just in time. Notify the PC chairs immediately in case of emergencies that might prevent you from submitting reviews by the deadline.
- [Reviewers and Meta-Reviewers:] Friday, April 21, 2023 through Friday, April 28, 2023: Reserve a one-hour slot during the weekend and a 20-minute slot each day of the week to log into HotCRP, read the other reviews, check on the discussion status of each of your papers, comment where appropriate, and determine whether clarifications are needed from the authors (see Step 7: Reviewers and Meta-Reviewers Discuss Reviews).
- [Meta-Reviewers:] Friday, April 28, 2023 through Wednesday, May 3, 2023: Reserve three hours in total to prepare (and update, as necessary) the meta-reviews for your assigned papers, including accounting for author clarification responses (see Step 8: Meta-Reviewers Write Meta-Reviews).
- [Meta-Reviewers:] Wednesday, May 10, 2023 through Friday, May 12, 2023: Reserve two two-hour slots for synchronous SPC meetings (see Step 9: PC Chairs and Meta-Reviewers Discuss Papers; the PC chairs will be reaching out to schedule these meetings).
- [Meta-Reviewers:] Thursday, May 25, 2023 through Thursday, June 1, 2023: Reserve two hours for checking any “conditional accept” revisions that may affect your papers (see Step 13: Meta-Reviewers Check Revised Papers).
If you are new to reviewing in the Computing Education Research community, the following ITiCSE Working Group Report may serve as an introduction:
- Petre M, Sanders K, McCartney R, Ahmadzadeh M, Connolly C, Hamouda S, Harrington B, Lumbroso J, Maguire J, Malmi L, McGill MM, Vahrenhold J. 2020. “Mapping the Landscape of Peer Review in Computing Education Research.” In ITiCSE-WGR ’20: Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, edited by Rößling G, Krogstie B, 173-209. New York, NY: ACM Press. doi: 10.1145/3437800.3439207.
3. Submission System
ICER 2023 uses the HotCRP platform for its reviewing process. If you are unfamiliar with it, you will find a basic tutorial below. But first, make sure you can sign in, then bookmark the site: http://icer2023.hotcrp.com. If you have trouble signing in, or you need help with anything, contact Dastyni Loksa (dloksa@towson.edu) and Rodrigo Duran (rodrigo.duran@ifms.edu.br), the ICER 2023 submission chairs. Make sure that you can log in to HotCRP and that your name and other metadata are correct. Check that emails from HotCRP are not marked as spam and that HotCRP email notifications are enabled.
4. Roles in the Review Process
Program Committee (PC) Chairs
Each year there are two program committee co-chairs. The PC chairs are solicited and selected by the ICER Steering Committee and appointed by the SIGCSE board to serve a two-year term. One new appointment is made each year so that in any given year there is always a continuing program chair from the prior year and a new program chair. Appointment criteria include prior attendance and publication at ICER, past service on the ICER Program Committee, research excellence in computing education, and the collaborative and organizational skills to share oversight of the program selection process.
Program Committee (PC) Members / Reviewers
PC members write reviews of submissions, evaluating them against the review criteria. The PC chairs invite and appoint the reviewers. The committee is sized so that each reviewer will review 5-6 submissions, or more depending on the size of the submission pool. Each reviewer serves a one-year term, with no limit on reappointment. Appointment criteria include expertise in relevant areas of computing education research and past reviewing experience in computing education research venues. Together, the reviewers constitute the program committee (PC). The PC chairs are responsible for inviting returning and new members of the PC, keeping in mind the various forms of diversity present at ICER.
Senior Program Committee Members (SPC) / Meta-Reviewers
SPC members review the PC members’ reviews, ensuring that the review content is constructive and aligned with the review criteria, summarizing the reviews, and making recommendations for a paper’s acceptance or rejection. They also moderate discussions about each paper and, where necessary, provide feedback on reviews and ask reviewers to improve their quality. Finally, they participate in a synchronous SPC meeting to make final recommendations about each paper, and review authors’ minor revisions. The PC chairs invite and appoint Senior PC members, with the approval of the Steering Committee, again keeping in mind the various forms of diversity present at ICER. Each Senior PC member can be appointed for up to three years in a row; after a hiatus of at least one year, preferably two, re-appointment is possible. The committee is sized so that each meta-reviewer will handle 8-10 papers, depending on the submission pool.
5. Principles Behind ICER Reviewing
The ICER review process is designed to work towards these goals:
- Maximize the alignment between a paper and expertise required to review it.
- Minimize conflicts of interest and promote trust in the process.
- Maximize our community’s ability to make excellent, rigorous, trustworthy contributions to the science of computing education.
The call for papers and author guide should make this clear, but ICER is broadly scoped. The conference publishes research on teaching and learning of computer science content that happens in any context. In consequence, reviewers should not downgrade papers for being about a topic they personally perceive to be less important to computing education. If the work is sufficiently ready for publication and reviewers believe it is of interest to some part of the computing education community, it should be published such that the community can decide its importance over time.
6. Conflicts of Interest
ICER takes conflicts of interest, both real and perceived, quite seriously. The conference adheres to the ACM conflict of interest policy (https://www.acm.org/publications/policies/conflict-of-interest) as well as the SIGCSE conflict of interest policy (https://sigcse.org/policies/COI.html). These state that a paper submitted to the ICER conference is a conflict of interest for an individual if at least one of the following is true:
- The individual is a co-author of the paper
- A student of the individual is a co-author of the paper
- The individual identifies the paper as a conflict of interest, i.e., that the individual does not believe that he or she can provide an impartial evaluation of the paper.
The following policies apply to conference organizers:
- The chairs of any track are not allowed to submit to that track.
- All other conference organizers are allowed to submit to any track.
- All reviewers (PC members) and meta-reviewers (SPC members) are allowed to submit to any track.
No reviewer, meta-reviewer, or chair with a conflict of interest in the paper will be included in any evaluation, discussion, or decision about the paper. It is the responsibility of the reviewers, meta-reviewers, and chairs to declare their conflicts of interest throughout the process. The corresponding actions are outlined below for each relevant step of the reviewing process. It is the responsibility of the chairs to ensure that no reviewer or meta-reviewer is assigned a role in the review process for any paper for which they have a conflict of interest.
7. The Reviewing Process
Step 1: Authors Submit Abstracts
Authors will submit a title and abstract one week before the full paper deadline; the PC chairs use this information to assign papers to reviewers. Authors may revise their title and abstract up until the full paper submission deadline.
Step 2: Reviewers and Meta-Reviewers Bid for Papers
Reviewers and meta-reviewers will be asked to bid on papers for which they have sufficient expertise (in both phenomena and methods) and then the PC chairs will assign papers based on these bids. The purpose of bidding is not to express interest in papers you want to read. It is to express your expertise and eligibility for fairly evaluating the work. These are subtly but importantly different purposes.
- Specify all of your conflicts of interest. Conflicts are any situation where you have any connection with a submission that is in tension with your role as an independent reviewer (you advised an author, you have collaborated with an author, you are at the same institution, you are close friends, etc.). After declaring conflicts, you will be excluded from all future evaluation, discussion, and decisions of that paper. Program chairs and submissions chairs will also specify conflicts of interest at this time.
- Bid on all of the papers you believe you have sufficient expertise to review. Sufficient expertise includes knowledge of research methods used and prior research on the phenomena. Practical knowledge of a topic is helpful, but insufficient.
- Do not bid on papers about topics, techniques, or methods that you strongly oppose. This prevents papers from being reviewed by reviewers with a negative bias; see below for positive biases and how to control for them.
Step 3: Authors Submit Papers
Submissions are due one week after the abstracts. As described in the submission instructions (https://icer2023.acm.org/track/icer-2023-papers#Submission-Instructions), submissions are expected to be sufficiently anonymized that a reader cannot determine the identity or affiliation of the authors. The main purpose of ICER’s anonymous reviewing process is to reduce the influence of potential (positive or negative) biases on reviewers’ assessments. You should be able to review the work without knowing the authors or their affiliations; do not try to find out their identities (most guesses will be wrong anyway). See the submission instructions for what constitutes sufficient anonymization. When in doubt, write to the PC chairs at pc-chairs@icer.acm.org for clarification.
Step 4: PC Chairs Decide on Desk-Rejects
The PC chairs, with the help of the submissions chairs, will screen each submission for violations of anonymization requirements, length restrictions, or plagiarism policies. Authors of desk-rejected papers will be notified immediately. The PC chairs may not catch every issue; if you see something during review that you believe should be desk rejected, contact the chairs before you write a review. The PC chairs will make the final judgement about whether something is a violation and give you guidance on whether, and if so how, to write a review.
Managing Conflicts of Interest
PC chairs with conflicts are excluded from deciding on desk rejections, leaving the decision to the other, non-conflicted program chair.
Step 5: PC Chairs Assign Reviewers
Based on the bids and their judgement, the PC chairs will collaboratively assign at least three reviewers (PC members) and one meta-reviewer (SPC member) to each submission. The PC chairs will be advised by HotCRP’s assignment algorithm, which depends on all bids being of high quality. Remember: for these assignments to be fair and good, your bids should be based only on your expertise and eligibility; interest alone is not sufficient for bidding on a paper. The chairs will review the algorithm’s assignments to identify potential misalignments with expertise.
Managing Conflicts of Interest
PC chairs with conflicts are excluded from assigning reviewers to any papers for which they have a conflict. Assignments in HotCRP can only be made by a PC chair without a conflict.
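HotCRP’s actual algorithm is not documented here, but the general idea of bid-based assignment can be framed as a classic assignment problem: maximize total bid (expertise) score subject to each paper receiving enough distinct reviewers. The following minimal Python sketch is purely illustrative; the function name `assign_reviewers`, the bid-matrix encoding, and the round-by-round matching strategy are our assumptions, not HotCRP’s implementation.

```python
# Illustrative sketch only -- not HotCRP's real assignment algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_reviewers(bids: np.ndarray, reviews_per_paper: int = 3) -> dict:
    """bids[r, p] is reviewer r's bid on paper p; higher means more expertise,
    and 0 marks conflicts or no expertise. Runs one matching round per required
    review, so each paper receives distinct reviewers (assuming there are at
    least as many reviewers as papers)."""
    bids = bids.astype(float).copy()
    assignments = {p: [] for p in range(bids.shape[1])}
    for _ in range(reviews_per_paper):
        # Maximize the total bid score of this round's one-paper-per-reviewer matching.
        rows, cols = linear_sum_assignment(bids, maximize=True)
        for r, p in zip(rows, cols):
            if bids[r, p] > 0:       # never assign a conflicted / zero-expertise pair
                assignments[p].append(r)
            bids[r, p] = -1e9        # a reviewer reviews a given paper at most once
    return assignments
```

In practice one would also enforce per-reviewer load limits and assign meta-reviewers, which is one reason the chairs still review and adjust the algorithm’s output by hand.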
Step 6a: Reviewers Review Papers
Assigned reviewers submit their anonymous reviews through HotCRP by the review deadline, evaluating each of their papers against the review criteria (see Review Criteria). Reviewers have four weeks in which to write their 5-6 assigned reviews. Due to the internal and external (publication) deadlines, there can be no extensions.
Managing Conflicts of Interest
Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the reviews of the papers they are conflicted on during this process.
Step 6b: Meta-Reviewers and PC Chairs Monitor Progress
Meta-reviewers and PC chairs will periodically check in to ensure that progress is being made.
Step 7: Reviewers and Meta-Reviewers Discuss Reviews
After the reviewing period, the assigned meta-reviewer asks the reviewers to read the other reviewers’ reviews and begin a discussion about any disagreements that arise. All reviewers are asked to do the following:
- Read all the reviews of all papers assigned (and re-read your own reviews).
- Engage in a discussion about sources of disagreement.
- Use the review criteria to guide your discussions.
- Be polite, friendly, and constructive at all times.
- Be responsive and react as soon as new information comes in.
- Remain open to other reviewers shifting your judgements.
- Explicitly state any clarifying questions that could change your evaluation of the paper.
If your judgement does shift, update your review to reflect your new views. There is no need to indicate to the authors that you changed your review, but do leave a comment for the other reviewers and the meta-reviewer indicating what you changed and why (HotCRP does not track changes).

Discussing a paper is not about who wins or who is right. It is about how, in light of all available information, a group of reviewers can reach the best decision on a paper. All reviewers (and the authors!) have their own unique perspective and competence. It is perfectly normal that they may have seen things you have not, just as you may have seen things they have not. The important thing is to accept that the group will see more than the individual. You are therefore always encouraged to shift your stance in light of this extra knowledge.

Starting in 2023, we have added a brief (72-hour) period during which authors can respond to clarifying questions that could impact the recommendation on their paper. Reviewers and meta-reviewers should articulate these questions (if any) during the review process. Questions are appropriate only for papers on which the reviewers and meta-reviewer are unable to make a decision without additional clarifying information. The chairs will send the questions to the authors and report back on the responses.

The time allocated for this discussion is one week. As discussions about disagreeing reviews may take several (asynchronous) rounds, it is important to check in daily to see whether any new discussion items warrant attention. PC chairs will periodically check in. If you have configured HotCRP notifications correctly, you will be notified as soon as new information (another review or a new discussion item) about your paper comes in. React to these as soon as possible; do not let your colleagues wait for days when all that is needed is a short statement from your side.
Managing Conflicts of Interest
Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the discussions of the papers they are conflicted on during this process.
Step 8: Meta-Reviewers Write Meta-Reviews
After the discussion phase, meta-reviewers use the reviews, the discussion, and their own evaluation of the work to write a meta-review and recommendation. If clarifying questions were posed to authors, the PC chairs will provide the responses within the first 2-3 days of this period. A meta-review should summarize the key strengths and weaknesses of the paper, in light of the review criteria, and explain how these led to the decision. The summary and explanation should help the authors in revising their work where appropriate. A generic meta-review (“After long discussion, the reviewers decided that the paper is not up to ICER standards, and therefore rejected the paper”) is not sufficient. There are four possible meta-review recommendations: reject, discuss, conditional accept, and accept. The recommendation needs to be entered in the meta-review.
- Reject. Ensure that the meta-review constructively summarizes the reviews and the rationale for rejection. The PC chairs will review all meta-reviews to ensure that reviews are constructive, and may request meta-reviewers to revise their meta-reviews as necessary. The PC chairs will make the final rejection decision based on the meta-review rationale; if necessary, this paper will be discussed at the SPC meeting.
- Discuss. Ensure that the meta-review summarizes the open questions that need to be resolved at the SPC meeting, where the paper will be recommended as reject, conditional accept, or accept. Papers marked “discuss” will be scheduled for discussion at the SPC meeting. All papers for which the opinion of the meta-reviewer and the majority of reviewer recommendations do not align should be marked “discuss” as well.
- Conditional Accept. Ensure that the meta-review explicitly and clearly states the conditions that must be met, via minor revisions, before the paper can be accepted. The conditions must be feasible to address within the one-week revision period, so they must be minor. The PC chairs will make the final decision on whether the requested revisions are minor enough to warrant conditional acceptance; if necessary, the paper will be discussed at the SPC meeting.
- Accept. These papers will be accepted, assuming authors deanonymize the paper and meet the final version deadline. For technical reasons, “accept” recommendations are recorded internally as “conditional accept” recommendations that do not state any conditions for acceptance other than submitting the final version. The PC chairs will make the final acceptance decision based on the meta-review rationale; if necessary, this paper will be discussed at the SPC meeting.
Managing Conflicts of Interest
Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the recommendations or meta-reviews of the papers they are conflicted on during this process.
Step 9: PC Chairs and Meta-Reviewers Discuss Papers
The PC chairs will host synchronous SPC meetings with all available meta-reviewers (SPC members) to discuss and decide on all “discuss” and “conditional accept” papers. Before this meeting, a second meta-reviewer will be assigned to each such paper, ensuring that there are at least two meta-reviewers to facilitate discussion. Each meta-reviewer assigned to a paper should come prepared to present the paper, its reviews, and the HotCRP discussion. Each meta-reviewer’s job is to present their recommendation or, if they requested discussion, the uncertainty that prevents them from making one. All meta-reviewers who are available to attend an SPC meeting session should, at a minimum, skim each of the papers to be discussed and their reviews (excluding those for which they are conflicted), so that they are familiar with the papers and their reviews prior to the discussions.

At the meeting, the goal is to collectively reach consensus rather than relying on the PC chairs alone to make final decisions. Papers may move from “discuss” to “reject”, “conditional accept”, or “accept”; if there are conditions, they must be approved by a majority of the non-conflicted SPC members and PC chairs present at the discussion. After a decision is made in each case, the original SPC member will add a summary of the discussion at the end of their meta-review, explaining the rationale for the final decision as well as any conditions for acceptance, and will update the recommendation tag in HotCRP.
Managing Conflicts of Interest
Meta-reviewers conflicted on a paper will not be assigned as a second reader. Any meta-reviewer or PC chair conflicted on a paper will be excluded from the paper’s discussion, returning after the discussion is over.
Step 10: PC Chair Review
Before announcing decisions, the non-conflicted PC chairs will review all meta-reviews to ensure as much clarity and consistency with the review process and its criteria as possible.
Managing Conflicts of Interest
PC chairs cannot change the outcome of an accept or reject decision after the SPC meeting.
Step 11: Notifications
After the SPC meeting, the PC chairs will notify all authors of the decisions about their papers; these notifications will be sent via email through HotCRP. Authors of (unconditionally) accepted papers will be encouraged to make any changes that may have been suggested but not required; authors of conditionally accepted papers will be reminded of the revision evaluation deadline.
Step 12: Authors of Conditionally Accepted Papers Revise their Papers
Authors of conditionally accepted papers have one week to incorporate the requested revisions and to submit their final versions for review by the assigned meta-reviewer.
Step 13: Meta-Reviewers Check Revised Papers
Meta-reviewers will check the revised papers against the required revisions. Based on the outcome, they will change their recommendation to either “accept” or “reject” and update their meta-reviews to reflect this.
Managing Conflicts of Interest
Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the recommendations or meta-reviews of the papers they are conflicted on during this process.
Step 14: Notifications
PC chairs will sanity-check all comments on those papers for which revisions were submitted. Conditionally accepted papers for which no revisions were received will be marked as “reject”. The PC chairs then finalize decisions: all recommendations will be converted to official accept or reject decisions in HotCRP, and authors will be notified of these final decisions via email sent through HotCRP. Authors will then have one week to submit to ACM TAPS for final publication.
Managing Conflicts of Interest
Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the recommendations or meta-reviews of the papers they are conflicted on during this process. PC chairs with conflicts cannot see or edit any final decision on these papers.
8. Review Criteria
ICER currently evaluates papers against the following reviewing criteria, as independently as possible. These have been carefully chosen to be inclusive to many phenomena, epistemologies, and contribution types.
- Criterion A: The submission is grounded in relevant prior work and leverages available theory when appropriate.
- Criterion B: The submission describes its methods and/or innovations sufficiently for others to understand how data was obtained, analyzed, and interpreted, or how an innovation works.
- Criterion C: The submission’s methods and/or innovations soundly address its research questions.
- Criterion D: The submission advances knowledge of computing education by addressing (possibly novel) questions that are of interest to the computing education community.
- Criterion E: Discussion of results clearly summarizes the submission’s contributions beyond prior work and its implications for research and practice.
- Criterion F: The submission is written clearly enough to publish.
To be published at ICER, papers should be positively evaluated on all of these. Together, they are summarized in a final, overall criterion, discussed at the end of this section: “Based on the criteria above, this paper should be published at ICER.”
Below, we discuss each criterion in turn.
Criterion A: The submission is grounded in relevant prior work and leverages available theory when appropriate.
Papers should draw on relevant prior work and theories, and explicitly show how these are tied to the questions addressed. After reading the paper, one should feel more informed about the prior literature and how that literature relates to the paper’s contributions. Such coverage of related work might come before a work’s contributions, or it might come after (e.g., connecting a new theory derived from observations to prior work). Note that not all types of research will have relevant theory to discuss, nor do all contribution types need theory to make significant advances. For example, a surprisingly robust but unexplained correlation might be an important discovery that later work could develop theory to explain. Reviewers should identify related work the authors might have missed and include pointers. Missing a relevant paper that would not dramatically change the work is not sufficient grounds for rejection; such citations can be added at reviewers’ request prior to publication. Instead, criticism that leads to downgrading a paper should focus on missing prior work or theories that would significantly alter the research questions, analysis, or interpretation of results.
Guidelines for (Meta-)Reviewers
Since prior work and theories need to be covered sufficiently and meaningfully, but not necessarily completely, (meta-)reviewers are asked to do the following:
- Refrain from downgrading work based on missing one or two peripherally related papers. Just note them, helping the authors to broaden their citations.
- Refrain from downgrading work based on not citing the reviewer’s own work, unless it really is objectively highly relevant.
- Refrain from downgrading work based on where in a paper they address prior work. Sometimes a dedicated section is appropriate, sometimes it is not. Sometimes prior work is better addressed at the end of a paper, not at the beginning.
- Make sure to critically note if work simply lists papers without meaningfully addressing their relevance to the paper’s questions or innovations.
- Refrain from downgrading work based on making discoveries inconsistent with theory. The point of empirical work is to test and refine theories, not conform to them.
- Refrain from downgrading work for not building upon theory when no sufficient theory is available that can be pointed out in the review. Conversely, if a relevant theory is missing, it should be named.
- Refrain from downgrading work based on not using the reviewer’s interpretation of a theory. Many theories have multiple competing interpretations and multiple distinct facets that can be seen from multiple perspectives.
Criterion B: The submission describes its methods and/or innovations sufficiently for others to understand how data was obtained, analyzed, and interpreted, or how an innovation works.
An ICER paper should be self-contained in the sense that readers should be able to understand most of the key details about how the authors conducted their work or made their innovation possible. This is key for replication and meta-analysis of studies that come from positivist or post-positivist epistemologies. For interpretivist works, it is also key for what Checkland and Holwell called “recoverability” (see Tracy 2010 for a detailed overview of criteria for evaluating qualitative work). Reviews should thus focus on omissions of research process or innovation details that would significantly alter your judgement of the paper’s validity.
Guidelines for (Meta-)Reviewers
Since ICER papers have to adhere to a word count limit, and since there are always more details a paper can describe about its methods, (meta-)reviewers are asked to do the following:
- Refrain from downgrading work based on not describing every detail.
- Refrain from asking authors to write substantially new method details unless you can identify content for them to cut, or there is space to add those details within the length restrictions.
- Refrain from asking authors of theory contributions for a traditional methods section; such contributions do not require them, as they are not empirical in nature.
- Feel free to ask authors for minor revisions that would support replication or meta-analysis for positivist or post-positivist works, and recoverability for interpretivist works using qualitative methods.
Criterion C: The submission’s methods and/or innovations soundly address its research questions.
The paper should answer the questions it poses, and it should do so with rigor, broadly construed. This is the single most important difference between research papers and other kinds of knowledge sharing in computing education (e.g., experience reports), and the source of certainty researchers can offer. Note that soundness is relative to claims. For example, if a paper claims to have provided evidence of causality, but its methods did not do that, that would be grounds for critique. But if a paper only claimed to have found a correlation, and that correlation is a notable discovery that future work could explain, downgrading it for not demonstrating causality would be inappropriate.
Guidelines for (Meta-)Reviewers
Since soundness is relative to claims and methods, (meta-)reviewers are asked to do the following:
- Refrain from applying criteria for quantitative methods to qualitative methods (e.g., critiquing a case study for a “small N” makes no sense; that is the point of a case study).
- Refrain from downgrading work based on a lack of a statistically significant difference if the study demonstrates sufficient power to detect a difference. A lack of difference can be a discovery, too.
- Refrain from asking for the paper to do more than it claims if the demonstrated claims are sufficiently publishable (e.g., “I would publish this if it had also demonstrated knowledge transfer”).
- Refrain from relying on inexpert, anecdotal judgments (e.g., “I don’t know much about this but I played with it once and it didn’t work”).
- Refrain from assuming that a method not yet used in the computing education literature is not standard somewhere else. The field draws upon methods from many communities; look for evidence that the method is used elsewhere.
Criterion D: The submission advances knowledge of computing education by addressing (possibly novel) questions that are of interest to the computing education community.
A paper can meet the previous criteria and still fail to advance what we know about the phenomena. It is up to the authors to convince you that the discoveries advance our knowledge in some way, e.g., by confirming uncertain prior work, adding a significant new idea, or making progress on a long-standing open question. Secondarily, there should be someone who might find the discovery interesting. It does not have to be interesting to a particular reviewer, and a particular reviewer does not have to be absolutely confident that an audience exists. As the PC cannot possibly reflect the broader audience of all readers, a probable audience is sufficient for publication.
Guidelines for (Meta-)Reviewers
Since advances can come in many forms, many criticisms are inappropriate in isolation (though if several of them apply, they may together justify rejection). (Meta-)reviewers are thus asked to do the following:
- Refrain from downgrading work because another, single paper was already published on the topic. Discoveries accumulate over many papers, not just one.
- Refrain from downgrading work that contributes a really new idea for not yet having everything figured out about it. Again, new discoveries may require multiple papers.
- Refrain from downgrading work because the results do not appear generalizable or were only obtained at a specific institution. Many papers explicitly discuss such limitations and possible remedies. Also, generalizability takes time, and, by their very nature, some qualitative methods do not lead to generalizable results.
- Refrain from downgrading work based on “only” being a replication. Replications, if done with diligence, are important.
- Refrain from downgrading work based on investigating phenomena you personally do not like (e.g., “I hate object-oriented languages, this work does not matter”).
Criterion E: Discussion of results clearly summarizes the submission’s contributions beyond prior work and its implications for research and practice.
It is the authors’ responsibility to help interpret the significance of a paper’s discoveries. If it makes significant advances, but does not explain what those advances are and why they matter, the paper is not ready for publication. That said, it is perfectly fine if you disagree with the paper’s interpretations or implications. Readers will vary on what they think a discovery means or what impact it might have on the world. All that is necessary is that the work presents some reasonably sound discussion of one possible set of interpretations.
Guidelines for (Meta-)Reviewers
Because there is no single “right” interpretation or discussion of implications, (meta-)reviewers are asked to do the following:
- Refrain from downgrading work because you do not think the idea would work in your institution.
- Refrain from downgrading work because you think that the impact is limited. Check the discussion of limitations and threats to validity and evaluate the paper with respect to the claims made.
- Make sure to critically note if work makes interpretations that are not grounded in evidence or proposes implications that are not grounded in evidence.
Criterion F: The submission is written clearly enough to publish.
Papers need to be clear and concise, both to be comprehensible to diverse audiences and to ensure the community is not overburdened by verbosity. We recognize that not all authors are fluent English writers; if, however, the paper requires significant editing to be comprehensible to fluent English readers, or it is unnecessarily verbose, it is not yet ready for publication.
Guidelines for (Meta-)Reviewers
Since submissions should be clear enough, (meta-)reviewers are asked to do the following:
- Refrain from downgrading work based on having easily fixed spelling and grammar issues.
- Refrain from downgrading a sufficiently clear paper because it could be clearer. All writing can be clearer in some way.
- Refrain from downgrading work based on not using all of the available word count. It is okay if a paper is short but significant.
- Refrain from asking for more detail unless you are certain there is space or, if there is not, you can provide concrete suggestions for what to cut.
Summary: Based on the criteria above, this paper should be published at ICER.
Based on all of the previous criteria, decide how strongly you believe the paper should be accepted or rejected, assuming the authors make any straightforward minor revisions you and other reviewers request before publication. Papers that meet all of the criteria should be strongly accepted (though this does not imply that they are perfect). Papers that fail to meet most of the criteria should be strongly rejected. Each paper should be reviewed independently of others, as if it were a standalone journal submission. There are no conference presentation “slots”, and there is no target acceptance rate; neither should be a factor in reviewing individual submissions.
Guidelines for (Meta-)Reviewers
Because each paper should be judged on its own, (meta-)reviewers are asked to do the following:
- Refrain from recommending to accept a paper because it was the best in your set. It is possible that none of your papers sufficiently meet the criteria.
- Refrain from recommending to reject a paper because it should not take up a “slot”. The PC chairs will devise a program for however many papers sufficiently meet the criteria, whether that is 5 or 50. There is no need to preemptively design the program through your review; focus on the criteria.
9. Award Recommendations
On the review form, reviewers may signal to the meta-reviewer and PC chairs that they believe the submission should be considered for a best paper award. Selecting this option is visible to the other (meta-)reviewers as part of your review, but it is not disclosed to the authors. Reviewers should recognize papers that best illustrate the highest standards of computing education research, taking into account the quality of the questions asked, methodology, analysis, writing, and contribution to the field. This includes papers that meet all of the review criteria in exemplary ways (e.g., research that was particularly well designed, executed, and communicated), or papers that meet specific review criteria in exemplary ways (e.g., discoveries that are particularly significant or sound).

The meta-review form for each paper includes an option to officially nominate the paper to the Awards Committee for the best paper award. Reviewers may flag papers for award consideration during review, but meta-reviewers are ultimately responsible for nominations. Each meta-reviewer may nominate at most two papers; nominated papers may or may not have been flagged by one or more reviewers. Nominations should be recorded in HotCRP and accompanied by a paragraph outlining the rationale for the nomination. Note that neither the nomination nor the accompanying rationale is disclosed to the authors as part of the meta-review.
Meta-reviewers are encouraged to review and finalize their nominations at the conclusion of the SPC meeting to allow for possible calibration. Once paper decisions have been sent, the submissions chair will make the PDFs and corresponding rationales for all nominated papers available to the Awards Chair. Additionally, a list of all meta-reviewers who have handled, or have a conflict of interest with, any nominated paper will be disclosed to the Awards Chair, as those members are not eligible to serve on the Awards Committee.
10. Possible Plagiarism, Misrepresentation, and Falsification
If after reading a submission, you suspect that it has in some way plagiarized from some other source, do the following:
- Read the ACM guidelines on Plagiarism, Misrepresentation, and Falsification.
- If you think there is a potential issue, write the PC chairs at pc-chairs@icer.acm.org to escalate the potential violation, and share any information you have about the case. Authors are required to disclose any potentially overlapping work to the PC chairs upon submission.
The chairs will investigate and decide as necessary prior to the acceptance notification deadline. Do not mark the paper for rejection based on suspected plagiarism; review the paper as it stands while the PC chairs investigate.
11. Practical Suggestions for Writing Reviews
The following suggestions may be helpful when reviewing papers:
- Before reading, remind yourself of the preceding reviewing criteria.
- Read the paper, and as you do, note positive and negative aspects for each of the preceding reviewing criteria.
- Use your notes to outline a review organized by the seven criteria, so authors can understand your judgments for each criterion.
- Draft your review based on your outline.
- Edit your review, making it as constructive and clear as possible. Even a very negative review should be respectful to the author(s), helping to educate them. Avoid comments about the author(s) themselves; focus on the document.
- Based on your review, choose scores for each of the criteria.
- Based on your review and scores, choose a recommendation score and decide whether to recommend the paper for consideration for a best paper award.
Thank you very much for reading this document and thank you very much for being part of the ICER reviewing process. Do not hesitate to email the Program Co-Chairs at pc-chairs@icer.acm.org if you have any questions.